2011
DOI: 10.1007/978-3-642-21227-7_62
Fast and Efficient Saliency Detection Using Sparse Sampling and Kernel Density Estimation

Abstract: Salient region detection has gained a great deal of attention in computer vision. It is useful in applications such as adaptive video/image compression, image segmentation, anomaly detection, and image retrieval. In this paper, we study saliency detection using a center-surround approach. The proposed method estimates the saliency of local feature contrast in a Bayesian framework; the required distributions are estimated using sparse sampling and kernel density estimation. Furth…
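The abstract describes center-surround saliency in a Bayesian framework, with the center and surround distributions estimated by sparsely sampling pixels and applying kernel density estimation. A minimal sketch of that idea, assuming a single-channel feature map and Gaussian kernels (the function names, sampling geometry, and parameters here are illustrative, not the paper's actual implementation):

```python
import numpy as np

def kde_likelihood(x, samples, bandwidth=0.1):
    """Gaussian kernel density estimate of p(x) from a sparse sample set."""
    diffs = (x - samples) / bandwidth
    return np.mean(np.exp(-0.5 * diffs ** 2)) / (bandwidth * np.sqrt(2 * np.pi))

def center_surround_saliency(feature_map, y, x, r_center=2, r_surround=8,
                             n_samples=16, rng=None):
    """Saliency at (y, x) as a posterior-like ratio: how much better the
    pixel's feature value is explained by the sparse center sample than by
    the sparse surround sample (illustrative sketch, not the paper's code)."""
    rng = np.random.default_rng(rng)
    h, w = feature_map.shape
    val = feature_map[y, x]

    def sample_ring(r_lo, r_hi):
        # Sparse sampling: a few random points in an annulus around (y, x).
        angles = rng.uniform(0, 2 * np.pi, n_samples)
        radii = rng.uniform(r_lo, r_hi, n_samples)
        ys = np.clip((y + radii * np.sin(angles)).astype(int), 0, h - 1)
        xs = np.clip((x + radii * np.cos(angles)).astype(int), 0, w - 1)
        return feature_map[ys, xs]

    p_center = kde_likelihood(val, sample_ring(0, r_center))
    p_surround = kde_likelihood(val, sample_ring(r_center, r_surround))
    return p_center / (p_center + p_surround + 1e-12)  # score in [0, 1]
```

Sweeping this over every pixel yields a saliency map; the sparse sampling is what keeps the density estimates cheap compared with using every pixel in each window.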

Cited by 127 publications (20 citation statements)
References 16 publications (43 reference statements)
“…Saliency Models and Datasets. For all experiments that follow we use 12 classical saliency models (AIM [12], BMS [73], CVS [20], DVA [28], FES [51], GBVS [26], IKN [32], IMSIG [29], LDS [21], RARE2012 [43], SSR [46] and SUN [74]) and 8 deep learning models (DGII [39], DVAP [59], eDN [58], ICF [40], MLNet [18], oSALICON [30,52], SalGAN [41] and SAM-ResNet [19]). We use the SMILER framework [64] to run all models without center bias on P 3 and with center bias on O 3 .…”
Section: Methods
confidence: 99%
“…We use the metric of AUC-Borji and sAUC to compare our model to the models shown in Figure 4, with AUC scores ranging from low to highest reported scores. We use the IttiKoch model (Walther and Koch, 2006) as an extension of the base model of Itti et al (1998), the Achanta model (Achanta et al, 2009) as one of the most cited models in the frequency domain, and the SUN saliency model (Zhang et al, 2008) and the Fast and Efficient Saliency (FES) model (Tavakoli et al, 2011) as two of the Bayesian-based models. We use the Murray model (Murray et al, 2011) that uses wavelet transform to generate scales like our model.…”
Section: Comparing DI Erent Versions Of the Model To The Other Modelsmentioning
confidence: 99%
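The excerpt above compares models with AUC-Borji and sAUC. Both treat the saliency map as a classifier of fixated versus non-fixated locations; sAUC (shuffled AUC) specifically draws its negative set from fixations recorded on *other* images, so a model that only predicts a static center bias scores near chance. A minimal sketch of the sAUC computation (illustrative, not any benchmark's reference implementation):

```python
import numpy as np

def shuffled_auc(sal_map, fixations, other_fixations):
    """Shuffled AUC: positives are saliency values at this image's
    fixation points; negatives are values at fixation locations borrowed
    from other images, which penalises pure centre-bias predictions."""
    pos = np.array([sal_map[y, x] for y, x in fixations], dtype=float)
    neg = np.array([sal_map[y, x] for y, x in other_fixations], dtype=float)
    # Mann-Whitney U statistic normalised by n_pos * n_neg equals the
    # ROC AUC; tied saliency values count as half a win.
    wins = sum((p > neg).sum() + 0.5 * (p == neg).sum() for p in pos)
    return wins / (len(pos) * len(neg))
```

A score of 1.0 means every fixated location outranks every shuffled negative; 0.5 is chance level.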
“…To integrate the center bias into the saliency mapping framework, we utilize the object-biased Gaussian refinement process [18, 41, 42] to generate the center prior. For each pixel z with coordinates $(x, y)$, we denote the salient mapping of the center prior as:

$$\mathit{Sal}^+(z) = \exp\left[-\left(\frac{(x_z - x_\mathrm{obj})^2}{2\sigma_\mathrm{x}^2} + \frac{(y_z - y_\mathrm{obj})^2}{2\sigma_\mathrm{y}^2}\right)\right],$$

where $\sigma_\mathrm{x}$, $\sigma_\mathrm{y}$ are set to $\frac{1}{4}$ of the image's height and width, and $x_\mathrm{obj}$, $y_\mathrm{obj}$ are the object centers, defined as:

$$x_\mathrm{obj} = \sum_{z=1}^{N} \omega_z x_z, \quad y_\mathrm{obj} = \sum_{z=1}^{N} \omega_z y_z,$$

where $\omega_z$ is the weight, defined by the pixel-level reconstruction error $E(z)$ in (): $\omega_z = E(z)\,/\,\sum_z$…”
Section: DCA-based Sparse Coding with MCP for Saliency Mapping
confidence: 99%
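The object-biased center prior quoted above can be sketched directly from its equations. The helper below is hypothetical (not the citing paper's code); the σ assignment follows the text's wording, pairing σ_x with the image height and σ_y with the width, and it assumes a nonnegative, not-all-zero reconstruction-error map:

```python
import numpy as np

def center_prior(recon_error):
    """Object-biased Gaussian centre prior: a Gaussian centred on the
    reconstruction-error-weighted centroid (x_obj, y_obj), with
    sigma_x, sigma_y set to 1/4 of the image height and width."""
    h, w = recon_error.shape
    weights = recon_error / recon_error.sum()   # omega_z, sums to 1
    ys, xs = np.mgrid[0:h, 0:w]
    y_obj = (weights * ys).sum()                # error-weighted centroid
    x_obj = (weights * xs).sum()
    sigma_x, sigma_y = h / 4.0, w / 4.0         # as stated in the text
    return np.exp(-(((xs - x_obj) ** 2) / (2 * sigma_x ** 2)
                    + ((ys - y_obj) ** 2) / (2 * sigma_y ** 2)))
```

The resulting map peaks (value 1) at the weighted object center and decays smoothly toward the borders, and is typically multiplied into or added to the raw saliency map as the refinement step.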