Salient Region Detection and Segmentation
Radhakrishna Achanta, Francisco Estrada, Patricia Wils, and Sabine Süsstrunk
School of Computer and Communication Sciences (I&C),
École Polytechnique Fédérale de Lausanne (EPFL)
{radhakrishna.achanta,francisco.estrada,patricia.wils,sabine.susstrunk}@epfl.ch
ivrg.epfl.ch/
Abstract. Detection of salient image regions is useful for applications like image segmentation, adaptive compression, and region-based image retrieval. In this paper we present a novel method to determine salient regions in images using low-level features of luminance and color. The method is fast, easy to implement, and generates high-quality saliency maps of the same size and resolution as the input image. We demonstrate the use of the algorithm in the segmentation of semantically meaningful whole objects from digital images.

Key words: Salient regions, low-level features, segmentation
1 Introduction
Identifying visually salient regions is useful in applications such as object-based image retrieval, adaptive content delivery [11, 12], adaptive region-of-interest based image compression, and smart image resizing [2]. We identify salient regions as those regions of an image that are visually more conspicuous by virtue of their contrast with respect to surrounding regions. Similar definitions of saliency exist in the literature, where saliency in images is referred to as local contrast [9, 11].
Our method for finding salient regions uses a contrast determination filter that operates at various scales to generate saliency maps containing "saliency values" per pixel. Combined, the individual maps result in our final saliency map. We demonstrate the use of the final saliency map in segmenting whole objects with the aid of a relatively simple segmentation technique. The novelty of our approach lies in finding high-quality saliency maps of the same size and resolution as the input image and their use in segmenting whole objects. The method is effective on a wide range of images, including those of paintings, video frames, and images containing noise.
The paper is organized as follows. The relevant state of the art in salient region detection is presented in Section 2. Our algorithm for the detection of salient regions and its use in segmenting salient objects is explained in Section 3. The parameters used in our algorithm, the results of saliency map generation, segmentation, and comparisons against the method of Itti et al. [9] are given in Section 4. Finally, conclusions are presented in Section 5.
2 Approaches for Saliency Detection
The approaches for determining low-level saliency can be based on biological models or purely computational ones. Some approaches consider saliency over several scales while others operate on a single scale. In general, all methods use some means of determining the local contrast of image regions with their surroundings using one or more of the features of color, intensity, and orientation. Usually, separate feature maps are created for each of the features used and then combined [8, 11, 6, 4] to obtain the final saliency map. A complete survey of all saliency detection and segmentation research is beyond the scope of this paper; here we discuss those approaches in saliency detection and saliency-based segmentation that are most relevant to our work.
Ma and Zhang [11] propose a local contrast-based method for generating saliency maps that operates at a single scale and is not based on any biological model. The input to this local contrast-based map is a resized and color-quantized CIELuv image, sub-divided into pixel blocks. The saliency map is obtained by summing up the differences of image pixels with their respective surrounding pixels in a small neighborhood. This framework extracts the points and regions of attention. A fuzzy-growing method then segments salient regions from the saliency map.
Hu et al. [6] create saliency maps by thresholding the color, intensity, and orientation maps using histogram entropy thresholding analysis instead of a scale-space approach. They then use a spatial compactness measure, computed as the area of the convex hull encompassing the salient region, and saliency density, which is a function of the magnitudes of saliency values in the saliency feature maps, to weigh the individual saliency maps before combining them.
Itti et al. [9] have built a computational model of saliency-based spatial attention derived from a biologically plausible architecture. They compute saliency maps for features of luminance, color, and orientation at different scales that aggregate and combine information about each location in an image and feed into a combined saliency map in a bottom-up manner. The saliency maps produced by Itti's approach have been used by other researchers for applications like adapting images on small devices [3] and unsupervised object segmentation [5, 10].
Segmentation using Itti's saliency maps (a 480x320 pixel image generates a saliency map of size 30x20 pixels) or any other sub-sampled saliency map from a different method requires complex approaches. For instance, a Markov random field model is used to integrate the seed values from the saliency map along with low-level features of color, texture, and edges to grow the salient object regions [5]. Ko and Nam [10], on the other hand, use a Support Vector Machine trained on the features of image segments to select the salient regions of interest from the image, which are then clustered to extract the salient objects. We show that using our saliency maps, salient object segmentation is possible without needing such complex segmentation algorithms.
Recently, Frintrop et al. [4] used integral images [14] in VOCUS (Visual Object Detection with a Computational Attention System) to speed up the computation of center-surround differences for finding salient regions using separate feature maps of color, intensity, and orientation. Although they obtain better-resolution saliency maps compared to Itti's method, they resize the feature saliency maps to a lower scale, thereby losing resolution. We use integral images in our approach, but we resize the filter at each scale instead of the image and thus maintain the same resolution as the original image at all scales.
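The integral-image trick referenced here is what makes rescaling the filter cheap: once the cumulative-sum table is built, the sum over any axis-aligned box costs four lookups regardless of box size. A minimal sketch of the idea in NumPy (illustrative only, not the authors' implementation):

```python
import numpy as np

def integral_image(img):
    """Zero-padded cumulative sum: ii[y, x] holds the sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] from four table lookups (O(1) per box)."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

img = np.arange(16, dtype=np.float64).reshape(4, 4)
ii = integral_image(img)
# Box covering img[1:3, 1:3] = {5, 6, 9, 10}, so the sum is 30.
assert box_sum(ii, 1, 1, 3, 3) == 30.0
```

Because the cost per box is constant, evaluating the center-surround difference with a larger R2 at a coarser scale is no more expensive than at a fine scale, which is why filter rescaling preserves full image resolution for free.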
3 Salient Region Detection and Segmentation
This section presents the details of our approach for saliency determination and its use in segmenting whole objects. An overview of the complete algorithm is presented in Figure 1. Using the saliency calculation method described later, saliency maps are created at different scales. The maps are added pixel-wise to get the final saliency map. The input image is then over-segmented, and the segments whose average saliency exceeds a certain threshold are chosen.
Fig. 1. Overview of the process of finding salient regions. (a) Input image. (b) Saliency maps at different scales are computed, added pixel-wise, and normalized to get the final saliency map. (c) The final saliency map and the segmented image. (d) The output image containing the salient object, which is made of only those segments that have an average saliency value greater than the threshold T (given in Section 3.1).
3.1 Saliency Calculation
In our work, saliency is determined as the local contrast of an image region with respect to its neighborhood at various scales. This is evaluated as the distance between the average feature vector of the pixels of an image sub-region and the average feature vector of the pixels of its neighborhood. This allows obtaining a combined feature map at a given scale by using feature vectors for each pixel, instead of combining separate saliency maps for scalar values of each feature. At a given scale, the contrast-based saliency value $c_{i,j}$ for a pixel at position $(i,j)$ in the image is determined as the distance $D$ between the average vectors of pixel features of the inner region $R_1$ and that of the outer region $R_2$ (Figure 2) as:

$$c_{i,j} = D\left( \frac{1}{N_1} \sum_{p=1}^{N_1} \mathbf{v}_p,\; \frac{1}{N_2} \sum_{q=1}^{N_2} \mathbf{v}_q \right) \quad (1)$$

Fig. 2. (a) Contrast detection filter showing inner square region R1 and outer square region R2. (b) The width of R1 remains constant while that of R2 ranges according to Equation 3, halving for each new scale. (c) Filtering the image at one of the scales in a raster-scan fashion.
where $N_1$ and $N_2$ are the number of pixels in $R_1$ and $R_2$ respectively, and $\mathbf{v}$ is the vector of feature elements corresponding to a pixel. The distance $D$ is a Euclidean distance if $\mathbf{v}$ is a vector of uncorrelated feature elements, and it is a Mahalanobis distance (or any other suitable distance measure) if the elements of the vector are correlated. In this work, we use the CIELab color space [7], assuming sRGB images, to generate feature vectors for color and luminance. Since perceptual differences in CIELab color space are approximately Euclidean, $D$ in Equation 1 is:

$$c_{i,j} = \| \mathbf{v}_1 - \mathbf{v}_2 \| \quad (2)$$

where $\mathbf{v}_1 = [L_1, a_1, b_1]^T$ and $\mathbf{v}_2 = [L_2, a_2, b_2]^T$ are the average vectors for regions $R_1$ and $R_2$, respectively. Since only the average feature vector values of $R_1$ and $R_2$ need to be found, we use the integral image approach as used in [14] for computational efficiency. A change in scale is effected by scaling the region $R_2$ instead of scaling the image. Scaling the filter instead of the image allows the generation of saliency maps of the same size and resolution as the input image. Region $R_1$ is usually chosen to be one pixel. If the image is noisy (for instance, if high ISO values were used when capturing it, as can often be determined with the help of Exif data (Exchangeable Image File Format [1])), then $R_1$ can be a small region of $N \times N$ pixels (in Figure 5(f), $N$ is 9).
For an image of width $w$ pixels and height $h$ pixels, the width of region $R_2$, namely $w_{R_2}$, is varied as:

$$\frac{w}{2} \ge w_{R_2} \ge \frac{w}{8} \quad (3)$$
assuming $w$ to be smaller than $h$ (else we choose $h$ to decide the dimensions of $R_2$). This is based on the observation that the largest sizes of $R_2$ and the smaller ones (smaller than $w/8$) are of less use in finding salient regions (see Figure 3). The former might highlight non-salient regions as salient, while the latter are basically edge detectors. So for each image, filtering is performed at three different scales (according to Eq. 3), and the final saliency map is determined as a sum of the saliency values across the scales $S$:

$$m_{i,j} = \sum_{S} c_{i,j} \quad \forall\, i \in [1,w],\; j \in [1,h] \quad (4)$$

where $m_{i,j}$ is an element of the combined saliency map $M$ obtained by point-wise summation of saliency values across the scales.

Fig. 3. From left to right, the original image followed by filtered images. Filtering is done using R1 of size one pixel and varying the width of R2. When R2 has the maximum width, certain non-salient parts are also highlighted (the ground, for instance). It is the saliency maps at the intermediate scales that consistently highlight salient regions. The last three images on the right mainly show edges.
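Equations 1–4 can be sketched compactly: with $R_1$ a single pixel, each pixel's per-scale saliency is the Euclidean distance between its own CIELab vector and the mean vector of the surrounding box $R_2$, and the three per-scale maps are summed. The sketch below is illustrative, not the authors' code: it uses a plain box filter for the $R_2$ mean instead of integral images, includes the center pixel in the $R_2$ average (a negligible difference at these box sizes), and assumes a float array of shape (h, w, 3) already converted to CIELab:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def saliency_map(lab):
    """Multi-scale contrast saliency (sketch of Eqs. 1-4), with R1 = one pixel.
    `lab` is an (h, w, 3) float array assumed to hold CIELab values."""
    h, w = lab.shape[:2]
    side = min(w, h)
    total = np.zeros((h, w))
    for width in (side // 2, side // 4, side // 8):  # three scales in [w/8, w/2]
        # Mean feature vector of the surrounding box R2 at this scale (Eq. 1),
        # approximated with a per-channel box filter.
        mean = uniform_filter(lab, size=(width, width, 1), mode='reflect')
        # Eq. 2: Euclidean distance between the pixel and its neighborhood mean,
        # accumulated across scales per Eq. 4.
        total += np.linalg.norm(lab - mean, axis=2)
    return total

lab = np.random.default_rng(0).random((64, 64, 3))
m = saliency_map(lab)
assert m.shape == (64, 64)          # same resolution as the input
assert float(m.min()) >= 0.0        # distances are non-negative
```

Because each scale's map is computed at full resolution, the summed map keeps the input image's size, which is what permits the simple segment-averaging step in Section 3.2.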
3.2 Whole Object Segmentation using Saliency Maps
The image is over-segmented using a simple K-means algorithm. The K seeds for the K-means segmentation are automatically determined using the hill-climbing algorithm [13] in the three-dimensional CIELab histogram of the image.

Fig. 4. (a) Finding peaks in a histogram using a search window like (b) for a one-dimensional histogram.

The hill-climbing algorithm can be seen as a search window being run across the space of the d-dimensional histogram to find the largest bin within that window. Figure 4 explains the algorithm for a one-dimensional case. Since the CIELab feature space is three-dimensional, each bin in the color histogram has $3^d - 1 = 26$ neighbors, where $d$ is the number of dimensions of the feature space. The number
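The search-window idea of Figure 4 is easiest to see in one dimension: from a starting bin, repeatedly move to the largest bin inside the window until no neighbor is larger; the bin where the climb stops is a peak. A hypothetical 1-D sketch (the paper's version runs the same loop over the 26-neighborhood of the 3-D CIELab histogram, and the distinct peaks reached become the K seeds):

```python
def hill_climb(hist, start, window=1):
    """Climb from bin `start` to a local maximum of `hist`,
    examining `window` bins on each side at every step."""
    pos = start
    while True:
        lo, hi = max(0, pos - window), min(len(hist), pos + window + 1)
        best = max(range(lo, hi), key=lambda i: hist[i])
        if best == pos:          # no larger neighbor in the window: a peak
            return pos
        pos = best               # move the window to the larger bin

hist = [1, 3, 7, 6, 2, 5, 9, 4]
assert hill_climb(hist, 0) == 2  # climbs 0 -> 1 -> 2, stops at the peak (7)
assert hill_climb(hist, 5) == 6  # climbs 5 -> 6, stops at the peak (9)
```

Starting the climb from every non-empty bin and collecting the distinct stopping points yields one seed per histogram peak, so K is chosen by the data rather than fixed in advance.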
