Background Subtraction Techniques
Alan M. McIvor
Reveal Ltd
PO Box 128-221, Remuera, Auckland, New Zealand
Abstract
Background subtraction is a commonly used class of techniques for segmenting out objects of interest in a scene for applications such as surveillance. This paper surveys a representative sample of the published techniques for background subtraction, and analyses them with respect to three important attributes: foreground detection; background maintenance; and postprocessing.
Keywords: background subtraction, surveillance, segmentation
1 Introduction
Background subtraction is a commonly used class of techniques for segmenting out objects of interest in a scene for applications such as surveillance. It involves comparing an observed image with an estimate of the image if it contained no objects of interest. The areas of the image plane where there is a significant difference between the observed and estimated images indicate the location of the objects of interest. The name "background subtraction" comes from the simple technique of subtracting the observed image from the estimated image and thresholding the result to generate the objects of interest.
This paper surveys several techniques which are representative of this class, and compares three important attributes of them: how the object areas are distinguished from the background; how the background is maintained over time; and how the segmented object areas are postprocessed to reject false positives, etc.
Several algorithms were implemented to evaluate their relative performance under a variety of different operating conditions. From this, some conclusions are drawn about what features are important in an algorithm of this class.
2 Heikkilä and Silvén
In [8], a pixel is marked as foreground if

$$|I_t - B_t| > \tau \qquad (1)$$

where $\tau$ is a "predefined" threshold. The thresholding is followed by closing with a 3×3 kernel and the discarding of small regions.
The background update is
$$B_{t+1} = \alpha I_t + (1-\alpha) B_t \qquad (2)$$

where $\alpha$ is kept small to prevent artificial "tails" forming behind moving objects.
Two background corrections are applied (a sketch of the full scheme is given at the end of this section):
1. If a pixel is marked as foreground for more than m of the last M frames, then the background is updated as $B_{t+1} = I_t$. This correction is designed to compensate for sudden illumination changes and the appearance of static new objects.
2. If a pixel changes state from foreground to background frequently, it is masked out from inclusion in the foreground. This is designed to compensate for fluctuating illumination, such as swinging branches.
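A minimal per-pixel sketch of this scheme (NumPy/SciPy, grayscale) is given below. The threshold tau, learning rate alpha, minimum region size, and the per-pixel counter used in place of a true m-of-M window are illustrative assumptions, not the settings of [8].

```python
import numpy as np
from scipy import ndimage

def heikkila_step(frame, background, fg_count, tau=30.0, alpha=0.05, m=100, min_region=50):
    """One frame of a Heikkila/Silven-style detector on float grayscale images."""
    # Equation (1): threshold the absolute difference against the background estimate.
    foreground = np.abs(frame - background) > tau
    # Closing with a 3x3 kernel, then discard small connected regions.
    foreground = ndimage.binary_closing(foreground, structure=np.ones((3, 3)))
    labels, n = ndimage.label(foreground)
    sizes = ndimage.sum(foreground, labels, index=np.arange(1, n + 1))
    small = np.isin(labels, np.flatnonzero(sizes < min_region) + 1)
    foreground[small] = False
    # Equation (2): IIR update with a small alpha to avoid "tails" behind moving objects.
    background = alpha * frame + (1.0 - alpha) * background
    # Correction 1 (approximated): a counter stands in for the m-of-M window; pixels that
    # stay foreground too long are folded back into the background.
    fg_count = np.where(foreground, fg_count + 1, 0)
    stuck = fg_count > m
    background[stuck] = frame[stuck]
    fg_count[stuck] = 0
    return foreground, background, fg_count
```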
3 Adaptive Mixture of Gaussians
Each pixel is modeled separately [3, 9, 10] by a mixture of K Gaussians

$$P(I_t) = \sum_{i=1}^{K} \omega_{i,t}\, \eta(I_t; \mu_{i,t}, \Sigma_{i,t}) \qquad (3)$$

where K = 4 in [9] and 5 in [10]. In [3, 10], it is assumed that $\Sigma_{i,t} = \sigma^2_{i,t} I$.
The background is updated, before the foreground is detected, as follows:

1. If $I_t$ matches component $i$, i.e. $I_t$ is within $\lambda$ standard deviations of $\mu_{i,t}$ (where $\lambda$ is 2 in [3] and 2.5 in [9, 10]), then the $i$th component is updated as follows:

$$\omega_{i,t} = \omega_{i,t-1} \qquad (4)$$

$$\mu_{i,t} = (1-\rho)\,\mu_{i,t-1} + \rho\, I_t \qquad (5)$$

$$\sigma^2_{i,t} = (1-\rho)\,\sigma^2_{i,t-1} + \rho\,(I_t - \mu_{i,t})^T (I_t - \mu_{i,t}) \qquad (6)$$

where $\rho = \alpha \Pr(I_t \mid \mu_{i,t-1}, \Sigma_{i,t-1})$.
2. Components which $I_t$ does not match are updated by

$$\omega_{i,t} = (1-\alpha)\,\omega_{i,t-1} \qquad (7)$$

$$\mu_{i,t} = \mu_{i,t-1} \qquad (8)$$

$$\sigma^2_{i,t} = \sigma^2_{i,t-1} \qquad (9)$$
3. If $I_t$ does not match any component, then the least likely component is replaced with a new one which has $\mu_{i,t} = I_t$, $\Sigma_{i,t}$ large, and $\omega_{i,t}$ low.
After the updates, the weights $\omega_{i,t}$ are renormalised.
The foreground is detected as follows. All components in the mixture are sorted into the order of decreasing $\omega_{i,t} / \|\Sigma_{i,t}\|$. So higher importance gets placed on components with the most evidence and lowest variance, which are assumed to be the background. Let

$$B = \arg\min_b \left( \sum_{i=1}^{b} \omega_{i,t} > T \right) \qquad (11)$$

for some threshold T. The first B components are assumed to be the background. So if $I_t$ does not match one of these components, the pixel is marked as foreground. Foreground pixels are then segmented into regions using connected component labelling. Detected regions are represented by their centroid [11].
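A per-pixel sketch of this mixture scheme, for a single grayscale intensity, might look as follows. The number of components, learning rate, lambda, T, the initial variances, and the first-match selection rule are illustrative assumptions rather than the exact settings of [3, 9, 10].

```python
import numpy as np

class PixelMixture:
    """Per-pixel mixture of K scalar Gaussians; a simplified grayscale sketch."""

    def __init__(self, K=4, alpha=0.01, lam=2.5, T=0.8):
        self.K, self.alpha, self.lam, self.T = K, alpha, lam, T
        self.w = np.full(K, 1.0 / K)          # component weights omega_i
        self.mu = np.linspace(0.0, 255.0, K)  # component means mu_i
        self.var = np.full(K, 900.0)          # component variances sigma_i^2

    def observe(self, x):
        """Update the mixture with intensity x and return True if x is foreground."""
        match = np.abs(x - self.mu) < self.lam * np.sqrt(self.var)  # within lambda sigma
        if match.any():
            i = int(np.argmax(match))                               # first matching component
            rho = self.alpha * self._gauss(x, self.mu[i], self.var[i])
            self.mu[i] = (1 - rho) * self.mu[i] + rho * x           # eq. (5)
            self.var[i] = (1 - rho) * self.var[i] + rho * (x - self.mu[i]) ** 2  # eq. (6)
            others = np.arange(self.K) != i
            self.w[others] *= (1 - self.alpha)                      # eqs. (4) and (7)
        else:
            i = int(np.argmin(self.w / np.sqrt(self.var)))          # least likely component
            self.mu[i], self.var[i], self.w[i] = x, 900.0, 0.05     # replace it
        self.w /= self.w.sum()                                      # renormalise the weights
        # Rank by omega / sigma; the first components whose weights sum past T are background.
        order = np.argsort(-self.w / np.sqrt(self.var))
        b = int(np.searchsorted(np.cumsum(self.w[order]), self.T)) + 1   # eq. (11)
        background = set(order[:b].tolist())
        return not (match.any() and i in background)

    @staticmethod
    def _gauss(x, mu, var):
        return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)
```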
4 Pfinder
Pfinder [13] uses a simple scheme, where background pixels are modeled by a single value, updated by

$$B_t = (1-\alpha) B_{t-1} + \alpha I_t \qquad (12)$$

and foreground pixels are explicitly modeled by a mean and covariance, which are updated recursively. It requires an empty scene at start-up.
5 W4
In [5, 6, 7], a pixel is marked as foreground if

$$|M - I_t| > D \quad \text{or} \quad |N - I_t| > D \qquad (13)$$

where the (per pixel) parameters M, N, and D represent the minimum, maximum, and largest interframe absolute difference observable in the background scene. The parameters are initially estimated from the first few seconds of video and are periodically updated for those parts of the scene not containing foreground objects.
The resulting foreground "image" is eroded to eliminate 1-pixel-thick noise, then connected component labelled and small regions rejected. Finally, the remaining regions are dilated and then eroded.
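The training and detection stages might be sketched as follows; the minimum region size and the single-pass morphology are assumptions, since [5, 6, 7] give the full details.

```python
import numpy as np
from scipy import ndimage

def w4_train(frames):
    """Estimate per-pixel minimum M, maximum N and largest interframe difference D
    from an object-free training sequence (the first few seconds of video)."""
    stack = np.stack(frames).astype(float)
    m = stack.min(axis=0)
    n = stack.max(axis=0)
    d = np.abs(np.diff(stack, axis=0)).max(axis=0)
    return m, n, d

def w4_foreground(frame, m, n, d, min_region=50):
    """Equation (13) followed by the morphological clean-up described in the text."""
    fg = (np.abs(m - frame) > d) | (np.abs(n - frame) > d)
    fg = ndimage.binary_erosion(fg)                  # remove 1-pixel-thick noise
    labels, count = ndimage.label(fg)
    sizes = ndimage.sum(fg, labels, index=np.arange(1, count + 1))
    fg = np.isin(labels, np.flatnonzero(sizes >= min_region) + 1)
    fg = ndimage.binary_erosion(ndimage.binary_dilation(fg))  # dilate, then erode
    return fg
```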
6 LOTS
In [1], three background models are kept simultaneously: a primary, a secondary, and an old background. They are updated as follows (a sketch of the update rules is given at the end of this section):
1. The primary background is updated as

$$B_{t+1} = \alpha I_t + (1-\alpha) B_t \qquad (14)$$

if the pixel is not marked as foreground, and is updated as

$$B_{t+1} = \beta I_t + (1-\beta) B_t \qquad (15)$$

if the pixel is marked as foreground. In the above, $\alpha$ was selected from within the range [0.25], with the default value $\alpha = 0.0078125$, and $\beta = 0.25\alpha$.
2. The secondary background is updated as

$$B_{t+1} = \alpha I_t + (1-\alpha) B_t \qquad (16)$$

at pixels where the incoming image is not significantly different from the current value of the secondary background, where $\alpha$ is as for the primary background. At pixels where there is a significant difference, the secondary background is updated by

$$B_{t+1} = I_t \qquad (17)$$
3. The old background is a copy of the incoming image from 9000 to 18000 frames ago.

Foreground detection is based on adaptive thresholding with hysteresis, with spatially varying thresholds. Several corrections are applied:
1. Small foreground regions are rejected.

2. The number of pixels above threshold in the current frame is compared to the number in the previous frame. A significant change is interpreted as a rapid lighting change. In response, the global threshold is temporarily increased.
3. The pixel values in each foreground region are compared to those in the corresponding parts of the primary and secondary backgrounds, after scaling to match the mean intensity. These eliminate artifacts due to local lighting changes and stationary foreground objects, respectively.
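A sketch of the primary and secondary background updates, equations (14) to (17), is given below; the foreground mask, the "significant difference" test for the secondary model, and its threshold value are assumptions here.

```python
import numpy as np

def lots_update(frame, primary, secondary, fg_mask, alpha=0.0078125, diff_thresh=20.0):
    """One LOTS-style update step; diff_thresh for the secondary model is an assumption."""
    beta = 0.25 * alpha
    # Equations (14)/(15): slow blend everywhere, slower still where foreground is detected.
    rate = np.where(fg_mask, beta, alpha)
    primary = rate * frame + (1.0 - rate) * primary
    # Equations (16)/(17): blend where the frame is close to the secondary model,
    # otherwise replace the secondary value outright.
    close = np.abs(frame - secondary) < diff_thresh
    secondary = np.where(close, alpha * frame + (1.0 - alpha) * secondary, frame)
    return primary, secondary
```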
7 Halevy
In [4], the background is updated by

$$B_{t+1} = \alpha S(I_t) + (1-\alpha) B_t \qquad (18)$$

at all pixels, where $S(I_t)$ is a smoothed version of $I_t$. Foreground pixels are identified by tracking the maxima of $S(I_t - B_t)$, as opposed to thresholding. They use $\alpha = [0.5]$ and rely on the streaking effect to help in determining correspondence between frames.

They also note that $(1-\alpha)^t < 0.1$ gives an indication of the number of frames t needed for the background to settle down after initialisation.
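The settling note gives a concrete frame count once alpha is fixed; the sketch below also shows (18) with a Gaussian blur standing in for the unspecified smoother S, and both the blur width and the use of alpha = 0.5 in the example are assumptions.

```python
import math
import numpy as np
from scipy import ndimage

def settle_frames(alpha, tol=0.1):
    """Smallest t with (1 - alpha)^t < tol, from the note in [4]."""
    return math.ceil(math.log(tol) / math.log(1.0 - alpha))

def halevy_update(frame, background, alpha=0.5, sigma=2.0):
    """Equation (18) with a Gaussian blur standing in for the smoother S."""
    smoothed = ndimage.gaussian_filter(frame.astype(float), sigma)
    return alpha * smoothed + (1.0 - alpha) * background

# e.g. settle_frames(0.5) == 4, so the background settles within a few frames.
```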
8 Cutler
In [2], colour images are used because it is claimed to give better segmentation than monochrome, especially in low contrast areas, such as objects in dark shadows.
The background estimate is defined to be the temporal median of the last N frames, with typical values of N ranging from 50 to 200.
Pixels are marked as foreground if

$$\sum_{C \in \{R,G,B\}} |I_t(C) - B_t(C)| > K\sigma \qquad (19)$$

where $\sigma$ is an offline generated estimate of the noise standard deviation, and K is an a priori selected constant (typically 10).
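A sketch of the median background and the test in (19) follows; the noise estimate sigma, the frame count, and the channel-sum form of the distance are assumptions here.

```python
import numpy as np

def cutler_foreground(frames, frame, k=10.0, sigma=2.0):
    """Temporal-median background over the last N colour frames, then the per-pixel test (19)."""
    background = np.median(np.stack(frames), axis=0)            # H x W x 3 median image
    distance = np.abs(frame.astype(float) - background).sum(axis=2)
    return distance > k * sigma
```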
This method also uses template matching to help in selecting candidate matches.
9 Wallflower
In [12], two autoregressive background models are used:

$$B_t = -\sum_{k=1}^{p} a_k B_{t-k} \qquad (20)$$

$$\hat{I}_t = -\sum_{k=1}^{p} a_k I_{t-k} \qquad (21)$$
along with a background threshold

$$E(e_t^2) = E(B_t^2) + \sum_{k=1}^{p} a_k E(B_t B_{t-k}) \qquad (22)$$

$$\tau = 4\sqrt{E(e_t^2)} \qquad (23)$$

Pixels are marked as background if

$$|I_t - B_t| < \tau \quad \text{and} \quad |I_t - \hat{I}_t| < \tau \qquad (24)$$

The coefficients $a_k$ are updated each frame time from the sample covariances of the observed background values. In the implementation, the last 50 values are used to estimate 30 parameters.
If more than 70% of the image is classified as foreground, the model is abandoned and replaced with a "back-up" model.
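A single-pixel sketch of the predictor in (20) to (24) follows; a least-squares fit over the stored history stands in for the covariance-based coefficient update of [12], and only the first half of the test in (24) is shown.

```python
import numpy as np

def wallflower_pixel(history, x, p=30, window=50):
    """One-pixel sketch of the Wallflower linear predictor.
    history: recent background values for this pixel (newest last)."""
    h = np.asarray(history[-window:], dtype=float)
    # Fit h[t] ~ -sum_k a_k h[t-k] by least squares over the stored window.
    rows = [h[t - p:t][::-1] for t in range(p, len(h))]
    X, y = np.array(rows), h[p:]
    c = np.linalg.lstsq(X, y, rcond=None)[0]
    a = -c                                                # coefficients a_k
    b_pred = -np.dot(a, h[-p:][::-1])                     # eqs. (20)/(21)
    residuals = y - X @ c
    tau = 4.0 * np.sqrt(np.mean(residuals ** 2))          # eq. (23), via the sample error
    return abs(x - b_pred) < tau                          # background test (part of eq. 24)
```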
10 Discussion
Many other algorithms, which have not been discussed here, assume that the background does not vary and hence can be captured a priori. This limits their usefulness in most practical applications. Very few of the papers describe their algorithms in sufficient detail to be able to easily reimplement them.
A significant number of the described algorithms use a simple IIR filter, applied to each pixel independently, to update the background, e.g. (2), and use thresholding to classify pixels into foreground/background. This is followed by some postprocessing to correct classification failures.
In [4], it was noted that the performance of the method in Section 7 was found to degrade if more than one secondary background was used. It was postulated that this is because it introduces a greater range of values that a pixel can take on without being marked as foreground. However, the adaptive mixture of Gaussians approach operates effectively with even more component models. From this it can be seen that using more models is beneficial only if, by adding them, the spread (e.g. variance) of the individual components gets reduced such that the net range of background values actually decreases.
The Wallflower algorithm requires the storage of over 130 images, many of which are float valued. It requires significant statistical analysis per pixel per frame to adapt the coefficients.
References
[1] T.E. Boult, R. Micheals, X. Gao, P. Lewis, C. Power, W. Yin and A. Erkan: Frame-rate omnidirectional surveillance and tracking of camouflaged and occluded targets. In: Second IEEE Workshop on Visual Surveillance, Fort Collins, Colorado (Jun. 1999) pp. 48-55.

[2] R. Cutler and L. Davis: View-based detection and analysis of periodic motion. In: International Conference on Pattern Recognition, Brisbane, Australia (Aug. 1998) pp. 495-500.

[3] W.E.L. Grimson, C. Stauffer, R. Romano and L. Lee: Using adaptive tracking to classify and monitor activities in a site. In: Computer Vision and Pattern Recognition, Santa Barbara, California (Jun. 1998) pp. 1-8.

[4] G. Halevy and D. Weinshall: Motion of disturbances: detection and tracking of multi-body non-rigid motion. Machine Vision and Applications 11 (1999) 122-137.

[5] I. Haritaoglu, R. Cutler, D. Harwood and L.S. Davis: Backpack: Detection of people carrying objects using silhouettes. In: International Conference on Computer Vision (1999) pp. 102-107.

[6] I. Haritaoglu, D. Harwood and L.S. Davis: W4: Who? When? Where? What? A real-time system for detecting and tracking people. In: Third Face and Gesture Recognition Conference (Apr. 1998) pp. 222-227.

[7] I. Haritaoglu, D. Harwood and L.S. Davis: Hydra: Multiple people detection and tracking using silhouettes. In: Second IEEE Workshop on Visual Surveillance, Fort Collins, Colorado (Jun. 1999) pp. 6-13.

[8] J. Heikkilä and O. Silvén: A real-time system for monitoring of cyclists and pedestrians. In: Second IEEE Workshop on Visual Surveillance, Fort Collins, Colorado (Jun. 1999) pp. 74-81.

[9] Y. Ivanov, C. Stauffer, A. Bobick and W.E.L. Grimson: Video surveillance of interactions. In: Second IEEE Workshop on Visual Surveillance, Fort Collins, Colorado (Jun. 1999) pp. 82-90.

[10] C. Stauffer and W.E.L. Grimson: Adaptive background mixture models for real-time tracking. In: Computer Vision and Pattern Recognition, Fort Collins, Colorado (Jun. 1999) pp. 246-252.

[11] G.P. Stein: Tracking from multiple view points: Self-calibration of space and time. In: Computer Vision and Pattern Recognition, Fort Collins, Colorado (Jun. 1999) pp. 521-527.

[12] K. Toyama, J. Krumm, B. Brumitt and B. Meyers: Wallflower: Principles and practice of background maintenance. In: International Conference on Computer Vision (1999) pp. 255-261.

[13] C. Wren, A. Azarbayejani, T. Darrell and A. Pentland: Pfinder: Real-time tracking of the human body. IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (1997) 780-785.