Interactive (De)Weathering of an Image using Physical Models*
Srinivasa G. Narasimhan and Shree K. Nayar, Computer Science Dept., Columbia University, New York, NY 10027. E-mail: {srinivas,nayar}@cs.columbia.edu
Abstract
Images of scenes acquired in bad weather have poor contrasts and colors. It is known that the degradation of image quality due to bad weather is exponential in the depths of the scene points. Therefore, restoring scene colors and contrasts from a single image of the scene is inherently under-constrained. Recently, it has been shown that multiple images of the same scene taken under different weather conditions, or multiple images taken by varying the imaging optics, can be used to break the ambiguities in deweathering. In this paper, we address the question of deweathering a single image using simple additional information provided interactively by the user. We exploit the physics-based models described in prior work and develop three interactive algorithms to remove weather effects from, and add weather effects to, a single image. We demonstrate effective color and contrast restoration using several images taken under poor weather conditions. Furthermore, we show an example of adding weather effects to images. Our interactive methods for image (de)weathering can serve as easy-to-use plug-ins for a variety of image processing software.

1 Need for Interactive Deweathering

Images taken under bad weather conditions such as fog, mist, rain and snow suffer from poor contrasts and severely corrupted colors. In bad weather, the radiance from a scene point is significantly altered due to atmospheric scattering. The amount of scattering depends on the distances of scene points from the observer. Therefore, restoring clear-day contrasts and colors of a scene from a single image taken in bad weather is inherently under-constrained.
Recently, there has been considerable research in the vision and image processing communities on color and contrast restoration in bad weather. Deweathering an image has been demonstrated when accurate scene depths are known [6, 8] and when precise information about the atmospheric condition is known [1]. In computer vision, algorithms have been developed to compute scene structure and restore scene contrasts and colors automatically, without requiring any information about the atmosphere or scene depths. These algorithms break the ambiguities that exist in deweathering by using multiple images of the same scene taken under different weather conditions [3, 4] or multiple images acquired by varying the imaging optics [7].

*This work was supported by a DARPA HumanID Contract (N000-14-00-1-0916) and an NSF Award (IIS-99-87979).
In this work, we address the question of how to deweather a single image of a scene without using precise weather or depth information. Recall that previous work showed that multiple images of the same scene are necessary to break the ambiguities in deweathering. However, in many cases, it may not be possible to acquire multiple images. For instance, today, there are millions of pictures corrupted by weather, taken by amateur and professional photographers, with virtually no information about the depths or the atmosphere tagged to them. Very often, all we may have is a single photograph of a scene that we wish to deweather. In such cases, we will show that minimal additional input from the user can successfully break the ambiguities in deweathering an image.
We begin by reviewing two scattering models [3, 4] that describe the colors and contrasts of a scene under bad weather conditions. Based on these models, we then present three algorithms to interactively deweather a single image. In all the cases, the user provides simple inputs through a visual interface to our physics-based algorithms for restoring the contrasts and colors of the scene. The types of input (for instance, the approximate direction in which scene depths increase, a rough depth segmentation, or a region of good color fidelity) may vary from scene to scene, but are easy for a human user to provide. We also use similar interactive methods to add physically-based weather effects to images.
We show several results that illustrate effective deweathering of both color and gray-scale images captured under harsh weather conditions. Our algorithms do not require precise information about scene structure or atmospheric condition and can thus serve as easy-to-use plug-ins for existing image processing software, such as Adobe Photoshop™. We believe that our interactive methods will make (de)weathering widely applicable.
2 Colors and Contrasts in Bad Weather

In this section, we review two single scattering models that describe the colors and contrasts of scene points in bad weather. These models are used in our interactive methods to deweather, and to add weather to, images.
Figure 1: Dichromatic atmospheric scattering model. The color E of a scene point on a foggy or hazy day is a linear combination of the direction D̂ of the direct transmission (clear day) color and the direction Â of the airlight (fog or haze) color.
The dichromatic atmospheric scattering model [3] states that the color E of a scene point in fog or haze, observed by a color camera, is given by a linear combination of the direction Â of the airlight (fog or haze) color and the direction D̂ of the color of the scene point as seen on a clear day (see the illustration in figure 1). Mathematically,
E = p D̂ + q Â,
p = R e^{−βd},
q = E∞ (1 − e^{−βd}),     (1)

where E∞ is the sky brightness, R is the radiance of the scene point on a clear day, β is the scattering coefficient of the atmosphere, and d is the depth of the scene point. Note that the dichromatic model assumes that the scattering coefficient β is the same for all the color channels. Also, observe that the problem of deweathering an image, by computing the clear day colors p D̂ solely from the observed color vectors E, is severely under-constrained. The contrast or monochrome model [4] gives a mathematical expression for the intensity E of a scene point in bad weather, as recorded by a monochrome camera:

E = R e^{−βd} + E∞ (1 − e^{−βd}).     (2)

As can be seen from both models, the color and contrast of a scene point degrade exponentially with its depth from the observer¹. Hence, traditional space-invariant techniques for color and contrast enhancement cannot be used to satisfactorily deweather images. In the following sections, we describe our interactive techniques for image deweathering using simple inputs from the user.
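As a concrete illustration, the forward (weather-adding) direction of the monochrome model in equation (2) can be sketched in a few lines of NumPy. The function name and interface are our own, not part of the paper; it simply evaluates E = R e^{−βd} + E∞(1 − e^{−βd}) per pixel.

```python
import numpy as np

def add_fog(radiance, depth, beta, sky_brightness):
    """Apply the monochrome weather model of equation (2):
    E = R * exp(-beta * d) + E_inf * (1 - exp(-beta * d)).

    radiance       : clear-day image R (H x W, or H x W x 3)
    depth          : per-pixel depth d (H x W)
    beta           : scattering coefficient of the atmosphere
    sky_brightness : sky brightness E_inf
    """
    t = np.exp(-beta * depth)        # direct transmission factor
    if radiance.ndim == 3:           # broadcast over color channels
        t = t[..., None]
    return radiance * t + sky_brightness * (1.0 - t)
```

In the limits, β = 0 returns the clear-day image unchanged, while large βd drives every pixel toward the sky brightness, matching the exponential degradation described above.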
¹Note that the models are based on single scattering and hence are not valid for turbulence, aerosol blurring, and scattering by pollutants. If the atmosphere is non-homogeneous along the line of sight, β will be a function of the depth d. Then the scaled depth βd is replaced by the optical thickness T = ∫_0^d β(x) dx.
3 Dichromatic Color Transfer
Consider a scene with points at different depths but with similar clear day colors. For instance, trees at different distances, or buildings at different depths, may have similar color directions (although their magnitudes may vary) on a clear day. In this scenario, the colors of near scene points are less corrupted by bad weather than those of distant scene points. We now describe an algorithm to transfer colors from nearby regions to replace the colors of regions that are most affected by bad weather, in a physically consistent manner. In other words, we impose constraints based on the dichromatic model (1) to select colors of near scene points to replace colors of far scene points.
3.1 Interactive Step
Only two manual inputs are necessary for the color transfer algorithm. First, we select a nearby "good" region in the image, where colors D are not corrupted (or are minimally altered) by bad weather, as shown by the white rectangle in figure 2(a). Then, we mark a region (say, the sky) that most resembles the color of airlight, as shown by the black rectangle in figure 2(a)². The average color within this region is computed to estimate the direction Â of the airlight color.
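The airlight estimate above amounts to averaging the marked pixels and normalizing. A minimal sketch (the function name and rectangle convention are our own assumptions):

```python
import numpy as np

def airlight_direction(image, region):
    """Estimate the unit airlight color direction A-hat from a
    user-selected sky-like region (section 3.1).

    image  : H x W x 3 RGB image
    region : (row0, row1, col0, col1) bounds of the marked rectangle
    """
    r0, r1, c0, c1 = region
    # Average color of the marked region, then normalize to a unit direction.
    mean_color = image[r0:r1, c0:c1].reshape(-1, 3).mean(axis=0)
    return mean_color / np.linalg.norm(mean_color)
```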
3.2 Automated Step
For each pixel, with color E_i, in the weather-affected region, we search for the best matching color in the "good" region. The search is restricted to the set of pixels in the "good" region that satisfy the dichromatic planarity constraint (1),

E_i · (D̂ × Â) = 0.

From this set, we choose the pixel whose color D̂_i is farthest (in terms of angle) from the fog color Â, using min{D̂ · Â}. In order to compute the magnitude of the color used to replace the pixel E_i, we use the dichromatic model (1) to decompose the scene color E_i into two components:

E_i = p D̂_i + q Â.

Finally, we replace the color E_i of the pixel by the deweathered color, p D̂_i. Note that the ambiguities in the dichromatic model are broken due to the presence of similarly colored scene points at different distances.
This algorithm does not require any information regarding scene depths or atmospheric conditions. Further, it does not assume homogeneity of the atmosphere over the entire field of view. The result of applying this method is shown in figure 2(b). Notice the significant change in the colors of the far bushes.

²If such a region does not exist in the image, the user may provide the hue of the sky and assume the sky intensity to be the maximum intensity in the image. Another way of computing the color of airlight is by intersecting the dichromatic planes of two different user-provided scene colors [3].

Figure 2: Color correction by dichromatic color transfer. (a) Input misty image consisting of green bushes at different distances. A region of "good" color is marked in the white rectangle; a region that most resembles the color of mist is marked in the black rectangle. (b) Colors from the near "good" region are transferred to farther regions. Notice the bluish colors of the farther bushes replaced by greenish colors.
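The automated step of section 3.2 can be sketched for a single pixel as follows. This is our own illustrative implementation of the planarity test, the min{D̂ · Â} selection, and the decomposition E_i = p D̂_i + q Â; the tolerance and interface are assumptions.

```python
import numpy as np

def transfer_color(E_i, good_colors, A_hat, plane_tol=1e-2):
    """One step of the dichromatic color transfer (section 3.2),
    under the model E = p*D_hat + q*A_hat of equation (1).

    E_i         : observed color of a weather-affected pixel (3-vector)
    good_colors : N x 3 unit color directions from the "good" region
    A_hat       : unit airlight color direction
    Returns the deweathered color p * D_hat_i.
    """
    # Keep only candidates on the dichromatic plane: E_i . (D_hat x A_hat) = 0.
    normals = np.cross(good_colors, A_hat)            # N x 3
    candidates = good_colors[np.abs(normals @ E_i) < plane_tol]
    if len(candidates) == 0:
        return E_i                                    # nothing to transfer
    # Pick the candidate farthest (in angle) from the airlight color,
    # i.e. the one minimizing D_hat . A_hat.
    D_hat = candidates[np.argmin(candidates @ A_hat)]
    # Decompose E_i = p*D_hat + q*A_hat via least squares on a 3x2 system.
    M = np.stack([D_hat, A_hat], axis=1)              # 3 x 2
    p, q = np.linalg.lstsq(M, E_i, rcond=None)[0]
    return p * D_hat
```

Applied over all pixels of the weather-affected region, this replaces each color by its direct-transmission component, as described above.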
4 Deweathering using Depth Heuristics
A limitation of the color transfer method is that not all colors in the weather-affected region may have corresponding colors in the "good" color region. In this section, we describe deweathering using heuristics on scene depths. Note that subtle weather effects within small depth ranges are not captured by a camera with limited dynamic range (say, 8 bits). Therefore, precise distances are not required for effective deweathering. Moreover, in many cases, it may be possible to input approximate "trends" in the depths of scene points (say, the direction of increasing depths). For instance, a scene with a street along the viewing direction is common in surveillance or tracking scenarios (see figure 6). The deweathering algorithm is detailed below.
4.1 Interactive Step
We select a region of the sky to obtain the sky intensity E∞ (and the sky color direction Â, if the input is a color image). Then, the "depth trend" is interactively specified in the following manner. First, we input the approximate location of a vanishing point along the direction of increasing distance in the image (see the red circle in figure 4). The distances of the scene points are inversely related to their image distances to the vanishing point. Next, we input the approximate minimum and maximum distances and interpolate the distances (say, using a linear or quadratic function) for the points in between. For illustration purposes, we used

d = d_min + α (d_max − d_min),     (3)

where α ∈ (0, 1) is the fractional image distance from a pixel to the vanishing point. For d = d_max, α = 1, and for d = d_min, α = 0. The resulting depth trend is shown in figure 3(a).
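A minimal sketch of this interactive step, evaluating equation (3) literally over the image grid, might look as follows. The linear ramp and the normalization of α are our own illustrative choices; the paper also allows a quadratic interpolant.

```python
import numpy as np

def depth_trend(shape, vanishing_point, d_min, d_max):
    """Interpolated depth map from a user-clicked vanishing point,
    following equation (3): d = d_min + alpha * (d_max - d_min),
    with alpha the fractional image distance to the vanishing point.

    shape           : (H, W) of the image
    vanishing_point : (row, col) of the user-clicked vanishing point
    """
    H, W = shape
    rows, cols = np.mgrid[0:H, 0:W]
    vr, vc = vanishing_point
    dist = np.hypot(rows - vr, cols - vc)   # pixel distance to vanishing point
    alpha = dist / dist.max()               # normalize to [0, 1]
    return d_min + alpha * (d_max - d_min)
```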
4.2 Automated Step
Consider the model given in equation (2). At every pixel, the depth estimate d is known, and the sky brightness E∞ is measured. Generally, the atmospheric condition remains constant (or varies slowly) over the small distance ranges and fields of view that are relevant to computer vision applications. If we assume homogeneity of the atmosphere, then the scattering coefficient β is constant for all pixels in the image. Note that different values of the scattering coefficient β produce the effects of different densities of bad weather (moderate, heavy, etc.). Thus, by continuously changing β (imagine a slider in Adobe Photoshop™), we can progressively estimate the clear day radiance R at each pixel as

R = [E − E∞ (1 − e^{−βd})] e^{βd}.     (4)
Similarly, note that the dichromatic model (equation 1) can be used to restore colors in an RGB image. Therefore, while the color transfer method can be applied only to color images, this method can be applied to both color and gray-scale images. In this case, the homogeneity of the atmosphere breaks the ambiguity in deweathering an image. The results shown in figures 4, 5 and 6 illustrate that approximate depth information can be used effectively for image deweathering.
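The slider-driven inversion of equation (4) is straightforward to sketch; the clipping of negative radiances is our own safeguard, not something the paper specifies.

```python
import numpy as np

def deweather(E, depth, beta, sky_brightness):
    """Invert the monochrome model per equation (4):
    R = [E - E_inf * (1 - exp(-beta*d))] * exp(beta*d).
    Changing beta continuously (a Photoshop-style slider) removes
    progressively more of the weather effect.
    """
    t = np.exp(-beta * depth)
    if E.ndim == 3:                  # broadcast over color channels
        t = t[..., None]
    R = (E - sky_brightness * (1.0 - t)) / t
    return np.clip(R, 0.0, None)     # radiance cannot be negative
```

For a homogeneous atmosphere the inversion exactly undoes the forward model: fogging a known radiance with some β and inverting with the same β recovers the original values.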
5 Restoration using Planar Depth Segments
In the previous section, we described an interactive technique where depth trends can be followed. However,
Figure 3: Depth heuristics used to deweather the images shown in figures 4, 5 and 6, respectively. The vanishing point corresponding to the direction of increasing distances is marked. Approximate minimum and maximum distances are input to the algorithm, and the intermediate distances are interpolated.
The depths are not used for sky regions (empty spaces).
Figure 4: Restoring clear day scene colors using depth heuristics. (a) Input image captured in mist. The colors and contrasts of scene points, especially in farther regions, are corrupted severely. (b) Output: two images illustrating different amounts of mist removed from (a), obtained by choosing different values for β and using the depth "trend" shown in figure 3(a). (c) Zoomed-in regions selected from (a) at different depths, showing different amounts of mist. (d) Corresponding zoomed-in regions of the deweathered images, showing contrast and color restoration. Notice the significant color and contrast enhancement.
Figure 5: Restoring clear day scene colors using depth heuristics. (a) Input image captured in mist. The colors and contrasts of scene points, especially in farther regions, are corrupted severely. (b) Output: two images illustrating different amounts of mist removed from (a), obtained by choosing different values for β and using the depth "trend" shown in figure 3(b). (c) Zoomed-in regions selected from (a) at different depths, showing different amounts of mist. (d) Corresponding zoomed-in regions of the deweathered images, showing contrast and color restoration. Notice the significant color and contrast enhancement.
urban scenes with strong depth discontinuities and severe occlusions (induced by different buildings) are not suitable for the previous approach, where depth trends were smoothly interpolated. In such cases, it is better to provide a rough depth segmentation of the scene. Recall that precise depth information is not needed to deweather images. For instance, the brightness levels of fog for a frontal planar surface are approximately equal to the brightness levels for a curved surface at the same distance. Thus, planar depth segments should suffice for deweathering in urban scenes (see figure 7(b)).
The deweathering algorithm is similar to the one presented in the previous section. The depths, however, are provided as approximate planes. In our experiments, we used images from the Columbia Weather and Illumination Database (WILD) [2]. Orthographic depths were obtained from satellite orthophotos (see figure 7(b)). Once again, the sky brightness E∞ was measured by selecting a region of the sky. The images can be deweathered by computing the clear day scene radiances R or colors p D̂, depending on whether a gray-scale or a color image is input to the algorithm. Results of deweathering a misty scene are shown in figure 7(c). Notice the significant increase in the contrasts of the scene points at various depths. In summary, the above results demonstrate that weather effects can be sufficiently removed from images even when only approximate depths are known.
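The per-segment variant can be sketched by inverting equation (4) once per planar segment. The label-map convention (one integer label per segment, labels absent from the depth dictionary left untouched, e.g. sky) is our own assumption for illustration.

```python
import numpy as np

def deweather_segments(E, segment_labels, segment_depths, beta, sky_brightness):
    """Deweather a gray-scale image using a rough planar depth
    segmentation (section 5): each segment gets one approximate depth,
    and equation (4) is inverted per segment.

    E              : weather-degraded image (H x W)
    segment_labels : integer label per pixel (H x W)
    segment_depths : dict mapping label -> approximate plane depth
    """
    R = E.astype(float).copy()
    for label, d in segment_depths.items():
        mask = segment_labels == label
        t = np.exp(-beta * d)
        # Pixels whose label is not in segment_depths (e.g. sky) stay as-is.
        R[mask] = (E[mask] - sky_brightness * (1.0 - t)) / t
    return np.clip(R, 0.0, None)
```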
Figure 6: Restoring clear day scene contrasts using depth heuristics. (a) Input gray-scale image captured in fog. The contrasts of scene points, especially in farther regions, are degraded severely. (b) Output: two images illustrating different amounts of fog removed from (a), obtained by choosing different values for β and using the depth "trend" shown in figure 3(c). (c) Zoomed-in regions selected from (a) at different depths, showing different amounts of fog. (d) Corresponding zoomed-in regions of the deweathered images, showing contrast restoration. Notice the significant contrast enhancement.