
Vol.37 No.S
Transactions of Nanjing University of Aeronautics and Astronautics, Nov. 2020

Multi-Sensors Image Fusion via NSCT and GoogLeNet

LI Yangyu, WANG Caiyun*, YAO Chen
College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, P. R. China
(Received 17 June 2020; revised 10 September 2020; accepted 20 September 2020)
Abstract: In order to improve the detail preservation and target information integrity of different-sensor fusion images, an image fusion method for different sensors based on the non-subsampled contourlet transform (NSCT) and the GoogLeNet neural network model is proposed. First, the different-sensor images, i.e., infrared and visible images, are transformed by NSCT to obtain a low frequency sub-band and a series of high frequency sub-bands, respectively. Then, the high frequency sub-bands are fused with the maximum regional energy selection strategy, the low frequency sub-bands are input into the GoogLeNet neural network model to extract feature maps, and the fusion weight matrices are adaptively calculated from the feature maps. Next, the fused low frequency sub-band is obtained by weighted summation. Finally, the fused image is obtained by inverse NSCT. The experimental results demonstrate that the proposed method improves the image visual effect and achieves better performance in both edge retention and mutual information.
Key words: image fusion; non-subsampled contourlet transform; GoogLeNet neural network; infrared image; visible image
CLC number: TN911.73    Document code: A    Article ID: 1005-1120(2020)S-0088-07
0 Introduction

Image fusion technology is an effective method to fuse images of the same scene acquired by different sensors into a single image with a clear target and background [1]. The technology is widely used in remote sensing, military, and medical image processing. Infrared and visible images are typical products of different sensors. A visible light sensor uses reflected light to form images with clear texture, but its performance is poor at night and in complex weather conditions, so targets are easily missed. On the contrary, infrared sensors form images based on an object's own thermal radiation, so targets are detected more reliably in infrared images. However, due to the limitations of the infrared imaging mechanism, the definition is low and edges are blurred. Therefore, by exploiting the complementarity and redundancy of images obtained by different sensors, we can combine the rich background texture information in the visible image with the target information in the infrared image to generate fused images that have good target information and clear background information.
Image fusion methods have developed from simple weighted averaging in the spatial domain and methods based on principal component analysis, to image fusion based on multi-scale transformation and other multi-source image fusion methods based on sparse representation (SR), saliency, deep learning, and so on. The image fusion method using the theory of multi-scale transformation (MST) has gradually become a classic approach due to its simple but effective and practical algorithm. Han et al. [2] and Wang et al. [3] researched infrared and visible image fusion based on the discrete wavelet transform (DWT). Lewis et al. [4] improved the DWT-based method and proposed a fusion method based on the dual-tree complex wavelet transform (DTCWT). Because the non-subsampled contourlet transform (NSCT) can capture directional features and edge information, NSCT-based fusion methods [5-7] became a research focus. SR-based fusion methods are used in multi-focus image fusion; Yang et al. [8] and Liu et al. [9] applied this approach to achieve good results. The saliency-based fusion method [10] extracts the salient features of the source image, divides the image into a target saliency map and a non-saliency map, and fuses the two parts separately. Other methods, such as gradient transfer fusion (GTF) [11] and multi-scale singular value decomposition (MSVD) [12], have been under research recently. The deep learning method [13] is a kind of weighted fusion method in which the weight matrix values depend on feature maps extracted from the source images with a neural network.

*Corresponding author, E-mail address: *******************.
How to cite this article: LI Yangyu, WANG Caiyun, YAO Chen. Multi-sensors image fusion via NSCT and GoogLeNet [J]. Transactions of Nanjing University of Aeronautics and Astronautics, 2020, 37(S): 88-94. DOI: 10.16356/j.1005-1120.2020.S.011
In this paper, we present a novel and effective fusion method via NSCT and the GoogLeNet neural network for infrared and visible image fusion.
1 Preliminaries

NSCT was proposed by Cunha et al. [14] in 2006; its decomposed structure is shown in Fig.1. NSCT uses the non-subsampled pyramid decomposition (NSP) and the non-subsampled directional filter banks (NSDFB) to obtain the decomposition coefficients of different scales and directions of the source images.

NSCT is a kind of multiscale analysis tool, and multiscale analysis has proved very effective for image fusion and other image processing tasks. NSCT fusion methods can be summed up in three steps: (1) Obtain the low frequency and high frequency sub-bands in the transform domain by decomposing the source images; (2) design fusion rules and fuse the low frequency and high frequency sub-bands respectively; (3) obtain the fused image by the corresponding inverse transform using the fused low frequency and high frequency sub-bands. The image fusion framework based on multiscale analysis involves two basic problems: the selection of the multiscale decomposition method and the strategy for multiscale coefficient fusion. Most image features adopted by existing fusion strategies are simple, and the fused image effect is not good enough. If we can obtain features that express the source image better, we can design a more suitable fusion strategy.
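The three-step framework can be sketched in code. Since NSCT has no standard Python implementation, the sketch below substitutes a single-level box-blur low/high split as a hypothetical stand-in for the multiscale decomposition; the function names and the simple averaging/absolute-maximum fusion rules are illustrative placeholders, not the rules proposed in this paper.

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box blur: a hypothetical stand-in for the low-pass stage of NSCT."""
    pad = k // 2
    padded = np.pad(np.asarray(img, dtype=float), pad, mode="reflect")
    kernel = np.ones(k) / k
    # Blur along rows, then along columns.
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, tmp)

def decompose(img):
    """Step (1): split into a low frequency band and a high frequency residual."""
    low = box_blur(img)
    return low, img - low

def fuse_images(ir, vis):
    """Steps (2)-(3): fuse each band, then reconstruct by summing the fused bands."""
    low_ir, high_ir = decompose(ir)
    low_vis, high_vis = decompose(vis)
    low_f = 0.5 * (low_ir + low_vis)      # placeholder rule; the paper uses adaptive weights
    high_f = np.where(np.abs(high_ir) >= np.abs(high_vis), high_ir, high_vis)
    return low_f + high_f                 # the "inverse transform" for this two-band split
```

For a real NSCT the decomposition would produce one low frequency band and several directional high frequency bands per scale, but the three-step control flow is identical.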
Deep learning methods perform well in image target recognition tasks, due to the excellent feature extraction and description capabilities of each layer of a neural network. The image features extracted by a neural network are a richer and more accurate representation of image information than hand-crafted features. Among neural networks, the GoogLeNet neural network [15] is a model proposed by Szegedy et al. It reduces the number of training parameters and the risk of over-fitting when the training data set is limited, and also improves classification accuracy. Compared with other network models, the GoogLeNet neural network model has a depth of only 22 layers. The feature graph of each layer contains contour and gradient features, which is the key to its high classification accuracy.

Fig.1 Decomposed structure of NSCT
As analyzed above, the strategy for multi-scale coefficient fusion determines the quality of the fusion image, and the feature maps of GoogLeNet, trained on ImageNet, can express the source image precisely. We therefore design the fusion strategy for the low frequency sub-bands with GoogLeNet feature maps. The fusion strategy is adaptive and comprehensive.
2 The Proposed Fusion Method

This paper presents a new method for infrared and visible image fusion, as shown in Fig.2.

2.1 Decomposition with NSCT

Suppose that there are K preregistered source images; in our paper we choose K = 2, but the fusion strategy is the same for K > 2. Here, we denote the visible source image as I_V(x, y) and the infrared source image as I_R(x, y). As mentioned above, NSCT is an effective decomposition method, so we use it to decompose the source images.
The low frequency sub-band images L_J^V(x, y) and L_J^R(x, y) and the high frequency sub-band images H_{j,r}^V(x, y) and H_{j,r}^R(x, y) can be obtained by Eqs.(1) and (2)

I_V(x, y) = L_J^V(x, y) + Σ_{j=1}^{J} Σ_r H_{j,r}^V(x, y)    (1)

I_R(x, y) = L_J^R(x, y) + Σ_{j=1}^{J} Σ_r H_{j,r}^R(x, y)    (2)

where j and r represent the scale and direction number of the decomposition.

2.2 Adaptive fusion strategy with GoogLeNet

The low frequency sub-band images contain the basic contour information, which carries most of the energy of the source image. The processing of the low frequency sub-band images therefore affects the final fusion result. In our paper, this part of the fusion strategy is designed as follows.
(1) Input the low frequency sub-band images L_J^V(x, y) and L_J^R(x, y) into the GoogLeNet network model, trained on the ImageNet set. Using the feature extraction function of the network model, feature graphs Fea_V^i and Fea_R^i at different depths of the neural network are obtained, where i = 1, 2, …, n; Fea_V^i corresponds to the visible image, and Fea_R^i corresponds to the infrared image.

Fig.2 Framework of the proposed method
(2) Calculate the maximum value of the pixel points corresponding to the n feature graphs with the l1-norm; then we obtain the feature map Fea_R of the infrared low frequency sub-band image and the feature map Fea_V of the visible low frequency sub-band image.

(3) Extend the feature maps to the size of the source image by up-sampling interpolation. Next, the feature matrices which represent the image contour feature information are obtained, and the corresponding weight matrices W_R and W_V are calculated from the feature matrices by Eqs.(3) and (4)

W_R(x, y) = Fea_R(x, y) / (Fea_R(x, y) + Fea_V(x, y))    (3)

W_V(x, y) = Fea_V(x, y) / (Fea_R(x, y) + Fea_V(x, y))    (4)
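Steps (2) and (3) and Eqs.(3) and (4) can be sketched as follows, assuming the GoogLeNet feature graphs are already available as a channels-first NumPy array. The helper names are hypothetical, nearest-neighbour resampling stands in for the unspecified interpolation method, and the small epsilon guarding against division by zero is an addition the paper does not discuss.

```python
import numpy as np

def feature_strength(feature_maps):
    """Step (2): collapse n feature graphs (shape: n x h x w) into one activity
    map via the l1-norm across channels."""
    return np.sum(np.abs(feature_maps), axis=0)

def upsample_nearest(fmap, shape):
    """Step (3): resize the activity map to the source-image size.
    Nearest-neighbour is used here for brevity; the paper uses interpolation."""
    rows = np.linspace(0, fmap.shape[0] - 1, shape[0]).round().astype(int)
    cols = np.linspace(0, fmap.shape[1] - 1, shape[1]).round().astype(int)
    return fmap[np.ix_(rows, cols)]

def fusion_weights(fea_r, fea_v, eps=1e-12):
    """Eqs.(3)-(4): normalise the two feature maps into weight matrices
    that sum to one at every pixel."""
    w_r = fea_r / (fea_r + fea_v + eps)
    return w_r, 1.0 - w_r
```

Because W_R + W_V = 1 pixel-wise, the weighted summation in the next step is a convex combination of the two low frequency sub-bands.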
The fused low frequency sub-band image L_J^F(x, y) is obtained by combining the low frequency sub-band images with the corresponding weight matrices by Eq.(5)

L_J^F(x, y) = L_J^R(x, y) × W_R(x, y) + L_J^V(x, y) × W_V(x, y)    (5)

2.3 Maximum of regional energy
The high frequency sub-band images carry little of the total energy but contain the main target detail, which affects the sharpness of the fused image. In order to make the target area of the fused image clearer, the high frequency sub-band images are fused with the rule of maximum regional energy. Taking the 3×3 region centered at (x, y) in the high frequency sub-band images H_{j,r}^R(x, y) and H_{j,r}^V(x, y), the regional energies of the two regions, E_{j,r}^R(x, y) and E_{j,r}^V(x, y), are calculated by Eq.(6)

E_{j,r}(x, y) = Σ_{x'=-1}^{1} Σ_{y'=-1}^{1} [H_{j,r}(x + x', y + y')]^2    (6)

The fused high frequency sub-band image H_{j,r}^F(x, y) is calculated by Eq.(7)

H_{j,r}^F(x, y) = H_{j,r}^R(x, y)  if E_{j,r}^R(x, y) ≥ E_{j,r}^V(x, y),  else H_{j,r}^V(x, y)    (7)

2.4 Reconstruction
The final infrared and visible fusion image I_F(x, y) is obtained by the corresponding inverse NSCT of the fused low frequency sub-band image L_J^F(x, y) and the fused high frequency sub-band images H_{j,r}^F(x, y) of each layer and each direction.
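The regional-energy rule of Section 2.3 (Eqs.(6) and (7)) can be sketched as a minimal NumPy implementation. Reflect padding at the image borders is an assumption, since the paper does not specify boundary handling.

```python
import numpy as np

def regional_energy(band):
    """Eq.(6): sum of squared coefficients over the 3x3 neighbourhood of each
    pixel, computed by accumulating nine shifted copies of the squared band."""
    sq = np.pad(np.asarray(band, dtype=float) ** 2, 1, mode="reflect")
    h, w = band.shape
    energy = np.zeros((h, w))
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            energy += sq[1 + dx : 1 + dx + h, 1 + dy : 1 + dy + w]
    return energy

def fuse_high(h_r, h_v):
    """Eq.(7): at each pixel, keep the coefficient whose 3x3 regional energy
    is larger (ties go to the infrared band, matching the >= in Eq.(7))."""
    return np.where(regional_energy(h_r) >= regional_energy(h_v), h_r, h_v)
```

In the full method this selection is applied independently to every scale j and direction r of the high frequency decomposition.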
2.5 Summary of the proposed method

The proposed image fusion method is summarized in Table 1.
3 Experimental Results

3.1 Experimental settings

In our experiment, the source infrared and visible images were collected from the TNO Human Factors Research Institute [16]. Three pairs of infrared and visible images were selected: UNcamp, Sandpath, and Band.
For comparison, we selected several recent and classical fusion methods to perform the same experiment: DTCWT [4], NSCT [5], ASR [9], GTF [11], and MSVD [12]. The decomposition levels for DTCWT and NSCT are all 4, and the decomposition directions in NSCT are [2, 3, 3, 4]. All the fusion algorithms are implemented in MATLAB R2018a on an Intel Core i7-8750H @ 2.20 GHz CPU with 8 GB RAM.

Table 1 NSCT-GoogLeNet image fusion algorithm steps

Input: Infrared image I_R(x, y), visible image I_V(x, y)
Output: Fused image I_F(x, y)
(1) Image decomposition: Decompose I_R(x, y) and I_V(x, y) into L_J^R(x, y), L_J^V(x, y) and H_{j,r}^R(x, y), H_{j,r}^V(x, y);
(2) Adaptive fusion strategy with GoogLeNet: Obtain GoogLeNet feature graphs Fea_V^i and Fea_R^i; calculate weight matrices W_R and W_V; obtain L_J^F(x, y);
(3) Maximum of regional energy: Obtain H_{j,r}^F(x, y) with the rule of maximum regional energy;
(4) Reconstruction: Obtain I_F(x, y) with the corresponding inverse NSCT.

3.2 Subjective results
Figs.3-5 show the fusion results of the three pairs of infrared and visible images obtained by the different fusion methods. The target information in the infrared image and the background information in the visible image are well retained in the fusion images by all six methods. The target brightness in the GTF fusion image is the highest but ill-defined. The MSVD fusion images are blurrier than those of the other methods. As can be seen from the results, the fusion images produced by the proposed method have clear target information and fine texture.

3.3 Objective evaluation
Five objective evaluation indexes are used to compare and evaluate the fusion methods: spatial frequency (SF), information entropy (IE), edge information retention Q^abf [17], weighted fusion index Q^w [18], and fusion runtime. It should be noted that for the first four indexes a larger value means better fusion performance, while for runtime the opposite holds.
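The first two indexes have standard closed forms and can be sketched directly. The definitions below follow the common formulations of SF (root of the mean squared row and column first differences) and IE (Shannon entropy of the grey-level histogram); the paper does not restate its exact formulas, so treat these as assumptions rather than the authors' implementation.

```python
import numpy as np

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2), where RF and CF are the mean squared
    first differences along rows and columns respectively."""
    img = np.asarray(img, dtype=float)
    rf2 = np.mean(np.diff(img, axis=1) ** 2)  # row frequency (horizontal differences)
    cf2 = np.mean(np.diff(img, axis=0) ** 2)  # column frequency (vertical differences)
    return float(np.sqrt(rf2 + cf2))

def information_entropy(img, levels=256):
    """IE = -sum p*log2(p) over the grey-level histogram of the image."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                              # drop empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))
```

A constant image has SF = 0 and IE = 0; richer texture and a flatter histogram raise both scores, which is why larger values indicate better fusion.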
During the experiment, we add Gaussian noise with variances of 5, 10, and 25 to all three pairs of source images, and then calculate the average value of the evaluation results over multiple tests. The results are shown in Tables 2-4.

From the above results, the following conclusions can be drawn.
(1) The IE of the proposed method is the highest.

Fig.3 Result on "UNcamp" image
Fig.4 Result on "Sandpath" image
Fig.5 Result on "Band" image
Table 2 Comparison of UNcamp fusion result evaluation

Method   SF       IE      Q^abf   Q^w     Runtime/s
DTCWT    11.987   6.507   0.447   0.397   0.192
NSCT     12.204   6.567   0.489   0.417   1.142
GTF      8.878    6.681   0.408   0.458   0.671
MSVD     9.762    6.265   0.326   0.287   0.104
ASR      9.832    6.345   0.431   0.482   85.066
Ours     11.986   6.725   0.498   0.489   1.750

Table 3 Comparison of Sandpath fusion result evaluation

Method   SF       IE      Q^abf   Q^w     Runtime/s
DTCWT    11.102   6.401   0.484   0.437   0.273
NSCT     11.534   6.443   0.496   0.446   2.923
GTF      10.305   6.502   0.492   0.514   4.261
MSVD     10.012   6.148   0.336   0.366   0.263
ASR      8.924    6.159   0.413   0.445   240.88
Ours     11.586   6.765   0.491   0.509   1.652

Table 4 Comparison of Band fusion result evaluation

Method   SF       IE      Q^abf   Q^w     Runtime/s
DTCWT    23.188   6.939   0.672   0.483   0.092
NSCT     23.225   6.961   0.679   0.494   0.765
GTF      21.815   6.778   0.580   0.476   0.446
MSVD     9.288    6.478   0.166   0.185   0.079
ASR      22.779   6.854   0.674   0.445   65.806
Ours     24.424   6.965   0.688   0.511   1.354
