Low-Light Image Enhancement: A Collection of Resources for Low-Light Image Enhancement Algorithms


Resources for Low-Light Image Enhancement
-------------------------------------------------------------
Paper
TIP 2021
Sparse Gradient Regularized Deep Retinex Network for Robust Low-Light Image Enhancement
Wenhan Yang; Wenjing Wang; Haofeng Huang; Shiqi Wang; Jiaying Liu
Abstract
Due to the absence of a desirable objective for low-light image enhancement, previous data-driven methods may provide undesirable enhanced results including amplified noise, degraded contrast and biased colors. In this work, inspired by Retinex theory, we design an end-to-end signal prior-guided layer separation and data-driven mapping network with layer-specified constraints for single-image low-light enhancement. A Sparse Gradient Minimization sub-Network (SGM-Net) is constructed to remove the low-amplitude structures and preserve major edge information, which facilitates extracting paired illumination maps of low/normal-light images. After the learned decomposition, two sub-networks (Enhance-Net and Restore-Net) are utilized to predict the enhanced illumination and reflectance maps, respectively, which helps stretch the contrast of the illumination map and remove intensive noise in the reflectance map. The effects of all the configured constraints, including the signal structure regularization and losses, combine together reciprocally, which leads to good reconstruction results in overall visual quality. The evaluation on both synthetic and real images, particularly on those containing intensive noise, compression artifacts and their interleaved artifacts, shows the effectiveness of our novel models, which significantly outperform the state-of-the-art methods.
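The layer-separation idea above can be pictured with a short sketch: a small network predicts illumination and reflectance layers, and an L1 penalty on the illumination gradients plays the role of a sparse-gradient prior. The PyTorch sketch below is a minimal illustration under assumed module names (DecomNet) and an assumed total-variation-style regularizer; it is not the authors' SGM-Net or released code.

```python
# Minimal sketch of Retinex-style layer separation with a sparse-gradient
# (edge-preserving smoothness) penalty on the illumination map.
# Hypothetical names and losses; not the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomNet(nn.Module):
    """Predicts a 1-channel illumination map and a 3-channel reflectance map."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 4, 3, padding=1),  # 1 illumination + 3 reflectance channels
        )

    def forward(self, x):
        out = self.body(x)
        illum = torch.sigmoid(out[:, :1])    # keep both maps in [0, 1]
        reflect = torch.sigmoid(out[:, 1:])
        return illum, reflect

def sparse_gradient_loss(illum):
    """L1 penalty on spatial gradients: suppresses low-amplitude structures
    while keeping strong edges (a stand-in for a sparse-gradient prior)."""
    dx = illum[:, :, :, 1:] - illum[:, :, :, :-1]
    dy = illum[:, :, 1:, :] - illum[:, :, :-1, :]
    return dx.abs().mean() + dy.abs().mean()

def decomposition_loss(low, illum, reflect, weight=0.1):
    """Reconstruction (low ~ illumination * reflectance) plus the gradient prior."""
    recon = F.l1_loss(illum * reflect, low)
    return recon + weight * sparse_gradient_loss(illum)
```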
TIP 2021
EnlightenGAN: Deep Light Enhancement Without Paired Supervision
Yifan Jiang; Xinyu Gong; Ding Liu; Yu Cheng; Chen Fang; Xiaohui Shen; Jianchao Yang; Pan Zhou; Zhangyang Wang
Abstract
CVPR 2020
From Fidelity to Perceptual Quality: A Semi-Supervised Approach for Low-Light Image Enhancement
Wenhan Yang, Shiqi Wang, Yuming Fang, Yue Wang, Jiaying Liu
Abstract
Under-exposure introduces a series of visual degradations, i.e. decreased visibility, intensive noise, and biased color, etc. To address these problems, we propose a novel semi-supervised learning approach for low-light image enhancement. A deep recursive band network (DRBN) is proposed to recover a linear band representation of an enhanced normal-light image with paired low/normal-light images, and then obtain an improved one by recomposing the given bands via another learnable linear transformation based on a perceptual quality-driven adversarial learning with unpaired data. The architecture is powerful and flexible enough to have the merit of training with both paired and unpaired data. On one hand, the proposed network is well designed to extract a series of coarse-to-fine band representations, whose estimations are mutually beneficial in a recursive process. On the other hand, the extracted band representation of the enhanced image in the first stage of DRBN (recursive band learning) bridges the gap between the restoration knowledge of paired data and the perceptual quality preference to real high-quality images. Its second stage (band recomposition) learns to recompose the band representation towards fitting perceptual properties of high-quality images via adversarial learning. With the help of this two-stage design, our approach generates enhanced results with well-reconstructed details and visually promising contrast and color distributions. Extensive evaluations demonstrate the superiority of our DRBN.
Resources
Paper:
Demo:
Figure 2. The framework of the proposed Deep Recursive Band Network (DRBN), which consists of two stages: recursive band learning and band recomposition. (1) In the first stage, a coarse-to-fine band representation is learned and different band signals are inferred jointly in a recursive process. The enhanced result from the last recurrence is used as the guidance of the next recurrence, and the later recurrence is only responsible for learning the residue in the feature and image domains at different scales. (2) In the second stage, the band representation is recomposed to improve the perceptual quality of the enhanced low-light image via perceptual quality-guided adversarial learning.
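The recursive, residual-style refinement described in the caption can be sketched as a loop in which each recurrence takes the previous enhanced estimate as guidance and only predicts a correction. The block below is a hedged PyTorch illustration with made-up module names (BandBlock, RecursiveEnhancer); it omits the multi-scale bands and the second (recomposition) stage and is not the released DRBN code.

```python
# Sketch of recursive, residual refinement in the spirit of DRBN's first stage:
# each recurrence is guided by the previous estimate and predicts a residual.
# Hypothetical structure; not the released DRBN implementation.
import torch
import torch.nn as nn

class BandBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, low, guidance):
        # Predict a residual conditioned on the low-light input and the
        # enhanced result of the previous recurrence.
        return self.net(torch.cat([low, guidance], dim=1))

class RecursiveEnhancer(nn.Module):
    def __init__(self, num_recurrences=3):
        super().__init__()
        self.blocks = nn.ModuleList([BandBlock() for _ in range(num_recurrences)])

    def forward(self, low):
        est = low                           # the first guidance is the input itself
        outputs = []
        for block in self.blocks:
            est = est + block(low, est)     # later recurrences learn residuals only
            outputs.append(est)
        return outputs                      # coarse-to-fine estimates
```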
IJCV 2020
Benchmarking Low-Light Image Enhancement and Beyond
Jiaying Liu*, Dejia Xu, Wenhan Yang*, Minhao Fan
Abstract
Resources
Paper:
Fig. 15 The proposed Enhancement and Detection Twins Network (EDTNet) for joint low-light enhancement and face detection. The features extracted by the enhancement module are fed into the same level of the detection module. Thus, the features are intertwined and jointly learn useful information across the two phases for face detection in low-light conditions. HCC Enhancement enables exploiting both paired and unpaired data, while Dual-path fusion helps utilize information at both the original and enhanced exposure levels.
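The same-level feature sharing the caption mentions can be illustrated with a tiny fusion module: enhancement features are projected and added to the detection features at each matching pyramid level. The name (TwinFusion), the channel widths, and the additive fusion below are assumptions for illustration only, not EDTNet itself.

```python
# Sketch of "same level" feature sharing: features from an enhancement branch
# are fused into the matching level of a detection branch.
# Hypothetical module; not the paper's EDTNet.
import torch
import torch.nn as nn

class TwinFusion(nn.Module):
    def __init__(self, channels=(32, 64, 128)):
        super().__init__()
        # 1x1 convs project enhancement features before merging them.
        self.proj = nn.ModuleList([nn.Conv2d(c, c, 1) for c in channels])

    def forward(self, enhance_feats, detect_feats):
        fused = []
        for proj, e, d in zip(self.proj, enhance_feats, detect_feats):
            fused.append(d + proj(e))   # inject enhancement cues level by level
        return fused
```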
ACM MM 2020
Integrating Semantic Segmentation and Retinex Model for Low-Light Image Enhancement
Minhao Fan, Wenjing Wang, Wenhan Yang, and Jiaying Liu
Abstract
Resources
Paper:
Project:
Figure 2: The architecture of the proposed semantic-aware Retinex-based low-light enhancement network, including three components: Information Extraction, Reflectance Restoration, and Illumination Adjustment. We first estimate semantic segmentation, reflectance, and illumination from the input underexposed image. Then, we enhance the reflectance with the help of semantic information, and use the reconstructed reflectance to adjust the illumination. The final result is generated by fusing both reflectance and illumination.
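The three-component pipeline in the caption (extraction, semantic-guided reflectance restoration, illumination adjustment, then fusion) can be sketched schematically. Every sub-network below is a placeholder CNN and the channel layout is assumed; this is a reading aid, not the paper's architecture.

```python
# Schematic forward pass: extraction -> semantic-guided reflectance restoration
# -> illumination adjustment -> fusion. Placeholder networks and channel counts.
import torch
import torch.nn as nn

def small_cnn(in_ch, out_ch, ch=16):
    return nn.Sequential(
        nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(ch, out_ch, 3, padding=1), nn.Sigmoid(),
    )

class SemanticRetinexPipeline(nn.Module):
    def __init__(self, num_classes=19):
        super().__init__()
        self.num_classes = num_classes
        self.extract = small_cnn(3, num_classes + 3 + 1)  # segmentation + reflectance + illumination
        self.restore = small_cnn(num_classes + 3, 3)      # semantics guide reflectance restoration
        self.adjust = small_cnn(1 + 3, 1)                 # restored reflectance guides the illumination

    def forward(self, low):
        feats = self.extract(low)
        seg = feats[:, :self.num_classes]
        reflect = feats[:, self.num_classes:self.num_classes + 3]
        illum = feats[:, -1:]
        reflect = self.restore(torch.cat([reflect, seg], dim=1))
        illum = self.adjust(torch.cat([illum, reflect], dim=1))
        return reflect * illum   # fuse the two Retinex layers into the final result
```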
TIP 2020
Lightening Network for Low-Light Image Enhancement
Abstract
Low-light image enhancement is a challenging task that has attracted considerable attention. Pictures taken in low-light conditions often have bad visual quality. To address the problem, we regard low-light enhancement as a residual learning problem, i.e., estimating the residual between low- and normal-light images. In this paper, we propose a novel Deep Lightening Network (DLN) that benefits from the recent development of Convolutional Neural Networks (CNNs). The proposed DLN consists of several Lightening Back-Projection (LBP) blocks. The LBPs perform lightening and darkening processes iteratively to learn the residual for normal-light estimations. To effectively utilize the local and global features, we also propose a Feature Aggregation (FA) block that adaptively fuses the results of different LBPs. We evaluate the proposed method on different datasets. Numerical results show that our proposed DLN approach outperforms other methods under both objective and subjective metrics.
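The lightening/darkening back-projection idea can be sketched as: lighten the features, project them back toward the low-light domain, and correct the estimate with the residual. The layer choices in the block below are illustrative assumptions, not the paper's exact LBP design, and the Feature Aggregation block is omitted.

```python
# Sketch of a lightening/darkening back-projection step: lighten, project back,
# then correct with the residual. Illustrative layers; not the paper's design.
import torch
import torch.nn as nn

class LBPBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.lighten = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU())
        self.darken = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU())
        self.refine = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU())

    def forward(self, feat):
        lit = self.lighten(feat)             # tentative normal-light features
        back = self.darken(lit)              # project back toward the low-light domain
        residual = feat - back               # what the lightening step missed
        return lit + self.refine(residual)   # correct the estimate with the residual
```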
STAR: A Structure and Texture Aware Retinex Model
Abstract:
Resources
Paper:
Demo:
TIP 2020
LR3M: Robust Low-Light Enhancement via Low-Rank Regularized Retinex Model
Xutong Ren, Wenhan Yang, Wen-Huang Cheng, and Jiaying Liu
Abstract
Noise causes unpleasant visual effects in low-light image/video enhancement. In this paper, we aim to make the enhancement model and method aware of noise in the whole process. To deal with heavy noise which is not handled in previous methods, we introduce a robust low-light enhancement approach, aiming at enhancing low-light images/videos well and suppressing intensive noise jointly. Our method is based on the proposed Low-Rank Regularized Retinex Model (LR3M), which is the first to inject a low-rank prior into a Retinex decomposition process to suppress noise in the reflectance map. Our method estimates a piece-wise smoothed illumination and a noise-suppressed reflectance sequentially, avoiding the remaining noise in the illumination and reflectance maps that is usually present in alternative decomposition methods. After getting the estimated illumination and reflectance, we adjust the illumination layer and generate our enhancement result. Furthermore, we apply our LR3M to video low-light enhancement. We consider the inter-frame coherence of illumination maps and find similar patches through reflectance maps of successive frames to form the low-rank prior, making use of temporal correspondence. Our method performs well for a wide variety of images and videos, and achieves better quality in both enhancing and denoising, compared with the state-of-the-art methods.
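The low-rank prior on the reflectance map can be pictured with the classic singular-value-thresholding step used in low-rank denoising: stack similar (vectorized) reflectance patches as columns of a matrix and shrink its singular values, so noise-dominated components vanish. The snippet below shows only this generic operation, with assumed patch sizes; it is not LR3M's full objective or solver.

```python
# Generic singular-value thresholding: the basic operation behind low-rank
# denoising of a matrix of similar (vectorized) patches. Illustrates the
# low-rank prior only, not LR3M's full optimization.
import numpy as np

def singular_value_threshold(patch_matrix: np.ndarray, tau: float) -> np.ndarray:
    """Shrink singular values by tau; small (noise-dominated) components vanish."""
    U, s, Vh = np.linalg.svd(patch_matrix, full_matrices=False)
    s = np.maximum(s - tau, 0.0)      # soft-thresholding enforces low rank
    return (U * s) @ Vh

# Example: 64 similar 8x8 patches, each flattened into a 64-dim column.
patches = np.random.rand(64, 64)
denoised = singular_value_threshold(patches, tau=0.5)
```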
Resources
Paper:
Demo:
TIP 2019
Low-Light Image Enhancement via a Deep Hybrid Network
Abstract
Camera sensors often fail to capture clear images or videos in a poorly lit environment. In this paper, we propose a trainable hybrid network to enhance the visibility of such degraded images. The proposed network consists of two distinct streams that simultaneously learn the global content and the salient structures of the clear image in a unified network. More specifically, the content stream estimates the global content of the low-light input through an encoder-decoder network. However, the encoder in the content stream tends to lose some structure details. To remedy this, we propose a novel spatially variant recurrent neural network (RNN) as an edge stream to model edge details, with the guidance of another auto-encoder. The experimental results show that the proposed network performs favorably against the state-of-the-art low-light image enhancement algorithms.
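The two-stream design can be sketched as a content branch (encoder-decoder) plus an edge branch whose outputs are fused into the final prediction. In the sketch below the spatially variant RNN edge stream is replaced by a plain convolutional branch for brevity, so this is only a structural illustration, not the paper's network.

```python
# Simplified two-stream skeleton: a content stream (encoder-decoder) for global
# appearance plus an edge stream for structure, fused at the end. The real edge
# stream is a spatially variant RNN; a plain conv branch stands in for it here.
# Assumes input height/width divisible by 4.
import torch
import torch.nn as nn

class HybridSketch(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.content = nn.Sequential(   # encoder-decoder for global content
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1),
        )
        self.edges = nn.Sequential(     # stand-in for the edge stream
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
        self.fuse = nn.Conv2d(6, 3, 1)  # merge the two predictions

    def forward(self, low):
        content = self.content(low)     # coarse, globally enhanced image
        edges = self.edges(low)         # salient structure details
        return self.fuse(torch.cat([content, edges], dim=1))
```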
Resources
Paper:
