TMI 2021-06 Paper Roundup (IEEE Transactions on Medical Imaging)


1. Segmentation-Renormalized Deep Feature Modulation for Unpaired Image Harmonization
Mengwei Ren. Neel Dey. James Fishbaugh. Guido Gerig.
Deep networks are now ubiquitous in large-scale multi-center imaging studies. However, the direct aggregation of images across sites is contraindicated for downstream statistical and deep learning-based image analysis due to inconsistent contrast, resolution, and noise. To this end, in the absence of paired data, variations of Cycle-consistent Generative Adversarial Networks have been used to harmonize image sets between a source and target domain. Importantly, these methods are prone to instability, contrast inversion, intractable manipulation of pathology, and steganographic mappings, which limit their reliable adoption in real-world medical imaging. In this work, based on an underlying assumption that morphological shape is consistent across imaging sites, we propose a segmentation-renormalized image translation framework to reduce inter-scanner heterogeneity while preserving anatomical layout. We replace the affine transformations used in the normalization layers within generative networks with trainable scale and shift parameters conditioned on jointly learned anatomical segmentation embeddings to modulate features at every level of translation. We evaluate our methodologies against recent baselines across several imaging modalities (T1w MRI, FLAIR MRI, and OCT) on datasets with and without lesions. Segmentation renormalization for translation GANs yields superior image harmonization as quantified by Inception distances, demonstrates improved downstream utility via post-hoc segmentation accuracy, and shows improved robustness to translation perturbation and self-adversarial attacks.
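The core mechanism, replacing the fixed affine parameters of a normalization layer with spatially varying scale and shift maps predicted from a segmentation embedding, can be sketched roughly as follows. This is a minimal PyTorch sketch under assumed layer shapes; the class name, hidden width, and embedding handling are illustrative, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegRenormalization(nn.Module):
    """Instance-normalize features, then modulate them with scale/shift maps
    predicted from a jointly learned segmentation embedding (illustrative sketch)."""
    def __init__(self, feat_channels: int, seg_channels: int, hidden: int = 64):
        super().__init__()
        self.norm = nn.InstanceNorm2d(feat_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(seg_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_gamma = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, seg_embedding: torch.Tensor) -> torch.Tensor:
        # Resize the segmentation embedding to the current feature resolution
        seg = F.interpolate(seg_embedding, size=feat.shape[2:], mode="nearest")
        h = self.shared(seg)
        gamma, beta = self.to_gamma(h), self.to_beta(h)
        # Spatially varying scale and shift replace the usual fixed affine parameters
        return self.norm(feat) * (1 + gamma) + beta

# Usage: modulate a 128-channel generator feature with an 8-channel segmentation embedding
layer = SegRenormalization(feat_channels=128, seg_channels=8)
out = layer(torch.randn(1, 128, 32, 32), torch.randn(1, 8, 64, 64))
```

A generator would place such a layer wherever it currently uses (conditional) instance normalization, so the segmentation embedding modulates features at every level of translation.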
2. A Cervical Histopathology Dataset for Computer Aided Diagnosis of Precancerous Lesions
Zhu Meng. Zhicheng Zhao. Bingyang Li. Fei Su. Limei Guo.
Cervical cancer, as one of the most frequently diagnosed cancers worldwide, is curable when detected early.
Histopathology images play an important role in precision medicine of cervical lesions. However, few computer aided algorithms have been explored on cervical histopathology images due to the lack of public datasets. In this article, we release a new cervical histopathology image dataset for automated precancerous diagnosis. Specifically, 100 slides from 71 patients are annotated by three independent pathologists. To show the difficulty of the task, benchmarks are obtained through both fully and weakly supervised learning. Extensive experiments based on typical classification and semantic segmentation networks are carried out to provide strong baselines. In particular, a strategy of assembling classification, segmentation, and pseudo-labeling is proposed to further improve the performance. The Dice coefficient reaches 0.7833, indicating the feasibility of computer aided diagnosis and the effectiveness of our weakly supervised ensemble algorithm. The dataset and evaluation codes are publicly available. To the best of our knowledge, it is the first public cervical histopathology dataset for automated precancerous segmentation. We believe that this work will attract researchers to explore novel algorithms on cervical automated diagnosis, thereby assisting doctors and patients clinically.
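For reference, the reported score is the standard Sørensen–Dice overlap between a predicted mask and the pathologists' annotation; a minimal NumPy version (illustrative only, not the released evaluation code) is:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Sørensen–Dice coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# A Dice of 0.7833, as reported, means the prediction and annotation overlap
# in roughly 78% of their combined (averaged) area.
```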
3. Learning With Context Feedback Loop for Robust Medical Image Segmentation
Kibrom Berihu Girum. Gilles Créhange. Alain Lalande.
Deep learning has successfully been leveraged for medical image segmentation. It employs convolutional neural networks (CNNs) to learn distinctive image features from a defined pixel-wise objective function. However, this approach can lead to limited interdependence among output pixels, producing incomplete and unrealistic segmentation results. In this paper, we present a fully automatic deep learning method for robust medical image segmentation by formulating the segmentation problem as a recurrent framework using two systems. The first one is a forward system of an encoder-decoder CNN that predicts the segmentation result from the input image. The predicted probabilistic output of the forward system is then encoded by a fully convolutional network (FCN)-based context feedback system. The encoded feature space of the FCN is then integrated back into the forward system's feed-forward learning process. Using the FCN-based context feedback loop allows the forward system to learn and extract more high-level image features and fix previous mistakes, thereby improving prediction accuracy over time. Experimental results on four different clinical datasets demonstrate our method's potential application for single- and multi-structure medical image segmentation by outperforming state-of-the-art methods. With the feedback loop, deep learning methods can now produce results that are both anatomically plausible and robust to low-contrast images. Therefore, formulating image segmentation as a recurrent framework of two interconnected networks via a context feedback loop can be a potential method for robust and efficient medical image analysis.
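The recurrent two-system design described above can be sketched roughly as follows. This is a minimal PyTorch sketch; the encoder-decoder, the FCN, the context width, and the way the feedback is injected are simplified assumptions rather than the authors' architecture:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class ForwardSystem(nn.Module):
    """Encoder-decoder CNN that also accepts an encoded context feedback map."""
    def __init__(self, in_ch, n_classes, ctx_ch=16):
        super().__init__()
        self.enc = conv_block(in_ch + ctx_ch, 32)
        self.dec = nn.Conv2d(32, n_classes, 1)

    def forward(self, image, context):
        x = torch.cat([image, context], dim=1)
        return self.dec(self.enc(x))

class FeedbackSystem(nn.Module):
    """FCN that encodes the forward system's probabilistic output into a context map."""
    def __init__(self, n_classes, ctx_ch=16):
        super().__init__()
        self.fcn = conv_block(n_classes, ctx_ch)

    def forward(self, probs):
        return self.fcn(probs)

def segment_with_feedback(image, forward_net, feedback_net, steps=2):
    b, _, h, w = image.shape
    context = torch.zeros(b, 16, h, w, device=image.device)  # empty context at step 0 (ctx_ch = 16)
    for _ in range(steps):
        logits = forward_net(image, context)
        probs = torch.softmax(logits, dim=1)
        context = feedback_net(probs)  # re-encode the prediction and feed it back
    return probs

# Usage: 1-channel input image, 2-class segmentation
fwd, fb = ForwardSystem(in_ch=1, n_classes=2), FeedbackSystem(n_classes=2)
probs = segment_with_feedback(torch.randn(1, 1, 64, 64), fwd, fb)
```

Unrolling the loop for a few steps lets the forward system revisit the image together with an encoding of its own previous prediction.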
4. Cascaded Regression Neural Nets for Kidney Localization and Segmentation-free Volume Estimation
Mohammad Arafat Hussain. Ghassan Hamarneh. Rafeef Garbi.
Kidney volume is an essential biomarker for a number of kidney disease diagnoses, for example, chronic kidney disease. Existing total kidney volume estimation methods often rely on an intermediate kidney segmentation step. On the other hand, automatic kidney localization in volumetric medical images is a critical step that often precedes subsequent data processing and analysis. Most current approaches perform kidney localization via an intermediate classification or regression step. This paper proposes an integrated deep learning approach for (i) kidney localization in computed tomography scans and (ii) segmentation-free renal volume estimation. Our localization method uses a selection-convolutional neural network that approximates the kidney inferior-superior span along the axial direction. Cross-sectional (2D) slices from the estimated span are subsequently used in a combined sagittal-axial Mask-RCNN that detects the organ bounding boxes on the axial and sagittal slices, the combination of which produces a final 3D organ bounding box. Furthermore, we use a fully convolutional network to estimate the kidney volume, skipping the segmentation procedure. We also present a mathematical expression to approximate the 'volume error' metric from the 'Sørensen–Dice coefficient.' We accessed 100 patients' CT scans from the Vancouver General Hospital records and obtained 210 patients' CT scans from the 2019 Kidney Tumor Segmentation Challenge database to validate our method. Our method produces a kidney boundary wall localization error of ~2.4 mm and a mean volume estimation error of ~5%.
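The paper derives its own expression; for orientation, one generic relation that links the Sørensen–Dice coefficient (DSC) of two segmentations A and B to their relative volume difference, and that motivates such an approximation, is shown below. This is a standard bound, not necessarily the authors' formula:

```latex
% Sørensen–Dice coefficient between segmentations A and B
\mathrm{DSC} = \frac{2\,|A \cap B|}{|A| + |B|},
\qquad
\frac{\bigl|\,|A| - |B|\,\bigr|}{|A| + |B|} \;\le\; 1 - \mathrm{DSC}
% since |A \cap B| \le \min(|A|,|B|) gives
% 1 - DSC \ge (|A| + |B| - 2\min(|A|,|B|)) / (|A| + |B|) = ||A| - |B|| / (|A| + |B|).
```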
5. Direct Differentiation of Pathological Changes in the Human Lung Parenchyma With Grating-Based Spectral X-ray Dark-Field Radiography
Kirsten Taphorn. Korbinian Mechlem. Thorsten Sellerer. Fabio De Marco. Manuel Viermetz. Franz Pfeiffer. Daniela Pfeiffer. Julia Herzen.
Diagnostic lung imaging is often associated with a high radiation dose and lacks sensitivity, especially for diagnosing early stages of structural lung diseases. Therefore, diagnostic imaging methods are required which provide sound diagnosis of lung diseases with high sensitivity as well as low patient dose. In small-animal experiments, the sensitivity of grating-based X-ray dark-field imaging to structural changes in the lung tissue was demonstrated. The energy dependence of the X-ray dark-field signal of lung tissue is a function of its microstructure and is not yet known. Furthermore, conventional X-ray dark-field imaging is not capable of differentiating different types of pathological changes, such as fibrosis and emphysema. Here we demonstrate the potential diagnostic power of grating-based X-ray dark-field imaging in combination with spectral imaging in human chest radiography for the direct differentiation of lung diseases. We investigated the energy-dependent linear diffusion coefficient of simulated lung tissue with different diseases in wave-propagation simulations and validated the results with analytical calculations. Additionally, we modeled spectral X-ray dark-field chest radiography scans to exploit the differences in energy dependency. The results demonstrate the potential to directly differentiate structural changes in the human lung. Consequently, grating-based spectral X-ray dark-field imaging potentially contributes to the differential diagnosis of structural lung diseases at a clinically relevant dose level.
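As background for the energy dependence discussed above, grating interferometry commonly models the monochromatic dark-field signal as an exponential loss of fringe visibility governed by the linear diffusion coefficient; a sketch of that standard model (not necessarily the exact formulation used in this paper) is:

```latex
% Visibility reduction for a sample of thickness t at energy E,
% with energy-dependent linear diffusion coefficient \epsilon(E, z):
D(E) \;=\; \frac{V_{\mathrm{sample}}(E)}{V_{\mathrm{reference}}(E)}
      \;=\; \exp\!\left(-\int_0^{t} \epsilon(E, z)\,\mathrm{d}z\right)
% Different microstructures (e.g., fibrotic vs. emphysematous tissue) yield
% different \epsilon(E), which spectral acquisition aims to separate.
```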
6. Development and Initial Results of a Brain PET Insert for Simultaneous 7-Tesla PET/MRI Using an FPGA-Only Signal Digitization Method
Jun Yeon Won. Haewook Park. Seungeun Lee. Jeong-Whan Son. Yina Chung. Guen Bae Ko. Kyeong Yun Kim. Junghyun Song. Seongho Seo. Yeunchul Ryu. Jun-Young Chung. Jae Sung Lee.
In this study, we developed a positron emission tomography (PET) insert for simultaneous brain imaging within 7-Tesla (7T) magnetic resonance (MR) imaging scanners. The PET insert has 18 sectors, and each sector is assembled with two-layer depth-of-interaction (DOI)-capable high-resolution block detectors. The PET scanner features a 16.7-cm-long axial field-of-view (FOV) to provide images of the entire human brain without bed movement. The PET scanner digitizes the large number of block detector signals early, at a front-end data acquisition (DAQ) board, using a novel field-programmable gate array (FPGA)-only signal digitization method. All the digitized PET data from the front-end DAQ boards are transferred using gigabit transceivers via non-magnetic high-definition multimedia interface (HDMI) cables. A back-end DAQ system provides a common clock and synchronization signal for the FPGAs over the HDMI cables. An active cooling system using copper heat pipes is applied for thermal regulation. All the 2.17-mm-pitch crystals with two-layer DOI information were clearly identified in the block detectors, exhibiting a system-level energy resolution of 12.6%. The PET scanner yielded clear hot-rod and Hoffman brain phantom images and demonstrated 3D PET imaging capability without bed movement. We also performed a pilot simultaneous PET/MR imaging study of a brain phantom. The PET scanner achieved a spatial resolution of 2.5 mm at the center of the FOV (NU 4) and sensitivities of 18.9 kcps/MBq (NU 2) and 6.19% (NU 4) in accordance with the National Electrical Manufacturers Association (NEMA) standards.
7. Multi-Modal Retinal Image Classification With Modality-Specific Attention Network
Xingxin He. Ying Deng. Leyuan Fang. Qinghua Peng.
Recently, automatic diagnostic approaches have been widely used to classify ocular diseases. Most of these approaches are based on a single imaging modality (e.g., fundus photography or optical coherence tomography (OCT)), which usually only reflects the oculopathy to a certain extent and neglects the modality-specific information among different imaging modalities. This paper proposes a novel modality-specific attention network (MSAN) for multi-modal retinal image classification, which can effectively utilize the modality-specific diagnostic features from fundus and OCT images. The MSAN comprises two attention modules to extract the modality-specific features from fundus and OCT images, respectively. Specifically, for the fundus image, ophthalmologists need to observe local and global pathologies at multiple scales (e.g., from microaneurysms at the micrometer level and the optic disc at the millimeter level to blood vessels through the whole eye). Therefore, we propose a multi-scale attention module to extract both the local and global features from fundus images. Moreover, large background regions exist in the OCT image, which are meaningless for diagnosis. Thus, a region-guided attention module is proposed to encode the retinal layer-related features and ignore the background in OCT images. Finally, we fuse the modality-specific features to form a multi-modal feature and train the multi-modal retinal image classification network. The fusion of modality-specific features allows the model to combine the advantages of the fundus and OCT modalities for a more accurate diagnosis. Experimental results on a clinically acquired multi-modal retinal image (fundus and OCT) dataset demonstrate that our MSAN outperforms other well-known single-modal and multi-modal retinal image classification methods.
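A rough sketch of this two-branch idea, multi-scale attention on the fundus features, region-guided attention on the OCT features, and fusion of the pooled modality-specific features, is given below (PyTorch; the backbone features, attention forms, and fusion head are simplified assumptions, not the authors' exact MSAN):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAttention(nn.Module):
    """Fundus branch: pool features at several scales and reweight them."""
    def __init__(self, channels, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.attn = nn.Conv2d(channels * len(scales), channels, kernel_size=1)

    def forward(self, feat):
        h, w = feat.shape[2:]
        pyramid = [
            F.interpolate(F.adaptive_avg_pool2d(feat, s), size=(h, w), mode="nearest")
            for s in self.scales
        ]
        weights = torch.sigmoid(self.attn(torch.cat(pyramid, dim=1)))
        return feat * weights

class RegionGuidedAttention(nn.Module):
    """OCT branch: a learned spatial mask downweights background regions."""
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat):
        return feat * torch.sigmoid(self.mask(feat))

class FusionClassifier(nn.Module):
    """Fuse the pooled modality-specific features and classify."""
    def __init__(self, channels, n_classes):
        super().__init__()
        self.fundus_attn = MultiScaleAttention(channels)
        self.oct_attn = RegionGuidedAttention(channels)
        self.fc = nn.Linear(2 * channels, n_classes)

    def forward(self, fundus_feat, oct_feat):
        f = F.adaptive_avg_pool2d(self.fundus_attn(fundus_feat), 1).flatten(1)
        o = F.adaptive_avg_pool2d(self.oct_attn(oct_feat), 1).flatten(1)
        return self.fc(torch.cat([f, o], dim=1))

# Usage: 256-channel backbone features from each modality, 3 disease classes
model = FusionClassifier(channels=256, n_classes=3)
logits = model(torch.randn(2, 256, 16, 16), torch.randn(2, 256, 24, 24))
```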
8. Learning Tubule-Sensitive CNNs for Pulmonary Airway and Artery-Vein Segmentation in CT
Yulei Qin. Hao Zheng. Yun Gu. Xiaolin Huang. Jie Yang. Lihui Wang. Feng Yao. Yue-Min Zhu. Guang-Zhong Yang.
9. 3D Multi-Attention Guided Multi-Task Learning Network for Automatic Gastric Tumor Segmentation and Lymph Node Classification
Yongtao Zhang. Haimei Li. Jie Du. Jing Qin. Tianfu Wang. Yue Chen. Bing Liu. Wenwen Gao. Guolin Ma. Baiying Lei.
10. Relation-Induced Multi-Modal Shared Representation Learning for Alzheimer's Disease Diagnosis
Zhenyuan Ning. Qing Xiao. Qianjin Feng. Wufan Chen. Yu Zhang.
The fusion of multi-modal data (e.g., magnetic resonance imaging (MRI) and positron emission tomography (PET)) has been prevalent for accurate identification of Alzheimer's disease (AD) by providing complementary structural and functional information. However, most of the existing methods simply concatenate multi-modal features in the original space and ignore their underlying associations, which may provide more discriminative characteristics for AD identification. Meanwhile, how to overcome the overfitting issue caused by high-dimensional multi-modal data remains challenging. To this end, we propose a relation-induced multi-modal shared representation learning method for AD diagnosis. The proposed method integrates representation learning, dimension reduction, and classifier modeling into a unified framework. Specifically, the framework first obtains multi-modal shared representations by learning a bi-directional mapping between the original space and the shared space. Within this shared space, we utilize several relational regularizers (including feature-feature, feature-label, and sample-sample regularizers) and auxiliary regularizers to encourage learning the underlying associations inherent in multi-modal data and to alleviate overfitting, respectively. Next, we project the shared representations into the target space for AD diagnosis. To validate the effectiveness of the proposed approach, we conduct extensive experiments on two independent datasets (i.e., ADNI-1 and ADNI-2), and the experimental results demonstrate that our proposed method outperforms several state-of-the-art methods.
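In generic form, such a framework couples a bi-directional mapping between each original feature space and the shared space with relational and auxiliary penalties plus a classification term; an illustrative objective of that shape (symbols, regularizer forms, and weights are assumptions, not the paper's exact loss) could read:

```latex
% X_m: features of modality m; P_m, Q_m: forward/backward mappings;
% H: shared representation; y: diagnostic labels; w: classifier in the target space.
\min_{\{P_m, Q_m\},\, H,\, w}\;
\sum_{m} \Bigl( \lVert X_m P_m - H \rVert_F^2 + \lVert H Q_m - X_m \rVert_F^2 \Bigr)
+ \lambda_1 \Omega_{\mathrm{ff}}(H) + \lambda_2 \Omega_{\mathrm{fl}}(H, y)
+ \lambda_3 \Omega_{\mathrm{ss}}(H) + \lambda_4 \lVert y - H w \rVert_2^2
% \Omega_{ff}, \Omega_{fl}, \Omega_{ss}: feature-feature, feature-label,
% and sample-sample relational regularizers, respectively.
```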
11. Hierarchical Temporal Attention Network for Thyroid Nodule Recognition Using Dynamic CEUS Imaging
Peng Wan. Fang Chen. Chunrui Liu. Wentao Kong. Daoqiang Zhang.
Contrast-enhanced ultrasound (CEUS) has emerged as a popular imaging modality in thyroid nodule diagnosis due to its ability to visualize vascular distribution in real time. Recently, a number of learning-based methods have been dedicated to mining pathology-related enhancement dynamics and making the prediction in one step, ignoring a native diagnostic dependency: in clinics, the differentiation of benign from malignant nodules always precedes the recognition of pathological types. In this paper, we propose a novel hierarchical temporal attention network (HiTAN) for thyroid nodule diagnosis using dynamic CEUS imaging, which unifies dynamic enhancement feature learning and hierarchical nodule classification in a deep framework. Specifically, this method decomposes the diagnosis of nodules into an ordered two-stage classification task, where the diagnostic dependency is modeled by Gated Recurrent Units (GRUs). Besides, we design a local-to-global temporal aggregation (LGTA) operator to perform comprehensive temporal fusion along the hierarchical prediction path. Particularly, local temporal information is defined as typical enhancement patterns identified with the guidance of the perfusion representation learned at the differentiation level. Then, we leverage an attention mechanism to embed global enhancement dynamics into each identified salient pattern. In this study, we evaluate the proposed HiTAN method on a collected CEUS dataset of thyroid nodules. Extensive experimental results validate the efficacy of the dynamic pattern learning, fusion, and hierarchical diagnosis mechanisms.
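The ordered two-stage formulation, benign-versus-malignant first and then pathological type conditioned on that decision, can be sketched roughly as below (PyTorch; the per-frame feature extraction, GRU wiring, and conditioning are simplified assumptions rather than the authors' HiTAN, and the LGTA attention is omitted):

```python
import torch
import torch.nn as nn

class HierarchicalCEUSClassifier(nn.Module):
    """Two-stage classification over a CEUS frame sequence:
    stage 1 predicts benign vs. malignant, stage 2 predicts the
    pathological type conditioned on the stage-1 state."""
    def __init__(self, frame_feat_dim=256, hidden=128, n_types=4):
        super().__init__()
        self.gru_stage1 = nn.GRU(frame_feat_dim, hidden, batch_first=True)
        self.gru_stage2 = nn.GRU(frame_feat_dim, hidden, batch_first=True)
        self.head_malignancy = nn.Linear(hidden, 2)
        self.head_type = nn.Linear(hidden, n_types)

    def forward(self, frame_feats):
        # frame_feats: (batch, time, frame_feat_dim) enhancement features per frame
        _, h1 = self.gru_stage1(frame_feats)           # differentiation-level summary
        logits_malignancy = self.head_malignancy(h1[-1])
        # The second stage is initialized with the first stage's hidden state,
        # modeling the clinical dependency "differentiate first, then subtype".
        _, h2 = self.gru_stage2(frame_feats, h1)
        logits_type = self.head_type(h2[-1])
        return logits_malignancy, logits_type

# Usage on a batch of 2 sequences of 30 frames with 256-dim per-frame features
model = HierarchicalCEUSClassifier()
m_logits, t_logits = model(torch.randn(2, 30, 256))
```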
12. Measurement of the Lag Correction Factor in Low-Dose Fluoroscopic Imaging
Dong Sik Kim. Eunae Lee.
