From 2759603d09f02c8a2ba95d2e76a2a9f70e7ed985 Mon Sep 17 00:00:00 2001
From: Zhaozixiang1228 <44187438+Zhaozixiang1228@users.noreply.github.com>
Date: Tue, 18 Jul 2023 09:45:18 +0200
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 888feaf..2ff6976 100644
--- a/README.md
+++ b/README.md
@@ -30,7 +30,7 @@ Codes for ***CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for M
 
 Multi-modality (MM) image fusion aims to render fused images that maintain the merits of different modalities, e.g., functional highlight and detailed textures. To tackle the challenge in modeling cross-modality features and decomposing desirable modality-specific and modality-shared features, we propose a novel Correlation-Driven feature Decomposition Fusion (CDDFuse) network. Firstly, CDDFuse uses Restormer blocks to extract cross-modality shallow features. We then introduce a dual-branch Transformer-CNN feature extractor with Lite Transformer (LT) blocks leveraging long-range attention to handle low-frequency global features and Invertible Neural Networks (INN) blocks focusing on extracting high-frequency local information. A correlation-driven loss is further proposed to make the low-frequency features correlated while the high-frequency features uncorrelated based on the embedded information. Then, the LT-based global fusion and INN-based local fusion layers output the fused image. Extensive experiments demonstrate that our CDDFuse achieves promising results in multiple fusion tasks, including infrared-visible image fusion and medical image fusion. We also show that CDDFuse can boost the performance in downstream infrared-visible semantic segmentation and object detection in a unified benchmark.
 
-## Usage
+## 🌐 Usage
 
 ### ⚙ Network Architecture
 
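
Note (illustrative, not part of the patch): the README paragraph quoted in the hunk above mentions a correlation-driven loss that makes the low-frequency (base) features of the two modalities correlated and the high-frequency (detail) features uncorrelated. The snippet below is a minimal PyTorch sketch of one way such a loss could look; it is not the repository's actual implementation. The helper name `correlation_coefficient`, the `(N, C, H, W)` tensor layout, and the `+ 1.01` offset (used only to keep the denominator positive, since a Pearson coefficient lies in [-1, 1]) are assumptions made for this sketch.

```python
import torch


def correlation_coefficient(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Mean Pearson correlation between two feature maps of shape (N, C, H, W)."""
    a = a - a.mean(dim=(-2, -1), keepdim=True)
    b = b - b.mean(dim=(-2, -1), keepdim=True)
    num = (a * b).sum(dim=(-2, -1))
    den = torch.sqrt((a ** 2).sum(dim=(-2, -1)) * (b ** 2).sum(dim=(-2, -1)) + eps)
    return (num / den).mean()


def decomposition_loss(base_ir, base_vis, detail_ir, detail_vis):
    """Small when detail features are decorrelated and base features are correlated."""
    cc_detail = correlation_coefficient(detail_ir, detail_vis)
    cc_base = correlation_coefficient(base_ir, base_vis)
    # The 1.01 shift is an assumed choice that keeps the denominator positive (cc_base >= -1).
    return cc_detail ** 2 / (cc_base + 1.01)
```

Minimizing this ratio jointly rewards high base-feature correlation and penalizes detail-feature correlation, which matches the decomposition behavior described in the abstract.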