2023 · DOI: 10.1007/s11760-022-02470-2
Gradient-based multi-focus image fusion using foreground and background pattern recognition with weighted anisotropic diffusion filter

Cited by 11 publications (3 citation statements)
References 50 publications
“…The effects of shadows on object tracking and recognition have historically been a source of difficulty for shadow detection in security camera systems. This is often because color-based and gradient-based algorithms are sensitive to variations in illumination and background brightness (Vasu et al. 2023; Abro et al. 2021). To get around these issues, recent research offers creative solutions incorporating deep learning techniques, especially Convolutional Neural Networks (CNNs), for shadow detection (Luo et al. 2020).…”
Section: Literature Review
Citation type: mentioning (confidence: 99%)
“…For fusion methods in the spatial domain, such as the guided filter [13,14,15], the filter output is taken as the base layer, and the detail layer is obtained by subtracting the base layer from the source frame. However, if the pixel value of a region in the source frame is smaller than that of the corresponding region in the base layer, the subtraction yields a negative value; because the uint8 image type stores all negatives as 0, the features of that region are lost from the detail layer.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
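
The underflow failure described in the statement above is straightforward to reproduce. Below is a minimal NumPy sketch, not the cited method: a Gaussian blur stands in for the guided-filter base layer (an assumption made purely for illustration), and the comparison shows how computing the detail layer in a signed intermediate type preserves the negative differences that saturating uint8 arithmetic clamps to zero.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic uint8 source frame; a Gaussian blur stands in for the base-layer
# filter (the cited works use a guided filter -- this substitution is only
# for illustration).
rng = np.random.default_rng(0)
source = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
base = gaussian_filter(source, sigma=3)  # output keeps the uint8 dtype

# Detail layer under saturating uint8 arithmetic: wherever the source is
# darker than the base layer, the negative difference is clamped to 0 and
# that region's features vanish from the detail layer.
detail_saturated = np.where(source >= base, source - base, 0).astype(np.uint8)

# Detail layer in a signed type: negative differences survive and can be
# added back exactly during reconstruction.
detail_signed = source.astype(np.int16) - base.astype(np.int16)

lost = np.count_nonzero(detail_signed < 0)
print(f"pixels whose negative detail is clamped away under uint8: {lost}")

# Sanity check: base + signed detail reproduces the source exactly; the
# saturated detail generally cannot.
recon = np.clip(base.astype(np.int16) + detail_signed, 0, 255).astype(np.uint8)
assert np.array_equal(recon, source)
```

In practice this is why spatial-domain pipelines typically keep the decomposition in floating point (or a signed integer type) until the final fused image is written out, rather than storing intermediate layers as uint8.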
“…Due to the limited depth of field of optical lenses, an imaging device sometimes cannot bring all objects or regions of a scene into sharp focus at once, so content outside the depth of field appears defocused and blurred [1,2,3,4,5]. To solve this problem, multi-focus image fusion provides an effective way to synthesize the complementary information contained in multiple partially focused images of the same scene into a single all-in-focus image, which is better suited to human observation or computer processing and has wide application value in digital photography, microscopic imaging, holographic imaging, integrated imaging, and other fields [6,7,8,9,10,11,12,13,14,15].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)