
Three Filters for Enhancing Images Acquired from Blue Fluorescence Imaging, Low Light Condition, and a Near Infrared Camera

Authors

Yingcheng Lin1, Ling Zhang1*, Jingmei Xu2*, and Ye Wu2*

1College of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China.

2School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing 210046, China.

Article Information

*Corresponding authors: Ling Zhang, Jingmei Xu, Ye Wu. 2School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing 210046, China; 1College of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China. zhangling1993@cqu.edu.cn.

Received: August 28, 2025 | Accepted: September 04, 2025 | Published: September 13, 2025

Citation: Lin Y, Zhang L, Xu J and Wu Y., (2025) "Three Filters for Enhancing Images Acquired from Blue Fluorescence Imaging, Low Light Condition, and a Near Infrared Camera". International Journal of Epidemiology and Public Health Research, 7(1); DOI: 10.61148/2836-2810/IJEPHR/165.

Copyright: © 2025. Ling Zhang, Jingmei Xu, Ye Wu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Fluorescence imaging is widely used in research laboratories and industry and has had a remarkable impact on biophysics, neuroscience, and biochemistry. Low-intensity fluorescence, however, is a major issue in its application. In this work, three filters combining lifting wavelets with simple mathematical functions were built to enhance images acquired from fluorescence imaging. They also proved effective for highlighting profiles in images acquired under low-light conditions and from a near-infrared camera. A matched filter was applied for a comparative study. Moreover, the filters can be used to extract profiles in underwater images and to fuse images. Our filters are envisioned to be useful in pattern recognition, portrait rendering, and intelligent surveillance.

Keywords:

Fluorescence imaging; bio-imaging; near infrared imaging; wavelets

1. Introduction

Molecules can emit fluorescence at a longer wavelength when excited by a light source of shorter wavelength. This optical phenomenon has been developed into fluorescence imaging, which is important for biomedical and biochemical applications [1-5]. One major problem accompanying fluorescence imaging is insufficient fluorescence intensity, which can be due to the thermal noise of instruments or to a complex imaging environment. The problem is exacerbated when imaging biological tissues because of bodily fluids or unidentified molecules in the samples. These substances scatter the light or dilute the fluorescence, blurring imaging features and destroying important information, which makes biological features difficult to interpret. Improved process technology or advances in hardware is certainly one way to address this problem. Another is to use mathematical methods [6], where image enhancement is performed with a wavelet method. Generally, a wavelet can suppress the high-frequency information associated with noise while enhancing the low-frequency components related to the main features of the object, which makes wavelets very useful in image enhancement. Modifying the wavelet framework may yield even more robust enhancement methods. In the work by Yang, R. et al. [7], a very flexible framework based on basic functions was proposed; it can be used to extract or highlight the main profiles of images. This is inspiring because it means a combinational set of functions can be integrated: first, a wavelet transform is performed; second, functions are used to filter specific frequencies; third, a new output image is generated from the selected frequencies.
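To make this three-step pattern concrete, a minimal sketch of our own is given below (it is not code from Ref. [7]); it assumes the PyWavelets package (pywt), and the attenuation factor is purely illustrative:

```python
import pywt

def wavelet_enhance(img, wavelet="sym4", atten=0.3):
    # Step 1: one-level 2-D wavelet transform.
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
    # Step 2: filter specific frequencies (here, damp the noise-bearing detail subbands).
    cH, cV, cD = atten * cH, atten * cV, atten * cD
    # Step 3: generate a new output image from the selected coefficients.
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)
```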

The enhancement of low-light images is useful in many fields, including satellite remote sensing, security surveillance, autonomous driving, and rail transport. Most image-enhancement approaches are based on convolutional neural networks and machine learning [8-11]. Zhou, J. et al. built a double-teacher knowledge-distillation network for enhancing underwater images; in their framework, a dynamic teaching strategy improved the network's ability to handle complicated underwater environments [8]. In one work, underwater image enhancement was achieved with a dual-branch attention mechanism, where the network takes cross-view inputs and uses a feature-alignment module to explore degradation from different views [9]. In another, a dual color space and dense multiscale attention blocks were used to enhance images, with a color-guided map introduced to steer the network toward color-degradation information; this greatly improved color restoration [10]. In some cases, images are so degraded that much structural and statistical information is destroyed, which can cause feature representations to drift on a global scale. To solve this, researchers estimated the degree of feature drift using a statistical mechanism [11], further developed a variational model for color correction, and performed histogram equalization to improve image contrast [11], yielding the final enhancement.

Another aspect of this research field is the application of Retinex theory [12-16], in which low-light images are decomposed into several layers. Zhou, J. et al. combined underwater noise, texture, and gradients into a variational Retinex model to reduce noise amplification [12]. Li, X. et al. developed a method called Deep Parametric Retinex Decomposition, which combines three modules of parametric Retinex decomposition, enhancement, and refinement [13]; this new Retinex model largely avoids color deviation. Liu, W. et al. employed a framework termed Reflectance-Correction Retinex to enhance thermal images acquired from carbon-fiber-reinforced polymers [14]. Li, C. et al. constructed a fractional structure- and texture-aware model integrated into the Retinex model for image enhancement [15]. Ma, T. et al. used a multi-scale feature-extraction module within the Retinex framework, which showed the capability of extracting image details and structural information [16]. All these works show the vivid application of Retinex theory in image enhancement. However, a database may be required to use these approaches, which poses difficulties for those who are not proficient in artificial intelligence or who have limited access to databases. Simple and quick approaches are constantly in need.

Near-infrared imaging is not easy to perform in the research lab, given that near-infrared cameras contain a lot of thermal noise [17-20]. In particular, the intensity of near-infrared light can be very low during imaging [21], producing images with very low intensity and diluted features. A convenient method is therefore required for processing images acquired from near-infrared imaging. Herein, we propose three frameworks that combine a wavelet with a set of simple functions. Images acquired from blue-fluorescence imaging, low-light conditions, and a near-infrared camera were enhanced. Furthermore, the frameworks were used for image fusion, especially with low-light images as inputs, and their application to extracting profiles in underwater imaging was explored. In Sec. 2, the construction of the frameworks is presented in detail. In Sec. 3, several research outcomes for these filters are shown. In Sec. 4, the limitations of the current study and future work are discussed. In Sec. 5, we summarize our findings. Our frameworks present simple and effective approaches that are envisioned to play an important role in advanced imaging and fluorescence microscopes.

2. Materials and Methods

The blue-fluorescence images were taken with a lab-built facility. A UV laser with an emission wavelength of 350-355 nm irradiated the samples from the side. A blue-light filter placed above the sample reduced the UV irradiation. A camera attached to the blue-light filter collected the blue-fluorescence image. The low-light images were taken with a cell phone (iPhone Xs Max).

The images from the near-infrared camera were obtained with a lab-built setup. A near-infrared laser with an emission wavelength of 1064 nm irradiated the samples, and a near-infrared camera placed above the sample captured the image.

Three filters were developed using simple mathematical functions; their frameworks are shown in Algorithms 1-3. In each framework, a lifting wavelet is first used to reduce some of the inherent noise in the image. Several functions were then tested for enhancing the main features of the images. When a specific set of functions was found to be effective, a sequence of operations using those functions followed. Finally, the filtered values are used to construct a new output image.

Algorithm 1. The Log filter

1) We used a lifting-wavelet method to preprocess the image, similar to Ref. [6]. This preprocessing is briefly introduced in steps 2)-17) below.

2) An input color image is first read and converted into a gray image. We introduce a 2-dimensional matrix Q that represents the input image; Q is resized to a scale of 1064×1064. We define a parameter M as the side length of Q, and a parameter MM equal to one half of M.

3) We define two equations based on the value of Q: Q1=Q([1: 2: M-1], :); Q2=Q([2: 2: M], :).

4) for each iteration j_hc=1:MM do

high_col (j_hc, :)=Q1(j_hc, : )-Q2 (j_hc, : );

end for

5) for each iteration j_lc=1:MM do

low_col ( j_lc,:)=Q2(j_lc,:)+1/2*high_col (j_lc,:);

end for

6) We define two equations:

f_col ([1:1: MM], :)=low_col ([1:MM], :);

f_col ([MM+1:1: M], :)=high_col ([1:MM], : );

7) The values for Q1 and Q2 are assigned as:

Q1=f_col (: , [1: 2: M-1]);

Q2=f_col (: , [2: 2: M]).

8) for each iteration j_hr=1:MM do

high_row (: , j_hr)=Q1(:, j_hr)-Q2(:, j_hr);

end for

9) for each iteration j_lr=1:MM do

low_row(: , j_lr)=Q2(:, j_lr)+1/2*high_row(:, j_lr);

end for

10) We build four equations:

f_row(: , [1:1:MM])=low_row(:,[1:MM]);

f_row (: , [MM+1:1:M])=high_row(:,[1:MM]);

Q1=f_row(: , [MM+1:1:M]);

Q2=f_row(: , [1:1:MM]).

11) for each iteration j_lr=1:MM do

low_row (: , j_lr)=Q2 (: , j_lr)-1/2*Q1 (: , j_lr);

end for

12) for each iteration j_hr=1:MM do

high_row (: , j_hr)=Q1(:, j_hr)+low_row (:, j_hr);

end for

13) We build four equations:

f_row (: , [2: 2: M])=low_row(:, [1:MM]);

f_row (: , [1: 2: M-1])=high_row(:, [1:MM]);

Q1=f_row ([MM+1: 1: M], : );

Q2=f_row ([1:1:MM], :).

14) for each iteration j_lc=1:MM do

low_col (j_lc, :)=Q2 (j_lc, :)-1/2*Q1(j_lc, :);

end for

15) for each iteration j_hc=1:MM do

high_col (j_hc, :)=Q1(j_hc, :)+low_col (j_hc, :);

end for

16) We build two equations:

f_col ([2: 2: M], :)=low_col ([1: MM], :);

f_col ([1: 2: M-1], :)=high_col ([1: MM], :).

17) We define a parameter V as V=a*f_col, where a is a constant. We conduct a wavelet transform of V using the traditional sym4 wavelet. This produces a coefficient matrix r, which represents specific grayscale values.

18) The following calculation is performed with respect to the value of r:

r2=exp(r);

r3=sinh(r);

r4=sinh(cos(r2))./log(r3);

r5=a2*r4.^a1;

Here, a1 and a2 are both constants.

19) An output image is shown based on the value of r5.
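For readers who prefer code, a condensed NumPy sketch of Algorithm 1 follows. It keeps only the forward lifting pass of steps 2)-10); the interleaving of steps 11)-16) and the sym4 transform of step 17) are approximated by the constant a. The defaults for a, a1, and a2 are illustrative, and the input normalization and 8-bit rescaling are our own additions.

```python
import numpy as np

def lifting_forward(Q):
    # Steps 3)-6): split rows into odd/even, predict the high band from the
    # difference, update the low band, then stack low over high.
    Q1, Q2 = Q[0::2, :], Q[1::2, :]
    high = Q1 - Q2
    low = Q2 + 0.5 * high
    return np.vstack([low, high])

def log_filter(img, a=1.0, a1=2.0, a2=1.0):
    # Assumes a square grayscale image with an even side length (the text
    # rescales inputs to 1064x1064). Intensities are mapped to [0, 1] first,
    # our addition to keep the nonlinear map numerically tame.
    Q = img.astype(float) / 255.0
    f_col = lifting_forward(Q)           # row direction, steps 3)-6)
    f_row = lifting_forward(f_col.T).T   # column direction, steps 7)-10)
    r = a * f_row                        # stand-in for steps 11)-17)
    with np.errstate(all="ignore"):      # the map of step 18) has singular points
        r4 = np.sinh(np.cos(np.exp(r))) / np.log(np.sinh(r))
        r5 = a2 * np.abs(r4) ** a1       # abs() guards fractional powers
    r5 = np.nan_to_num(r5, nan=0.0, posinf=0.0, neginf=0.0)
    # Step 19): rescale to 8 bits for display (our addition).
    span = r5.max() - r5.min()
    return (255 * (r5 - r5.min()) / (span + 1e-12)).astype(np.uint8)
```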

Algorithm 2. The Hypot filter

1) A lifting-wavelet transform is performed following the method proposed in Ref. [6], which generates a specific grayscale value t. This process is similar to steps 2)-17) of Algorithm 1.

2) The following calculation is applied with respect to the value of t:

t2=b1*cosh(t);

t3=b2*sinh(t);

t4=hypot(b3*t3, t2);

t5=b5*t4.^b4;

3) An output image will be shown using the value of t5.
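As a sketch, step 2) translates directly into NumPy. Here t is assumed to hold the lifting-wavelet coefficients from step 1), and the default constants b1-b5 are illustrative rather than the values used in our experiments.

```python
import numpy as np

def hypot_map(t, b1=1.0, b2=1.0, b3=1.7, b4=2.0, b5=1.0):
    # Step 2) of Algorithm 2, applied elementwise to the coefficients t.
    t2 = b1 * np.cosh(t)
    t3 = b2 * np.sinh(t)
    t4 = np.hypot(b3 * t3, t2)   # sqrt((b3*t3)**2 + t2**2), overflow-safe
    return b5 * t4 ** b4         # t4 >= 0, so the power is well defined
```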

Algorithm 3. The Sech filter

1) A lifting-wavelet transform is carried out using the approach proposed by Man Jia et al. [6], which generates grayscale values u. This is similar to steps 2)-17) of Algorithm 1.

2) The value of u is further processed as follows:

u2=d1*acosh(u);

u3=d2*asinh(u);

u4=d3*sech(d4*cos(u3))./u2;

u5=d5*u4.^d6;

3) An output image is shown via the value of u5.
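A corresponding sketch of the Sech mapping is given below; the domain handling is our own choice, and the constants d1-d6 are illustrative.

```python
import numpy as np

def sech_map(u, d1=1.0, d2=1.0, d3=1.0, d4=8.0, d5=1.0, d6=2.0):
    # Step 2) of Algorithm 3. NumPy has no sech(), so sech(x) = 1/cosh(x).
    # arccosh() is real only for u >= 1; coefficients outside that domain
    # produce NaN and are zeroed below, which is our own handling choice.
    with np.errstate(all="ignore"):
        u2 = d1 * np.arccosh(u)
        u3 = d2 * np.arcsinh(u)
        u4 = d3 / (np.cosh(d4 * np.cos(u3)) * u2)
        u5 = d5 * np.abs(u4) ** d6
    return np.nan_to_num(u5, nan=0.0, posinf=0.0, neginf=0.0)
```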

3. Results

3.1 Different profiles of the images can be generated through the filters

Figure 1: Sample images used for testing the filters. They are denoted as follows: (a) River. (b) Sky. (c) Yard. (d) Roof. (e) Wall.

Figure 2: Processing the sample images using the Log filter with different values of a1. (a) a1=2. (b) a1=400. (c) a1=800. The images from left to right are River, Sky, Yard, Roof, and Wall.

Figure 1 lists several sample images for testing the performance of the three filters. Several key parameters were varied to examine the output of each filter. As shown in Figs. 2-4, different profiles can be generated by varying the values of a1, b3, and d4. These filters can thus be effectively tuned when different output profiles of the images are expected.

Figure 3: Processing the sample images using the Hypot filter with different values of b3. (a) b3=0.1. (b) b3=1.7. (c) b3=2.7. The images from left to right are River, Sky, Yard, Roof, and Wall.

Figure 4: Processing the sample images using the Sech filter with different values of d4. (a) d4=0.1. (b) d4=8. (c) d4=80. The images from left to right are River, Sky, Yard, Roof, and Wall.

Figure 5: Sample blue fluorescence images were acquired. (a) Tube1. This image is acquired from a 1.5 mL tube filled with xanthan gum. (b) Tube2. This image is acquired from a 10 mL tube filled with xanthan gum. (c) Tube3. This image is acquired from a glass tube filled with xanthan gum. (d) Pipette. This image is acquired from a 100 μL pipette filled with xanthan gum. (e) Bottle. This image is acquired from a glass bottle filled with xanthan gum.

Figure 6: The sample blue fluorescence images are processed by the Log filter. (a) Tube1 processed. (b) Tube2 processed. (c) Tube3 processed. (d) Pipette processed. (e) Bottle processed.

Table 1: PSNR values of the input images processed by the three filters

Image   | Log filter | Hypot filter | Sech filter
Tube1   | 1.5632     | 23.2950      | 22.3847
Tube2   | 5.4696     | 14.6732      | 26.2403
Tube3   | 7.6156     | 15.3289      | 25.8500
Pipette | 4.5120     | 10.9682      | 24.5644
Bottle  | 2.7849     | 19.1207      | 27.9680
Average | 4.3891     | 16.6772      | 25.4015

3.2 Enhancing blue fluorescence images

Several input images are shown in Fig. 5; a specifically built instrument for acquiring blue-fluorescence images was used to collect them. Figs. 6-8 present the images processed by the Log filter, the Hypot filter, and the Sech filter. The fluorescence intensity in the input images is very low; after filtering, the main features are extracted and strengthened, demonstrating the effectiveness of these filters for image enhancement. The peak signal-to-noise ratio (PSNR) values are listed in Table 1, which clearly shows that the Sech filter attains the highest average PSNR.
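For reference, the PSNR values reported in Tables 1-4 follow the standard definition, sketched below under the assumption of 8-bit images; the pairing of reference and processed images follows the usual convention.

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    # PSNR = 10*log10(MAX^2 / MSE), computed between an 8-bit reference
    # image and a processed image of the same size.
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val**2 / mse)
```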

3.3 Enhancing weak-light images

Low-light images are acquired in environments where the light intensity is insufficient. We used five sample images as inputs to test the performance of the three filters (Fig. 9). After processing, the main features hidden in the black background were revealed (Figs. 10-12). We calculated the corresponding PSNR values; the Sech filter again attained the highest average PSNR (see Table 2).

Figure 7: The sample blue fluorescence images are processed by the Hypot filter. (a) Tube1 processed. (b) Tube2 processed. (c) Tube3 processed. (d) Pipette processed. (e) Bottle processed.

Figure 8: Sample blue fluorescence images are processed by the Sech filter. (a) Tube1 processed. (b) Tube2 processed. (c) Tube3 processed. (d) Pipette processed. (e) Bottle processed.

Figure 9: Sample images acquired under low-light conditions via a cell phone. (a) Weak37. (b) Weak42. (c) Weak43. (d) Weak44. (e) Weak45.

Figure 10: Sample images acquired under low-light conditions, processed by the Log filter. (a) Weak37 processed. (b) Weak42 processed. (c) Weak43 processed. (d) Weak44 processed. (e) Weak45 processed.

Table 2: PSNR values of the low-light images processed by the three filters

Image   | Log filter | Hypot filter | Sech filter
Weak37  | 1.5222     | 4.0648       | 14.5794
Weak42  | 2.1731     | 3.5237       | 12.5184
Weak43  | 1.8186     | 3.6654       | 13.4960
Weak44  | 4.0834     | 2.9681       | 9.3496
Weak45  | 2.8832     | 3.0624       | 10.3166
Average | 2.4961     | 3.4569       | 12.0520

Figure 11: Sample images acquired under low-light conditions, processed by the Hypot filter. (a) Weak37 processed. (b) Weak42 processed. (c) Weak43 processed. (d) Weak44 processed. (e) Weak45 processed.

Figure 12: Sample images acquired under low-light conditions, processed by the Sech filter. (a) Weak37 processed. (b) Weak42 processed. (c) Weak43 processed. (d) Weak44 processed. (e) Weak45 processed.

Figure 13: Sample images were acquired from a near infrared camera. (a) Nir4. (b) Nir5. (c) Nir6. (d) Nir7. (e) Nir9.

Figure 14: The images acquired from a near infrared camera were processed by the Log filter. (a) Nir4 processed. (b) Nir5 processed. (c) Nir6 processed. (d) Nir7 processed. (e) Nir9 processed.

3.4 Enhancing the images acquired from near infrared imaging

Several images obtained from the near-infrared camera were used as input images (Fig. 13). Figs. 14-16 show that clear features emerge after applying the filters. Table 3 shows that the Sech filter has the highest average PSNR, indicating the best performance among the three filters.

Figure 15: The images acquired from a near infrared camera were processed by the Hypot filter. (a) Nir4 processed. (b) Nir5 processed. (c) Nir6 processed. (d) Nir7 processed. (e) Nir9 processed.

Figure 16: The images acquired from a near infrared camera were processed by the Sech filter. (a) Nir4 processed. (b) Nir5 processed. (c) Nir6 processed. (d) Nir7 processed. (e) Nir9 processed.

Figure 17: The blue fluorescence images processed by a matched filter. (a) Tube1. (b) Tube2. (c) Tube3. (d) Pipette. (e) Bottle.

Figure 18: The low-light images processed by a matched filter. (a) Weak37. (b) Weak42. (c) Weak43. (d) Weak44. (e) Weak45.

Figure 19: The images acquired from a near infrared camera were processed by a matched filter. (a) Nir4. (b) Nir5. (c) Nir6. (d) Nir7. (e) Nir9.

Table 3: PSNR values of the images acquired from a near-infrared camera

Image   | Log filter | Hypot filter | Sech filter
Nir4    | 18.6266    | 6.4112       | 38.0966
Nir5    | 21.6068    | 6.4957       | 39.1760
Nir6    | 22.1910    | 6.5001       | 41.1884
Nir7    | 24.2181    | 6.5082       | 46.4245
Nir9    | 7.9523     | 4.8970       | 37.0348
Average | 18.9190    | 6.1624       | 40.3841

Table 4: PSNR values of the images processed by the matched filter

Image   | Matched filter | Image   | Matched filter | Image   | Matched filter
Tube1   | 6.6127         | Weak37  | 6.3534         | Nir4    | 0.1364
Tube2   | 1.7516         | Weak42  | 4.8549         | Nir5    | 0.4934
Tube3   | 1.0023         | Weak43  | 5.4550         | Nir6    | 0.4852
Pipette | 3.0075         | Weak44  | 3.1352         | Nir7    | 0.2825
Bottle  | 3.6810         | Weak45  | 3.8254         | Nir9    | 1.2444
Average | 3.2110         | Average | 4.7248         | Average | 0.5284


Figure 20: The application of the filters in image fusion. (a) A sample image for image fusion. (b) Another sample image for the image fusion. (c) The image fusion via the Log filter. (d) The image fusion via the Sech filter. (e) The image fusion via the Hypot filter.

Figure 21: Sample images acquired under underwater conditions. (a) Water13. (b) Water14. (c) Water15. (d) Water17. (e) Water19. Several objects were placed underwater for imaging, including a red rock, a yellow rock, a small flower bottle, a sweet potato, and a bulk rock.

Figure 22: The underwater images processed by the Log filter. (a) Water13. (b) Water14. (c) Water15. (d) Water17. (e) Water19.

Figure 23: The underwater images processed by the Sech filter. (a) Water13. (b) Water14. (c) Water15. (d) Water17. (e) Water19.

Figure 24: The underwater images processed by the Hypot filter. (a) Water13. (b) Water14. (c) Water15. (d) Water17. (e) Water19.

Figure 25: The underwater images processed by the Sobel operator. (a) Water13. (b) Water14. (c) Water15. (d) Water17. (e) Water19.

3.5 Comparing the performance of a matched filter

We used a matched filter for performance comparison [22]. Figs. 17a-17e show the blue-fluorescence images processed by the matched filter, Figs. 18a-18e the low-light images, and Figs. 19a-19e the near-infrared images. Table 4 lists the corresponding PSNR values. The average PSNR values are 3.2110, 4.7248, and 0.5284 for the blue-fluorescence, low-light, and near-infrared images, respectively. Clearly, the images produced by the matched filter do not look as good as those obtained with our filters (see Figs. 17-19).
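For readers unfamiliar with this baseline, the sketch below outlines a two-dimensional matched filter in the spirit of Ref. [22]: a zero-mean Gaussian line kernel is correlated with the image at several orientations and the maximum response is kept. The kernel size, sigma, and twelve orientations are common choices, not necessarily the exact settings used in our comparison.

```python
import numpy as np
from scipy.ndimage import correlate, rotate

def matched_filter(img, sigma=2.0, length=9, n_angles=12):
    # Build a zero-mean Gaussian line kernel (dark line on a bright
    # background) and keep the maximum correlation over orientations.
    half = int(3 * sigma)
    x = np.arange(-half, half + 1, dtype=float)
    profile = -np.exp(-x**2 / (2 * sigma**2))
    kernel = np.tile(profile, (length, 1))
    kernel -= kernel.mean()                      # zero mean, as in [22]
    response = np.full(img.shape, -np.inf)
    for angle in np.arange(0.0, 180.0, 180.0 / n_angles):
        k = rotate(kernel, angle, reshape=True, order=1)
        response = np.maximum(response, correlate(img.astype(float), k))
    return response
```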

3.6 Application in image fusion

Images obtained from different imaging techniques, including visible and near-infrared imaging, can provide a vivid and comprehensive understanding of the features of the samples under study; the technology of image fusion has therefore been widely used in industry and medical care [23-26]. Given that our filters can enhance low-light images, we applied them to image fusion. Here, a filter was first used to enhance the low-light input; a wavelet transform then combined the frequency content of the two input images, and an inverse wavelet transform produced the final fused image. Figs. 20a-20e show the fusion results with one low-light image as an input. All three filters captured the key features of the low-light image and fused them into the output, demonstrating their effectiveness in image fusion involving low-light images.
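A minimal sketch of this fusion pipeline is shown below, assuming PyWavelets. The max-absolute merge rule is one common choice; the text does not fix a specific combination rule, so treat it as an assumption. The enhanced low-light image is supplied as one of the two equal-sized inputs.

```python
import numpy as np
import pywt

def fuse(img_a, img_b, wavelet="sym4"):
    # Single-level 2-D DWT of both inputs, subband-wise max-abs merge,
    # then inverse DWT to obtain the fused image.
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img_a.astype(float), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img_b.astype(float), wavelet)
    pick = lambda p, q: np.where(np.abs(p) >= np.abs(q), p, q)
    merged = (pick(cA1, cA2), (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))
    return pywt.idwt2(merged, wavelet)
```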

3.7 Extracting profiles of the image acquired from underwater condition

Obtaining images with clear features is an essential prerequisite for seabed resource exploration and marine environmental monitoring, which makes profile extraction from underwater images very important [27-31]. The complexity of the water and lighting environment means underwater images may suffer color distortion and contrast loss, so new methods and frameworks should continue to be developed for this purpose. We used our filters to process underwater images: Figs. 21a-21e show five input images, and Figs. 22-24 show the images processed by the three filters, demonstrating their effectiveness in extracting the key profiles of the underwater images. For comparison, we applied a Sobel operator (see Figs. 25a-25e); it is not good enough to acquire the main profiles of the underwater images. For example, the profiles of the rock, the bottle, and the sweet potato are missing from the Sobel outputs.
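The Sobel baseline of Fig. 25 can be sketched with SciPy as the usual gradient magnitude; the paper does not state a particular implementation, so this is the standard operator.

```python
import numpy as np
from scipy.ndimage import sobel

def sobel_edges(img):
    # Gradient magnitude from horizontal and vertical Sobel responses.
    gx = sobel(img.astype(float), axis=1)
    gy = sobel(img.astype(float), axis=0)
    return np.hypot(gx, gy)
```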

4. Discussion

One limitation of our framework is that the parameters and constants in the filters must be tuned to obtain the best output. For us this is not difficult, given our proficiency with the framework, but it may challenge those unfamiliar with it. Future research will focus on building a GUI that integrates all the parameters and constants.

It has been shown that images acquired with the assistance of nanoparticles and special small molecules can exhibit good contrast and high penetration depth [32-38], possibly because the nanoparticles alter light scattering and absorption. Exploring this will be a future endeavor.

Our work should raise general interest among readers in the field of electrical engineering, given that the processing of fluorescence, underwater, low-light, and near-infrared images is required by many instruments in this field. It provides a general design of filters for image enhancement and a worthwhile reference for setting up a useful filter for such images. In particular, our algorithms offer an alternative way of thinking for researchers who do not want to follow the rules and ecosystems of machine learning, artificial intelligence, and neural networks. Most previously published works rely on machine learning, which requires a large training database; our method does not need a database. Moreover, our frameworks may raise the research interest of scientists working on image processing, since the designed filters can be envisioned as being integrated with several popular filters, including the Butterworth filter, the matched filter, and filters based on the watershed algorithm.

Research interest in simple, affordable, and portable computing platforms is growing, which is especially important in developing countries, resource-limited areas, and rural regions. Running our filters does not require massive computing systems; no GPU, supercomputer, or CPU array was used in this study. Only a Dell desktop (OptiPlex 7070) is required. This may interest researchers in Asian, African, and Latin American countries who have limited access to computing resources.

Several simple functions, including the logarithm, hypotenuse, and hyperbolic secant, were used here; this does not mean that other functions would not work. The selection of functions needs to be studied further in future work.

It has been shown that imaging in the second near-infrared window (NIR-II) enables high resolution and good contrast by exploiting reduced light scattering and autofluorescence [39-43]. One of our future research directions is to employ cameras and techniques in the NIR-II region, where images with higher contrast can be generated.

It should be noted that organic molecules containing macrocyclic rings show major promise in imaging technology [44-50]; they provide multiple π-electron groups for enhancing fluorescence. Another direction of our future research is to use such advanced imaging molecules to generate high-quality fluorescence images.

Profile extraction is a key method for identifying dental diseases, breast cancer, and pancreatic ductal adenocarcinoma [51-53]. Our filters, with their strong profile-extraction ability, may assist with these research problems. Future research will focus on animal studies combined with our filters.

5. Conclusions

In this work, we presented three new filters for enhancing images acquired under low-light conditions, from blue-fluorescence imaging, and from a near-infrared camera. Combining a lifting wavelet with simple mathematical functions, our method implements contrast enhancement. Additionally, we applied the filters to image fusion and to profile extraction for underwater images. Comparative studies using a matched filter and the traditional Sobel operator indicate the effectiveness of our approach. This work shows a potential path toward versatile filters for processing images via simple functions.

Author Contributions: Investigation, Y.L., L.Z., J.X., Y.W.; resources, L.Z., J.X., Y.W.; writing—original draft preparation, Y.L., L.Z., J.X., Y.W.; writing—review and editing, Y.L., L.Z., J.X., Y.W. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: The data presented in this study are available in this article.

Acknowledgments: The authors thank the faculty of Nanjing Normal University for administrative support. Support from the National Key Research and Development Program of China (grant number 2021YFC3340502) is appreciated.

Conflicts of Interest: The authors declare no conflicts of interest.

References

  1. Zhou, Y.; Mao, S.; Fei, P. Light sheet fluorescence microscopy: Advancing biological discovery with more dimensions, higher speed, and lower phototoxicity. The Innovation 2024, 5, 100692.
  2. Chen, L.; Meng, J.; Zhou, Y.; Zhao, F.; Ma, Y.; Feng, W.;Chen, X.; Gao, S.; Liu, J.; Zhang, M.; Liu, A.; Hong, Z.; Tang, J.; Kuang, D.; Huang, L.; Zhang, Y.; Fei, P. Efficient 3D imaging and pathological analysis of the human lymphoma tumor microenvironment using light-sheet immunofluorescence microscopy. Theranostics 2024, 14, 406.
  3. Ding, Y.; Lee, J. ; Ma, J.; Sung, K.; Yokota, T.; Singh, N.; Dooraghi, M.; Abiri, P.; Wang, Y.; Kulkarni, R. P.; Nakano, A.; Nguyen, T. P.; Fei, P.; Hsiai, T. K. Light-sheet fluorescence imaging to localize cardiac lineage and protein distribution. Scientific reports 2017, 7, 42209.
  4. Jiang, H.; Zhu, T.; Zhang, H.; Nie, J.; Guan, Z.; Ho, C.-M.; Liu, S.; Fei, P. Droplet-based light-sheet fluorescence microscopy for high-throughput sample preparation, 3-D imaging and quantitative analysis on a chip. Lab on a Chip 2017, 17, 2193-2197.
  5. Fei, P.; Lee, J.; Packard, R. R. S.; Sereti, K.-I.; Xu, H.; Ma,J.; Ding, Y.; Kang, H. ; Chen, H.; Sung, K.; Kulkarni R.; Ardehali, R.; Kuo, C-C J.; Xu, X.; Ho, C.-M.; Hsiai T. K. Cardiac light-sheet fluorescent microscopy for multi-scale and rapid imaging of architecture and function. Scientific reports 2016, 6, 22489.
  6. Jia, M.; Xu, J.; Yang, R.; Li, Z.; Zhang, L.; Wu, Y. Three filters for the enhancement of the images acquired from fluorescence microscope and weak-light-sources and the image compression. Heliyon 2023, 9, e20191.
  7. Yang, R.; Chen, L.; Zhang, L.; Li, Z.; Lin, Y.; Wu, Y. Image enhancement via special functions and its application for near infrared imaging. Global Challenges 2023, 2200179.
  8. Zhou, J.; Zhang, D.; Vivone, G.; Jiang, Q. DTKD-Net: Dual-Teacher Knowledge Distillation lightweight network for water-related optics image enhancement. IEEE Transactions on Geoscience and Remote Sensing 2024, 62, 4207213.
  9. Zhou, J.; Zhang, D.; Zhang, W. Cross-view enhancement network for underwater images. Engineering Applications of Artificial Intelligence 2023, 121, 105952.
  10. Zhou, J.; Li, B.; Zhang, D.; Yuan, J.; Zhang, W.; Cai, Z. UGIF-Net: An efficient fully guided information flow network for underwater image enhancement. IEEE Transactions on Geoscience and Remote Sensing 2023, 61, 4206117.
  11. Zhou, J.; Zhang, D.; Zhang, W. Underwater image enhancement method via multi-interval subhistogram perspective equalization. IEEE Journal of Oceanic Engineering 2023, 48, 474-488.
  12. Zhou, J.; Wang, S.; Lin, Z.; Jiang, Q.; Sohel, F. A pixel distribution remapping and multi-prior Retinex variational model for underwater image enhancement. IEEE Transactions on Multimedia 2024, 26, 7838-7849.
  13. Li, X.; Wang, W.; Feng, X.; Li, M. Deep parametric Retinex decomposition model for low-light image enhancement. Computer Vision and Image Understanding 2024, 241, 103948.
  14. Liu, W.; Zhao, P.; Zhao, Y.; Fu, Y.; Dai, J.; Zhou, L. A reflectance-correction retinex framework for thermal image enhancement in nondestructive defect detection of CFRP. Measurement 2024, 237, 115070.
  15. Li, C.; He, C. Fractional structure and texture aware model for image Retinex and low-light enhancement. Applied Mathematical Modelling 2024, 130, 496-513.
  16. Ma, T.; Fu, C.; Yang, J.; Zhang, J.; Shang, C. RF-Net: Unsupervised low-light image enhancement based on Retinex and exposure fusion. Computers, Materials & Continua 2023, 77, 1103-1122.
  17. Zhu, J.; Shao, X.-J.; Li, Z.; Lin, C.-H.; Wang, C.-W.-Q.; Jiao, K.; Xu, J.; Pan, H.-X.; Wu, Y. Synthesis of Holmium-Oxide Nanoparticles for Near-Infrared Imaging and Dye- Photodegradation. Molecules 2022, 27, 3522.
  18. Wu, Y.; Lin, Y.; Xu, J. Synthesis of Ag-Ho, Ag-Sm, Ag-Zn, Ag-Cu, Ag-Cs, Ag-Zr, Ag-Er, Ag-Y and Ag-Co metal organic nanoparticles for UV-Vis-NIR wide-range bio-tissue imaging. Photochemical & photobiological sciences 2019, 18, 1081-1091.
  19. Huang, Y.; Yang, R.; Geng, X.; Li, Z.; Wu, Y. Two filters for acquiring the profiles from images obtained from weak-light background, fluorescence microscope, transmission electron microscope, and near-infrared camera. Sensors 2023, 23 (13), 6207.
  20. Wu, Y.; Ou, P.; Song, J.; Zhang, L.; Lin, Y.; Song, P.; Xu, J. Synthesis of praseodymium-and molybdenum- sulfide nanoparticles for dye-photodegradation and near-infrared deep-tissue imaging. Mater. Res. Express 2020, 7, 036203.
  21. Wu, Y.; Ou, P.; Fronczek, F. R.; Song, J.; Lin, Y.; Wen, H. - M.; Xu, J. Simultaneous Enhancement of Near-Infrared Emission and Dye Photodegradation in a Racemic Aspartic Acid Compound via Metal-Ion Modification. ACS Omega 2019, 4,19136-19144.
  22. Chaudhuri, S.; Chatterjee, S.; Katz, N.; Nelson, M.; Goldbaum, M. Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Transactions on Medical Imaging, 1989, 8,263-269.
  23. Lin, Y.; Cao, D.; Zhou, X. Adaptive infrared and visible image fusion method by using rolling guidance filter and saliency detection. Optik 2022, 262, 169218.
  24. Zhang, L.; Yang, X.; Wan, Z.; Cao, D.; Lin, Y. A real-time FPGA implementation of infrared and visible image fusion using guided filter and saliency detection. Sensors 2022, 22, 8487.
  25. Lin, Y. C.; Chiang, P. Y.; Miaou, S. G. Enhancing deep- learning object detection performance based on fusion of infrared and visible images in advanced driver assistance systems. IEEE Access 2022, 105214-105231.
  26. Liu, Y.; Chen, X.; Cheng, J.; Peng, H.; Wang, Z. Infrared and visible image fusion with convolutional neural networks. International Journal of Wavelets, Multiresolution and Information Processing 2018, 16, 1850018.
  27. Wang, H.; Zhang, W.; Ren, P. Self-organized underwater image enhancement. ISPRS Journal of Photogrammetry and Remote Sensing 2024, 215, 1-14.
  28. Jiang, Q.; Gu, Y.; Li, C.; Cong, R.; Shao, F. Underwater image enhancement quality evaluation: Benchmark dataset and objective metric. IEEE Transactions on Circuits and Systems for Video Technology 2022, 32, 5959-5974.
  29. Zhang, W., Wang, Y., Li, C. Underwater image enhancement by attenuated color channel correction and detail preserved contrast enhancement. IEEE Journal of Oceanic Engineering 2022, 47, 718-735.
  30. Zhang, W., Zhuang, P., Sun, H. H., Li, G., Kwong, S., Li, C. Underwater image enhancement via minimal color loss and locally adaptive contrast enhancement. IEEE Transactions on Image Processing 2022,31, 3997-4010.
  31. Zhuang, P., Wu, J., Porikli, F., Li, C. Underwater image enhancement with hyper-laplacian reflectance priors. IEEE Transactions on Image Processing 2022, 31, 5442-5455.
  32. Li, Z.; Lin, Y.; Alam, M. Z.; Wu, Y. Synthesizing Ag+: MgS, Ag+: Nb2S5, Sm3+: Y2S3, Sm3+: Er2S3, and Sm3+: ZrS2 compound nanoparticles for multicolor fluorescence imaging of biotissues. ACS Omega 5, 32868-32876.
  33. Wu, Y.; Ou, P.; Fronczek, F. R.; Song J.; Lin, Y.; Wen, H. M.; Xu, J. Simultaneous enhancement of near-infrared emission and dye photodegradation in a racemic aspartic acid compound via metal-ion modification. ACS Omega 2019, 4, 19136-19144.
  34. Ding, L.; Chen, C.; Shan, X.; Liu, B.; Wang, D.; Du, Z.; Zhao, G.; Su, Q.P.; Yang, Y.; Halkon, B.; Tran, T. T.; Liao, J.; Aharonovich, I.; Zhang, M.; Cheng, F.; Fu, L.; Xu, X.;Wang, F. Optical nonlinearity enabled super-resolved multiplexing microscopy. Advanced Materials 2024, 36, 2308844.
  35. Liu, B.; Liao, J.; Song, Y.; Chen, C.; Ding, L.; Lu, J.; Zhou, J.; Wang, F. Multiplexed structured illumination super- resolution imaging with lifetime-engineered upconversion nanoparticles. Nanoscale Adv. 2022, 4, 30-38.
  36. Verma, N. C.; Yadav, A.; Rao, C.; Mishra, P. M.; Nandi, C. K. Emergence of carbon nanodots as a probe for super-resolution microscopy. The Journal of Physical Chemistry C 2021, 125, 1637-1653.
  37. Nandi, S.; Caicedo, K.; Cognet, L. When super-resolution localization microscopy meets carbon nanotubes. Nanomaterials 2022, 12, 1433.
  38. Li, W.; Kaminski Schierle, G. S.; Lei, B.; Liu, Y.; Kaminski, C. F. Fluorescent nanoparticles for super-resolution imaging. Chemical Reviews 2022,122, 12495-12543.
  39. Schmidt, E.L.; Ou, Z.; Ximendes, E.; Cui, H.; Keck, C.H.C.; Jaque, D.; Hong, G. Near-infrared II fluorescence imaging. Nature Reviews Methods Primers 2024, 4, 23.
  40. Li, Z.; Li, Z.; Zaid, W.; Osborn, M. L.; Li, Y.; Yao, S.; Xu, J. Mouthwash as a non-invasive method of indocyanine green delivery for near-infrared fluorescence dental imaging. J. Biomed Opt. 2022, 27, 066001.
  41. Xu, S.; Sun, P.; Yu, Z.; Chen, K.; Chu, Y.; Wang, S.; Shen, Q.; Chen, P.; Yao, Y.; Fan, Q. Water-soluble lipophilic near-infrared region II fluorophores for high brightness lipid layer and lipid droplets imaging applications. Small 2024, 20, 2406159.
  42. Shen, H.; Sun, F.; Zhu, X.; Zhang, J.; Ou, X.; Zhang, J.; Xu, C.; Sung, H.H.Y.; Williams, I.D.; Chen, S.; Kwok, R.T.K.; Lam, J.W.Y.; Sun, J.; Zhang, F.; Tang, B.Z. Rational design of NIR-II AIEgens with ultrahigh quantum yields for photo- and chemiluminescence imaging. J. Am. Chem. Soc. 2022, 144, 33, 15391-15402.
  43. Chen, L.; Peng, M.; Ouyang, Y.; Chen, J.; Li, H.; Wu, M.;Ou, R.; Zhou, W.; Zhang, C.; Jiang, Y.; Xu, S.; Wu, W.; Jiang, X.; Zhen, X. Tuning second near-infrared fluorescence activation by regulating the excited-state charge transfer dynamics change ratio. J.Am. Chem. Soc. 2025, 147, 20, 17330-17341.
  44. Qi, Q.; Liu, Y.; Puranik, V.; Patra, S.; Svindrych, Z.; Gong, X.; She, Z.; Zhang, Y.; Aprahamian, I. Photoswitchable fluorescent hydrazone for super-resolution cell membrane imaging. J. Am. Chem. Soc. 2025, 147, 19, 16404-16411.
  45. Venkatesh, Y.; Narayan, K. B.; Baumgart, T.; Petersson, E. J. Strategic modulation of polarity and viscosity sensitivity of bimane molecular rotor-based fluorophores for imaging α-synuclein. J. Am. Chem. Soc. 2025, 147, 18, 15115-15125.
  46. Wang, Z.; Kristensen, L. G.; Ho, Y.H.; Liu, Y.; Valencia, L. A.; Nadig, I.; Range, K.L.; Rad, B.; Ralston, C. Y.; Cohen, B. E. A solvatochromic near infrared fluorophore sensitive to the full amyloid beta aggregation pathway. J. Am. Chem. Soc. 2025, 147, 22,18685-18693.
  47. Balamut, B.; Aprahamian, I. Molecular steganography using multistate photoswitchable hydrazones. J. Am. Chem. Soc.2025, 147,23,19444-19449.
  48. Sosnin, D.; Izadyar, M.; Abedi, S.A. A.; Liu, X.; Aprahamian, I. “Clicked” hydrazone photoswitches. J. Am. Chem. Soc. 2025, 147, 18, 14930-14935.
  49. Gong, Q.; Shao, J.; Li, W.; Guo, X.; Ling, S.; Wu, Y.; Wei, Y.; Xu, X.; Jiang, X.; Jiao, L.; Hao, E. Fully conjugated thiophene-fused oligo-BODIPYs: a class of intensely near-infrared absorbing, arc-shaped materials with up to 30 linearly-fused rings. J. Am. Chem. Soc. 2025, 147, 24, 21041-21052.
  50. Xie, W.; Cao, X.; Huang, M.; Xu, K.; Gui, C.; Chen, Z.;Song, X.-F.; Wei, Y.; Liu, H.; Hua, T.; Yang, M.;Yin, X.;Miao, J.; Yang, C. 1,4-Azaborine Participation Enables Inaccessible Cycloarene with Unique Photophysical Properties. J. Am. Chem. Soc.2025, 147, 10, 8178-8187.
  51. Li, Z.; Li, Z.; Zhang, Y.; Wang, H.; Li, X.; Zhang, J.; Zaid, W.; Yao, S.; Xu, J. Human tooth crack image analysis with multiple deep learning approaches. Annals of Biomedical Engineering 2025, 53, 348-357.
  52. Zhang, Y.; Li, Z.; Li, Z.; Wang, H.; Regmi, D.; Zhang, J.; Feng, J.; Yao, S.; Xu, J. Employing Raman spectroscopy and machine learning for the identification of breast cancer. Biological Procedures Online 2024, 26, 28.
  53. Aslam, M.; Rajbdad, F.; Azmat, S.; Li, Z.; Boudreaux, J.P.; Thiagarajan, R.; Yao, S.; Xu, J. A novel method for detection of pancreatic ductal adenocarcinoma using explainable machine learning. Computer Methods and Programs in Biomedicine 2024, 245, 108019.