Research Article Vol. 30, No. 26 / 19 Dec 2022 / Optics Express 46450

Lensless phase-only holographic retinal projection display based on the error diffusion algorithm

Zi Wang,1,2,4 Kefeng Tu,1,2,4 Yujian Pang,1,2 Miao Xu,1,2 Guoqiang Lv,2,* Qibin Feng,1,2 Anting Wang,3 and Hai Ming3

1 National Engineering Laboratory of Special Display Technology, Special Display and Imaging Technology Innovation Center of Anhui Province, Academy of Opto-electric Technology, Hefei University of Technology, Hefei, Anhui 230009, China
2 Anhui Province Key Laboratory of Measuring Theory and Precision Instrument, School of Instrumentation and Opto-Electronics Engineering, Hefei University of Technology, Hefei, Anhui 230009, China
3 Department of Optics and Optical Engineering, University of Science and Technology of China, Hefei 230026, China
4 Equal contributors.
* guoqianglv@hfut.edu.cn

Abstract: Holographic retinal projection display (RPD) can project images directly onto the retina without any lens by encoding a convergent spherical wave phase with the target images. Conventional amplitude-type holographic RPD suffers from strong zero-order light and conjugate noise. In this paper, a lensless phase-only holographic RPD based on the error diffusion algorithm is demonstrated. It is found that direct error diffusion of the complex Fresnel hologram leads to low image quality. Thus, a post-addition phase method based on angular spectrum diffraction is proposed: the spherical wave phase is multiplied after the error diffusion process and acts as an imaging lens. In this way, the error diffusion functions better owing to the reduced phase difference between adjacent pixels, and a virtual image with improved quality is produced. The viewpoint is deflected simply by changing the post-added spherical phase. A full-color holographic RPD with an adjustable eyebox is demonstrated experimentally with a time-multiplexing technique.
© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

#477816 https://doi.org/10.1364/OE.477816 Received 11 Oct 2022; revised 17 Nov 2022; accepted 17 Nov 2022; published 7 Dec 2022

1. Introduction

Holographic retinal projection display (RPD) is an emerging augmented reality (AR) near-eye display (NED) technology [1,2]. It forms a focus-free image directly on the retina by encoding a convergent spherical wave phase with the target image. The always-in-focus image naturally avoids the fundamental vergence-accommodation conflict (VAC) of conventional NEDs [3,4]. Based on this characteristic, holographic RPD has wide applications in visual aids for visually impaired people, stereoscopic display without visual fatigue, safer auxiliary vehicle displays, and so on. Its lensless architecture keeps the total system compact and aberration-free. Compared with lens-type RPDs or laser scanning displays, which usually use a lens or a lens-type holographic optical element (HOE) to converge the image light [5–13], holographic RPD offers flexible control of beam width, image depth, and convergence position, so adjustable viewpoint positions that meet the requirement of eye movement are easily realized. Previous holographic RPDs usually interfere the complex hologram with a reference light to convert it into an amplitude-type hologram [14–21], which suffers from strong direct-current (DC) noise and conjugate noise. The encoded DC term occupies most of the energy and causes low optical efficiency. Many works have therefore converted the complex hologram into a phase-only hologram for high diffraction efficiency and low noise [22–29]. The Gerchberg-Saxton (GS) algorithm, the error diffusion algorithm, and the double-phase method are the three
widely used algorithms for generating phase-only holograms [22]. However, the GS algorithm is computationally inefficient and introduces random noise into the reconstructed image, while double-phase holography reduces the spatial resolution and produces more complex spectral components. In this work, the bidirectional error diffusion method is adopted to optimize the phase distribution on the hologram plane for RPD; it reconstructs high-quality images and suppresses speckle noise. Although some researchers have reported phase-only holographic RPDs, an optical lens is still used to converge the light rays, which increases aberration and system complexity [30,31].

In this paper, we propose a phase-only holographic RPD with a lensless architecture. First, the error diffusion algorithm is applied directly to the complex Fresnel hologram and its performance is analyzed. The reconstructed image quality is found to be low, especially at the image edges, mainly because of the phase variation of the pre-added spherical wave phase. Second, a post-addition phase method based on angular spectrum diffraction is proposed to improve the error diffusion process: the phase difference among adjacent pixels caused by the pre-added spherical wave phase is removed, and the reconstruction quality improves. Finally, a full-color holographic RPD with adjustable viewpoint positions is demonstrated in an optical experiment.

2. Direct error diffusion of complex Fresnel hologram

Figure 1 shows the conventional Fresnel diffraction calculation of holographic RPD. The target image A(x1, y1) is first multiplied with a convergent spherical wave phase to form the complex amplitude distribution:

U(x1, y1) = A(x1, y1) · exp[−jk(x1² + y1²) / (2(z1 + z2))],  (1)

where k = 2π/λ is the wave number, z1 is the distance from the target image to the hologram, and z2 is the distance from the hologram to the pupil plane.
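As a minimal sketch of Eq. (1), assuming a NumPy environment (the grid size and the stand-in target image are illustrative, not the authors' code):

```python
import numpy as np

# Illustrative parameters matching the simulation in Section 2
wavelength = 520e-9          # lambda, in meters
k = 2 * np.pi / wavelength   # wave number
z1, z2 = 0.474, 0.130        # image-to-hologram and hologram-to-pupil distances (m)
pitch = 3.6e-6               # sampling pitch of the image plane (m)
N = 1024                     # grid size (reduced for the sketch)

# Target image A(x1, y1): a simple bright square as a stand-in
A = np.zeros((N, N))
A[N//4:3*N//4, N//4:3*N//4] = 1.0

# Sampled coordinates of the image plane, centered on the optical axis
x1 = (np.arange(N) - N/2) * pitch
X1, Y1 = np.meshgrid(x1, x1)

# Eq. (1): multiply by a convergent spherical phase focused at z1 + z2
U = A * np.exp(-1j * k * (X1**2 + Y1**2) / (2 * (z1 + z2)))
```

The phase factor leaves the amplitude |U| = A untouched; only the wavefront curvature is encoded, so the light converges toward the pupil plane after propagation.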
The added spherical wave is focused at the pupil plane. Then, the complex amplitude distribution H(x2, y2) on the hologram plane is calculated through Fresnel diffraction based on a single fast Fourier transform (S-FFT):

H(x2, y2) = exp[jk(x2² + y2²) / (2z1)] · F{ U(x1, y1) · exp[jk(x1² + y1²) / (2z1)] }.  (2)

The complex amplitude distribution H(x2, y2) is usually encoded into an amplitude-type hologram as:

HA(x2, y2) = 2Re[H(x2, y2)] + C,  (3)

where C is a constant DC bias. Although amplitude-type encoding is accurate in the sense that it preserves the complex amplitude information, the strong DC and conjugate light cause low efficiency and a small eyebox. To eliminate the DC and conjugate terms, the error diffusion algorithm is adopted to convert the complex amplitude distribution to a phase-only hologram. The error diffusion algorithm extracts the phase of each pixel sequentially and simultaneously spreads the error caused by the phase extraction to adjacent pixels in a regular pattern [21]. Starting from the first pixel, the phase is extracted by:

Hp(x2, y2) = exp{j · arg[H(x2, y2)]},  (4)

where arg denotes the phase extraction operation. The error is expressed as:

E(x2, y2) = H(x2, y2) − exp{j · arg[H(x2, y2)]}.  (5)

Fig. 1. Conventional Fresnel diffraction calculation of holographic RPD.

Fig. 2. (a) Scanning sequence of error diffusion. The errors are diffused to neighbor pixels in (b) odd rows and (c) even rows.

Next, the error is diffused to the neighborhood pixels that have not been visited previously.
Its neighborhood members are updated according to the following equations:

H(x2, y2 + 1) = H(x2, y2 + 1) + w1·E(x2, y2),  (6)
H(x2 + 1, y2 − 1) = H(x2 + 1, y2 − 1) + w2·E(x2, y2),  (7)
H(x2 + 1, y2) = H(x2 + 1, y2) + w3·E(x2, y2),  (8)
H(x2 + 1, y2 + 1) = H(x2 + 1, y2 + 1) + w4·E(x2, y2),  (9)

where the weights w1 = 7/16, w2 = 3/16, w3 = 5/16, and w4 = 1/16 are used throughout the calculation. Equations (6)–(9) describe the error diffusion for pixels on odd rows, which are scanned from left to right. As shown in Fig. 2, the scan directions of odd and even rows differ. On even rows, the scan runs from right to left and the errors are diffused as:

H(x2, y2 − 1) = H(x2, y2 − 1) + w1·E(x2, y2),  (10)
H(x2 + 1, y2 + 1) = H(x2 + 1, y2 + 1) + w2·E(x2, y2),  (11)
H(x2 + 1, y2) = H(x2 + 1, y2) + w3·E(x2, y2),  (12)
H(x2 + 1, y2 − 1) = H(x2 + 1, y2 − 1) + w4·E(x2, y2).  (13)

Figure 3 shows the simulated reconstruction results of the hologram after error diffusion.

Fig. 3. (a)-(d) Original images. (e)-(h) The corresponding simulated reconstruction results of the direct error diffusion method.

The parameters are set as: z1 = 474 mm, z2 = 130 mm, λ = 520 nm, and the hologram contains 4096 × 2160 pixels with a 3.6 µm pixel pitch. To reconstruct the image from the hologram, the wavefront is first propagated to the pupil plane and multiplied with a circular function that acts as the human pupil. Since the human pupil diameter usually ranges from 2 to 8 mm, a 4 mm aperture is chosen for the circular function. The filtered wavefront is then back-propagated to the image plane to reconstruct the image. Figures 3(e)–(h) show the reconstruction results corresponding to the original images in Figs. 3(a)–(d). The reconstructed images based on the conventional Fresnel diffraction calculation of holographic RPD show severe noise, especially at the edges.
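The serpentine scan of Eqs. (4)–(13) can be sketched as follows, assuming a NumPy environment (an illustrative implementation, not the authors' code; out-of-bounds neighbors at the row ends are simply skipped):

```python
import numpy as np

def bidirectional_error_diffusion(H):
    """Convert a complex hologram to phase-only by serpentine error diffusion.

    Sketch of Eqs. (4)-(13): rows are scanned alternately left-to-right and
    right-to-left, and the complex quantization error of each pixel is
    spread to its not-yet-visited neighbors.
    """
    H = H.astype(np.complex128).copy()
    rows, cols = H.shape
    w1, w2, w3, w4 = 7/16, 3/16, 5/16, 1/16   # weights used in the paper
    for r in range(rows):
        forward = (r % 2 == 0)                # 1st, 3rd, ... rows: left to right
        step = 1 if forward else -1           # in-row scan direction
        cs = range(cols) if forward else range(cols - 1, -1, -1)
        for c in cs:
            quantized = np.exp(1j * np.angle(H[r, c]))   # Eq. (4): keep phase only
            err = H[r, c] - quantized                     # Eq. (5)
            H[r, c] = quantized
            # Eqs. (6)-(9) on odd rows, Eqs. (10)-(13) on even rows:
            # the "next" column flips with the scan direction.
            if 0 <= c + step < cols:
                H[r, c + step] += w1 * err
            if r + 1 < rows:
                if 0 <= c - step < cols:
                    H[r + 1, c - step] += w2 * err
                H[r + 1, c] += w3 * err
                if 0 <= c + step < cols:
                    H[r + 1, c + step] += w4 * err
    return H
```

Every output pixel has unit amplitude, so the result can be displayed directly on a phase-only SLM; the weights sum to 1, so each pixel's quantization error is fully redistributed rather than discarded.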
The peak signal-to-noise ratios (PSNR) are quite low; the quality of the reconstructed images is limited by the error diffusion algorithm itself. This can be explained by the phase variation of the added spherical wave phase. In the conventional holographic RPD hologram calculation, the target image is first multiplied with a spherical wave phase, as shown in Eq. (1), and propagated to the hologram plane. Compared with a uniform phase, the spherical wave phase increases the phase difference among adjacent pixels, especially at the image edges. Error diffusion works by spreading error to adjacent pixels; if the phase variations among adjacent pixels are large, it cannot function well.

Fig. 4. (a) Two reconstruction results with z2 = 60 mm and z2 = 200 mm. (b) The relation curve between z2 and image quality.

To verify this inference, simulated reconstruction results with different values of z2 are presented in Fig. 4, with z1 unchanged. Figure 4(a) shows that the image quality with z2 = 200 mm is better than that with z2 = 60 mm, and Fig. 4(b) shows that the PSNR increases with z2. As z2 increases, the focal length of the spherical wave increases and its curvature decreases, which results in a smaller phase difference between adjacent pixels. Thus, increasing z2 improves the effect of error diffusion. However, a larger z2 means a longer eye relief, reducing the compactness of the total system, so there is a trade-off between image quality and system compactness in the choice of z2.

3. Post-addition phase method based on angular spectrum diffraction

To improve the error diffusion process for better reconstruction quality, we propose a post-addition phase method.
Compared with the aforementioned calculation process, the difference is that the spherical wave phase is added after the error diffusion to reduce the conversion error. Figure 5 shows that the proposed method contains three steps. First, the target image is multiplied with a uniform phase at the image plane and propagated over a distance z0 to the hologram plane by angular spectrum diffraction (ASD):

H(x2, y2) = F⁻¹{ F[A(x1, y1)] · exp[jkz0·(1 − λ²fx² − λ²fy²)^(1/2)] },  (14)

where (fx, fy) are the spatial frequencies. Compared with the previous spherical wave phase, the uniform phase causes far smaller phase variations in the complex hologram. Second, H(x2, y2) is converted to an intermediate phase-only hologram Hm(x2, y2) by error diffusion. Finally, Hm(x2, y2) is multiplied with a convergent spherical wave phase to form the final phase hologram:

Hp(x2, y2) = Hm(x2, y2) · exp[−jk(x2² + y2²) / (2z2)].  (15)

Note that the post-added spherical phase acts as a lens with focal length z2, so the target image is imaged to a magnified virtual image according to the lens imaging equation:

1/z0 + 1/(−z1) = 1/z2,  (16)

which gives the virtual image distance z1 = z2·z0 / (z2 − z0). To ensure that a virtual image is produced, the object distance z0 should be smaller than the focal length z2. This is why angular spectrum diffraction is used: it is suitable for near-distance diffraction calculation. Adjustable image depth is obtained simply by adjusting z0. The virtual image is magnified by z1/z0, and its size is:

S = N∆x2 · z1/z0 = N∆x2 · (z1 + z2)/z2,  (17)

where N is the pixel number and ∆x2 is the pixel pitch of the hologram. The field of view (FOV) is therefore:

FOV = 2 arctan[S / (2(z1 + z2))] = 2 arctan[N∆x2 / (2z2)].  (18)

Another advantage of ASD is that the image size remains unchanged regardless of the diffraction distance, so the FOV is the same at every image depth. However, in conventional
Fresnel calculation, the FOV changes with the image depth [17].

Fig. 5. (a) The process of generating phase-only holograms by the post-addition phase method. (b) The principle of retinal projection display based on the post-addition phase method.

Fig. 6. (a)-(d) The simulated reconstruction results of the proposed post-addition phase method. (e)-(h) The optical reconstruction results of the direct error diffusion method. (i)-(l) The optical reconstruction results of the proposed post-addition phase method.

Figures 6(a)–(d) show the simulated reconstruction results of the proposed method. Compared with the results in Fig. 3, the PSNRs of the reconstructed images are greatly increased, mainly because the image edges are much improved.

The local fringe frequency of the post-added spherical phase along the x-direction is:

f_local = (1/2π) · d/dx2 [−k·x2² / (2z2)] = −x2 / (λz2).  (19)

At the edge of the hologram, the maximum local fringe frequency is N∆x2 / (2λz2). This must not exceed the Nyquist frequency of the hologram, leading to:

N∆x2 / (2λz2) < 1 / (2∆x2)  ⇒  z2 > N∆x2² / λ.  (20)

Thus, there is a lower limit on the focal length z2.

An optical experiment was performed to verify the proposed method. A laser beam with 532 nm wavelength was collimated to illuminate a phase-type SLM (3.6 µm pixel pitch, 4096 × 2160 resolution) loaded with the hologram. The distances z1 and z2 were set to 1.5 m and 0.11 m. Figures 6(e)–(h) show the optical reconstruction results of the conventional direct error diffusion method: in the red boxes, strong noise appears and degrades the image quality, consistent with the simulation results. Figures 6(i)–(l) show the optical reconstruction results of the proposed post-addition phase method.
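The three steps of Eqs. (14)–(15) and the imaging geometry of Eqs. (16)–(18) can be sketched numerically, assuming a NumPy environment (an illustrative sketch on a reduced grid, not the authors' code; a plain phase extraction stands in for the error diffusion step):

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Eq. (14): propagate a sampled field over distance z via the angular spectrum."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)                # spatial frequencies fx, fy
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX)**2 - (wavelength * FY)**2
    kernel = np.exp(1j * 2*np.pi/wavelength * z * np.sqrt(np.maximum(arg, 0)))
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

wavelength, pitch, N = 532e-9, 3.6e-6, 512         # reduced grid for the sketch
z2 = 0.11                                          # focal length of the post-added phase (m)
z0 = 0.10                                          # object distance; z0 < z2 gives a virtual image

# Step 1: target image with uniform phase, propagated to the hologram plane
A = np.random.rand(N, N)                           # stand-in target image
H = angular_spectrum(A, wavelength, pitch, z0)

# Step 2: error diffusion would convert H to Hm here; plain phase
# extraction is used as a stand-in in this sketch
Hm = np.exp(1j * np.angle(H))

# Step 3, Eq. (15): multiply by the convergent spherical phase (the "lens")
x = (np.arange(N) - N/2) * pitch
X, Y = np.meshgrid(x, x)
Hp = Hm * np.exp(-1j * 2*np.pi/wavelength * (X**2 + Y**2) / (2*z2))

# Eqs. (16)-(18): virtual image distance, magnification, and FOV
z1 = z2 * z0 / (z2 - z0)                           # 1.1 m for these values
magnification = z1 / z0                            # equals (z1 + z2) / z2
fov = 2 * np.arctan(N * pitch / (2 * z2))          # wavelength-independent
```

With z0 = 0.10 m and z2 = 0.11 m, the virtual image appears at z1 = 1.1 m with a magnification of 11, and the two forms of the magnification in Eq. (17) agree.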
Since the spherical wave phase no longer affects the error diffusion process, the quality of the reconstructed images is improved. The experimental improvement is not as large as in the simulation because of speckle noise. The left parts of Figs. 6(i)–(l) look darker than the other regions because the intensity of the perceived image is modulated by sinc²(px·x/λz1, px·y/λz1) due to diffraction at the limited pixel aperture px, so the edges are darker than the center. In addition, a deflected spherical phase is used to separate the viewpoint from the zero-order light (caused by the dead-space area of the SLM), so the image position is deflected as well, which increases the nonuniformity of the intensity distribution. This can be improved by intensity compensation [19].

4. Full-color holographic RPD with adjustable viewpoint positions

Next, a full-color holographic RPD is demonstrated in Fig. 7. The R, G, and B sub-holograms were sequentially loaded on the SLM while the R, G, and B lasers synchronously illuminated it; owing to the persistence of vision, a full-color image is perceived. Figure 7(b) shows the optical reconstruction results when the camera lens was focused at 0.8 m and 1.6 m, respectively: the virtual image is always in focus while the real objects go out of focus. Another advantage of the proposed method is that it has no chromatic dispersion. In the conventional Fresnel diffraction method, the image sizes of the different colors scale with their wavelengths; the blue laser reconstructs the smallest image, so the red and green channels must be demagnified to match the blue channel. In the proposed method, ASD reconstructs the same image size regardless of wavelength, and in full-color display each sub-hologram is multiplied with a spherical phase of the same focal length, so the imaging relations of the three wavelengths are identical.
To match the pupil position, the viewpoint can be adjusted by adding a deflected spherical wave phase in place of Eq. (15):

Hp(x2, y2) = Hm(x2, y2) · exp{−jk[(x2 − xm)² + (y2 − ym)²] / (2z2)},  (21)

where (xm, ym) is the deflected viewpoint position. In Fig. 8, four deflected viewpoints with a 3 mm interval were generated sequentially, and the corresponding reconstruction results were captured. The viewpoint shift is easily confirmed by the change in relative position between the real objects and the virtual image. With the help of a pupil tracking technique, the viewpoint position can be freely adjusted to coincide with the pupil.

Fig. 7. (a) The process of computing the RGB three-channel hologram and the experimental setup for full-color retinal projection display. (b) Optical reconstruction results captured at different depths.

Fig. 8. Positions of (a) viewpoint 1, (b) viewpoint 2, (c) viewpoint 3, and (d) viewpoint 4 in the pupil plane and their reconstruction results.

The maximum diffraction angle of the SLM is:

βmax = sin⁻¹(λ / ∆x2).  (22)

The maximum adjustable range of the viewpoint is then:

E = z2 · tan βmax = z2 · tan[sin⁻¹(λ / ∆x2)] ≈ λz2 / ∆x2.  (23)

Although all the results presented are for 2D images, the proposed method can support 3D display in three ways. First, by combining binocular parallax-based 3D display with the proposed RPD, two parallax images are projected onto the retinas of the left and right eyes, realizing 3D display without vergence-accommodation conflict. Second, in our previous work on holographic super multi-view (SMV) display [21], multiple parallax images of 3D objects captured from different viewpoints are converged into the pupil, which gives the retinal projection display monocular depth cues.
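The deflected phase of Eq. (21) and the limits in Eqs. (20), (22), and (23) can be checked numerically for the quoted experimental parameters, assuming a NumPy environment (a sketch on a reduced grid; the 3 mm deflection mirrors the viewpoint interval of Fig. 8):

```python
import numpy as np

wavelength, pitch, z2 = 532e-9, 3.6e-6, 0.11
N = 512  # reduced grid for the sketch

# Eq. (20): lower limit of the focal length set by the Nyquist frequency.
# For the full 4096-pixel hologram width this is about 0.1 m, so the
# experimental z2 = 0.11 m satisfies the bound.
z2_min = 4096 * pitch**2 / wavelength

# Eqs. (22)-(23): maximum diffraction angle and adjustable viewpoint range
beta_max = np.arcsin(wavelength / pitch)
E = z2 * np.tan(beta_max)            # close to lambda * z2 / pitch, about 16 mm

# Eq. (21): deflected convergent spherical phase for a viewpoint at (xm, ym)
xm, ym = 3e-3, 0.0                   # 3 mm horizontal deflection
x = (np.arange(N) - N/2) * pitch
X, Y = np.meshgrid(x, x)
k = 2 * np.pi / wavelength
deflected = np.exp(-1j * k * ((X - xm)**2 + (Y - ym)**2) / (2 * z2))
```

Since the eyebox range of roughly 16 mm comfortably exceeds the 2–8 mm pupil diameter, the four viewpoints of Fig. 8 all fall within the adjustable range.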
This approach is based on multiple 2D parallax images and provides a correct accommodation depth cue; thus, combining the proposed RPD with SMV realizes 3D display. In addition, in future work we will study how to combine multi-plane display with the proposed RPD to provide depth cues, which can be useful for 3D RPD applications.

5. Conclusion

In this paper, a lensless phase-only holographic RPD with improved image quality is proposed. The error diffusion algorithm is adopted to convert the complex Fresnel hologram to a phase-only hologram, and its performance is examined and analyzed. Direct error diffusion is found to work poorly because of the phase variations of the pre-added spherical wave phase. A post-addition phase method based on angular spectrum diffraction is proposed to make the error diffusion algorithm more effective: the post-added spherical phase acts as a lens and produces a virtual image, and the image quality is improved compared with direct error diffusion. A full-color holographic RPD with adjustable viewpoint position is demonstrated with a time-multiplexing technique; each color channel shares the same FOV and no chromatic dispersion appears. The viewpoint is deflected simply by changing the post-added spherical phase. The proposed method is promising for future RPD near-eye displays with compact structure and adjustable eyebox.

Funding. National Natural Science Foundation of China (61805065, 62275071); Major Science and Technology Projects in Anhui Province (202203a05020005); Fundamental Research Funds for the Central Universities (JZ2021HGTB0077).

Disclosures. The authors declare no conflicts of interest.

Data availability. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Xiong, E. L. Hsiang, Z. He, T. Zhan, and S. T.
Wu, “Augmented reality and virtual reality displays: emerging technologies and future perspectives,” Light: Sci. Appl. 10(1), 216 (2021).
2. Z. He, X. Sui, G. Jin, and L. Cao, “Progress in virtual reality and augmented reality based on holographic display,” Appl. Opt. 58(5), A74–A81 (2019).
3. C. P. Chen, L. Zhou, J. Ge, Y. Wu, L. Mi, Y. Wu, B. Yu, and Y. Li, “Design of retinal projection displays enabling vision correction,” Opt. Express 25(23), 28223–28235 (2017).
4. L. Mi, C. P. Chen, Y. Lu, W. Zhang, J. Chen, and N. Maitlo, “Design of lensless retinal scanning display with diffractive optical element,” Opt. Express 27(15), 20493–20507 (2019).
5. X. Shi, J. Liu, Z. Zhang, Z. Zhao, and S. Zhang, “Extending eyebox with tunable viewpoints for see-through near-eye display,” Opt. Express 29(8), 11613–11626 (2021).
6. C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36(6), 1–13 (2017).
7. J. Xiong, Y. Li, K. Li, and S. T. Wu, “Aberration-free pupil steerable Maxwellian display for augmented reality with cholesteric liquid crystal holographic lenses,” Opt. Lett. 46(7), 1760–1763 (2021).
8. M. K. Hedili, B. Soner, E. Ulusoy, and H. Urey, “Light-efficient augmented reality display with steerable eyebox,” Opt. Express 27(9), 12572–12581 (2019).
9. S. B. Kim and J. H. Park, “Optical see-through Maxwellian near-to-eye display with an enlarged eyebox,” Opt. Lett. 43(4), 767–770 (2018).
10. C. Jang, K. Bang, G. Li, and B. Lee, “Holographic near-eye display with expanded eye-box,” ACM Trans. Graph. 37(6), 1–14 (2018).
11. S. Zhang, Z. Zhang, and J. Liu, “Adjustable and continuous eyebox replication for a holographic Maxwellian near-eye display,” Opt. Lett. 47(3), 445–448 (2022).
12. D. Wang, C. Liu, C. Shen, Y. Xing, and Q. H.
Wang, “Holographic capture and projection system of real object based on tunable zoom lens,” PhotoniX 1(1), 6 (2020).
13. Y. Wu, C. Chen, L. Mi, W. Zhang, J. Zhao, Y. Lu, W. Guo, B. Yu, Y. Li, and N. Maitlo, “Design of retinal-projection-based near-eye display with contact lens,” Opt. Express 26(9), 11553–11567 (2018).
14. Y. Takaki and N. Fujimoto, “Flexible retinal image formation by holographic Maxwellian-view display,” Opt. Express 26(18), 22985–22999 (2018).
15. C. Chang, W. Cui, J. Park, and L. Gao, “Computational holographic Maxwellian near-eye display with an expanded eyebox,” Sci. Rep. 9(1), 18749 (2019).
16. Z. Wang, X. Zhang, G. Lv, Q. Feng, H. Feng, and A. Wang, “Hybrid holographic Maxwellian near-eye display based on spherical wave and plane wave reconstruction for augmented reality display,” Opt. Express 29(4), 4927 (2021).
17. Z. Wang, X. Zhang, K. Tu, G. Lv, Q. Feng, A. Wang, and H. Ming, “Lensless full-color holographic Maxwellian near-eye display with a horizontal eyebox expansion,” Opt. Lett. 46(17), 4112–4115 (2021).
18. Z. Wang, X. Zhang, G. Lv, Q. Feng, A. Wang, and H. Ming, “Conjugate wavefront encoding: an efficient eyebox extension approach for holographic Maxwellian near-eye display,” Opt. Lett. 46(22), 5623–5626 (2021).
19. Z. Wang, K. Tu, Y. Pang, G. Lv, Q. Feng, A. Wang, and H. Ming, “Enlarging the FOV of lensless holographic retinal projection display with two-step Fresnel diffraction,” Appl. Phys. Lett. 121(8), 081103 (2022).
20. Z. Wang, K. Tu, Y. Pang, X. Zhang, G. Lv, Q. Feng, A. Wang, and H. Ming, “Simultaneous multi-channel near-eye display: a holographic retinal projection display with large information content,” Opt. Lett. 47(15), 3876–3879 (2022).
21. X. Zhang, Y. Pang, T. Chen, K. Tu, Q. Feng, G. Lv, and Z. Wang, “Holographic super multi-view Maxwellian near-eye display with eyebox expansion,” Opt. Lett. 47(10), 2530–2533 (2022).
22. P. W. M. Tsang and T. C.
Poon, “Review on the state-of-the-art technologies for acquisition and display of digital holograms,” IEEE Trans. Ind. Inf. 12(3), 886–901 (2016).
23. P. W. M. Tsang and T. C. Poon, “Novel method for converting digital Fresnel hologram to phase-only hologram based on bidirectional error diffusion,” Opt. Express 21(20), 23680–23686 (2013).
24. X. Sui, Z. He, G. Jin, and L. Cao, “Spectral-envelope modulated double-phase method for computer-generated holography,” Opt. Express 30(17), 30552–30563 (2022).
25. D. Pi, J. Liu, and Y. Wang, “Review of computer-generated hologram algorithms for color dynamic holographic three-dimensional display,” Light: Sci. Appl. 11(1), 231 (2022).
26. Y. W. Zheng, D. Wang, Y. L. Li, N. N. Li, and Q. H. Wang, “Holographic near-eye display system with large viewing area based on liquid crystal axicon,” Opt. Express 30(19), 34106–34116 (2022).
27. S. Jiao, D. Zhang, C. Zhang, Y. Gao, T. Lei, and X. Yuan, “Complex-amplitude holographic projection with a digital micromirror device (DMD) and error diffusion algorithm,” IEEE J. Sel. Top. Quantum Electron. 26(5), 1–8 (2020).
28. X. Yang, S. Jiao, Q. Song, G. B. Ma, and W. Cai, “Phase-only color rainbow holographic near-eye display,” Opt. Lett. 46(21), 5445–5448 (2021).
29. H. Pang, J. Z. Wang, M. Zhang, A. X. Cao, L. F. Shi, and Q. L. Deng, “Non-iterative phase-only Fourier hologram generation with high image quality,” Opt. Express 25(13), 14323–14333 (2017).
30. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 1–16 (2017).
31. W. T. Song, X. Li, Y. J. Zheng, Y. Liu, and Y. T. Wang, “Full-color retinal-projection near-eye display using a multiplexing-encoding holographic method,” Opt. Express 29(6), 8098–8107 (2021).