New AI Tool Enhances Image Quality in Virtual and Augmented Reality
By Dave DeFusco
In virtual and augmented reality, the quality and realism of images are crucial for creating immersive experiences. In a paper published in Scientific Reports, researchers in the department of applied physical sciences describe ConIQA, an AI-based tool that promises to transform how image quality is assessed in these technologies, particularly in applications like computer-generated holography.
“ConIQA’s success is promising for the future of virtual and augmented reality, especially in enhancing the realism of these technologies,” said M. Hossein Eybposh, first author of the paper and a postdoctoral researcher in the department of applied physical sciences. “Although it was specifically tested with computer-generated holography, ConIQA can be applied to a wide range of image synthesis and rendering techniques, making it a versatile tool in the field of image-quality assessment.”
Computer-generated holography (CGH) creates three-dimensional images by simulating the way light waves interact, producing highly realistic and immersive visuals; however, ensuring these images look natural and high-quality to the human eye is challenging. Traditional methods for assessing image quality often fall short, especially when dealing with the unique distortions and artifacts that can occur in CGH, such as ringing, which reduces the sharpness of objects in an image; speckling, which causes a hologram image to look grainy; and quantization errors, which reduce contrast in color and brightness.
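The article describes these degradations only in words, but two of them are easy to mimic at the pixel level. The following minimal NumPy sketch illustrates what speckle and quantization errors do to an image; the function names, noise model, and parameters are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def add_speckle(image, strength=0.2, seed=0):
    """Multiply by random noise to mimic the grainy look of coherent-light speckle."""
    rng = np.random.default_rng(seed)
    noise = 1.0 + strength * rng.standard_normal(image.shape)
    return np.clip(image * noise, 0.0, 1.0)

def quantize(image, levels=8):
    """Reduce the number of intensity levels, flattening contrast in brightness."""
    return np.round(image * (levels - 1)) / (levels - 1)

# Degrade a synthetic gradient the way a holographic rendering pipeline might.
clean = np.linspace(0.0, 1.0, 256).reshape(1, -1).repeat(64, axis=0)
speckled = add_speckle(clean)
quantized = quantize(clean, levels=4)
```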
“Traditional image quality methods struggle to assess these unique distortions effectively,” said Eybposh, “which is why specialized methods are needed for CGH images.”
To tackle these challenges, the researchers developed ConIQA, a deep-learning-based tool that evaluates image quality in a way that closely aligns with human perception. Unlike existing methods, ConIQA can learn from both labeled and unlabeled data, making it especially useful when large amounts of labeled data are difficult and costly to obtain.
To assist in the tool’s development, the researchers created a dataset called HQA1k, which consists of 1,000 natural images paired with CGH-rendered versions. These pairs were evaluated for quality by a group of 13 human participants. ConIQA was then trained on this dataset and additional unlabeled data, allowing it to learn and predict image quality with a high degree of accuracy.
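The paper's actual architecture and training objective aren't reproduced here, but the semi-supervised setup described above, fitting human ratings on labeled image pairs while also learning from unlabeled images, can be sketched roughly. This hypothetical PyTorch example uses consistency regularization, one common way to exploit unlabeled data; ConIQA's own network and scheme may differ:

```python
import torch
import torch.nn as nn

# Hypothetical quality-prediction network; ConIQA's real architecture
# is described in the paper and is not reproduced here.
class QualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # predicts a scalar quality score

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = QualityNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
mse = nn.MSELoss()

def train_step(labeled_imgs, human_scores, unlabeled_a, unlabeled_b):
    """One semi-supervised step: fit human quality ratings on labeled pairs
    and keep predictions consistent across two views of unlabeled images."""
    supervised = mse(model(labeled_imgs).squeeze(1), human_scores)
    consistency = mse(model(unlabeled_a), model(unlabeled_b))
    loss = supervised + 0.1 * consistency  # weighting is an assumption
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```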
When tested against 15 other image-quality-assessment metrics, ConIQA consistently outperformed them, producing assessments that matched human judgments more closely. That alignment makes ConIQA a valuable tool for improving image quality in virtual-reality and augmented-reality applications, ultimately leading to more immersive and realistic experiences for users.
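The article doesn't say how agreement with human judgments was scored, but image-quality-assessment studies conventionally report Pearson and Spearman correlations between a metric's predictions and human mean opinion scores (MOS). A short sketch of that comparison, with made-up numbers rather than data from the paper:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def agreement_with_humans(predicted, human_mos):
    """Correlate a metric's scores with human mean opinion scores."""
    plcc, _ = pearsonr(predicted, human_mos)    # linear agreement
    srocc, _ = spearmanr(predicted, human_mos)  # rank-order agreement
    return plcc, srocc

# Toy illustration only.
human_mos = np.array([4.1, 2.3, 3.7, 1.9, 4.8])
metric_scores = np.array([0.82, 0.41, 0.70, 0.35, 0.90])
print(agreement_with_humans(metric_scores, human_mos))
```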
“ConIQA’s development is a significant step forward in the quest for better image quality in virtual and augmented reality,” said Dr. Nicolas Pégard, senior author of the paper and assistant professor of applied physical sciences. “As these technologies continue to grow and find applications in fields like healthcare, education and entertainment, tools like ConIQA will play a vital role in ensuring that the visual experiences they offer are as realistic and engaging as possible.”
January 6, 2025