The creation of virtual scenes and their realistic embedding in the real environment
Reuse License: CC-BY-4.0
Abstract
The creation of virtual worlds has fascinated researchers for several decades. Advances in hardware have made it ever more feasible for the virtual and real worlds to merge seamlessly in the field of mixed reality (MR). This research field combines a wide range of disciplines, such as computer graphics, computer vision, machine learning, and physics. Despite great progress in the last decade, several challenges remain, particularly in achieving real-time performance on devices with limited hardware resources or in providing convincing visual quality with existing methods.
This thesis explores various aspects of MR and presents methods that improve the quality of applications in this domain. It includes a set of techniques for sampling the volume and surface of 3D objects with discrete points. Building on these techniques, a novel method for the real-time simulation of granular materials with large particle counts is introduced. The simulation is structured in two stages: first, a small number of particles is simulated, including all acting forces; then, a much larger number of particles is added for visualization, partially following the velocity field defined by the first stage. This approach enables a more immersive experience in virtual reality applications that simulate interactions with materials such as sand or gravel, for example in heavy equipment operator training.
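The two-stage structure can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis' actual method: the force model (gravity plus a ground plane), the nearest-neighbour velocity lookup, and all names are hypothetical stand-ins for the real dynamics and interpolation kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0 / 60.0
gravity = np.array([0.0, -9.81, 0.0])

# Stage 1: a small set of "guide" particles simulated with the full
# force model (here only gravity and a ground-plane collision).
n_guides = 64
guide_pos = rng.uniform(0.0, 1.0, (n_guides, 3))
guide_vel = np.zeros((n_guides, 3))

def step_guides(pos, vel):
    vel = vel + gravity * dt
    pos = pos + vel * dt
    below = pos[:, 1] < 0.0        # clamp particles to the ground plane
    pos[below, 1] = 0.0
    vel[below, 1] = 0.0
    return pos, vel

# Stage 2: many visual-only particles advected by the velocity field
# of the guides; nearest-neighbour lookup stands in for a smoother
# interpolation kernel.
n_visual = 4096
vis_pos = rng.uniform(0.0, 1.0, (n_visual, 3))

def advect_visual(vis, guides_p, guides_v):
    d = np.linalg.norm(vis[:, None, :] - guides_p[None, :, :], axis=2)
    nearest = np.argmin(d, axis=1)
    return vis + guides_v[nearest] * dt

for _ in range(10):
    guide_pos, guide_vel = step_guides(guide_pos, guide_vel)
    vis_pos = advect_visual(vis_pos, guide_pos, guide_vel)
```

The point of the split is cost: only the few guide particles pay for force evaluation and collision handling, while the many visual particles pay only for a cheap velocity lookup per frame.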
Moreover, this thesis focuses on improving the photometric registration of virtual objects in augmented reality applications. A method for analyzing the real-world lighting situation is presented, in which an artificial neural network estimates geometric and photometric lighting parameters from an RGB camera image. Furthermore, a novel method for the efficient visualization of soft cast shadows of virtual objects is introduced. In this approach, shadow textures are encoded in tiny artificial neural networks, which are queried with the current lighting situation to produce realistic shadows at minimal computational cost.
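The query path of such a shadow network can be sketched as below. This is a toy sketch only: the weights are random rather than trained, and the input layout (2D light direction plus texel coordinates), layer sizes, and names are assumptions, not the thesis' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny MLP standing in for a learned shadow-texture encoder:
# input  = (light_dir_xy, texel_uv), 4 values per query
# output = shadow attenuation in (0, 1) for that texel.
# Weights are random here; in practice they would be trained per object.
W1 = rng.standard_normal((4, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5
b2 = np.zeros(1)

def query_shadow(light_dir_xy, uv):
    x = np.concatenate([light_dir_xy, uv], axis=-1)
    h = np.maximum(x @ W1 + b1, 0.0)             # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid -> (0, 1)

# Evaluate a full 32x32 shadow texture for one light direction.
u, v = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
uv = np.stack([u.ravel(), v.ravel()], axis=1)
light = np.broadcast_to(np.array([0.3, 0.7]), (uv.shape[0], 2))
texture = query_shadow(light, uv).reshape(32, 32)
```

Because the whole network is a few hundred parameters, evaluating it per texel (or per fragment on the GPU) costs little, which is what makes the lighting-dependent query cheap at render time.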
