Our modern world is changing at incredible speed, and data obsolescence has become a key issue.
Current surveying methods are accurate, but they take time; acquisition can lag behind the pace of change, so even newly delivered data may no longer reflect the situation on the ground.
Nowadays, GPS, surveying, and photogrammetry are well-established technologies with reliable processing pipelines, capable of generating accurate 3D data for maps and digital twins. They can even achieve sub-centimeter accuracy.
The key requirement is to provide data that is both accurate and up to date, so that management and planning systems, as well as decision-makers, can act appropriately for the benefit of users.
Enter the new neural network approach.
Neural networks are layers upon layers of variables that adapt and adjust themselves to the properties of the data they are trained on, becoming capable of performing useful tasks.
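The idea above can be sketched in a few lines. This is a minimal, self-contained example (not any specific production system): a two-layer network whose weights, the "layers of variables", are nudged by gradient descent until they fit a simple target function.

```python
import numpy as np

# A tiny two-layer neural network: weights W1, b1, W2, b2 are the
# "variables" that adjust themselves to the training data.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))   # training inputs
y = X ** 2                         # target property to learn

W1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)

lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)       # hidden layer
    pred = h @ W2 + b2             # output layer
    err = pred - y
    # Backpropagation: each weight moves in the direction that
    # reduces the prediction error.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final MSE: {mse:.4f}")
```

After training, the mean squared error is small: the same weights that started as random noise now encode the shape of the data, which is the whole trick behind the methods discussed next.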
Imagine creating and rendering realistic 3D scenes from a collection of 2D input images. That sounds like photogrammetry and computer vision.
But take a closer look at the new tool – it is impressive; see, for example, @Nvidia's demonstration.
The Neural Radiance Field (NeRF) takes an inverse rendering approach, using artificial intelligence to analyze how light behaves in the real world. It predicts the light radiating in any direction from any point in 3D space, transforming 2D snapshots into rendered 3D scenes. The technique can even work around occlusions, where objects visible in some images are blocked by obstructions in others, and it works at the speed of speech.
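The core mechanism can be illustrated with a short sketch. This is a conceptual toy, not NVIDIA's implementation: in a real NeRF, `radiance_field` would be the trained neural network; here it is replaced by a hypothetical analytic field (a glowing sphere) so the volume-rendering step, compositing color and density samples along each camera ray, can run on its own.

```python
import numpy as np

def radiance_field(points, view_dir):
    """Stand-in for the trained network F(x, d) -> (rgb, sigma).
    Hypothetical analytic field: an orange emissive sphere at the origin."""
    dist = np.linalg.norm(points, axis=-1)
    sigma = np.where(dist < 1.0, 4.0, 0.0)            # density inside sphere
    rgb = np.tile([1.0, 0.4, 0.1], (len(points), 1))  # constant orange color
    return rgb, sigma

def render_ray(origin, direction, n_samples=64, near=0.1, far=5.0):
    """Volume-rendering quadrature along one camera ray, as in NeRF."""
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction
    rgb, sigma = radiance_field(pts, direction)
    delta = np.full(n_samples, (far - near) / n_samples)
    alpha = 1.0 - np.exp(-sigma * delta)              # per-sample opacity
    # Transmittance: how much light survives to reach each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha                           # per-sample contribution
    return (weights[:, None] * rgb).sum(axis=0)       # composited pixel color

color = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(color)
```

A ray passing through the sphere accumulates its color, while a ray that misses it returns black. Training a NeRF runs this same rendering forward from the input photos' camera poses and adjusts the network so the rendered pixels match the photos.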
Is the deep learning approach about to open new frontiers for mapping and updating digital twins of the real world at the pace that continuous change demands?
Surely, we are only seeing the tip of the iceberg.