Is Unreal Engine 5 really capable of infinite detail?




Yes and no. Epic uses the word "infinite" mostly in a figurative sense; it is not meant to be taken literally.

Epic unveiled Unreal Engine 5 yesterday, and the videos of the technical demo called "Lumen in the Land of Nanite" raised plenty of questions. The new game engine brings several new modules: Chaos for physics, Niagara for particle simulation, Nanite for micropolygon-level geometry rendering, and Lumen for global illumination.

In Nanite's case, the word "infinite" has come up as well: the system can display a ZBrush sculpt or a photogrammetry scan of a real object directly, without baking the detail into normal maps or authoring LODs by hand. These are bold claims, so an obvious question followed: how? Infinity is not as literal as it sounds. No hardware can handle just anything, and each machine's physical capabilities will still impose limits, but Unreal Engine 5 itself can in effect work with unbounded geometry. This is largely a theoretical statement, because with truly infinite triangles in a scene, a frame could never finish rendering. What Epic means is not a literal number but the fact that the renderer was designed to accept film-quality source content and handle it in real time. Nanite is the basis for this: it does not draw the geometry in its entirety, but works as a virtualized micropolygon system that conveys the quality of the loaded content at sufficient detail, rasterizing down to roughly one triangle per pixel. That is enough for the myriad triangles to appear in their full glory.
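
To give a feel for what "one triangle per pixel" means in practice, here is a minimal, hypothetical C++ sketch of screen-space-error LOD selection: a cluster of geometry is refined only while its simplification error would still be visible at the current distance. This is a generic technique used to illustrate the idea, not Epic's actual Nanite implementation, and all the structures and thresholds below are illustrative assumptions.

```cpp
// Sketch: pick the coarsest level of detail for a geometry cluster whose
// simplification error projects to less than ~1 pixel on screen, i.e. the
// point where extra triangles would only be sub-pixel noise.
#include <cmath>
#include <cstddef>
#include <vector>

struct ClusterLod {
    float geometricError;      // world-space error introduced by this LOD (meters)
    std::size_t triangleCount;
};

struct Camera {
    float verticalFovRadians;
    float viewportHeightPixels;
};

// Projects a world-space error at a given distance into pixels.
float ProjectedErrorPixels(float worldError, float distance, const Camera& cam)
{
    // Pixels covered by one world-space meter at this distance.
    float pixelsPerMeter =
        cam.viewportHeightPixels / (2.0f * distance * std::tan(cam.verticalFovRadians * 0.5f));
    return worldError * pixelsPerMeter;
}

// lods is ordered fine -> coarse; returns the coarsest LOD that is still
// visually lossless at this distance.
std::size_t SelectLod(const std::vector<ClusterLod>& lods, float distance, const Camera& cam)
{
    for (std::size_t i = lods.size(); i-- > 0;) {
        if (ProjectedErrorPixels(lods[i].geometricError, distance, cam) <= 1.0f)
            return i;
    }
    return 0;   // fall back to the full-detail mesh
}
```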

It is no accident that the demo ran on a PlayStation 5. Today the biggest barrier to handling highly detailed geometry is no longer the raw computing power of the GPU, but the legacy graphics pipeline: it has been around for roughly twenty years and, despite being continuously extended, it imposes too many restrictions today. Around the launch of the PlayStation 5 it was revealed that one of the console's key innovations is the primitive shader, a next-generation geometry pipeline that replaces the current model. With it, the displayed geometry can be processed orders of magnitude more efficiently, so the machine gets a huge boost not only in hardware but at the software level as well. In essence, this is what makes micropolygon-level rendering possible, so there is no theoretical limit on the number of polygons; and since the previous generation could already handle draw calls at a fairly serious scale, that part is not a real problem either.
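
As a rough illustration of why a modern geometry pipeline scales better, the hypothetical C++ sketch below rejects an entire "meshlet" (a small cluster of triangles) with a single conservative cone test before any per-vertex work happens. This is a generic cluster-culling idea, not the actual primitive-shader or Unreal Engine 5 pipeline; the structures and fields are assumptions.

```cpp
// Sketch: bulk backface culling of a meshlet. One cheap test can discard
// dozens of triangles at once, which is the kind of work the new geometry
// pipelines move onto the GPU.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  Sub(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float Dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float Length(Vec3 v)       { return std::sqrt(Dot(v, v)); }

struct Meshlet {
    Vec3  boundsCenter;   // bounding sphere of a few dozen triangles
    float boundsRadius;
    Vec3  coneAxis;       // average triangle normal (unit length)
    float coneCutoff;     // cosine of the normal cone half-angle
};

// Conservative test: the whole cluster is skipped only if every triangle in
// it is guaranteed to face away from the camera.
bool IsBackfaceCulled(const Meshlet& m, Vec3 cameraPos)
{
    Vec3  toMeshlet = Sub(m.boundsCenter, cameraPos);
    float dist      = Length(toMeshlet);
    return Dot(toMeshlet, m.coneAxis) >= m.coneCutoff * dist + m.boundsRadius;
}
```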

Since the hardware and, essentially, the graphics API side now provide the necessary computing capacity and performance, only the problem of data volume remains, because highly detailed models require a lot of memory. Even a system that handles geometry incredibly efficiently is of little use if the data does not fit in memory. This is where the PlayStation 5's other big innovation comes in: the ultra-fast SSD combined with hardware-managed hierarchical paging. In fact, this is the main component that powers Nanite. The hardware can operate as if the application's memory were effectively the SSD itself: if the content occupies 100 GB of space, the PlayStation 5 behaves much like a machine with 100 GB of system memory. Part of the physically installed system memory serves the processor directly, while another part acts as a large cache into which only the data actually needed for rendering is copied, no more and no less. This extremely efficient streaming is what makes the PlayStation 5 so capable, because it is no longer managed by software; a dedicated hardware unit decides what stays resident, practically without mistakes and very efficiently. This makes it possible to stream content that previously seemed unimaginable with 16 GB of system memory. And all of this applies to textures as well, since Unreal Engine 5 uses virtual textures.
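
On the PlayStation 5 this is done transparently by hardware, but the underlying idea can be sketched in software. The hypothetical C++ class below keeps only a fixed budget of pages resident, streams missing pages in from the SSD on demand, and evicts the least recently used page when the budget is full; the page identifiers, budget, and the loadFromSsd callback are illustrative assumptions, not a real console or engine API.

```cpp
// Sketch: a demand-paged cache where the full data set lives on the SSD and
// only the pages needed right now occupy memory.
#include <cstdint>
#include <functional>
#include <list>
#include <unordered_map>
#include <utility>
#include <vector>

class PageCache {
public:
    using PageId = std::uint64_t;
    using Page   = std::vector<std::uint8_t>;

    PageCache(std::size_t maxResidentPages, std::function<Page(PageId)> loadFromSsd)
        : maxResident_(maxResidentPages), loadFromSsd_(std::move(loadFromSsd)) {}

    // Returns the page, streaming it in and evicting the coldest page if needed.
    const Page& Request(PageId id)
    {
        auto it = resident_.find(id);
        if (it != resident_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second.lruPos);  // mark as recently used
            return it->second.data;
        }
        if (resident_.size() >= maxResident_) {                  // evict least recently used
            PageId victim = lru_.back();
            lru_.pop_back();
            resident_.erase(victim);
        }
        lru_.push_front(id);
        Entry entry{loadFromSsd_(id), lru_.begin()};             // stream page from storage
        return resident_.emplace(id, std::move(entry)).first->second.data;
    }

private:
    struct Entry {
        Page data;
        std::list<PageId>::iterator lruPos;
    };
    std::size_t maxResident_;
    std::function<Page(PageId)> loadFromSsd_;
    std::list<PageId> lru_;                        // front = most recently used
    std::unordered_map<PageId, Entry> resident_;
};
```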

So Epic has done no more than adapt to the hardware capabilities of the new console generation. This genuinely makes development easier as well, since a great deal becomes possible in terms of content quality. It does, however, create a new limit: highly detailed meshes and textures take up a lot of space even when compressed, so in the future the main constraint will not be the machine's memory but its storage capacity. It should also be noted that while the PlayStation 5 and the Xbox Series X can work as described above, the previous-generation consoles and, essentially, the PC cannot. This level of quality is therefore only achievable if the two new consoles are targeted directly. The approach is completely unworkable on the previous generation, partly because of slow hard drives. The situation is not hopeless on PC, but for this quality PCI Express 4.0 SSDs would have to spread on a large scale (for many, along with a platform that supports the connection at all), and even that is not enough by itself, because hardware-managed hierarchical paging is also needed on the GPU. It would also help to have a separate (preferably standard) API that lets a program address the SSD directly, so that the content stored there can be cached properly by the hardware.
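
A quick back-of-the-envelope calculation shows why storage becomes the new bottleneck. All the numbers in this C++ snippet are illustrative assumptions, not measured Unreal Engine 5 figures.

```cpp
// Sketch: rough storage footprint of one film-quality scanned asset.
#include <cstdio>

int main()
{
    // Assumed mesh: 1 million triangles, ~500k vertices.
    const double vertices       = 500'000.0;
    const double bytesPerVertex = 32.0;                 // position, normal, UV, tangent
    const double triangles      = 1'000'000.0;
    const double bytesPerIndex  = 4.0;
    const double meshBytes      = vertices * bytesPerVertex + triangles * 3 * bytesPerIndex;

    // Assumed 8K texture set (albedo, normal, roughness/metalness),
    // block-compressed at roughly 1 byte per texel per map.
    const double texels       = 8192.0 * 8192.0;
    const double textureBytes = texels * 1.0 * 3;

    const double assetMB = (meshBytes + textureBytes) / (1024.0 * 1024.0);
    std::printf("~%.0f MB per asset: a few hundred such assets already exceed 50 GB\n", assetMB);
    return 0;
}
```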
