September 14, 2016
A topic that has always fascinated me is photogrammetry: let’s take a journey together through its developments over time and its many uses.

Italy, full of monuments and historic buildings, is by definition an open-air museum, so it is no surprise that photogrammetry has long been used here for renovation and restoration purposes.

About 15 years ago, when I stumbled on one of the first 3D scanners, I thought it would be extraordinary if we could export the scanned geometry from the proprietary software to 3ds Max or Maya and then render it with the right lights and shaders. After all, a 3D scan faithfully captures every kind of geometric detail, with textures already applied to the model. At the time, the question was: “Can we export the geometry created by the scanner to 3ds Max or Maya?” and the reply from every 3D scanner manufacturer was: “It can do that, for sure”. In fact, the surface created was a point cloud, and when you tried to import it the 3D software crashed miserably, even on professional workstations. As archviz artists, the tools at our disposal simply weren’t enough to take advantage of that technology. So I gave up (like everyone else) and modelled my assets by hand.

About 8 years ago, when I was working at HayesDavidson, we attended a presentation by a company offering a new, more advanced and more accurate type of laser scanner. I still remember the whole team gathering in the studio; we turned off the lights and started a 360° scan. The result was absolutely amazing. Everything worked, but only within the proprietary scanner software. Once again the question was: “Can we export this geometry to 3ds Max?” and the answer was the same one I had heard seven years earlier. We still weren’t ready to make the most of that technology.

[ux_video url="https://vimeo.com/145248208"]

Over time, things changed. Starting in 2009, companies like Autodesk decided to invest in this technology and take it to the next level: Autodesk 123D made its appearance on the market. From a series of photos taken around an object, its shape could be recreated in 3D: it was photogrammetry for everyone. With a simple digital camera and a few shots around the object, you could rebuild it in 3D and render it with whatever software you liked. Since then, computers have become more and more powerful, and the RAM limits for handling heavy geometry have been pushed back enormously. We all know how difficult it is to make complex 3D grounds, vegetation, food and art objects look realistic, because of the huge amount of detail they require.

Today I consider photogrammetry a technology that can be introduced into a 3D production pipeline without any problem. Confirming this, there is a great buzz around it in the VFX world, in gaming and in ArchViz as well. Companies like MPC used Megascans by Quixel for the production of The Jungle Book. The result is stunning, as you can see in this video.

[ux_video url=”https://youtu.be/vkNArCG80Bg”]

Last year, at AcademyDay#6, we had the opportunity to attend a presentation on the gaming world of Unreal Engine by Epic Games. We really liked the short film “A Boy and His Kite”, in which this technology was used intensively to create the environment.

[ux_video url=”https://youtu.be/0zjPiGVSnfI”]

Their making-of is also very interesting, because there we can appreciate the method used for asset creation: https://www.unrealengine.com/blog/creating-assets-for-open-world-demo

In ArchViz, we can see how Bertrand Benoit is using photogrammetry to create his food assets (https://bertrand-benoit.com/blog/trees-for-the-forest/) and, more recently, vegetation (https://bertrand-benoit.com/blog/raw-renders/).

The artist Radoslaw Ignaciuk has brought photogrammetry to the fore with the ground and vegetation in his set of images for the Archipelago House. At this link you can find the making-of for the Archipelago House project. We’ll meet Radek at the next AcademyDay#7, where he’ll show us how he creates naturalistic assets using photogrammetry and Agisoft PhotoScan.

More at www.academyday.com.

Naturally, the creation of 3D people for renderings is also benefiting from photogrammetry, enough to convince even US President Barack Obama to sit for a photo session so that his bust could be recreated with a 3D printer (https://www.whitehouse.gov/share/watch-first-ever-3d-print-president).

I have said several times lately that we are living in a time full of novelty and technological renewal. Photogrammetry earns its place in my personal ranking of the best technologies that can help us reach the photorealism we often pursue with so much effort. At the moment, creating assets this way still demands hard work from the artist, particularly regarding the shots to take, the lighting to use, the retopology of the geometry and so on. There isn’t a one-click solution yet, but I’m sure smarter solutions will arrive soon and the whole process will be greatly simplified.
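To give an idea of what that “not one-click” process looks like in practice, here is a minimal sketch of a batch reconstruction script for the Agisoft PhotoScan Python API (the tool Radek uses). Treat it as an illustration under assumptions: the folder paths, quality settings and export names are placeholders, the exact parameters vary between PhotoScan versions, and the retopology and shading work still happens afterwards in your 3D package.

```python
# Minimal photogrammetry batch sketch for Agisoft PhotoScan (Pro, Python console).
# Assumptions: a 1.2.x-era API; paths and quality/size values are hypothetical placeholders.
import os
import PhotoScan

photo_dir = "/path/to/photos"  # the shots taken around the object
photos = [os.path.join(photo_dir, f)
          for f in os.listdir(photo_dir)
          if f.lower().endswith((".jpg", ".tif"))]

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(photos)

# 1. Align cameras: match features across the shots and solve camera positions.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
chunk.alignCameras()

# 2. Densify: build the dense point cloud (the heavy, RAM-hungry step).
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)

# 3. Mesh and texture: turn the cloud into geometry and bake a texture from the photos.
chunk.buildModel(surface=PhotoScan.Arbitrary, face_count=PhotoScan.HighFaceCount)
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=8192)

# 4. Export for 3ds Max / Maya; retopology and shading remain manual work from here on.
chunk.exportModel("/path/to/export/asset_raw.obj")
doc.save("/path/to/export/asset.psz")
```

Even with a script like this, the quality of the result depends almost entirely on the photography: even, diffuse lighting, plenty of overlap between shots and a matte, feature-rich subject.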

Have you already had any experience with this technology? What do you think?

I’ll be awaiting your feedback!

See you soon.

Gianpiero “Peo” Monopoli