Introducing a Game-Changing Technology: Simplifying 3D Model Capture

All you need is a phone!

NeRF (Neural Radiance Field) technology has been around for a few years, but with the explosion of AI it has become dramatically better and, crucially, simpler to use.

The video above is a case study demonstrating Luma Labs' 3D output in Preview Tools.

Some Notes

The capture was performed by someone using the Luma app for the first time. The flowers and the chair were captured with an iPhone 11 using the Luma AI iPhone application. The pictures on the walls are reference images so you can judge the quality. Why flowers? Photogrammetry has always struggled with glass, irregular surfaces and translucence. (The chair is a control!)

The Details

2 minutes to capture, 30 minutes to render: $1 per model!

What to take away from this?

  • The results are imperfect (there are gaps and holes in the models), but they will undoubtedly get better.

  • The process is simple, fast, cheap and pretty impressive. For the record, these are the low-poly models (2.2 MB). There are mid- and high-poly options, but they take longer to load and their material files are not optimised for ease of use!

The Implications

This feels like a significant step on the way to a mechanical 3D capture pipeline (think photography) that works off the shelf and requires little technical expertise.

3D content creation has always been one of the major barriers to the mass adoption of 3D visualisation tools and services. Modelling is time-consuming, requires skill and is expensive. With services like Luma AI this is changing.

Use Cases

For now: project planning, exhibition design, pre-visualisation and storytelling. If you're looking for something accurate to the micron, this may not be the solution you're after. If you're a cultural institution telling stories, this may even help encourage people to come in and see the real thing!

Here's a link to the website if you're interested in more detail: Luma AI

If you're interested in experimenting with Luma models in Preview Tools, there's a 'How To' video guide below.

N.B. The exact scale of Luma mesh exports isn't currently fixed. Until this changes, the scale needs to be set manually in the application. The simplest way to do this is to take a real-world measurement of the object, then use the Measuring tool in Preview (press and hold the M key) to measure the same dimension on the unscaled import and work out the correct scale.
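For illustration, here's a minimal Python sketch of the arithmetic behind that manual step. The function name and the example numbers are hypothetical; Preview doesn't expose an API like this, the Measuring tool simply gives you the in-model distance to plug in:

    # Illustrative only: computes the uniform scale to apply to an
    # unscaled Luma mesh import. Names and numbers are hypothetical.
    def scale_factor(real_world_length, measured_in_model):
        # real_world_length: a dimension measured on the physical object
        #                    (e.g. the chair's seat height, in metres)
        # measured_in_model: the same dimension read off the raw import
        #                    with Preview's Measuring tool (hold M)
        return real_world_length / measured_in_model

    # Example: the real chair seat is 0.45 m, but the Measuring tool
    # reports 1.8 units on the raw import, so scale the model by 0.25.
    print(scale_factor(0.45, 1.8))  # -> 0.25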

A guide to importing Luma AI models into Preview Tools.

Works by Henry Moore, Eduardo Paolozzi & Ron Mueck.
