macOS 12 brings new ‘Object Capture’ API for creating 3D models using iPhone


    While macOS Monterey (also known as macOS 12) brings several new features for users, the update also delivers significant improvements for developers, with brand-new APIs that open up new possibilities for third-party apps. One of these is “Object Capture,” which will let users easily create 3D models of any object using the iPhone camera.

    Apple has been pushing the adoption of AR technologies for a while now, but creating a 3D model may not be the easiest thing in the world for some people. Usually, you would need advanced cameras to take 3D captures and then render them in dedicated software. But that changes with macOS Monterey and iOS 15.

    With the Object Capture API, Apple says that the whole process of capturing and rendering 3D models now takes only a few minutes. While there is still no app available in the App Store with this new feature, Apple provides sample code showing how to build an app with the new API, and of course I had to test it myself.

    Requirements

    First, you need an iPhone or iPad with a dual-lens rear camera to capture depth data (a LiDAR scanner helps, but isn’t required). I used my iPhone 12 Pro Max running the iOS 15 beta for this demonstration. With the example app provided by Apple, I had to capture multiple images of the object at different angles so that the API could reconstruct the object in 360 degrees.
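
    For context, the capture side relies on AVFoundation delivering depth data alongside each photo. What follows is only a bare-bones sketch of that idea, not Apple’s actual sample app: the CaptureDelegate class, the choice of the dual wide back camera, and the overall structure are my own placeholder assumptions.

```swift
import AVFoundation

// Minimal photo-capture delegate that just reports whether depth data arrived.
final class CaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // photo.depthData is non-nil when the camera delivered a depth map.
        print("Captured photo, depth data attached: \(photo.depthData != nil)")
    }
}

// Requires the NSCameraUsageDescription key in Info.plist.
let session = AVCaptureSession()
session.sessionPreset = .photo

// Assumption: use the dual wide back camera, one of the depth-capable devices.
guard let camera = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back),
      let input = try? AVCaptureDeviceInput(device: camera) else {
    fatalError("No depth-capable camera available")
}
session.addInput(input)

let photoOutput = AVCapturePhotoOutput()
session.addOutput(photoOutput)
// Depth delivery must be enabled on the output before it can be requested per photo.
photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported

session.startRunning()

// Each shot then asks for depth data in its capture settings.
let delegate = CaptureDelegate()
let settings = AVCapturePhotoSettings()
settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
photoOutput.capturePhoto(with: settings, delegate: delegate)
```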

    You need about 30 photos to create a 3D model, but Apple recommends using many more than that to get a high-quality result. In a real-life scenario, you should also have optimal lighting conditions, a tripod, and a mechanism to automatically rotate the object without changing its position. I don’t have any of these, so of course the quality will be affected.

    After capturing the photos, I sent them to my Mac running macOS Monterey to render the 3D model. Once both iOS 15 and macOS 12 are available to the public, there will be apps that make this process easier, but for now I had to use another piece of sample code provided by Apple for this step.
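
    For reference, the reconstruction step on the Mac is exposed through RealityKit’s PhotogrammetrySession API, which Apple’s “Create 3D models with Object Capture” session walks through. Here is a minimal sketch of a command-line reconstruction; the paths are placeholders and the reduced detail level is chosen only for illustration.

```swift
import Foundation
import RealityKit

@main
struct Reconstruct {
    static func main() async throws {
        // Placeholder paths: the folder of iPhone photos and the output model file.
        let imagesFolder = URL(fileURLWithPath: "/Users/me/ObjectCapture/Shots", isDirectory: true)
        let outputModel = URL(fileURLWithPath: "/Users/me/ObjectCapture/model.usdz")

        // Create a session over the image folder with the default configuration.
        let session = try PhotogrammetrySession(input: imagesFolder,
                                                configuration: PhotogrammetrySession.Configuration())

        // Ask for a USDZ model file at reduced detail.
        try session.process(requests: [
            .modelFile(url: outputModel, detail: .reduced)
        ])

        // The session reports progress and results as an async stream of outputs.
        for try await output in session.outputs {
            switch output {
            case .requestProgress(_, let fractionComplete):
                print("Progress: \(Int(fractionComplete * 100))%")
            case .requestComplete(_, .modelFile(let url)):
                print("Model written to \(url.path)")
            case .requestError(_, let error):
                print("Reconstruction failed: \(error)")
            case .processingComplete:
                return
            default:
                break
            }
        }
    }
}
```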

    Apple's Object Capture API.

    The Object Capture API lets developers and users render objects at low or high quality, and they can choose whether to process the images sequentially or let the system figure out the order of the shots on its own (which takes more time). Interestingly, Apple says that this 3D processing API requires an Intel Mac with at least 16GB of RAM and an AMD GPU with 4GB of VRAM or more, or simply any Mac with the M1 chip.
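
    In the sample code, those choices map onto the session configuration and the per-request detail level. The values below are purely illustrative:

```swift
import Foundation
import RealityKit

var configuration = PhotogrammetrySession.Configuration()
// .sequential tells the session the photos were shot in order (faster);
// .unordered makes it work out the arrangement of the shots itself (slower).
configuration.sampleOrdering = .sequential
// How hard the session searches for image features; .high costs more time.
configuration.featureSensitivity = .normal

// Output quality is chosen per request, from .preview and .reduced
// up to .medium, .full, and .raw.
let request = PhotogrammetrySession.Request.modelFile(
    url: URL(fileURLWithPath: "/tmp/model.usdz"),  // placeholder path
    detail: .full
)
```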

    The result

    After 3 minutes and 43 seconds, my entry-level M1 MacBook Air rendered 40 images into a 3D object in the USDZ format — which is widely used for AR content on Apple devices. In total, the process of capturing the images with the iPhone and rendering them on the Mac took no more than 10 minutes.
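
    Since USDZ is the format AR Quick Look and RealityKit consume natively, the finished file can go straight into an AR scene. A short sketch, with a placeholder path:

```swift
import Foundation
import RealityKit

// Load the reconstructed USDZ as a model entity that can be anchored in an AR scene.
let modelURL = URL(fileURLWithPath: "/tmp/model.usdz")  // placeholder path
let objectEntity = try ModelEntity.loadModel(contentsOf: modelURL)
```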

    The result surprised me. As I said, I was far from having the ideal conditions when taking the photos, yet the Object Capture API created a 3D model that looks very realistic. You can see a comparison with the real object in the video below:

    It’s not only Apple’s new API that amazes me, but also the fact that both the iPhone and Apple Silicon Macs have hardware powerful enough to enable the creation of this kind of content.

    Apple is definitely working to create a new augmented reality world, and the company already has all the pieces it needs to do this. I’m sure the new Object Capture API will not only make it easier to create 3D models, but will also drive broader AR adoption.

    For more details about the new Object Capture API, check out the “Create 3D models with Object Capture” WWDC 2021 session.
