Apple Simplifies AR Development Further with RealityKit2 & ARKit5 Release

BLOG by CitrusBits
July 9, 2021
#XR #VR #UX #UI

By introducing features such as Object Capture, Apple continues to take the heavy lifting out of AR app development, making it more accessible to the masses.

Creating 3D models is one of the most difficult parts of the augmented reality development process. It can easily take hours and cost a king's ransom: literally thousands of dollars. This fall, that all changes.

After LiDAR, Apple has shared yet another magic trick that will soon make things easier in the augmented reality development space. At its 2021 developers' conference, WWDC, held this past June, Apple announced new updates and tools that take this burden off developers going forward.

At WWDC21, Apple announced major new versions of its ARKit and RealityKit frameworks for creating augmented reality apps on iOS. Let's review each update and see how it will significantly improve augmented reality app development.

The Much-needed RealityKit 2 Release

A Little Bit About RealityKit

Apple has the world's biggest AR platform, with 1 billion AR-enabled iPhones and iPads in use worldwide, all of which can run RealityKit, Apple's rendering, animation, audio, and physics engine designed specifically for AR.

Designing immersive AR experiences has always been challenging, demanding a fair bit of talent and expertise. To create high-quality AR experiences, developers have to master essential skills including rendering technology, physics simulation, animation, and interactivity, and those are only a few examples.

Apple made this simpler with RealityKit, a framework for augmented reality development that, paired with the power of Swift, delivers high-quality results through a remarkably simple API.

The RealityKit 2 Upgrade: Object Capture Bags All the Attention

It's not really magic once you keep in mind the tech and genius involved. The RealityKit 2 release makes it simpler for developers to build 3D models from 2D images in a shorter timeframe, using only an iPhone (or an iPad, DSLR, or even a drone) to capture 2D images or videos of an object from every angle, including the bottom.

The number of pictures RealityKit needs to create an accurate 3D representation varies with the complexity and size of the object, but adjacent shots must overlap substantially. Apple recommends that users aim for at least 70% overlap between sequential shots and never less than 50%. What's more, the new Object Capture can take advantage of depth information, when available, to enhance the output model.

Once you have a sufficient number of pictures of your object, creating a 3D model from them is largely a matter of running some boilerplate code and customizing it to your needs. The critical part of the whole process is capturing high-quality images; once that is taken care of, the images are processed on a Mac running macOS Monterey through the Object Capture API. And voila! A few lines of code later, your three-dimensional model is ready.

To simplify things, Apple showed two sample apps: one for taking pictures on iOS devices equipped with a dual rear camera that can capture depth and gravity data, and a command-line tool for macOS that streamlines creating the 3D model from the images.

The Technique Is Called 'Photogrammetry'

Taking the manual process out of the picture, Object Capture uses a technique known as photogrammetry, which requires taking a sequence of photographs from several angles while avoiding objects that are overly thin in one dimension or very reflective.

To begin, developers import RealityKit and create a new photogrammetry session that links to the folder where the photographs were captured. The process function is then used to build the 3D model at the appropriate level of detail, as sketched below.
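For the curious, here is a minimal sketch of that flow using RealityKit 2's PhotogrammetrySession API on macOS Monterey. The input and output paths are placeholders, and error handling is kept to a bare minimum:

```swift
import RealityKit

// Minimal Object Capture sketch for macOS Monterey (RealityKit 2).
// Paths below are placeholders; point them at your own capture folder.
let inputFolder = URL(fileURLWithPath: "/path/to/captured-images", isDirectory: true)
let outputFile = URL(fileURLWithPath: "/path/to/model.usdz")

// Create a photogrammetry session linked to the folder of photographs.
let session = try PhotogrammetrySession(input: inputFolder)

// Listen for progress and completion on the session's async output stream.
Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("Progress: \(Int(fraction * 100))%")
        case .requestComplete(_, let result):
            if case .modelFile(let url) = result {
                print("Model written to \(url)")
            }
        case .requestError(_, let error):
            print("Reconstruction failed: \(error)")
        default:
            break
        }
    }
}

// Request a USDZ model at a chosen level of detail
// (.preview, .reduced, .medium, .full, or .raw).
try session.process(requests: [.modelFile(url: outputFile, detail: .medium)])
```

Roughly speaking, the detail level trades file size for fidelity: the lower tiers suit web and mobile AR, while the higher ones target professional content workflows.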

Apple mentioned that developers can also use Object Capture to create USDZ files tailored for AR Quick Look, the framework that lets apps and webpages display virtual 3D objects on iPhone and iPad. Reality Composer in Xcode can also be used to insert 3D models into AR scenes.
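For the in-app side of AR Quick Look, here is a minimal sketch using the QuickLook framework's QLPreviewController; the bundled file name is a placeholder:

```swift
import UIKit
import QuickLook

// A minimal sketch: presenting a USDZ model in AR Quick Look from an iOS app.
// "chair.usdz" is a placeholder for a model file bundled with the app.
class ModelViewController: UIViewController, QLPreviewControllerDataSource {

    func showModel() {
        let preview = QLPreviewController()
        preview.dataSource = self
        present(preview, animated: true)
    }

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }

    func previewController(_ controller: QLPreviewController,
                           previewItemAt index: Int) -> QLPreviewItem {
        // NSURL conforms to QLPreviewItem, so the file URL can be returned directly.
        let url = Bundle.main.url(forResource: "chair", withExtension: "usdz")!
        return url as NSURL
    }
}
```

On the web, Safari triggers AR Quick Look from a plain link to the USDZ file: an anchor tag with rel="ar" wrapping an image.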

Wayfair & Etsy are Using Object Capture

According to Apple, Wayfair, Etsy, and other developers are already using Object Capture to build 3D models of real-world objects, hinting that online shopping is likely to get a huge AR update.

Wayfair, for example, is using Object Capture to create tools that let its manufacturers generate virtual representations of their products. As a result, Wayfair customers will be able to view more products in AR than they could before.

There’s More to RealityKit 2 than Object Capture

The wonders of Object Capture aside, RealityKit 2 can now build procedural meshes, a huge step forward that unlocks possibilities well beyond the boxes, spheres, text, and planes RealityKit 1 could generate. Another intriguing new capability is support for custom Entity Component Systems, which help organize AR assets and simplify the development of complex AR apps.
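To illustrate, here is a minimal sketch of procedural mesh generation in RealityKit 2, building a single triangle from raw vertex data (the values are illustrative only):

```swift
import RealityKit

// A minimal sketch of RealityKit 2's procedural mesh API:
// a single triangle built from raw vertex positions.
var descriptor = MeshDescriptor(name: "triangle")
descriptor.positions = MeshBuffers.Positions([
    SIMD3<Float>(0, 0, 0),
    SIMD3<Float>(0.1, 0, 0),
    SIMD3<Float>(0, 0.1, 0)
])
descriptor.primitives = .triangles([0, 1, 2])

// Turn the descriptor into a renderable mesh and wrap it in an entity.
let mesh = try MeshResource.generate(from: [descriptor])
let entity = ModelEntity(mesh: mesh, materials: [SimpleMaterial()])
```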

Apple has also added support for custom shaders, giving AR developers more control over the rendering process. As a result, it is easier to fine-tune the aesthetic appeal of AR objects and scenes.
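Concretely, custom shaders plug in through RealityKit 2's CustomMaterial. The sketch below assumes a hypothetical Metal function named mySurfaceShader defined in a .metal file compiled into the app:

```swift
import RealityKit
import Metal

// A minimal sketch of RealityKit 2's custom shader support. "mySurfaceShader"
// is a hypothetical Metal function that would live in a .metal file compiled
// into the app's default shader library.
guard let device = MTLCreateSystemDefaultDevice(),
      let library = device.makeDefaultLibrary() else {
    fatalError("Metal is not available on this device")
}

let surfaceShader = CustomMaterial.SurfaceShader(named: "mySurfaceShader",
                                                 in: library)
let material = try CustomMaterial(surfaceShader: surfaceShader,
                                  lightingModel: .lit)

// Apply the material to any model entity to take over its surface shading.
let sphere = ModelEntity(mesh: .generateSphere(radius: 0.05),
                         materials: [material])
```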

A Bird's-Eye View of the ARKit 5 Update

As for ARKit 5, Apple has added Location Anchors for London and cities throughout the US, allowing AR devs to build immersive AR experiences for specific places such as the London Eye, Times Square, and even your local neighborhood. Motion tracking has been improved in ARKit 5, and face tracking is now supported in the Ultra Wide front camera of the iPad Pro (5th generation). You can also attach virtual content from your App Clip or ARKit app to a printed or digital App Clip Code using the new App Clip Code anchor.

Face tracking is now supported on any device with the A12 Bionic chip or later, and it can track up to three faces at once. Location Anchors, meanwhile, let an AR experience be pinned to a physical location, such as a city or a well-known monument. This makes it possible to design virtual signposts that appear as a user approaches a street, a monument, and so on.
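A brief sketch of both features follows, assuming an existing ARView and using placeholder coordinates near the London Eye:

```swift
import ARKit
import CoreLocation
import RealityKit

let arView = ARView(frame: .zero) // placeholder; normally part of your UI

// Location Anchors: pin content to real-world coordinates
// (the values below approximate the London Eye and are placeholders).
ARGeoTrackingConfiguration.checkAvailability { available, error in
    guard available else { return }
    arView.session.run(ARGeoTrackingConfiguration())

    let coordinate = CLLocationCoordinate2D(latitude: 51.5033, longitude: -0.1196)
    arView.session.add(anchor: ARGeoAnchor(coordinate: coordinate))
}

// Face tracking: up to three simultaneous faces on supported hardware.
if ARFaceTrackingConfiguration.isSupported {
    let faceConfiguration = ARFaceTrackingConfiguration()
    faceConfiguration.maximumNumberOfTrackedFaces = 3
    // arView.session.run(faceConfiguration) // run instead of the geo session
}
```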

The Future of AR Development with Apple…

…seems to be an interesting and innovative one. Let's not forget that with more than 14,000 ARKit apps created by more than 9,000 different developers on the App Store, and more than 1 billion AR-enabled iPhones and iPads in use globally, Apple is the world's largest AR platform. RealityKit and ARKit were already a boon. But with the introduction of LiDAR last year and Object Capture this year, Apple has gone a step further, strengthening the very foundation of augmented reality mobile app development, simplifying the process, and making it more dev-friendly than ever. That also means countless possibilities for current iOS devices as well as future devices such as AR or VR glasses and headsets.

With updates like Object Capture for creating 3D models, Apple aims to push augmented reality development companies like CitrusBits to expand the limits of AR content even further.

About the Author

CitrusBits

Content Writer

