AugmentedReality ARKit Xamarin

Augmented Reality Virtual Table Items Proof of Concept and Code Explanation

So, after a bit of a struggle, I've finally got gesture recognisers working in ARKit.

That is, the ability to use pan, rotate and pinch gestures on my iOS device to interact with items in an Augmented Reality scene.
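To give a flavour of the approach, here's a minimal sketch (not the exact code from my app) of wiring pan, pinch and rotate recognisers up to an ARSCNView in Xamarin.iOS. The controller and field names are placeholders I've made up for the example, and I'm assuming the hit-test's WorldTransform is an NMatrix4 with the translation in its fourth column.

```csharp
using System.Linq;
using ARKit;
using SceneKit;
using UIKit;

public partial class VirtualTableViewController : UIViewController
{
    ARSCNView sceneView;   // assumed: created and configured elsewhere in the controller
    SCNNode selectedNode;  // the image node currently being manipulated

    void AttachGestureRecognisers()
    {
        sceneView.AddGestureRecognizer(new UIPanGestureRecognizer(HandlePan));
        sceneView.AddGestureRecognizer(new UIPinchGestureRecognizer(HandlePinch));
        sceneView.AddGestureRecognizer(new UIRotationGestureRecognizer(HandleRotate));
    }

    void HandlePan(UIPanGestureRecognizer gesture)
    {
        var location = gesture.LocationInView(sceneView);

        // When the pan starts, pick whichever node is under the finger.
        if (gesture.State == UIGestureRecognizerState.Began)
            selectedNode = sceneView.HitTest(location, new SCNHitTestOptions())
                                    .FirstOrDefault()?.Node;

        if (selectedNode == null)
            return;

        // Project the finger position onto the detected plane and move the node there.
        // Assumption: WorldTransform is an NMatrix4 with the translation in its fourth column.
        var planeHit = sceneView.HitTest(location, ARHitTestResultType.ExistingPlaneUsingExtent)
                                .FirstOrDefault();
        if (planeHit != null)
        {
            var t = planeHit.WorldTransform;
            selectedNode.Position = new SCNVector3(t.M14, t.M24, t.M34);
        }
    }

    void HandlePinch(UIPinchGestureRecognizer gesture)
    {
        if (selectedNode == null) return;

        // Scale by the incremental factor, then reset so the next callback is relative again.
        var s = (float)gesture.Scale;
        selectedNode.Scale = new SCNVector3(
            selectedNode.Scale.X * s,
            selectedNode.Scale.Y * s,
            selectedNode.Scale.Z * s);
        gesture.Scale = 1;
    }

    void HandleRotate(UIRotationGestureRecognizer gesture)
    {
        if (selectedNode == null) return;

        // Spin the node around its Y axis by the incremental rotation, then reset.
        var angles = selectedNode.EulerAngles;
        selectedNode.EulerAngles = new SCNVector3(angles.X, angles.Y - (float)gesture.Rotation, angles.Z);
        gesture.Rotation = 0;
    }
}
```

Resetting Scale back to 1 and Rotation back to 0 after each change keeps the gestures incremental, which feels far more natural than applying the absolute gesture values to the node.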

I created a little proof-of-concept app for my iPhone (below), which shows some images returned from the Unsplash API and lets me move them around a detected/selected surface, as well as scale and rotate them.
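For the curious, here's a rough sketch of how one of those virtual "table items" could be built: an SCNPlane textured with a downloaded photo and rotated to lie flat on the surface. The Unsplash call itself isn't shown, and the factory class, method name and sizes are just illustrative.

```csharp
using System;
using SceneKit;
using UIKit;

static class ImageNodeFactory
{
    // Builds a flat "photo card" node from an already-downloaded UIImage.
    public static SCNNode CreateImageNode(UIImage image)
    {
        // A 20cm x 15cm plane textured with the photo.
        var plane = SCNPlane.Create(0.20f, 0.15f);
        plane.FirstMaterial.Diffuse.Contents = image;
        plane.FirstMaterial.DoubleSided = true;

        var node = SCNNode.FromGeometry(plane);

        // SCNPlane is vertical by default; rotate it to lie flat on the surface.
        node.EulerAngles = new SCNVector3((float)(-Math.PI / 2), 0, 0);
        return node;
    }
}
```

The resulting node gets positioned at an AR hit-test result on the detected plane, which is what lets the pan gesture above slide it around the surface.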

You need only use your imagination to see how this could be applied across a number of industries and markets, especially as AR glasses start to hit the shelves and become mainstream.

And here is a video explaining the code and the approach.

What's next?

There are two awesome proofs of concept that I want to build next, but I really need to spend some time finishing the Augmented Reality talk I'm giving at a few user groups, as well as continue writing my related book for my publisher.

Proof of Concept: Vision + Core ML

I intend to hook into Apple's Vision and Core ML frameworks to automatically detect what is in the scene and show a description of it in AR.
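As a sketch of the idea, assuming an image-classification Core ML model compiled into the app bundle (the model name is a placeholder, and writing the top label into the AR scene is left out; here it just goes to the console):

```csharp
using System;
using System.Linq;
using CoreML;
using CoreVideo;
using Foundation;
using Vision;

public class SceneClassifier
{
    readonly VNCoreMLRequest classificationRequest;

    public SceneClassifier()
    {
        // Load a compiled Core ML model bundled with the app (placeholder name).
        var modelUrl = NSBundle.MainBundle.GetUrlForResource("SceneClassifier", "mlmodelc");
        var mlModel = MLModel.Create(modelUrl, out NSError loadError);
        var visionModel = VNCoreMLModel.FromMLModel(mlModel, out NSError wrapError);

        classificationRequest = new VNCoreMLRequest(visionModel, (request, error) =>
        {
            // Take the top classification; the real app would render this in the scene.
            var best = request.GetResults<VNClassificationObservation>()?.FirstOrDefault();
            if (best != null && best.Confidence > 0.5f)
                Console.WriteLine($"I think I can see: {best.Identifier}");
        });
    }

    // Called with the camera image from the current ARFrame
    // (sceneView.Session.CurrentFrame.CapturedImage).
    public void Classify(CVPixelBuffer capturedImage)
    {
        var handler = new VNImageRequestHandler(capturedImage, new NSDictionary());
        handler.Perform(new VNRequest[] { classificationRequest }, out NSError performError);
    }
}
```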

Proof of Concept: Real-time Communication & Voice Recognition

In an AR app, I aim to set up a real-time SignalR connection to a SignalR hub running on a web server. Then, in a separate .NET console application, I'll have speech recognition (System.Speech) running; when a phrase is recognised, it calls the SignalR hub, which then pushes an update to the AR app.
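Here's a minimal sketch of the planned console side, assuming the ASP.NET Core SignalR client and a Windows machine (System.Speech is Windows-only); the hub URL, hub method name and the recognised phrases are all placeholders.

```csharp
using System;
using System.Speech.Recognition;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

class Program
{
    static async Task Main()
    {
        // Connect to the SignalR hub hosted on the web server (placeholder URL).
        var connection = new HubConnectionBuilder()
            .WithUrl("https://myserver.example/arhub")
            .Build();
        await connection.StartAsync();

        // Set up System.Speech with a small fixed grammar of commands.
        var recogniser = new SpeechRecognitionEngine();
        var commands = new Choices("next image", "previous image", "bigger", "smaller");
        recogniser.LoadGrammar(new Grammar(new GrammarBuilder(commands)));
        recogniser.SetInputToDefaultAudioDevice();

        // Whenever a command is recognised, forward it to the hub,
        // which in turn pushes it out to the AR app.
        recogniser.SpeechRecognized += async (sender, e) =>
            await connection.InvokeAsync("CommandRecognised", e.Result.Text);

        recogniser.RecognizeAsync(RecognizeMode.Multiple);

        Console.WriteLine("Listening... press Enter to quit.");
        Console.ReadLine();
    }
}
```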

-- Lee