Diploma in Apple Development 🍎👩🏻‍💻: Coding Two: Lecture 6: Tracking images.
Back to slide index.
👋🏻 Hi!
First, don't forget to confirm your attendance using the Seats Mobile app!
Second, I'm going to start every lecture with a meditation from the fantastic
Sitting Still Like a Frog book, all read by Myla Kabat-Zinn. If you don't want to participate that's completely fine.
Before we start the lecture proper, I'd like to demonstrate the different debugging views that Xcode offers when you run a visionOS app in either the simulator or on a headset. For reference, CreateWithSwift.com has a good article about the different modes - let's go through it and then try them on the simulator and on a live headset.
This session is going to be all about how we can track known images in visionOS. Luckily, Apple has provided an article about how to do this. Unfortunately, they haven't provided a complete project to work from. What should we do? Shall we try making a new project with it? Or...
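Before we go hunting for projects, here's a minimal sketch of the core API from Apple's article: an ARKitSession running an ImageTrackingProvider and looping over its anchor updates. The asset catalog group name "AR Resources" is an assumption for illustration - rename it to match wherever your reference images actually live.

```swift
import ARKit

// A minimal sketch of image tracking on visionOS.
// Assumption: your reference images live in an asset catalog group
// called "AR Resources" - change this to your own group name.
let session = ARKitSession()
let imageTracking = ImageTrackingProvider(
    referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "AR Resources")
)

func runImageTracking() async {
    guard ImageTrackingProvider.isSupported else {
        print("Image tracking isn't supported on this device.")
        return
    }
    do {
        try await session.run([imageTracking])
        // anchorUpdates is an async sequence of added/updated/removed events.
        for await update in imageTracking.anchorUpdates {
            switch update.event {
            case .added, .updated:
                // originFromAnchorTransform places the detected image in world space.
                print("Seeing \(update.anchor.referenceImage.name ?? "unnamed image"), tracked: \(update.anchor.isTracked)")
            case .removed:
                print("Image anchor removed.")
            }
        }
    } catch {
        print("Failed to start image tracking: \(error)")
    }
}
```

One thing to note: ImageTrackingProvider isn't supported on the visionOS simulator, so this is a headset-only feature - one more reason to split into Vision Pro groups in a moment.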
GitHub code search to the rescue! Notice how I've specified "ImageTrackingProvider" as the search term and asked for only results written in Swift. Which projects look best? Let's split into three groups, one Vision Pro per group, and each choose a different project to get running. Once they're all running, let's demo them to each other before taking a dive into their code. Stop press!
ImageTrackingSample located! Let's try it.
As a bonus, let's try prompting
Claude to build this app for us.
Flynn and Rosa have already been working on Image Anchors for their final projects - how are those going? Could we have a look at your code after a demo? As a final challenge, let's look at how to detect collisions between two Image Anchors, combining Flynn and Rosa's code with some example code that I've prepared ahead of time. Let's take a look at my example code.
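A note before we look at the code: an ImageAnchor is plain data from ARKit, not a RealityKit entity, so it doesn't take part in RealityKit's collision system by itself. One simple approach is a distance check between the two anchors' world transforms. Here's a minimal sketch along those lines - the image names "flynnCard" and "rosaCard" and the 5 cm threshold are hypothetical placeholders, not anything from Flynn and Rosa's actual projects.

```swift
import ARKit
import simd

// A sketch of "collision" detection between two tracked images,
// done as a simple distance check between their world positions.
var latestAnchors: [String: ImageAnchor] = [:]
let collisionDistance: Float = 0.05 // metres - tune to your printed image sizes

func position(of anchor: ImageAnchor) -> SIMD3<Float> {
    // The translation column of the anchor's world transform.
    let t = anchor.originFromAnchorTransform.columns.3
    return SIMD3<Float>(t.x, t.y, t.z)
}

func handle(_ update: AnchorUpdate<ImageAnchor>) {
    guard let name = update.anchor.referenceImage.name else { return }
    switch update.event {
    case .added, .updated:
        latestAnchors[name] = update.anchor
    case .removed:
        latestAnchors[name] = nil
    }
    // Hypothetical image names - swap in your own reference image names.
    if let a = latestAnchors["flynnCard"], let b = latestAnchors["rosaCard"],
       a.isTracked, b.isTracked,
       simd_distance(position(of: a), position(of: b)) < collisionDistance {
        print("Collision! The two images are within \(collisionDistance) m of each other.")
    }
}
```

In a full app you'd more likely attach entities with CollisionComponents at each anchor's transform and let RealityKit raise collision events for you, but a distance check like this is much easier to reason about for a first pass.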