WWDC Day 4

Finally getting into the groove of going through all the content I want to watch this week. The main issue is how much great stuff was announced, and now I have to figure out what it all means to me as a developer.

I attended three more Group Labs, but asked no questions; I mostly listened in on all the great questions others asked. My favorite was the Accessibility techniques group lab. I’ve been so naive when it comes to accessibility in my apps. I’ve always believed that Apple’s attention to this meant my apps just worked; however, after learning how I could run an accessibility audit on my apps, I was shocked to see how badly I fared.

My oldest app is Wasted Time, which I have made available on iOS, iPadOS, watchOS, macOS, tvOS, and visionOS. Every year I have tweaked it after WWDC to add a new feature or redesign how it handles the UI and data. This year, I ran Apple’s Accessibility Audit against it and was very disappointed with how badly I fared. So I think this summer will be my “Summer of Accessibility”. My goal is to get all my apps as accessible as possible.
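
For what it’s worth, the audit can also be scripted in a UI test so regressions get caught automatically. Here’s a minimal sketch using XCTest’s performAccessibilityAudit(); the test class and method names are just placeholders.

```swift
import XCTest

// Minimal UI-test sketch; the class and test names are placeholders.
final class AccessibilityAuditTests: XCTestCase {
    func testMainScreenPassesAccessibilityAudit() throws {
        let app = XCUIApplication()
        app.launch()

        // Runs Xcode's automated accessibility checks (element labels, contrast,
        // Dynamic Type, hit-region size, and more) against the current screen.
        try app.performAccessibilityAudit()
    }
}
```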

Anyway, a quick rundown of the sessions I watched yesterday:

  • SwiftUI Group Lab – This session had one of my favorite Apple presenters – Curt. Over the years, his explanations of SwiftUI content have been simple yet detailed. The biggest part of this lab was getting me to look at the new Instrument for SwiftUI. Can’t wait to check out how to improve my scrolling card view!
  • Wake up to AlarmKit – I was expecting more details on Nightstand mode, but this session was all about Apple exposing their alarms API for your own app to use. It’s a nice new API, but I have no use for it. Expect more apps to interrupt what you are doing 🙂
  • Automate your Deployment Process with the App Store Connect API – This was a good session going through the APIs that you can use to improve your build and deploy process. While I am really enjoying Xcode Cloud, most large companies already have their own CI/CD pipelines, and Apple is finally opening up many of their APIs to make it easier to upload your builds, kick off TestFlight, and retrieve feedback. Highly recommended for any enterprise CI/CD engineer (a rough sketch of calling the API follows this list).
  • Better together: SwiftUI and RealityKit – The expansion of the Spatial Web and improvements in UI consistency across platforms make this session a must watch. Apple has made it easier to bridge between 3D content in SwiftUI and RealityKit, with Observable Entities, improved unified coordinate systems, and Object Manipulation. I expect to see more blurring between what you can do on visionOS and on your Mac. This is a good thing. (A minimal RealityView sketch follows this list.)
  • Bring advanced speech-to-text to your app with SpeechAnalyzer – With the opening up of Apple’s Foundation Model on various platforms, this is a big quality-of-life improvement for people wanting to do speech-to-text. The sample app shows you how to do real-time voice-to-text handling. The new model runs entirely on device, addressing the data privacy issue, and supports many major languages and dialects (Cantonese, Chinese (3 regions), English (13 regions), French (4 regions), German (3 regions), Italian (2 regions), Japanese, Korean, Portuguese (2 regions), and Spanish (6 regions)).
  • Build a SwiftUI app with the new design – As with every year, Apple has a major theme in their sessions helping developers address the biggest API announcement; for “liquid glass”, this is that session. The explanation of how the system handles placement of various elements and insets is important for understanding the UI impact on your app. I have already done some testing with Wasted Time and saw multiple things that will require fixing. Another key aspect of this session was how to adapt your custom controls to work with “liquid glass”. I have a custom button type for my apps that I use to minimize the amount of code I have to write in each view; I will be taking a look at how it needs to change (a sketch of one approach follows this list).
  • Create icons with Icon Composer – On Wednesday I raised a feedback (FB17954788) to get the Icon Composer team to enable visionOS and tvOS in this new tool. Right now it only supports macOS, iOS, and watchOS. Given that visionOS and tvOS already have 3D icons, I would expect that this will be a low priority for Apple to address.
  • Customize your app for Assistive Access – I’ve been interested in this mode for some time. I help elderly family members use their various Apple devices and as such, I am constantly dealing with “gremlins” in the system. For the most part these “gremlins” are mis-clicks on the Mac, where emails mysteriously disappear. I’ve not had the ability to dig deep into this mode and see what it might do for the Mac, but I am interested. I am also wondering how my remote software will behave if the Mac is running in Assistive mode.
  • Deep dive into the Foundation Models framework – I didn’t take many notes on this session, as it required a lot of concentration. I do plan on watching it a few more times; however, the explanation of the @Generable and @Guide macros was fantastic (a small @Generable sketch follows this list). And I felt that Apple explained how they handle private on-device data in a manner that makes me even more comfortable with what it will do. Having said that, I am worried that app makers will take advantage of that expectation and still exfiltrate data to their own cloud-based services. Think of how Meta and others are treating your data, and you can understand the challenges that Apple must face to enforce this data isolation and privacy.
  • Design foundation from idea to interface – A really good session on how a designer thinks. I’ve always struggled between building apps that are functional versus those that make the function obvious. Much of what the presenter goes through may be obvious to good designers; for the rest of us, this is a must watch.
  • Design hover interactions for visionOS – I think I watched a similar session last year (or in 2023), but without the context that I have now. The presenter does a really good job explaining how and when you should customize a hover effect. There are some custom controls I have developed in Wasted Time that certainly need a second look.
  • Design Interactive Snippets – As we assumed last year with the hard push on App Intents, the magic for Apple Intelligence was going to be what your app contributes. When it comes to actions, Snippets are where you can present them in an easy-to-read and obvious manner. This session addresses this compact, displayable view for App Intents (a snippet sketch follows this list).
  • Apple Intelligence Lab – The most exciting part of this lab was the clarity in how to test your AI features. I had always wondered what the best way was to handle this. They also explained the way the system handles limits for the on device model. It seems this is much more liberal than I expected. Nice.
  • Design widgets for visionOS – I am so excited by widgets for visionOS. I spent a lot of time this last year fixing the widgets for Wasted Time and trying to think of new widgets for my other apps. While the session makes adding them look easy, there is still a lot of clarity missing in my knowledge of widgets for visionOS. I’ve tried to enable them, but I had to back out the code. Perhaps I will make more progress over the summer.
  • Develop for Shortcuts and Spotlight with App Intents – This is a surprisingly cool session. My expectation was that this would be a minor update to last year’s App Intents sessions, BUT this session goes deep into using Apple Intelligence and other LLMs within your Shortcuts. I highly recommend that people dig through this session in detail (a basic App Intent sketch follows this list).
  • Discover machine learning & AI frameworks on Apple Platforms – I didn’t take notes on this one, as once again this is a session I plan to view multiple times.
  • Accessibility techniques group lab – As I mentioned at the top of this post, this lab got me very excited to address the accessibility issues in my apps. I wish they would make the labs available after the fact.
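
A few rough code sketches to go with the sessions above. First, for the App Store Connect API session, here is a hedged sketch of listing recent builds; it assumes you already have a signed JWT for an App Store Connect API key in ascToken, and generating that token is not shown.

```swift
import Foundation

// Sketch: list recent builds via the App Store Connect REST API.
// `ascToken` is assumed to be a pre-generated, signed JWT for an API key;
// creating it (ES256, key ID, issuer ID) is outside the scope of this sketch.
func fetchRecentBuilds(ascToken: String) async throws -> Data {
    var request = URLRequest(
        url: URL(string: "https://api.appstoreconnect.apple.com/v1/builds?limit=5")!
    )
    request.setValue("Bearer \(ascToken)", forHTTPHeaderField: "Authorization")

    let (data, response) = try await URLSession.shared.data(for: request)
    guard let http = response as? HTTPURLResponse, http.statusCode == 200 else {
        throw URLError(.badServerResponse)
    }
    return data // JSON:API payload describing the builds
}
```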
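
Next, for the SwiftUI and RealityKit session: the basic bridge between the two is a RealityView inside a SwiftUI view. This sketch sticks to existing API and only hints at where the new Observable entity and coordinate improvements land; CubeView is a made-up name.

```swift
import SwiftUI
import RealityKit

// Minimal sketch: SwiftUI hosting RealityKit content through RealityView.
struct CubeView: View {
    var body: some View {
        RealityView { content in
            // Build a simple entity and add it to the RealityKit scene.
            let cube = ModelEntity(
                mesh: .generateBox(size: 0.1),
                materials: [SimpleMaterial(color: .blue, isMetallic: false)]
            )
            content.add(cube)
        } update: { content in
            // SwiftUI state changes arrive here, so view data can drive the 3D scene.
        }
    }
}
```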
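
For the new design session, this is the kind of change I expect my custom button type to need. The glassEffect() modifier is my read of the new API from the session, so treat it as an assumption; the fallback branch keeps the old styling on earlier OS versions, and WTButton is a stand-in name for my real control.

```swift
import SwiftUI

// Hypothetical stand-in for the custom button type used across my views.
struct WTButton: View {
    let title: String
    let action: () -> Void

    var body: some View {
        if #available(iOS 26.0, macOS 26.0, *) {
            Button(title, action: action)
                .padding(.horizontal)
                .glassEffect() // assumption: the new Liquid Glass modifier
        } else {
            Button(title, action: action)
                .padding(.horizontal)
                .background(.thinMaterial, in: Capsule())
        }
    }
}
```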
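
For the Foundation Models session, this is roughly what the @Generable and @Guide macros look like in use. The MeetingSummary type is hypothetical, and the respond(to:generating:) call is my assumption of the session’s API, so double-check the documentation before copying it.

```swift
import FoundationModels

// Hypothetical type: the macros describe the structured output
// the on-device model should generate.
@Generable
struct MeetingSummary {
    @Guide(description: "A one-sentence summary of the meeting")
    var headline: String

    @Guide(description: "Up to three action items, phrased as imperatives")
    var actionItems: [String]
}

func summarize(transcript: String) async throws -> MeetingSummary {
    let session = LanguageModelSession()
    // Ask the on-device model for a MeetingSummary value instead of free-form text
    // (my assumption of the respond(to:generating:) shape).
    let response = try await session.respond(
        to: "Summarize this meeting transcript: \(transcript)",
        generating: MeetingSummary.self
    )
    return response.content
}
```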
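
For the interactive snippets session, an intent can hand back a small SwiftUI view alongside its dialog. This sketch leans on the existing ShowsSnippetView result type; the intent and view are made up for illustration.

```swift
import AppIntents
import SwiftUI

// Hypothetical snippet view for illustration.
struct MeetingCostSnippetView: View {
    let cost: Double

    var body: some View {
        VStack {
            Text("Meeting cost so far")
            Text(cost, format: .currency(code: "USD"))
                .font(.title)
        }
        .padding()
    }
}

// Hypothetical intent that shows the snippet when it runs.
struct ShowMeetingCostIntent: AppIntent {
    static var title: LocalizedStringResource = "Show Meeting Cost"

    func perform() async throws -> some IntentResult & ProvidesDialog & ShowsSnippetView {
        let cost = 123.45 // would come from the app's model in a real implementation
        return .result(
            dialog: "Here is the current meeting cost.",
            view: MeetingCostSnippetView(cost: cost)
        )
    }
}
```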
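
And for the Shortcuts and Spotlight session, the basic shape of an App Intent that Shortcuts (and now Apple Intelligence) can call looks like this; the intent and parameter are hypothetical, loosely modeled on Wasted Time.

```swift
import AppIntents

// Hypothetical intent: exposes one app action to Shortcuts and Spotlight.
struct StartMeetingIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Tracking a Meeting"

    @Parameter(title: "Number of Attendees")
    var attendees: Int

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Hand off to the app's model layer to start tracking the meeting here.
        return .result(dialog: "Started tracking the meeting.")
    }
}
```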