Note this link – https://developer.apple.com/wwdc23/10118. There is a Safari tech preview I may have to check out for the day job. Topics: Typography inspection, User preference override, Element badges, Breakpoint enhancements, and many more features – go to webkit.org for more information.
Author: Michael Rowe
The SwiftUI cookbook for focus
Using the focus APIs in SwiftUI. Topics: What is focus, Ingredients, and Recipes.
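A minimal sketch of those ingredients, assuming a simple login form (the field names are illustrative, not from the session): `@FocusState` holds which field is focused, and `.focused(_:equals:)` binds each field to it.

```swift
import SwiftUI

struct LoginForm: View {
    enum Field: Hashable {
        case username, password
    }

    @State private var username = ""
    @State private var password = ""
    // The single source of truth for keyboard focus in this form.
    @FocusState private var focusedField: Field?

    var body: some View {
        Form {
            TextField("Username", text: $username)
                .focused($focusedField, equals: .username)
            SecureField("Password", text: $password)
                .focused($focusedField, equals: .password)
            Button("Sign In") {
                // Programmatically move focus to whichever field is still empty.
                if username.isEmpty {
                    focusedField = .username
                } else if password.isEmpty {
                    focusedField = .password
                }
            }
        }
    }
}
```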
Meet Swift OpenAPI Generator
A Swift package plugin to support server API access. For dynamic network requests you need to understand the base URL, the endpoint, and more. Most services have API documentation, which can be outdated or inaccurate. You can use inspection, but that provides an incomplete understanding. Using a formal, structured specification can help reduce these challenges and ambiguities. This […]
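As a rough sketch of what the generator buys you, here is how a call against generated code might look, assuming the OpenAPI document declares an operation with operationId `getGreeting` (a made-up example; the exact generated method shape depends on the spec and the generator version):

```swift
import Foundation
import OpenAPIRuntime
import OpenAPIURLSession

// "Client" and its getGreeting method are emitted by the Swift OpenAPI
// Generator build plugin from the spec; the operation here is hypothetical.
func fetchGreeting() async throws {
    let client = Client(
        serverURL: URL(string: "https://example.com/api")!,
        transport: URLSessionTransport()
    )

    let response = try await client.getGreeting(query: .init(name: "WWDC"))
    switch response {
    case .ok(let ok):
        // The generated Output type exposes the documented JSON body.
        print(try ok.body.json)
    case .undocumented(let statusCode, _):
        print("Undocumented response: \(statusCode)")
    }
}
```

The point of the switch is that every documented response shape is a typed case, so "the docs were outdated" shows up at compile time instead of at runtime.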
Meet Assistive Access
This is about addressing cognitive disabilities – distilling apps to their core. It can be set up by a parent or guardian. Topics: Overview, Principles, Your app, Optimized for Assistive Access.
Keep up with the keyboard
The keyboard has changed a bit over the last few years: it has new languages, it floats, and it handles multiple screens. Topics: Out-of-process keyboard, Design for the keyboard, New text entry APIs.
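One concrete piece of designing for the keyboard is UIKit's keyboard layout guide; a sketch, assuming a simple compose screen (the out-of-process change itself needs no code on the app's side):

```swift
import UIKit

final class ComposeViewController: UIViewController {
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(textView)

        NSLayoutConstraint.activate([
            textView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor),
            textView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            textView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            // keyboardLayoutGuide tracks the keyboard (including the floating
            // keyboard), so the text view stays pinned above it without
            // hand-rolled keyboard notifications.
            textView.bottomAnchor.constraint(equalTo: view.keyboardLayoutGuide.topAnchor)
        ])
    }
}
```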
Embed the Photos Picker in your App
With the new Photos picker you don’t need to request any permissions, and it only takes a few lines of code to use. Topics: Embedded picker, Options menu, HDR and Cinematic.
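A sketch of what those few lines look like in SwiftUI, assuming an avatar-picking view; the `.photosPickerStyle(.inline)` modifier is my reading of the embedded picker, so treat that line as an assumption.

```swift
import SwiftUI
import PhotosUI
import UIKit

struct AvatarPicker: View {
    @State private var selection: PhotosPickerItem?
    @State private var avatar: Image?

    var body: some View {
        VStack {
            avatar?
                .resizable()
                .scaledToFit()

            // No photo-library permission prompt: the picker runs out of
            // process and only the selected items are handed to the app.
            PhotosPicker("Choose a photo", selection: $selection, matching: .images)
                // Assumption: inline style embeds the picker in this view.
                .photosPickerStyle(.inline)
        }
        .onChange(of: selection) { _, newItem in
            Task {
                // Copy the picked item's data into the app and display it.
                if let data = try? await newItem?.loadTransferable(type: Data.self),
                   let uiImage = UIImage(data: data) {
                    avatar = Image(uiImage: uiImage)
                }
            }
        }
    }
}
```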
Elevate your windowed app for Spatial Computing
Spatial computing means your apps fit into your surroundings. While SwiftUI is the focus, UIKit can take advantage of much of this content. Topics: SwiftUI in the Shared Space, Polish your app, Brand-new concepts.
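A sketch of one of those brand-new concepts, assuming a visionOS target: the existing WindowGroup carries over to the Shared Space unchanged, and a second window can be styled as a volume for 3D content (the globe scene and identifiers here are placeholders).

```swift
import SwiftUI

@main
struct GalleryApp: App {
    var body: some Scene {
        // The existing windowed app, unchanged, running in the Shared Space.
        WindowGroup {
            ContentView()
        }

        // A second window presented as a volume for 3D content.
        WindowGroup(id: "globe") {
            GlobeView()
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
    }
}

struct ContentView: View {
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Button("Show Globe") {
            openWindow(id: "globe")
        }
    }
}

struct GlobeView: View {
    var body: some View {
        Text("3D content goes here")
    }
}
```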
Dive Deeper into SwiftData
Topics: Configure persistence, Track and persist changes, Modeling at scale.
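A sketch of configuring persistence, assuming a placeholder Trip model (not from the session): ModelConfiguration controls where and how the store lives, and the container is handed to the scene so changes are tracked and persisted automatically.

```swift
import SwiftUI
import SwiftData

// Placeholder model; @Model makes it a persisted SwiftData type.
@Model
final class Trip {
    var name: String
    var startDate: Date

    init(name: String, startDate: Date) {
        self.name = name
        self.startDate = startDate
    }
}

@main
struct TripsApp: App {
    let container: ModelContainer = {
        // Store on disk by default; flip isStoredInMemoryOnly for previews/tests.
        let configuration = ModelConfiguration(isStoredInMemoryOnly: false)
        return try! ModelContainer(for: Trip.self, configurations: configuration)
    }()

    var body: some Scene {
        WindowGroup {
            TripList()
        }
        .modelContainer(container)
    }
}

struct TripList: View {
    // Inserted models are tracked and persisted by the container.
    @Query(sort: \Trip.startDate) private var trips: [Trip]
    @Environment(\.modelContext) private var context

    var body: some View {
        NavigationStack {
            List(trips) { trip in
                Text(trip.name)
            }
            .toolbar {
                Button("Add") {
                    context.insert(Trip(name: "New Trip", startDate: .now))
                }
            }
        }
    }
}
```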
Design considerations for vision and motion
This is a research-based session. Topics: Visual depth cues, Content parameters, Eye effort, Motion of virtual objects, Head-locked content, Motion in windows, Oscillating motion.
Customize on-device speech recognition
iOS 10 introduced speech recognition. Speech uses an acoustic model to convert audio into a phonetic representation, which is then transcribed into a written representation. Sometimes there are multiple matches, so we must do more than just that: looking at context, we can disambiguate values with a language model. This was how it was modeled in […]
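For context, a sketch of requesting strictly on-device recognition with the Speech framework; the session goes further with custom language-model APIs, which are not shown here.

```swift
import Foundation
import Speech

func transcribe(fileURL: URL) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
              recognizer.supportsOnDeviceRecognition else { return }

        let request = SFSpeechURLRecognitionRequest(url: fileURL)
        // Keep the audio and the transcription on device.
        request.requiresOnDeviceRecognition = true

        recognizer.recognitionTask(with: request) { result, error in
            if let result, result.isFinal {
                print(result.bestTranscription.formattedString)
            } else if let error {
                print("Recognition failed: \(error)")
            }
        }
    }
}
```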