The SwiftUI cookbook for focus

Using focus APIs in SwiftUI

What is Focus

  • Focus is the tool the system uses to decide how to respond when someone interacts with an input method.  On their own, input methods don’t provide enough information about which onscreen control they should act on.
  • When a view has focus, the system will use it as a starting point to react to input 
  • On macOS the focused view gets a border, on watchOS a green border, and on tvOS the focused content hovers above the UI
  • Helps users understand where input will go; it’s a special kind of cursor for the user’s attention

Ingredients

  • Focusable views
    • Different views are focusable for different reasons.  Text fields are always focusable.  Buttons are used for clicks and taps; for buttons to be reachable with Tab you need system-wide keyboard navigation turned on.
    • Buttons support focus for activation.  In iOS 17 and macOS Sonoma there are new view modifiers to declare the kinds of focus interactions you support: .focusable(interactions: .edit) or .focusable(interactions: .activate).  If you don’t specify an interaction, you get .all.  Prior to macOS Sonoma the system only used activation-style focus.  (I should add focusable in my code – see the sketch after this list.)
  • Focus state
    • The system keeps track of which view has focus through @FocusState bindings – a Bool, or a custom data type for more complex interactions
    • Views can read this to understand when they are in focus or not.
  • Focused values
    • This solves data dependencies that link remote parts of your application.  It works like a custom environment key and value: set up a key with a getter and setter, and define view modifiers to publish the value
  • Focus sections
    • Gives you a way to influence how focus moves when people swipe on a remote or press Tab on a keyboard.  Tab follows the layout order of the locale; the Apple TV remote moves focus directionally.
    • You use .focusSection() to make a container guide focus to its nearest focusable content.  You may need to add spacers to align content.
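
A rough sketch of how these ingredients fit together, assuming a hypothetical note-editing view (the view, field names, and focus logic are illustrative, not from the session):

import SwiftUI

struct NoteEditor: View {
    @FocusState private var noteIsFocused: Bool   // the system tracks focus; this binding reads and drives it
    @State private var text = ""

    var body: some View {
        VStack {
            TextEditor(text: $text)
                .focused($noteIsFocused)           // tie this view's focus to the binding

            // A custom control that opts into focus for activation only
            // (reachable once keyboard navigation is turned on).
            Label("Clear", systemImage: "trash")
                .focusable(interactions: .activate)
        }
        .onAppear { noteIsFocused = true }         // move focus programmatically
    }
}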

Recipes

  • For custom controls you should consider which item should receive focus when a list appears.   The grocery list example in this session shows how to do this with @FocusState and .defaultFocus() (see the sketch after this list)
  • If you want the app to move focus programmatically you can use the same example of adding a new item to the list
    • I should address focus in my Wasted Time app for updates on the settings screen
  • If you create a custom control, the custom picker example helps with this pattern.
    • Remember to turn on Keyboard navigation systemwide
  • A focusable grid view is the final recipe – using a lazy grid
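
A rough sketch of the grocery-list pattern with @FocusState, .defaultFocus(), and a programmatic focus move; the Item model and view names are hypothetical:

import SwiftUI

struct GroceryListView: View {
    struct Item: Identifiable {
        let id = UUID()
        var name: String
    }

    @State private var items = [Item(name: "Apples")]
    @FocusState private var focusedItem: Item.ID?

    var body: some View {
        List {
            ForEach($items) { $item in
                TextField("Item name", text: $item.name)
                    .focused($focusedItem, equals: item.id)
            }
        }
        // Focus the first row when the list appears.
        .defaultFocus($focusedItem, items.first?.id)
        .toolbar {
            Button("Add") {
                let newItem = Item(name: "")
                items.append(newItem)
                focusedItem = newItem.id   // move focus to the new row programmatically
            }
        }
    }
}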

Meet Swift OpenAPI Generator

A Swift package plugin that generates code for working with HTTP services.  Writing dynamic network requests by hand means understanding the base URL, the endpoint, parameters, and more.

Most services have API documentation, which can be outdated or inaccurate. You can use inspection – but that provides incomplete understanding.

Using a formal, structured specification helps reduce these challenges and ambiguities.  This session covers how Apple and Swift support the OpenAPI specification to improve how apps talk to servers.

Exploring OpenAPI

  • You can use either YAML or JSON to document the server’s behavior.  There are tools for testing, for generating interactive documentation, and for spec-driven development.
  • There is a lot of boilerplate code you have to write in Swift to deal with APIs, so with OpenAPI you can use tooling to generate most of that boilerplate.  The session walks through a trivial example.
  • If we add one optional parameter, we can see it integrated into the YAML for the GET operation
  • The code is generated at build time, so it is always in sync with the API specification

Making API calls from your app

  • The sample app is updated to fetch a cat emoji from the service
  • Here’s the API Specification 
  • The entirety of the code for the API is seen here. Package dependencies are available in the session notes
  • You must configure the generator plugin via a YAML config file
  • Once all the setup is done, here’s the code:
import SwiftUI
import OpenAPIRuntime
import OpenAPIURLSession

#Preview {
    ContentView()
}

struct ContentView: View {
    @State private var emoji = "🫥"

    var body: some View {
        VStack {
            Text(emoji).font(.system(size: 100))
            Button("Get cat!") {
                Task { try? await updateEmoji() }
            }
        }
        .padding()
        .buttonStyle(.borderedProminent)
    }

    let client: Client

    init() {
        self.client = Client(
            serverURL: try! Servers.server1(),
            transport: URLSessionTransport()
        )
    }

    func updateEmoji() async throws {
        let response = try await client.getEmoji(Operations.getEmoji.Input())

        switch response {
        case let .ok(okResponse):
            switch okResponse.body {
            case .text(let text):
                emoji = text
            }
        case .undocumented(statusCode: let statusCode, _):
            print("cat-astrophe: \(statusCode)")
            emoji = "🙉"
        }
    }
}
  • Pretty simple, huh?

Adapting as the API evolves

  • As the API evolves, you regenerate from the updated YAML file and adapt your app code to the changes
  • Added parameters are also easy to adopt in your code

Testing your app with mocks

  • Creating a mock is needed for testing
  • By creating a type that conforms to the generated APIProtocol you can mock the client, and make your view generic over the protocol – this allows running without a server (see the sketch below).  Check out the code at https://developer.apple.com/wwdc23/10171
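
A rough sketch of that mocking pattern, assuming the generator produced the APIProtocol and getEmoji operation used in the code above (exact generated names depend on your OpenAPI document):

import SwiftUI
import OpenAPIRuntime

// A mock conforming to the generated APIProtocol; no server needed.
struct MockClient: APIProtocol {
    func getEmoji(_ input: Operations.getEmoji.Input) async throws -> Operations.getEmoji.Output {
        .ok(.init(body: .text("🤖")))
    }
}

// Making the view generic over the protocol lets tests and previews inject the mock.
struct CatView<C: APIProtocol>: View {
    let client: C
    @State private var emoji = "🫥"

    var body: some View {
        Text(emoji)
            .font(.system(size: 100))
            .task {
                guard let response = try? await client.getEmoji(Operations.getEmoji.Input()) else { return }
                if case let .ok(ok) = response, case let .text(text) = ok.body {
                    emoji = text
                }
            }
    }
}

#Preview {
    CatView(client: MockClient())
}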

Server development in Swift

  • There is an example of a Swift console app to run on your machine.  Using the generator simplifies the server code requirements so you can just focus on the business logic.
  • The demo is using Vapor for the server code
  • Spec-driven development lets you focus on your business logic on the server; the OpenAPI generator automatically generates the stubs for you (see the sketch below)
  • OpenAPI Generator is an open source project available on GitHub https://github.com/apple/swift-openapi-generator
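
A rough sketch of what the server side looks like, assuming the same generated APIProtocol plus the swift-openapi-vapor transport; the type names and setup follow that project's conventions and are not verbatim from the session:

import Vapor
import OpenAPIRuntime
import OpenAPIVapor

// Business logic only: the generated code handles routing, parsing, and serialization.
struct Handler: APIProtocol {
    func getEmoji(_ input: Operations.getEmoji.Input) async throws -> Operations.getEmoji.Output {
        let emojis = "🐱😹😻🙀😿"
        return .ok(.init(body: .text(String(emojis.randomElement()!))))
    }
}

@main
struct CatService {
    static func main() throws {
        let app = Vapor.Application()
        // Register the generated routes on a Vapor transport.
        let transport = VaporTransport(routesBuilder: app)
        let handler = Handler()
        try handler.registerHandlers(on: transport, serverURL: Servers.server1())
        try app.run()
    }
}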

Meet Assistive Access

This is about supporting people with cognitive disabilities by distilling apps to their core.  It can be set up by a parent or guardian.

Overview

  • Assistive Access is set up in the Settings app; you are guided through the setup process, including which apps and indicators show.  You can enter it from Settings or via the Accessibility Shortcut.
  • It provides a different Lock Screen and a unique Home Screen with larger icons and text; there are five default apps designed for Assistive Access

Principles

  • Design to create an effective experience that reduces cognitive strain.
    • Error prevention and recovery
    • Tasks should be completable without distractions
    • Reduce time dependency on actions
    • Drive consistency

Your App

  • Third party apps will just work.  
  • Apps run in a reduced frame by default, with a large back button at the bottom of the screen

Optimized for assistive access

  • New Info.plist entry – set UISupportsFullScreenInAssistiveAccess to YES if your app uses adaptive layout (I should add this to Wasted Time)

Keep up with the keyboard

The keyboard has changed a bit over the last few years.  It has new languages, it floats, and it handles multiple screens

Out of process keyboard

  • This is the new architecture; it allows for improved privacy and security
  • The keyboard now runs in a process outside of your app, so the system and your app request updates from each other asynchronously.  Ultimately you get a text insertion, though the asynchrony can introduce slight timing differences
  • Frees memory from your app
  • Provides flexibility for future changes 

Design for the keyboard

  • Note that in the old model you simply moved your view up to make room for the keyboard.  In the new model you need to adjust your app to its intersection with the keyboard overlay (as in Stage Manager), and you may have multiple scenes that have to adjust.
  • The keyboard’s mini toolbar behaves differently in Stage Manager than outside of it.
  • The keyboard layout guide was introduced in iOS 15 for UIKit, has been widely adopted, and is the recommended way to track the keyboard (see the sketch after this list).
    • view.keyboardLayoutGuide.topAnchor.constraint(equalTo: textView.bottomAnchor).isActive = true
    • This has been updated to allow for more customization on iOS 17
  • SwiftUI automatically handles the common cases for you, by adjusting the safe area.
  • Notifications
    • In the past you had to listen for willShow, didShow, etc. and process them yourself, but with the introduction of Stage Manager those patterns broke down
    • It certainly seems that many of these changes are why Stage Manager only got as far as it did last year.
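
A minimal UIKit sketch of the keyboard layout guide approach; the view controller and constraints are illustrative:

import UIKit

final class ComposeViewController: UIViewController {
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(textView)

        NSLayoutConstraint.activate([
            textView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor),
            textView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            textView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            // The guide tracks the keyboard; when the keyboard is hidden it matches the bottom safe area.
            view.keyboardLayoutGuide.topAnchor.constraint(equalTo: textView.bottomAnchor)
        ])

        // Keep following the keyboard when it is undocked or floating on iPad.
        view.keyboardLayoutGuide.followsUndockedKeyboard = true
    }
}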

New text entry APIs

  • Inline predictions use on-device processing and context to suggest text as you type.  This is enabled by default for most text fields, and it is powerful
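
A small sketch of controlling this per field via UITextInputTraits on iOS 17 (opting out here is just an example):

import UIKit

// Inline predictions are on by default for most fields; a sensitive field can opt out.
func configure(_ textField: UITextField) {
    textField.inlinePredictionType = .no   // .default / .yes / .no
}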

Embed the Photo Picker in your App

With the new photo picker you don’t need to request any permissions, and it takes only a few lines of code to use.

Embedded Picker

  • The picker runs in a separate process outside of your app.  Only what is selected is passed back to your app.
    • You can use the new .photosPickerDisabledCapabilities modifier to turn off certain features of the picker
    • You can also hide various accessories with the .photosPickerAccessoryVisibility modifier
    • You can even change the size of the picker
    • You can use the new .photosPickerStyle(.inline) to make it a more natural part of your app (see the sketch after this list)
    • PhotosPicker(selectionBehavior: .continuous) allows your app to respond to each selection of an image.
  • There’s a new privacy badge on the picker
  • There is a detailed discussion in the session of which options can be disabled, including search, selection actions, and more
  • The picker styles include presentation, inline, and compact (a single row that scrolls horizontally)
  • This API is available for iOS, iPadOS, and macOS, in SwiftUI, AppKit, and UIKit – it was not listed for xrOS/visionOS
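
A rough sketch pulling these modifiers together into an embedded picker; the layout values and disabled capabilities are illustrative choices, not recommendations from the session:

import SwiftUI
import PhotosUI

struct EmbeddedPickerView: View {
    @State private var selection: [PhotosPickerItem] = []

    var body: some View {
        PhotosPicker(
            selection: $selection,
            selectionBehavior: .continuous,          // report each change as the user picks
            matching: .images
        ) {
            Text("Select photos")
        }
        .photosPickerStyle(.inline)                  // embed the picker in the view hierarchy
        .photosPickerDisabledCapabilities(.selectionActions)
        .photosPickerAccessoryVisibility(.hidden, edges: .bottom)
        .frame(height: 300)
    }
}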

Options Menu

  • This is a new menu which gives users control of what is shared with your app.  They can choose to remove metadata such as location, for example.

HDR and Cinematic

  • The system may automatically transcode assets to formats like JPEG, but if you want to keep HDR data
    • you need to set the .current encoding policy
    • and use .image or .movie for the content type

Elevate your windowed app for Spatial Computing

Spatial computing means your apps fit into your surroundings

While SwiftUI is the focus, UIKit can take advantage of much of this content.

SwiftUI in the Share Space

  • Most of the system applications were written in SwiftUI – so you can see it is similar to iPad apps while taking full advantage of the environment
  • This session updates the Backyard Birds sample app, so until the visionOS SDK is available you won’t be able to code along.  Watch this session to become familiar for when it is available.
  • You need to add a new run destination for the new platform; if you run natively rather than as “Designed for iPad,” windows pick up the new glass background, and you don’t have to deal with light and dark

Polish your app

  • When updating your custom views, note that the way content is scaled may make some assets blurry; try replacing bitmap content with vector assets.  If you use bitmaps you may see blurring
    • Change the “Scales” parameter in the asset inspector to Single Scale for vectors, and also select Preserve Vector Data.
  • Change solid color backgrounds, as they will not adjust contrast on glass.  Add vibrancy to provide additional clarity on glass; it’s there by default with standard controls and semantic styles like .foregroundStyle(.tertiary).  You can remove color scheme checks too.
  • Interactive targets should be reviewed to see how they work on the new platform.  If you have standard controls you should be ok, but if you’ve customized them, you will need to review your code.
    • There are hover effects to show focus.  If you have custom controls you should add hover effects yourself – by default you can just use .hoverEffect() – and you can add a contentShape modifier to clean up the highlight shape (see the sketch after this list).
    • By changing a plain view into a Button, you get the appropriate effects for free.
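
A small sketch of adding a hover effect to a hypothetical custom control and shaping its highlight with contentShape:

import SwiftUI

struct FavoriteBadge: View {
    var body: some View {
        Image(systemName: "star.fill")
            .padding(12)
            // Shape the highlight so the effect matches the control's visual bounds.
            .contentShape(.hoverEffect, RoundedRectangle(cornerRadius: 12))
            .hoverEffect()
    }
}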

Brand new concepts

  • Top-level layout changes – you should use a TabView instead of a sidebar; this gives you the side buttons that expand with titles when the user focuses on them.
    • This also gives more room for the content; the floating element is called an ornament 
  • A bottom toolbar is created with the toolbar modifier and the .bottomOrnament placement option (see the sketch after this list).
    • You can build custom ornaments 
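
A rough sketch of the TabView-plus-ornament layout; the tabs and buttons are placeholders:

import SwiftUI

struct BirdsRootView: View {
    var body: some View {
        TabView {
            Text("Backyards")
                .tabItem { Label("Backyards", systemImage: "tree") }
            Text("Birds")
                .tabItem { Label("Birds", systemImage: "bird") }
        }
        .toolbar {
            // Toolbar items placed in the ornament below the window on visionOS.
            ToolbarItemGroup(placement: .bottomOrnament) {
                Button("Play", systemImage: "play.fill") { }
                Button("Pause", systemImage: "pause.fill") { }
            }
        }
    }
}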

Dive Deeper into SwiftData

  • You’ll want to download the sample app from here. Make sure to change the group and application identifier
  • You get undo and autosave when you switch applications.  Works with your basic classes and structs 
  • It uses a new @Model macro
  • Where possible SwiftData will infer structure from your code, but you can be very explicit – check out “Model your Schema with SwiftData” for more information
  • The schema is applied to the ModelContainer class. And instances of the classes are mapped to a ModelContext in your code.

Configure Persistence

  • The model container is the bridge between the schema and where it is stored
  • You can instantiate easily with try ModelContainer(for: SCHEMA.self) it will infer related types
  • The ModelConfiguration class describes the persistence of the schema
    • On disk or in memory
    • File location (or generate one for you)
    • Read Only mode
    • And you can tell it which cloudKit container to use
  • Note that in the example we define all the schemas we will use and where we want to store them, including the CloudKit container for each configuration as appropriate (since we want to keep People data separate from Trips data)
  • Finally we create the container with `let container = try ModelContainer(for: fullSchema, configurations: trips, people)` (see the sketch after this list)
  • You can apply the modelContainer modifier to a view or scene to describe which containers you will use there
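
A rough sketch of this configuration, using hypothetical Trip and Person models and made-up store and container names:

import Foundation
import SwiftData

@Model final class Trip {
    var name: String
    init(name: String) { self.name = name }
}

@Model final class Person {
    var name: String
    init(name: String) { self.name = name }
}

func makeContainer() throws -> ModelContainer {
    let fullSchema = Schema([Trip.self, Person.self])

    // Separate stores (and CloudKit containers) for different parts of the schema.
    let trips = ModelConfiguration(
        schema: Schema([Trip.self]),
        url: URL.documentsDirectory.appending(path: "trips.store"),
        cloudKitDatabase: .private("iCloud.com.example.trips")
    )
    let people = ModelConfiguration(
        schema: Schema([Person.self]),
        url: URL.documentsDirectory.appending(path: "people.store"),
        cloudKitDatabase: .private("iCloud.com.example.people")
    )

    return try ModelContainer(for: fullSchema, configurations: trips, people)
}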

Track and Persist changes

  • Your model objects and the modelContext work together to track changes
  • The modelContainer modifier binds the modelContext into the @Environment 
  • Changes are stored as snapshots in the modelContext until you call context.save() – this will persist changes to modelContainer
  • This modelContext works in coordination with the ModelContainer – which supports rollback and reset (for undo and autosave)
  • The modelContainer modifier has an isUndoEnabled: parameter.  With it, system gestures like shake and three-finger swipe automatically perform undo for you.
  • Autosave saves during system events like moving to the foreground or background.  You can disable it via isAutosaveEnabled; it is enabled by default in applications, but disabled for model contexts created by hand (see the sketch below).
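
A small sketch of the context flow, reusing the hypothetical Trip model from the sketch above; autosave is left on and undo is enabled explicitly:

import SwiftUI
import SwiftData

@main
struct TripsApp: App {
    var body: some Scene {
        WindowGroup { AddTripButton() }
            // Autosave is on by default here; shake / three-finger-swipe undo is enabled explicitly.
            .modelContainer(for: Trip.self, isUndoEnabled: true)
    }
}

struct AddTripButton: View {
    // The modelContainer modifier binds this context into the environment.
    @Environment(\.modelContext) private var context

    var body: some View {
        Button("Add trip") {
            context.insert(Trip(name: "Beach weekend"))
            try? context.save()   // snapshots persist when saved (or on autosave)
        }
    }
}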

Modeling at scale

  • Background operations, sync and batch processing all work with model objects
  • Using the #Predicate macro you can write queries and subqueries in plain Swift.
  • You can add tuning parameters to the enumerate function on modelContext; it is efficient by default, using platform best practices like batching (which you can adjust if you want).  It also implements mutation guards by default, which you can override with allowEscapingMutations (see the sketch below). 
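
A small sketch of a #Predicate query and batched enumeration, again using the hypothetical Trip model:

import SwiftData

func listHawaiiTrips(in context: ModelContext) throws {
    let searchText = "Hawaii"
    let descriptor = FetchDescriptor<Trip>(
        predicate: #Predicate { $0.name.contains(searchText) }
    )

    // enumerate() batches the fetch and guards against accidental mutation by default;
    // both can be tuned (batchSize:, allowEscapingMutations:).
    try context.enumerate(descriptor, batchSize: 500) { trip in
        print(trip.name)
    }
}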

Design considerations for vision and motion

This is a research-based session.

Visual Depth Cues

  • Make sure your content provides depth cues so the brain can perceive the content correctly
  • The visual system interprets what is perceived, and making the cues agree is important for visual comfort around the line of sight.  Depth cues help the brain handle this; if they are wrong you can cause eye fatigue or double vision for your users.
  • You can use size, gentle motion, color, blur, light, shadow, backgrounds, and occlusion 
  • Conflicting cues can also cause issues for your user.  Repeating patterns can cause the eyes to mis-focus across multiple items.

Content Parameters

  • Place content meant for reading farther than arm’s length, and let users adjust it for comfort.
  • Content for direct interaction or quick glances can be placed closer.
  • Use blur and transparency to help users focus their eyes.
  • Use high contrast for reading, and keep it centered to keep the user from having to move their head back and forth.
  • Slow transitions between dark and light scenes 

Eye effort

  • Minimize the eye effort demanded of your users; it is most comfortable to look down or left and right, so place content there to reduce eye strain.
  • Upward and diagonal movement is the most effort.
  • Long-term content should be in the center and slightly below the middle of the field of view.
  • Allow for natural break points in your experience to allow for eye rest.

Motion of virtual objects

  • The inner ear is used, along with the eyes, to perceive motion.  If the two disagree you can get dizzy or sick to your stomach.
  • If things move toward the user, make your objects semi-transparent to reduce discomfort    

Head-locked content

  • When possible use this approach, since the content will not come at the user.  You can use lazy follow, which moves content into position slowly over time, reducing issues.

Motion in windows

  • Pay attention to the motion of content within a window; make sure the virtual horizon stays aligned with the real horizon.  The focus of expansion should be slow and predictable (and within the field of view), and pure rotational changes should be reduced – you can use a quick fade while shifting instead.  Smaller objects and plain textures are better.

Oscillating motion

  • Sustained oscillation should be avoided.  Think of Count Floyd’s 3D House of Beef.  If you have to use oscillation, remember to make the content semi-transparent 

Customize on-device speech recognition 

iOS 10 introduced speech recognition

Speech uses an acoustic model to convert audio into a phonetic representation, which is then transcribed into written text.  Sometimes there are multiple matches, so more is needed than just that; looking at context, a language model disambiguates among the candidates.  This is how it was modeled in iOS 10.

In iOS 17 you can customize the language model to make recognition more appropriate for your app.  You boost the model with phrases your app needs, and you can tune the weight of certain phrases.  You can also use templates to load a lot of patterns, as in the chess example.

You can also define spelling and pronunciations for specific domains like medicine.  Again, a chess example (a sketch follows):
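
A rough sketch of what that chess example might look like with the iOS 17 Speech customization APIs; the identifiers, phrases, and pronunciation are illustrative, not the session’s exact code:

import Speech

func exportTrainingData() async throws {
    let data = SFCustomLanguageModelData(
        locale: Locale(identifier: "en_US"),
        identifier: "com.example.chess.lm",
        version: "1.0"
    ) {
        // Boost phrases the app expects to hear.
        SFCustomLanguageModelData.PhraseCount(phrase: "Play the Albin counter gambit", count: 10)

        // Custom pronunciations for domain terms, given as X-SAMPA strings.
        SFCustomLanguageModelData.CustomPronunciation(grapheme: "Winawer", phonemes: ["w I n aU @r"])
    }
    // Export the training data for the recognizer to prepare into a custom language model.
    try await data.export(to: FileManager.default.temporaryDirectory.appending(path: "ChessLM.bin"))
}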

Training data is bound to a single locale – so you will need to use standard localization methods.

Loading a language model has latency, so load it on a background thread and hide the latency behind some UI, like a loading screen or other method.

Customization data is never sent over the network; you must restrict recognition to run on device, otherwise the custom language model will not be loaded.

Animate with Springs

All about engaging your user with better animations

Why Springs

  • Animations provide a level of continuity; watching something move is more natural than just seeing it appear in a new place.  Preserving velocity is important to make it look natural
  • Ease in and out is a Bézier animation defined by a curve and a duration; to me this adds gravity to the animation
  • A spring is like an ease-in-and-out animation; however, if you drive the animation from a gesture, a spring looks more natural when you flick or throw the object
  • Motion of a spring is not only a bouncing animation; it is about how the animation ends. It is a slow and natural stop, not like hitting a wall.

How Springs Work

  • You are modeling a motion of an object attached to a spring. This is impacted by the mass of the object, the stiffness of the spring, and the damping of the system (aka friction).
    • Initial position of the animation, and the target is the resting position of the spring.
  • While changing those properties makes sense for a physical system, in software we use duration and bounce to describe the spring.  Increasing duration makes it take longer, and bounce changes the curve: greater than 0 adds a bounce, 0 gives a smooth curve, and less than 0 flattens the curve so it takes longer to settle.
  • All of the math is implemented for you.  A bouncy spring is like a cosine wave; at 100% bounce it oscillates back and forth as A * cos(2π * t / duration).  The session’s chart shows all three curves, where the blue and green curves are defined at the bottom and the dark curve is defined by the previous formula.
  • To preserve velocity, our cosine curve starts off with a downward slope.
  • The velocity can come from a gesture or from an interrupted animation.  The session goes through the rest of the calculations for those who are interested.
  • You can use a completion handler that uses perceptual duration instead of the settling duration to process other activities 

How to use Springs

  • Springs are the default for SwiftUI
  • You can explicitly use presets like .snappy, .bouncy, and .smooth, and tune them with a duration or extra bounce
  • You can customize the .spring completely 
  • There is also a Spring model type to programmatically convert parameters, or to do the math yourself (see the sketch below)
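
A small sketch of these options; the view and values are illustrative:

import SwiftUI

struct SpringDemo: View {
    @State private var isExpanded = false

    var body: some View {
        Circle()
            .frame(width: isExpanded ? 200 : 100)
            .onTapGesture {
                // Presets can be tuned with a duration and extra bounce...
                withAnimation(.snappy(duration: 0.4, extraBounce: 0.1)) {
                    isExpanded.toggle()
                }
                // ...or specify the spring directly:
                // withAnimation(.spring(duration: 0.6, bounce: 0.3)) { ... }
            }
    }
}

// The Spring model type converts between parameter spaces if you need the raw physics values.
func springParameters() {
    let spring = Spring(duration: 0.5, bounce: 0.3)
    print(spring.mass, spring.stiffness, spring.damping)
}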