Generalize APIs with parameter packs

This is an advanced topic that builds on last year's "Embrace Swift Generics"; it is highly recommended that you review that session before watching this one.

What parameter packs solve

  • This is about generics and variadics
  • Code is written in terms of values and types. You can abstract over types with type parameters; most generic code abstracts over both types and values.
  • If you want a variable number of parameters, you have variadic parameters. But they have limitations: you can’t accept a varying number of arguments while preserving each argument’s type information. The only way to do that today is via overloading.
    • This pattern is limiting and adds redundancy.
  • Parameter packs solve this overloading problem.

How to read parameter packs

  • In code, a parameter pack can hold any quantity of types or values
  • A type pack holds different types
  • A value pack holds different values
  • They are used together and are positional, so the type at position 0 corresponds to the value at position 0. Think of it like a collection.
  • You iterate to process them, but each element can have a different type. You declare a pack with <each Type> – this is a type parameter pack. Use a singular name instead of a plural one. Then repeat Function<each Type> shows how the substitution works: it expands into a comma-separated list of types. Repetition patterns can only be used in positions that naturally support comma-separated lists, like function parameter lists.
  • Using a type pack in a function parameter gives you a value pack
  • You can add constraints to a parameter pack to require a conformance
  • You can force a minimum number of arguments (even though the default is zero or more arguments), as shown in the sketch below.
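A minimal sketch of how a pack declaration reads – the function name and the Hashable constraint are just illustrative:

```swift
// `each Item` is a type parameter pack; `item` is the matching value pack,
// and positions line up (item 0 has the type at position 0, and so on).
// The Hashable constraint applies to every element of the pack, and declaring
// `first` separately is how you require at least one argument.
func cacheKey<First: Hashable, each Item: Hashable>(
    _ first: First,
    _ item: repeat each Item
) -> (First, repeat each Item) {
    // The repetition pattern expands into a comma-separated list, so it can
    // only appear where such a list is allowed — here, inside a tuple.
    return (first, repeat each item)
}

// Accepts one or more arguments of any mix of Hashable types:
// let key = cacheKey("user", 42, true)
```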

Using parameter packs

  • To use packs, you write repetition patterns like repeat (each item).evaluate(); if you want all the values collected together, wrap the pattern in parentheses to form a tuple: (repeat (each item).evaluate())
  • This should make your code simpler and also allow you to write a lot less of it
  • To exit an iteration early, use throwing methods – a thrown error breaks out of the repetition (if needed). See the sketch below.
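A minimal sketch of both ideas, assuming a hypothetical Request type with an evaluate() method (modeled on the session’s example):

```swift
// A hypothetical request whose payload type varies per pack element.
struct Request<Payload> {
    let produce: () throws -> Payload
    func evaluate() throws -> Payload { try produce() }
}

// Wrapping the repetition pattern in parentheses collects one result per
// element of the pack into a tuple. If any evaluate() call throws, the
// expansion stops early — throwing is how you break out of the iteration.
func query<each Payload>(
    _ item: repeat Request<each Payload>
) throws -> (repeat each Payload) {
    return try (repeat (each item).evaluate())
}

// let (number, text) = try query(Request { 42 }, Request { "hello" })
```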

Discover String Catalogs

Localization is managed with String Catalogs; the OS supports 40 languages today.

You used to have to maintain .strings and .stringsdict files in your project. In Xcode 15, String Catalogs simplify the process.

Will update my own app to this quickly. The editor handles complex cases to make it easier to work with strings, including the ability to vary strings by device.

Extract

  • Localizable strings are simply text presented to the user at runtime.
  • Key (required) – usually the string itself
  • Default value – usually falls back to the key
  • Comment – provides the translator context about where and how the string is used
  • Table – corresponds to one or more files where translations are stored. By default this is the Localizable table.
  • A String Catalog contains an entire string table in a single file. You can create multiple String Catalogs.
  • Xcode will create the catalog and keep it in sync.
  • You should define your localizable strings as such, e.g. with LocalizedStringResource (see the sketch below)
  • Set the build setting “Use Compiler to Extract Swift Strings” to make sure Xcode extracts strings for you; you can also define your own localized custom macros
  • In Interface Builder, strings are automatically treated as localizable.
  • Add an Info.plist to the catalog to capture localizable values in plist files
  • Every time you build, Xcode syncs to the catalog – code is the source of truth and the catalog reflects the state of each localized value. A string is considered stale if it is removed from code but still has a translation defined.
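A minimal sketch of strings Xcode 15 can extract into a catalog – the keys, comments, and table name are illustrative:

```swift
import SwiftUI

struct GreetingView: View {
    var body: some View {
        // SwiftUI string literals are localizable by default; the comment
        // gives translators context.
        Text("Welcome back!", comment: "Title shown after the user signs in")
    }
}

// Outside SwiftUI, use LocalizedStringResource (or String(localized:)) so the
// compiler can extract the string, here into a custom "Settings" table.
let logoutTitle = LocalizedStringResource(
    "Log out",
    table: "Settings",
    comment: "Button that signs the user out"
)
```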

Edit

  • The editor provides first-class support for the state and progress of translation:
    • New – untranslated
    • Stale – no longer seen in the code
    • Needs Review – may require a change
    • Translated – green check mark, no action needed
  • Pluralization is a challenge
    • In English you only need to change the grammar slightly
    • But in other languages it may be more complex
    • You used to need a .stringsdict file. String Catalogs build in value variations via the context menu.
    • Complex examples like “You have X items in X packages” may need to change form, e.g. “6 items in 1 package”
    • You define substitutions for the items and packages counts in that statement (see the sketch below)
  • You can manually add strings that are not discoverable in code, for example ones that come from a database
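A minimal sketch of such a string in code – each interpolated count becomes a substitution you can give plural variations in the catalog editor (names and values are illustrative):

```swift
import Foundation

let itemCount = 6
let packageCount = 1

// Interpolated integers become substitutions in the String Catalog, where
// each one can be varied by plural ("1 package" vs. "2 packages").
let summary = String(
    localized: "You have \(itemCount) items in \(packageCount) packages",
    comment: "Shipment summary shown on the orders screen"
)
```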

Export

  • To work with translators, you can export a localization catalog for each language and send it to the translation team.
    • The .xliff file is the standard format for storing translations.
    • There are some differences in the exported output when it comes from a catalog instead of a .stringsdict, which may require updates to automation tools.
  • To make sure export works correctly, set “Localization Prefers String Catalogs” to “Yes” in build settings.
  • Importing the translated file adds the translations back into the catalog correctly.

Build

  • During the build, catalogs are compiled; under the hood they are JSON files and easily diffable in source control
  • You can back-deploy to any OS target
  • Source strings are not stored in the final build

Migrate 

  • To migrate an existing project, pick the files and targets to migrate
  • Right-click and choose “Migrate to String Catalog”
  • You must use Swift tools version 5.9 or higher

Develop your first Immersive App

This session will probably require tools not yet available… will have to come back to it once the SDK and simulator become available.

Create an Xcode project

  • You create a project with the xrOS assistant – but this is not yet available, so I will just watch this section until I can really follow along.
  • Windows are 2D, can be resized, and are shown alongside other apps
  • Volumes are designed for 3D content, sizable in all three dimensions by the app, not the user
  • An Immersive Space gives your app a Full Space as a starting point, hiding other apps and enabling hand tracking; you can also decide how much immersion the user gets. “Go beyond the window with SwiftUI” should be checked out. A sketch of the three scene types follows this list.
  • Recommendation: always start in a window and provide clear entry and exit points for immersion
  • Most of the code is in ContentView in the sample project
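A minimal sketch of the three scene types – the ids, content, and sizes are illustrative:

```swift
import SwiftUI
import RealityKit

@main
struct WorldApp: App {
    var body: some Scene {
        // A regular 2D window: resizable, shown alongside other apps.
        WindowGroup(id: "main") {
            Text("Hello, world")
        }

        // A volume: 3D content with an app-defined size, specified in meters.
        WindowGroup(id: "globe") {
            RealityView { content in
                content.add(ModelEntity(mesh: .generateSphere(radius: 0.2)))
            }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)

        // A Full Space; opening it from a window is asynchronous
        // (via the openImmersiveSpace environment action).
        ImmersiveSpace(id: "immersive") {
            RealityView { content in
                content.add(ModelEntity(mesh: .generateSphere(radius: 0.2)))
            }
        }
    }
}
```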

Simulator

  • The simulator shows you a launcher view; a click simulates a tap, and click-and-hold simulates a pinch
  • There are buttons on the bottom right to control the device
  • There are multiple simulated scenes for different locations and lighting environments

Xcode Previews

  • When editing a source file, the preview provider opens the canvas with a simulated device view so you can navigate around and see the impact of your code changes
  • There is an object mode to see content that extends beyond your app’s view.

Reality Composer Pro

  • Creates Swift packages for your content
  • Content is organized into scenes
  • The scene type for an immersive space uses the inferred position of your feet as the center of the coordinate system.
  • You can get access to additional data like hand position. Users will be prompted to grant access to this privacy-sensitive data.

Create an immersive scene

  • You can use drag and drop to import and position USDZ files.
  • Double-clicking an object in the scene hierarchy makes it front and center in the scene editor window
  • Immersive spaces are placed at a fixed position based on where you are when the space is launched; you move around in the space to look around

Target gestures to entities

  • You can add a simple gesture such as TapGesture().targetedToAnyEntity().onEnded { /* code to run */ }
    • targetedToAnyEntity tells you which entity was tapped
  • The entity must have both an InputTargetComponent and a CollisionComponent for tap gestures to work, as in the sketch below
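A minimal sketch of targeting a tap at an entity inside a RealityView – the entity and its shape are illustrative:

```swift
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .blue, isMetallic: false)]
            )
            // Both components are required for the tap to register on the entity.
            sphere.components.set(InputTargetComponent())
            sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
            content.add(sphere)
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // value.entity is the entity that was tapped.
                    print("Tapped \(value.entity.name)")
                }
        )
    }
}
```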

Design widgets for the Smart Stack on Apple Watch

Layouts

  • There are six design layouts to keep widgets consistent
  • These use consistent font sizes and weights
  • Only show what is necessary – aim for something that can be read in 10 seconds max
  • Look at the design resources page
  • The combo layout is the default and can be used for circular app launchers

Color and Iconography

  • Aim to be recognizable and distinctive
  • By default you get a dark material with white text
  • Try an SF Symbol

Sessions

  • A session is an active state in an app with a clear start and end
  • Focus on helpful content leading into or following a session, like music to play

Relevancy 

  • There are moments to consider: time, location, starting or ending a workout
  • Relevancy prioritizes when a stack item floats to the top

Build Widgets for the Smart Stack on Apple Watch – code-along session

Design spatial SharePlay experiences

You can add shared context with Spatial Personas so people can communicate effortlessly.

Set the Scene

  • Understand the type of experience you are trying to recreate; this will impact content placement and interactivity.
  • You can share up to one window and one immersive space at a time, so you can have a dedicated shared window or a dedicated room (people can’t access other apps during the latter)
  • Using a shared window for the activity lets people use their other apps and drag and drop things between private and shared windows

Start SharePlay

  • Create entry points that make it easy for people to join. Adopting SharePlay gives you more control over the experience (see the sketch after this list).
  • Start activities from those who have the right permissions, and provide information to participants who are trying to join, like “Start TV”
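A minimal sketch of such an entry point using the GroupActivities framework – the activity name and metadata are illustrative:

```swift
import GroupActivities

// Describes what participants are joining; the metadata is what the system
// shows people before they accept.
struct WatchTVTogether: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Watch TV Together"
        meta.type = .watchTogether
        return meta
    }
}

// From an entry point in the UI, request activation:
// _ = try await WatchTVTogether().activate()
```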

Arrange participants

  • Arrangement is based on where the app is positioned, using one of three templates:
    • Side by side (think TV screen or whiteboard)
    • Surround (think tabletop game)
    • Conversational (think front and center, or ambient)
    • Your app can support up to 4 other people, so you can have between 2 and 5 participants in a shared space.

Share context and UI

  • Everyone gets a common coordinate system and space – this is shared context. The system manages the spatial part by itself, but app state needs to be synchronized by the app so people can follow along. You should encourage participants to follow the first person who changes context. A shared space uses a shared audio context by default.
  • Large physical motions and audio cues can help share context
  • Don’t impose permissions or turn-taking; the experience is most natural without them.
  • Allow for differences that don’t affect others, like per-person volume and subtitles
  • Active tools should be personalized too.

Enter a Full Space

  • Consider when it is ideal for a person to go into a separate context (or Full Space)
  • Think of these as breakout rooms designed for a specific person to view.
  • If a person steps out (for example by pressing the Digital Crown and enabling passthrough), make sure they can still see the shared window

Design for spatial user interfaces

UI Foundations

  • App icons
    • Should be round and flat, with a 3D effect created by expanding the layers. Unlike other platforms, you provide layers (up to three) that create a parallax effect. 1024×1024 layers are cropped to a circle to create the 3D effect – be careful with the layers.
  • Materials
    • Materials need to adapt to lighting conditions and surroundings
    • The new design language uses the glass material – this lets you see what is behind your window.
    • Avoid solid colors (will have to really focus on UI for my app)
    • There are system-defined vibrancy levels, covered below.
    • Don’t stack lighter materials on top of each other
  • Typography
    • All font styles use semantic names – check out “Principles of spatial design”
    • Weights have been adjusted on visionOS to make text easier to read
    • There are two new styles on visionOS – Extra Large Title 1 and 2
  • Vibrancy
    • Vibrancy is key to maintaining legibility across the system
    • It updates in real time, since the background can constantly change
    • Use primary for standard text
    • Use secondary for descriptive text
  • Colors
    • Consider using white text or symbols
    • If you need color, use it in a background layer or across the entire button

Layout

  • Ergonomics
    • Ergonomics are key; content should be placed intentionally, e.g. it is easier to look left or right than up or down
    • Prefer a wider aspect ratio instead of a taller one
    • Prioritize key information in the center
  • Sizing
    • Targets must have 60pts of space; a standard button is 44pts with 8pts of space around it on all sides
    • Smaller elements (like a 28pt disclosure button) still need 60pts of space around them
  • Focus feedback
    • System components provide a subtle highlight (hover effect) – this gives people confidence they are focusing on the right element
    • Leave a small amount of space (at least 4pts) between elements so the hover effect can appear
    • Ensure nested elements have relative corner radii – remember: inner corner radius + padding = outer corner radius
    • Make sure to use continuous corners

From screen to spatial

  • Window
    • Use the glass material instead of an opaque material
  • Tab Bar
    • The tab bar moves to the left side of the window, oriented vertically
    • Avoid having more than six items
  • Side Bar
    • This will automatically expand to show labels if someone looks at it for a while
  • Ornaments
    • Ornaments are placed at the bottom, slightly in front of the window – great for toolbars
    • They also add depth to an app. Use borderless buttons to keep them clean.
    • Overlap the window by 20pts so ornaments feel integrated with the main window
    • Hiding them is only recommended when the person is focusing on a single piece of content, like watching a movie
  • Menus
    • Menus can expand outside the window and are centered by default
    • Put the button that opened them into a selected state
    • Avoid buttons with white backgrounds unless they are selected
  • Popovers
  • Sheets
    • Modal views appear in the center of the app, pushing the parent view back
    • Consider using push navigation for nested views
    • Always place close buttons in the top-left corner

Design for Spatial Input

The focus is on eyes and hands – input here is personal and unique to the user.

Eyes

  • Eyes are the primary targeting method; all inputs target where you look
  • To make this work, we need to be:
    • Comfortable
      • Your canvas may be infinite, but you should design for the center of the field of view to minimize head and neck movement
      • Depth should be considered too; changing focus depth can cause eye strain. This is the Z position – you can push content backwards so that the current action stays at the same depth as before
    • Ease to use
      • Focus on shapes that guide to the middle of an object, so use round shapes, not sharp edges. 
      • Keep shapes flat without shadows. (Need to adjust my own items for this)
      • Right size is at least 60pt, (which includes spacing between items).  Use standard components as much as you can.  Check out Design for spatial user interfaces.
      • Scaling (larger as it go away, and smaller as it get’s closer to keep the target the same) this is default dynamic scale behavior.  Fixed scale changes target area.  Used dynamic scale as much as possible.
      • Keep UI oriented at the user to minimize eye strain – check out “Principles of spatial design”
    • Responsive
      • All interactive elements should be highlighted on focus. The hover effect is subtle.
      • You should add this for any custom elements (see the sketch after this list)
    • Intentional
      • You can use intent (a long hover) to provide tooltips or tab bar expansion
      • Focusing on the microphone glyph enables voice search mode.
      • This can also provide information for assistive technology.
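A minimal sketch of adding the system hover feedback to a custom SwiftUI element (names are illustrative; standard controls get this automatically):

```swift
import SwiftUI

struct CustomCard: View {
    var body: some View {
        VStack(spacing: 8) {
            Image(systemName: "photo")
            Text("Custom element")
        }
        .padding(24)
        // Define the shape the highlight should use, then opt in to the
        // system hover effect so the element responds where the user looks.
        .contentShape(.hoverEffect, RoundedRectangle(cornerRadius: 16))
        .hoverEffect()
    }
}
```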

Hands

  • Pinch is select, pinch and drag is scroll, and pinching with two hands enables zoom and rotate
  • The above slide is the one we’ve all been looking for
  • Use these same patterns in your own app so users can get up to speed easily.
  • A custom gesture should be:
    • Easy to explain and perform
    • Free of conflicts with other gestures
    • Comfortable and reliable
    • Accessible to everyone
    • Look at “Create accessible spatial experiences”
    • Consider providing a fallback
  • Combining eye direction with a hand gesture shows intent, e.g. zooming a picture toward where you are looking
  • Direct touch – reach out and use your fingertips
    • You can use a virtual keyboard
    • Consider that direct interaction can cause fatigue – holding your hand in the air for a long time
    • The lack of tactile response should be considered
      • Include things like UI changes and sound feedback
    • Check out “Explore immersive sound design”

Build spatial Experiences with RealityKit

RealityKit was introduced in 2019.

Will be going through the “Hello World” sample to explain the concepts.

RealityKit and SwiftUI

  • RealityKit lets you add 3D elements to your windows and views from SwiftUI
  • By putting models in a separate window using the new volumetric window style, you get a fully 3D object viewable from any angle. You can define a specific size (in meters), which stays consistent. An immersive space is a scene type, not just a window or view – you create it with ImmersiveSpace(id:) { RealityView { … } }. Note that opening an immersive space is asynchronous (see the sketch below).
  • Sessions to watch – “Meet SwiftUI for spatial computing” and “Take SwiftUI to the next dimension”
  • “Go beyond the window with SwiftUI” takes you into the various types of spaces in more detail.
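A minimal sketch of opening an immersive space asynchronously from a window (the id and view name are illustrative):

```swift
import SwiftUI

struct EnterImmersiveButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("View in your space") {
            Task {
                // Opening an immersive space is asynchronous; the id must
                // match an ImmersiveSpace scene declared in the App body.
                _ = await openImmersiveSpace(id: "immersive")
            }
        }
    }
}
```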

Entities and Components

  • Entities are container objects; an entity must have components to do “something”. Components are things like models and transforms. (Really good description of this.)
  • Check out “Explore materials in Reality Composer Pro”
  • A transform places an entity in space. Its properties are used for scale and motion.
  • There are functions to convert between RealityKit and SwiftUI coordinate systems
  • You can create your own components (see the sketch below)
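A minimal sketch of building an entity from components, including a custom one (all names and values are illustrative):

```swift
import RealityKit

// A custom component; register it once (e.g. at app launch) with
// SpinComponent.registerComponent().
struct SpinComponent: Component {
    var radiansPerSecond: Float = .pi / 4
}

func makeSphere() -> Entity {
    let sphere = ModelEntity(
        mesh: .generateSphere(radius: 0.1),
        materials: [SimpleMaterial(color: .white, isMetallic: false)]
    )
    // The Transform component places the entity in space (meters).
    sphere.transform.translation = [0, 1.5, -1]
    // Attach the custom component; a System can act on it later.
    sphere.components.set(SpinComponent())
    return sphere
}
```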

RealityView

  • RealityView is a SwiftUI view used to place entities in your app. You add entities to a content instance.
  • You can connect state to component properties to express the connection between SwiftUI state and the model (see the sketch below)
    • These closures are observable, so they only run when something they depend on changes
  • A convert function changes coordinates from entity space to SwiftUI space.
    • This is useful for scaling, etc.
  • You can subscribe to RealityKit events in SwiftUI via .subscribe(to:) to run code when events fire.
  • You can also attach SwiftUI views to entities – check out “Enhance your spatial computing app with RealityKit”
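A minimal sketch of a RealityView whose update closure reacts to SwiftUI state (names and values are illustrative):

```swift
import SwiftUI
import RealityKit

struct ScaledModelView: View {
    @State private var enlarged = false
    @State private var sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))

    var body: some View {
        RealityView { content in
            // The make closure runs once: add entities to the content instance.
            content.add(sphere)
            // Events can be observed too, for example:
            // _ = content.subscribe(to: SceneEvents.Update.self) { event in … }
        } update: { _ in
            // The update closure runs when observed SwiftUI state changes.
            sphere.scale = .init(repeating: enlarged ? 2.0 : 1.0)
        }
        .toolbar {
            Button(enlarged ? "Shrink" : "Enlarge") { enlarged.toggle() }
        }
    }
}
```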

Input, Animation, and Audio

  • You can add gestures to RealityKit views; the entity must have input target and collision components.
  • You can add these components via Reality Composer Pro
  • USDZ files can reference other USD files
  • The default shape for a collision component is a cube; you should match it as closely as you can to your object’s shape.
  • You can add a HoverEffectComponent to make your app react to where the user is looking. This is handled outside the app’s process for privacy (see the sketch below).
  • Built-in types of animation: from-to-by, orbit, and time sampled
  • RealityKit sounds are spatial by default. No additional reverb is added to ambient sources; channel audio is great for background music.
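A minimal sketch of wiring an entity for input, gaze highlighting, a simple transform animation, and audio (names, values, and the asset file are illustrative):

```swift
import RealityKit

func configureInteraction(for entity: ModelEntity) {
    // Collision + input target are required for gestures to hit the entity.
    entity.generateCollisionShapes(recursive: false)
    entity.components.set(InputTargetComponent())

    // Highlight where the user looks; handled out of process for privacy.
    entity.components.set(HoverEffectComponent())

    // A simple animation: move to a new transform over two seconds.
    var target = entity.transform
    target.rotation = simd_quatf(angle: .pi, axis: [0, 1, 0])
    entity.move(to: target, relativeTo: entity.parent, duration: 2.0)

    // Audio is spatial by default, e.g.:
    // let hum = try AudioFileResource.load(named: "hum.wav")
    // entity.playAudio(hum)
}
```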

Custom Systems

  • You can combine existing functionality in different ways by creating components and systems.
  • “Work with Reality Composer Pro content in Xcode” tells you more about this
  • Systems are code that acts on entities and components – this lets you structure the code that implements your app’s behavior (see the sketch below)
  • Registering a system makes it available across your app.
  • You can filter entities with an entity query so that only matching entities are affected by the system.
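A minimal sketch of a custom System that uses an entity query to act only on entities carrying the (illustrative) SpinComponent from the earlier sketch:

```swift
import RealityKit

struct SpinSystem: System {
    // Only entities that have a SpinComponent match this query.
    static let query = EntityQuery(where: .has(SpinComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.scene.performQuery(Self.query) {
            guard let spin = entity.components[SpinComponent.self] else { continue }
            let angle = spin.radiansPerSecond * Float(context.deltaTime)
            entity.orientation *= simd_quatf(angle: angle, axis: [0, 1, 0])
        }
    }
}

// Register once, e.g. at app launch, to make the system run across the app:
// SpinComponent.registerComponent()
// SpinSystem.registerSystem()
```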

Meet Reality Composer Pro

This was a walkthrough of the tool.

Supports compositing, particle emitters, and positional audio.

Tried to follow along, but the Xcode beta did not include Reality Composer Pro (yet), and when I went to download additional tools it was not there – and Safari crashed on me.

Uses the .usda format for the project and packages it as a Swift package.

You can use Object Capture to add items into Reality Composer Pro.

Nice to see that they support WASD for movement around the scene – gamers will feel comfortable with this navigation.

Excellent discussion on Particle Emitter creation, along with performance implications.

Grouping objects into other objects is key when authoring audio, particle emitters, and objects.

There are three audio types: Spatial, Ambient, and Channel.

“Work with Reality Composer Pro content in Xcode” should be reviewed to see how to add this to your code.

The Statistics tab is needed to understand the performance implications of your model. Simplifying your model can help reduce triangles.

Explore materials in Reality Composer Pro should also be reviewed to understand model transitions.

Debug with Structured Logging


New debug console 

  • The message is the focus, not all the prefixes – you can add them back via the metadata options. Metadata now shows up below the message.
  • A yellow or red background indicates an error or a fault
  • You can also press space on a single log entry
  • You can also use filtering to show only what you care about; you get autocompletion too. You can also right-click on a log entry to show or hide similar entries.

Live debugging

  • Demo 

LLDB Improvements 

  • Use p instead of po to get the details of an object
  • Hard to remember all the new commands? Try dwim-print (“do what I mean” print) – you can just use “p”, which is now an alias for it
  • po now does what I want, plus custom object descriptions

Tips for logging

  • OSLog is for debugging; print is only for stdio
  • You should log any tasks being performed, along with their results
  • You get all the metadata if you use OSLog – import OSLog into the project, then define your Logger with a subsystem (your app) and a category (your function or component). A sketch follows this list.
  • Consider:
    • creating multiple log handles for different components
    • using OSLogStore for collection in the field
    • OSLog is a tracing facility – it can do performance measurement too
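A minimal sketch of the setup described above – the subsystem, category, and messages are illustrative:

```swift
import OSLog

// One Logger per component; the subsystem is typically your bundle identifier.
let networkLogger = Logger(subsystem: "com.example.myapp", category: "networking")

func loadAccount(id: String) {
    networkLogger.info("Fetching account \(id, privacy: .public)")
    // … perform the work, then log the outcome …
    networkLogger.error("Failed to fetch account \(id, privacy: .public)")
}
```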