Simplify distribution in Xcode and Xcode Cloud

This is all about the CI/CD pipeline and speeding up the iteration process.

Express TestFlight Distribution

  • A .xcarchive is an optimized release build of your app with debug symbols; it gets repackaged for distribution
  • Xcode lets you distribute the archive either to your team only, without going through the App Store, or into a workflow that will ultimately be released on the App Store.
  • Nice to see the Internal Testing option, which really makes it quick and easy to test with your team – one button click!  You can use App Store Connect to add What to Test information and receive screenshot feedback.
  • Xcode Cloud allows you to build workflows to customize the CI/CD process; this is now discoverable in the “Integrate” menu.  You can have workflows build based on various tags, branches, or other rules, and you can use Git commit messages to include notes to testers.

Automating notarization 

  • You can notarize your app so you can distribute it without using the App Store – upload it to the notary service, which will check for malware, etc.
  • To notarize in Xcode, go to the Organizer, select an archive, click Distribute, and choose Direct Distribution
  • The Notarize post-action in Xcode Cloud can automatically notarize your app for direct distribution

Model your Schema with SwiftData

Utilize schema Macros

  • Watch Meet SwiftData and Build an app with SwiftData
  • The sample Trips app is used, with multiple relationships hanging off the main @Model class
  • Schema Macros allow you to customize the behavior of Schemas
    • @Attribute(.unique) – ensures a value is unique – inserting a duplicate will cause an update (an upsert) instead of a new item
    • You can provide constraints on primitive value types (numeric, String, or UUID)
    • You can also decorate a To-One relationship
  • To rename a variable – it would otherwise be seen as a new property, so use @Attribute(originalName: "old_name") var newName: String
    • This will ensure a simple migration 
    • @Attribute also provides support for external data, Transformable, Spotlight integration, and a hash modifier
  • Working with relationships – SwiftData will automatically set up implicit inverse relationships with a default delete rule.  You can add @Relationship(.cascade) to delete related items instead of just nulling them out.
  • To add a non-persistent property – just add @Transient and the value will be calculated at use time; it must have a logical default value (see the sketch after this list)
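Here is a minimal sketch pulling those macros together, assuming a hypothetical Trip/BucketListItem pair of models (the exact @Relationship spelling has shifted between SDK seeds, so treat this as illustrative):

```swift
import Foundation
import SwiftData

@Model
final class Trip {
    // .unique turns an insert with an existing name into an update (upsert)
    @Attribute(.unique) var name: String

    // Renamed property: originalName maps to the old column so the migration stays simple
    @Attribute(originalName: "start_date") var startDate: Date

    // Cascade delete removes related items instead of just nulling them out
    @Relationship(.cascade) var bucketList: [BucketListItem] = []

    // Not persisted; recalculated at use time, so it needs a logical default
    @Transient var isUpcoming: Bool = false

    init(name: String, startDate: Date) {
        self.name = name
        self.startDate = startDate
    }
}

@Model
final class BucketListItem {
    var title: String
    init(title: String) { self.title = title }
}
```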

Evolving schemas

  • To handle updates to your schema between releases, you use VersionedSchema to define each distinct release
  • See SchemaMigrationPlan to order all of the updates needed to migrate the data
    • Define each migration stage – either lightweight (which requires no code) or custom (which requires you to write code for things like de-duplication)
  • Annotate migrations so that when you build your plan, it will do the migration for you
  • You set up the ModelContainer with the current schema and a migration plan in your app, and it will upgrade the data when needed (see the sketch below)
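A minimal sketch of that flow, using hypothetical schema types (property spellings such as versionIdentifier changed between betas, so treat this as illustrative):

```swift
import SwiftData

enum TripsSchemaV1: VersionedSchema {
    static var versionIdentifier = Schema.Version(1, 0, 0)
    static var models: [any PersistentModel.Type] { [Trip.self] }
}

enum TripsSchemaV2: VersionedSchema {
    static var versionIdentifier = Schema.Version(2, 0, 0)
    static var models: [any PersistentModel.Type] { [Trip.self] }
}

enum TripsMigrationPlan: SchemaMigrationPlan {
    static var schemas: [any VersionedSchema.Type] {
        [TripsSchemaV1.self, TripsSchemaV2.self]
    }

    // Custom stage: de-dupe trips before the unique constraint arrives in V2
    static let migrateV1toV2 = MigrationStage.custom(
        fromVersion: TripsSchemaV1.self,
        toVersion: TripsSchemaV2.self,
        willMigrate: { context in
            // de-duplication logic would go here
            try context.save()
        },
        didMigrate: nil
    )

    static var stages: [MigrationStage] { [migrateV1toV2] }
}

// In the app, hand both to the container so upgrades happen automatically:
// let container = try ModelContainer(for: Trip.self, migrationPlan: TripsMigrationPlan.self)
```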

Meet Object Capture for iOS

Intro to Object Capture for iOS – it lets you use computer vision to create lifelike 3D models. Previously you could only use the Object Capture API on the Mac, but now you can do on-device reconstruction on iOS.  There is a sample application to learn how to do this.  To me this looks like an update of the app shared earlier this year / late last year.

The code is available in the developer documentation, but you should certainly bookmark this page – https://developer.apple.com/documentation/Updates/wwdc2023

More objects with LiDAR

  • Performs best on objects with plenty of detail, but results on low-texture objects are improved by using LiDAR – the system augments the model based on the point cloud
  • Still avoid transparent or reflective objects

Guided Capture

  • This automatically captures images and LiDAR data, and provides feedback and guidance on how to capture.
  • The capture dial indicates which areas already have images – kinda like when you scan your face for Face ID
  • You will get notified if there is not enough light.  Also use diffused light to minimize reflections
  • Keep a consistent distance while scanning and keep the object within the frame
  • Don’t forget to flip the object if it is rigid; if the object’s texture is repetitive, flipping may be problematic.
  • There is now an API to tell you whether the object has been captured well enough for flipping, and it will recommend which way you should flip.

iOS API

  • The ObjectCaptureSession API is what you want to look up for more information.  It is basically a state machine that moves between ready, detecting, capturing, finishing, and finally completed
  • The APIs are in RealityKit and SwiftUI (https://developer.apple.com/documentation/realitykit/objectcapturesession)
  • You specify a directory where captured images are stored when initializing the session
  • Your app is required to create its own UI to control the capture session for the user.
  • The detection phase lets you identify the bounding box of the object so the session knows what you’d like to capture
  • Capturing generates a point cloud to show you progress; once you are finished, you will need to provide your own UI to complete the capture, start additional passes, or flip the object.
  • The finishing process waits for all data to be saved and then automatically moves to the completed state
  • The completed state then allows you to do on-device reconstruction; if finishing fails, you will have to create a new session.
  • Creating a 3D model is the “Reconstruction API” – this is a PhotogrammetrySession pointed at the images to process, which generates a .usdz model file (see the sketch after this list).  More on this in the WWDC21 session.
  • The Mac will also use LiDAR data and supports higher resolution than an iOS device.  You can just use Reality Composer Pro on the Mac and won’t have to write any code.
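A minimal sketch of the reconstruction step (the paths and detail level are placeholders I chose for illustration):

```swift
import RealityKit

func reconstructModel() async throws {
    // Folder of captured images and the output model file (hypothetical paths)
    let inputFolder = URL(fileURLWithPath: "/path/to/Images/", isDirectory: true)
    let outputFile = URL(fileURLWithPath: "/path/to/model.usdz")

    let session = try PhotogrammetrySession(input: inputFolder)
    try session.process(requests: [.modelFile(url: outputFile, detail: .reduced)])

    // Outputs arrive asynchronously while reconstruction runs
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("Progress: \(fraction)")
        case .requestComplete(_, .modelFile(let url)):
            print("Model written to \(url)")
        case .requestError(_, let error):
            print("Reconstruction failed: \(error)")
        default:
            break
        }
    }
}
```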

Reconstruction enhancements 

  • Mac performance has been improved, along with providing an estimated processing time
  • You can also get poses for your images, pre-configured and optimized.
  • You can also customize the number of triangles, with a new custom detail level

Inspectors in SwiftUI: Discover the details

Inspector

  • This shows details about the selected content.  For example, the Keynote sidebar showing you information about a selected item
  • It is available on macOS, iPadOS, and iOS, and includes programmatic control of presented state and width.
  • It is a structural API like NavigationStack or .popover
  • To add an inspector – you pass a Bool binding and add the content in the trailing ViewBuilder
  • Inspectors use Group style by default and are not resizable by default, but you can add .inspectorColumnWidth with an ideal value; width changes are handled by the system and retained across app launches.
  • It has different behaviors based on the content it is attached to, which impacts toolbar placement and overall UI characteristics.  On macOS it is a bit simpler.  If you are using a split view, the inspector should be placed in the detail section.  (See the sketch below.)
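A minimal sketch of the API as described above (the view content is a hypothetical placeholder):

```swift
import SwiftUI

struct EditorView: View {
    @State private var showInspector = true

    var body: some View {
        Text("Selected item")                        // main content
            .inspector(isPresented: $showInspector) {
                Text("Details of the selection")     // trailing ViewBuilder content
                    // The ideal width is managed by the system and remembered across launches
                    .inspectorColumnWidth(min: 200, ideal: 300, max: 400)
            }
            .toolbar {
                Button("Info") { showInspector.toggle() }
            }
    }
}
```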

Presentation customization

  • In iOS 16.4 there are new presentation features, and they are also enabled for inspectors
  • .presentationBackground(_:)
  • .presentationBackgroundInteraction(_:)
  • .presentationCornerRadius(_:)
  • .presentationContentInteraction(_:)
  • .presentationCompactAdaptation(_:)
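A quick sketch of applying these modifiers to inspector content – a hedged example assuming the inspector is being presented as a sheet (for example in a compact width), since that is where presentation customizations apply:

```swift
import SwiftUI

struct CanvasView: View {
    @State private var showInspector = false

    var body: some View {
        Text("Canvas")
            .inspector(isPresented: $showInspector) {
                Text("Inspector content")
                    .presentationBackground(.thinMaterial)
                    .presentationBackgroundInteraction(.enabled)
                    .presentationCornerRadius(24)
                    .presentationContentInteraction(.scrolls)
                    .presentationCompactAdaptation(.sheet)
            }
    }
}
```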

Get started with privacy manifests

This is about simplifying creation of Privacy Nutrition Labels and holding developers accountable for the APIs and SDKs they use.

You are responsible for all of the code in your app.  I can imagine a fight brewing now between SDK creators and developers over who is ultimately responsible for the usage of these SDKs.

I highly recommend you check out this presentation yourself at https://developer.apple.com/wwdc23/10060

Privacy manifests

  • Third party developers can provide information about their SDK to be included in your app.
  • Create a PrivacyInfo.xcprivacy file in Xcode which defines the data collected and linked to the user, and how it is being used (I should update this for my own apps).
  • Check App privacy details on the App Store Documentation 

Privacy Report 

  • This pulls together all the information in one place, aggregated across all content.  Right-click an app archive to get the context menu and choose Generate Privacy Report
    • This only works if PrivacyInfo.xcprivacy files have been created and included in the archive

Tracking domains

  • Control network connections from your app.
  • Some SDKs may default to tracking, or depend on you to request permission and assume the answer is yes.
    • If you specify tracking domains in your privacy manifest, tracking will automatically be disallowed unless the user agrees.
    • If the same domain has both tracking and non-tracking functions – split it into separate domains so this can be handled
  • Xcode Instruments will show you domains that are used to track across websites, so run this instrument on your code to confirm that you are OK; then you can declare the values in your privacy manifest
  • Fingerprinting is NEVER allowed

Required reason APIs

  • For important use cases, while still avoiding fingerprinting, Apple has created categories of APIs with a list of approved reasons
  • For example NSFileSystemFreeSize (disk space) – check the required reason APIs list in the developer documentation
  • Documentation links to a feedback form if you have a valid reason
  • You must clearly state why you use these APIs in your Privacy Manifest
  • Check the Privacy-impacting SDKs in the developer documentation

Starting in fall 2023, Apple will start sending informational emails to developers.  Starting in spring 2024 these issues must be reviewed and addressed, and privacy manifests will be expected at that point.

Fix failures faster with Xcode test reports

A tour of test reports in Xcode

Structuring tests

  • Test methods are individual tests
  • Test classes – groups of test methods
  • Test bundles – one or more test classes
  • Unit vs. UI tests
    • Unit test – exercises source code (see the sketch after this list)
    • UI test – exercises user actions
  • Test plan – runs over the app and includes both unit and UI tests
  • Configurations – say how to set up the environment for your tests
    • Language and location
    • Code coverage
    • Test repetitions
  • Run Destinations
    • Devices to run on 
    • In the IDE you can run against 1 destination
    • In Xcode cloud you can choose many
  • Sample test plan
  • One line in the report represents a single test run
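For context, a minimal sketch of the structure above – a test class grouping test methods inside a test bundle (the names and assertions are hypothetical):

```swift
import XCTest

final class TripPlannerTests: XCTestCase {       // test class: a group of test methods
    func testItineraryStartsEmpty() {            // test method: an individual unit test
        let itinerary = [String]()
        XCTAssertTrue(itinerary.isEmpty)
    }

    func testAddingStopIncreasesCount() {        // another test method
        var itinerary = ["Paris"]
        itinerary.append("Rome")
        XCTAssertEqual(itinerary.count, 2)
    }
}
```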

Explore the test report

  • The report provides a high-level summary of your test run, highlighting important patterns
  • One stop shop
  • Richer failure information for UI testing
  • You can run within CI and get a report
  • Here’s a sample run report – you can open Insights to see the notable issues across all configurations and runs.  Further down you can get to the actual test failures.
  • In the run view you can find the failure message and the call stack so you can go directly to your source code.
  • For UI tests you actually get a video of the test at the point where it failed, so you can see the failure in context.  Of course you also have video of the entire test so you can see the full behavior.
  • Clicking on an event takes you to that point in the test run and shows the video scrubber

Explore SwiftUI Animation

Overview of animation capabilities (to be honest, a lot of this was over my head, which probably explains why my apps don’t have a lot of animation).

Anatomy of an update

  • SwiftUI tracks a view’s dependencies – if anything changes, the view is invalidated and its body is called again to redraw the view.
  • If you add an animation, the body is called with new values, including an animatable attribute.  If that attribute changes, it makes a copy and interpolates the transition from the old value to the new value.  It then updates off the main thread for built-in animations, which is very efficient and doesn’t call your view code.
  • There are two aspects – Animatable attributes and Animations that describes how it changes over time. 

Animatable

  • Animatable data must conform to VectorArithmetic so the framework can interpolate the list of values over the animation
  • ScaleEffect lets you independently animate four different values as one vector.  It is a public type, so you can look at it if you want to learn how to create your own animatable views.
  • Really good demo of the actual updates along the timeline of the animation

Animation

  • You can customize withAnimation by passing in a specific animation; there are three basic categories – timing curve, spring, and higher-order animations (which modify a base animation)
  • Apple recommends using spring animations – and a spring (.smooth) is the default if you use withAnimation { }
  • New category – custom animations.  animate, shouldMerge, and velocity are the three requirements to create a custom animation (see the sketch after this list)
    • All three operate on vectors; only animate is required
    • shouldMerge allows you to handle the case where a user interrupts your executing animation
    • velocity allows velocity to be preserved when a running animation is combined with a new one
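A minimal sketch of a custom animation – only animate is implemented here, since shouldMerge and velocity have default implementations (the type name and duration are my own placeholders):

```swift
import SwiftUI

struct MyLinearAnimation: CustomAnimation {
    var duration: TimeInterval

    func animate<V: VectorArithmetic>(
        value: V,                               // the vector delta being animated
        time: TimeInterval,                     // seconds since the animation started
        context: inout AnimationContext<V>
    ) -> V? {
        guard time < duration else { return nil }    // returning nil ends the animation
        return value.scaled(by: time / duration)     // linear interpolation toward the target
    }
}

// Usage: wrap it in an Animation value, e.g.
// withAnimation(Animation(MyLinearAnimation(duration: 0.4))) { selected.toggle() }
```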

Transaction

  • This is a family of APIs; a Transaction is a dictionary that propagates all of the context for the current update
  • This section of the talk explains how the transaction dictionary is used across the attribute graph
  • This behavior enables APIs to control an animation – use a view modifier like .transaction { transaction in … }
    • Be careful with overriding – prefer .animation(.bouncy, value: selected) to remove accidental animation
    • There is a new version of the animation modifier that takes a body closure – .animation(.smooth) { … } – scoping the animation to only the modifiers inside that closure, which reduces the likelihood of accidental animation.
  • You can now extend the transaction dictionary with your own TransactionKey for use in custom animations (see the sketch after this list)
  • There are two new variants of the transaction modifier to make it even more unlikely to have accidental animation
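A minimal sketch of a custom transaction key (the key name and usage are hypothetical):

```swift
import SwiftUI

struct AvatarTappedKey: TransactionKey {
    static let defaultValue = false          // value used when nothing is set
}

extension Transaction {
    var avatarTapped: Bool {
        get { self[AvatarTappedKey.self] }
        set { self[AvatarTappedKey.self] = newValue }
    }
}

// A custom animation or a .transaction modifier can then read the flag:
// var t = Transaction(animation: .bouncy)
// t.avatarTapped = true
// withTransaction(t) { selected.toggle() }
```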

Explore enhancements to App Intents

Widgets

  • Widget configuration allows you to provide options on the “back side” of the widget – these are parameters, which create an ordered list of options
  • You define your schema right in the App Intent code (see the sketch after this list)
  • You can provide dynamic parameters, which are supported by queries – check out Dive into App Intents from WWDC22
  • You can migrate your widgets to App Intents via a single button.  Once you do, you can remove your intent definition file.
    • You may have to adjust some of the resultant code
  • Whenever the customer updates your app, their intents will automatically be upgraded
  • Widgets can now perform actions
  • App Intents can be used for Siri Shortcuts too – Bring your Widget to Life is a great session to catch up on this.
  • Dynamic options in queries – you can create queries that depend on other intent parameters via @IntentParameterDependency
  • Array size allows you to limit the size of a parameter so you can ensure things fit in the widget family being displayed
  • You can also use ParameterSummary to show a sequence of items, including a new When function to show content only under certain sizes
  • Continue User Activity – allows you to position users in your app when they tap on your widget
  • RelevantIntentManager is used to help surface your intent at the right spot and time for a user
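A minimal sketch of a widget configured by an App Intent – all of the names here (BackyardWidgetIntent and friends) are hypothetical placeholders:

```swift
import AppIntents
import SwiftUI
import WidgetKit

// The configuration schema lives right in the intent
struct BackyardWidgetIntent: WidgetConfigurationIntent {
    static var title: LocalizedStringResource = "Backyard"
    static var description = IntentDescription("Keep an eye on a backyard.")

    @Parameter(title: "Backyard name")
    var name: String?

    func perform() async throws -> some IntentResult { .result() }
}

struct BackyardEntry: TimelineEntry {
    let date: Date
    let name: String
}

struct BackyardProvider: AppIntentTimelineProvider {
    func placeholder(in context: Context) -> BackyardEntry {
        BackyardEntry(date: .now, name: "Backyard")
    }

    func snapshot(for configuration: BackyardWidgetIntent, in context: Context) async -> BackyardEntry {
        BackyardEntry(date: .now, name: configuration.name ?? "Backyard")
    }

    func timeline(for configuration: BackyardWidgetIntent, in context: Context) async -> Timeline<BackyardEntry> {
        let entry = BackyardEntry(date: .now, name: configuration.name ?? "Backyard")
        return Timeline(entries: [entry], policy: .never)
    }
}

struct BackyardWidget: Widget {
    var body: some WidgetConfiguration {
        AppIntentConfiguration(kind: "BackyardWidget",
                               intent: BackyardWidgetIntent.self,
                               provider: BackyardProvider()) { entry in
            Text(entry.name)
        }
    }
}
```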

Developer experience

  • Framework support allows you to place your intents in Frameworks via AppIntentsPackage to reduce compile time, code distribution size, etc.
    • May need to refactor my Wasted Time App Intents
  • App Shortcuts in extensions – you can create an AppShortcutsProvider so that your app does not need to be launched when using your shortcuts (see the sketch after this list)
  • The compiler extracts App Intent information into a Metadata.appintents bundle – this process has been sped up in Xcode 15.
  • Ability to continue an intent in your app.  If you start an intent outside of your app, you can adopt ForegroundContinuableIntent to continue it in the foreground.
  • Added support for Apple Pay in App Intents; this was enabled in iOS 16.5
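A minimal sketch of an App Shortcuts provider – the intent, phrases, and image name are hypothetical:

```swift
import AppIntents

struct StartMeditationIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Meditation"

    func perform() async throws -> some IntentResult {
        // start the session here
        return .result()
    }
}

struct MeditationShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: StartMeditationIntent(),
            phrases: ["Start a session with \(.applicationName)"],
            shortTitle: "Start Meditation",
            systemImageName: "leaf"
        )
    }
}
```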

Shortcuts app integration

  • You can use app intents in Shortcuts, App shortcuts, Siri Voice, Apple Watch Ultra Action Button, and Focus Filters
  • And now you can add more integration with SwiftUI apps, interactive Live Activities, and interactive widgets
  • You should create a good ParameterSummary – make sure it reads like a sentence.
  • Use isDiscoverable as needed – especially if an intent is only valuable within your app itself, like only in your interactive widgets
  • You can also now provide progress via ProgressReportingIntent protocol
  • Find actions are easier to integrate with – add EntityPropertyQuery or EnumerableEntityQuery (the latter is easier, but is optimized for a small number of entities)
  • Intent descriptions have been updated with ResolvedValueName to be more descriptive based on the action being taken

Evolve your ARKit app for spatial experiences

Prepare your experience 

  • Expand your app beyond the window – by default apps launch into the Shared Space, like multiple apps on a desktop
  • You can use the Shared Space, or a Full Space – which gives you more features like anchor entities and ARKit
  • Prepare your content
    • Use USD to create your 3D content – it is at the heart of 3D content, and you can use the files directly in Reality Composer Pro
    • If you have custom materials you will need to rebuild them with the shader graph in Reality Composer Pro
    • You can access Reality Composer Pro directly in Xcode

Use RealityView

Bring in your content

  • In the Shared Space – just add your entities directly to the RealityView content; you can see more details in the session above
  • In a Full Space – a key benefit is that you can anchor the app to specific surroundings.  Unlike iOS, you do not need to ask for permission to use anchor entities
  • ARKit enables WorldAnchors and anchor persistence – you will be required to have user permission to use ARKit capabilities

Raycasting

  • Allows you to reach out beyond arm’s length
  • Requires collision components 
  • You can raycast with system gestures or hand tracking
  • By generating mesh anchors of the environment to reconstruct the scene, you then create an entity for each anchor with a transform and a collision component to track the environment.  These entities represent the surroundings.
  • Raycasting against these entities then lets you add an input target, allowing you to hold a position in world space.
  • By using a HandAnchor from ARKit you can build a raycast to identify the collision point in the scene – then create a WorldAnchor in space to save the position.  You then place the entity and transform it to the world anchor – you can now place the item in the real world.
  • By giving the placed item a collision component you can now interact with it as if it were really there.

ARKit updates

  • Receiving anchors has changed to address Spatial Computing 
  • Now you have an à la carte selection of data providers to receive just the anchors you need (see the sketch after this list).
  • On xrOS, data providers deliver asynchronous anchor updates that are decoupled from other updates.  You don’t get ARFrames anymore; this is handled automatically by the system, which reduces latency
  • World anchor persistence
    • System continuously persists this mapping for you
    • Just use WorldAnchors with the WorldTrackingProvider, which will allow you to load and unload virtual content
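A minimal sketch of that provider model, assuming the xrOS ARKit API as I recall it (treat the exact calls as illustrative):

```swift
import ARKit

func runWorldTracking() async throws {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    // Ask only for the data you need – no ARFrames are delivered
    try await session.run([worldTracking])

    // Anchor updates arrive asynchronously, decoupled from other updates
    for await update in worldTracking.anchorUpdates {
        switch update.event {
        case .added, .updated:
            // load or reposition virtual content for update.anchor (a WorldAnchor)
            break
        case .removed:
            // unload content tied to this anchor
            break
        }
    }
}
```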

Enhance your spatial computing app with RealityKit

There are a lot of new features in RealityKit, as highlighted on this slide

RealityView attachments

  • This allows you to attach SwiftUI content on a RealityKit scene
  • Using a RealityView in your view, you add your various entities and content; then you add a new attachments parameter to the closure and an attachments view builder, where you add your SwiftUI elements.  Also add a .tag(“”) or any hashable value.  Then you use entity(for:) with the same tag to add it to the scene (see the sketch after this list).
  • This creates a view.attachment entity that you can add like any other entity.
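A sketch following that flow as described in the session – the globe entity and tag are hypothetical, and the attachment API spelling has shifted between SDK seeds, so treat this as illustrative:

```swift
import RealityKit
import SwiftUI

struct GlobeView: View {
    var body: some View {
        RealityView { content, attachments in
            let globe = ModelEntity(mesh: .generateSphere(radius: 0.2))
            content.add(globe)

            // Look up the SwiftUI view by its tag and add it like any other entity
            if let label = attachments.entity(for: "globe-label") {
                label.position = [0, 0.25, 0]
                content.add(label)
            }
        } attachments: {
            Text("Hello, globe!")
                .tag("globe-label")
        }
    }
}
```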

Video Playbacks

  • VideoPlayerComponent is a new type to embed video inside of a scene.
  • Load the file from the bundle (or another location), create an AVPlayer instance, and then create the VideoPlayerComponent (see the sketch after this list)
    • This will create a video mesh with the appropriate aspect ratio for the video
  • You can handle both 3D and 2D video, and the player will also handle captions
  • By default the video will be 1 meter in height, but you can scale it to the size you’d like.
  • This also supports passthrough tinting to match colors in the environment.
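A minimal sketch of that setup (the file name is a placeholder):

```swift
import AVFoundation
import RealityKit

func makeVideoEntity() -> Entity? {
    guard let url = Bundle.main.url(forResource: "intro", withExtension: "mp4") else { return nil }

    let player = AVPlayer(url: url)
    let entity = Entity()

    // The component builds a video mesh with the clip’s aspect ratio (1 meter tall by default)
    var video = VideoPlayerComponent(avPlayer: player)
    video.isPassthroughTintingEnabled = true    // tint passthrough to match the video’s colors
    entity.components.set(video)

    entity.scale *= 0.5                         // scale to the size you want
    player.play()
    return entity
}
```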

Portals

  • A portal can render a view with different lighting, masked by its geometry.
  • First create an entity with a WorldComponent.  You can attach children to this entity, and they will only appear within this portal.
  • You then create the portal entity with a model, a transform, and a PortalComponent which is targeted at the world entity (see the sketch below).
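A sketch of that world-plus-portal setup (the skybox sphere and plane sizes are placeholders):

```swift
import RealityKit

func makePortal() -> Entity {
    // Content parented to this entity renders only inside the portal
    let world = Entity()
    world.components.set(WorldComponent())

    let skybox = ModelEntity(mesh: .generateSphere(radius: 5))
    world.addChild(skybox)

    // The portal itself: a plane whose PortalComponent targets the world entity
    let portal = Entity()
    portal.components.set(ModelComponent(mesh: .generatePlane(width: 1, height: 1),
                                         materials: [PortalMaterial()]))
    portal.components.set(PortalComponent(target: world))

    let root = Entity()
    root.addChild(world)
    root.addChild(portal)
    return root
}
```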

Particle Emitters

  • Now that we have a world and a portal into it, we can add a particle emitter to give it some excitement
  • You can create particle emitters in code in RealityKit or design them in Reality Composer Pro
  • Even if you create your emitter in Reality Composer Pro, you can still modify it in code.

Anchors

  • Now let’s attach the portal to a wall… this is the purpose of anchors.
  • There are two tracking modes, .once and .continuous – .once will not move once it is placed; .continuous will move with the anchor
  • To use Anchors you must be in an ImmersiveSpace, because you will need to render things outside of the current space.
  • The AnchorEntity will look for a vertical wall that is at least 1M by 1M – and that’s it!
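A one-line sketch of that anchor (the trackingMode argument is how I recall the API; treat it as illustrative):

```swift
import RealityKit

// Looks for a vertical wall that is at least 1 m × 1 m
let wallAnchor = AnchorEntity(
    .plane(.vertical, classification: .wall, minimumBounds: [1.0, 1.0]),
    trackingMode: .once        // stays put once placed; use .continuous to follow the anchor
)
// wallAnchor.addChild(portalEntity)   // then parent your content to the anchor
```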