Design dynamic Live Activities

This session addresses the Lock Screen, StandBy, and the Dynamic Island.

On the Lock Screen

  • Live Activities appear at the top of the notification list, with a 14pt margin on all layouts. Don’t try to replicate the notification layouts; be unique and make yours graphically rich.
  • Design for quick glances, and only include buttons if they serve a critical function.
  • Using similar iconography, colors, and fonts will make the activity match your app. If you use your logo, don’t also show the app icon.
  • Spacing – use space to focus information, but be as compact as possible. You can dynamically change the height as you have more or less information to display.
  • Transitions – when updating between moments you can apply transitions, like the numeric content transition, to count important numbers up and down (an iOS 17 beta feature). Elements can also animate between updates; see the sketch after this list.
  • Alerting – alert only when there is an update that requires the user’s attention.
  • Remove your activity when it is no longer relevant.
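
A minimal sketch of the numeric content transition mentioned above; the view and property names are illustrative:

```swift
import SwiftUI

struct ETALabel: View {
    var minutesRemaining: Int

    var body: some View {
        Text("\(minutesRemaining) min")
            // Counts digits up or down instead of cross-fading.
            .contentTransition(.numericText(value: Double(minutesRemaining)))
            .animation(.default, value: minutesRemaining)
    }
}
```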

StandBy

  • You can update your activity for StandBy – your layout is scaled up to 200% and your background color is extended. If background elements get cut off and cause visual issues, try removing them. Make sure all assets and images are at a high enough resolution. In Night Mode, your activity gets a red tint.

Dynamic Island

  • Use rounded, thicker shapes and make use of color
  • Objects should be concentric with the shape of the Dynamic Island
  • Try blurring objects to check that they sit in an optically good spot, and make sure you stay inside the island. Using RoundedRectangle will help
  • Use an inset and/or a separator line
  • There are three sizes (sketched in code after this list):
    • Compact – informational only, showing the most essential information. Be snug against the sensor region. If you are showing multiple sessions, think about ticking between views. If you need to show an alert, think about expanding the island to show the data
    • Minimal – shown when multiple Live Activities are active
    • Expanded view – users can press on the island to get to this. Show the essence of your app and stay in harmony with your app’s colors. Try to keep item placement consistent with the compact view. Also avoid having a “forehead”; wrap content around the island.
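
A minimal sketch of how the three Dynamic Island presentations map to ActivityKit code; the attributes type and its fields are assumptions for illustration:

```swift
import ActivityKit
import SwiftUI
import WidgetKit

// Hypothetical attributes for a delivery Live Activity.
struct DeliveryAttributes: ActivityAttributes {
    struct ContentState: Codable, Hashable {
        var eta: String
    }
}

struct DeliveryLiveActivity: Widget {
    var body: some WidgetConfiguration {
        ActivityConfiguration(for: DeliveryAttributes.self) { context in
            // Lock Screen / banner presentation.
            Text("ETA: \(context.state.eta)")
        } dynamicIsland: { context in
            DynamicIsland {
                // Expanded presentation regions.
                DynamicIslandExpandedRegion(.leading) {
                    Image(systemName: "box.truck")
                }
                DynamicIslandExpandedRegion(.trailing) {
                    Text(context.state.eta)
                }
            } compactLeading: {
                Image(systemName: "box.truck")
            } compactTrailing: {
                Text(context.state.eta)
            } minimal: {
                Image(systemName: "box.truck")
            }
        }
    }
}
```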

Demystify SwiftUI Performance

Building a mental model for performance in SwiftUI

Performance feedback loop

  • Symptoms – slowness, broken animation, or spinner
  • Measure – to get data
  • Identify cause
  • Optimize to fix the root cause – then re-measure to verify
  • Repeat the process until resolved

Prerequisites

  • Identity
  • View lifetime and value

Watch Demystify SwiftUI from WWDC 2021.

Dependencies

  • Understand how a view depends on other views and their dependencies – all views ultimately resolve to a leaf view
  • The update process can be understood via the view graph of dependencies, which includes dynamic properties (like @Environment values). The process recurses through all the nodes that require updates, propagating down to all the leaves.
  • To inspect the process you can use the _printChanges method in SwiftUI: set a breakpoint and evaluate “expression Self._printChanges()” in lldb. This is a best-effort debugging tool for understanding why a view changed. You can also add a let _ = Self._printChanges() to a view body and it will print to the console (see the sketch after this list). You MUST remove this code before submitting to the App Store
  • By extracting views so each one handles only its specific item, you can get better-scoped updates
  • You can use @Observable, which also helps reduce dependencies
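
A minimal sketch of the _printChanges technique; the view is illustrative, and remember to remove the call before shipping:

```swift
import SwiftUI

struct DogRow: View {
    var name: String

    var body: some View {
        // Prints which dependency caused this view to update.
        // Debugging only -- remove before App Store submission.
        let _ = Self._printChanges()
        Text(name)
    }
}
```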

Faster View updates

  • Analyze Hangs in Instruments from WWDC23 should be reviewed
  • There is a tech talk on Hitches
  • They both originate from a slow update (like data filtering)
  • You shouldn’t do synchronous work when loading your data in a view (I should check this in my own apps)

Identity in Lists and Tables

  • These are complex, advanced controls that can cause performance issues. Understanding identity is key to improving this.
  • Identity helps SwiftUI manage view lifetime
    • Adding a ForEach in a List impacts performance because the list needs to know how many rows (views) to create; identity is used in creating this.
    • For filters, you should not filter inline, as all of the rows will still need to be created. You can move the filter into the ForEach, but that is slow too. Moving the filter into the model is the fastest method (see the sketch after this list).
  • Avoid using AnyView in ForEach
  • Flatten nested ForEach constructs, except in sectioned lists, where SwiftUI understands how to optimize this. (I can possibly use this for my category view in my own app)
  • These same rules work in Tables too
  • The number of rows in Tables and Lists should be a constant
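
A minimal sketch of moving a filter into the model, per the bullet above; the Dog type and property names are assumptions:

```swift
import Observation
import SwiftUI

struct Dog: Identifiable {
    let id = UUID()
    var name: String
    var isFavorite: Bool
}

@Observable
final class DogStore {
    private(set) var visibleDogs: [Dog] = []
    var showFavoritesOnly = false

    // Filter once, when the inputs change -- not inline in the List
    // or inside the ForEach, which would redo the work on every update.
    func update(dogs: [Dog]) {
        visibleDogs = showFavoritesOnly ? dogs.filter(\.isFavorite) : dogs
    }
}

struct DogList: View {
    var store: DogStore

    var body: some View {
        List {
            ForEach(store.visibleDogs) { dog in
                Text(dog.name)
            }
        }
    }
}
```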

Create practical workflows in Xcode Cloud

This session will create sample workflows based on three different case studies.

Solo developer

  • One app available on two platforms, with most code on the main branch and a few dependencies
  • Sends builds to TestFlight for friends, releases the app manually
  • The key is simplicity that is reliable and maintainable
  • Simple what, where, and when
    • You start with what version of macOS and Xcode you want 
    • By default it will build whenever code is pushed to the Main branch
      • For solo you may want to build regardless of branch
    • Select an Archive action and target TestFlight and App Store
    • Set a post-action of TestFlight – External team and select a group
    • Create an archive action for each platform, plus whatever post-actions you’d like
    • To add a dependency beyond just Swift packages (more information is available in the documentation), you can create a “Post Clone” script, for example to run a Python command that includes additional items in the build environment.

Medium-sized team

  • A globally distributed team of developers, QA, and PM. Separate branches for development with pull requests to merge, using tags to mark a release
  • Both UI and unit testing are done
  • Multiple TestFlight plans – use Xcode Cloud to communicate
  • Create three workflows
    • Pull request
      • Whenever a new pull request is created, tests are run
      • Add a new start condition for pull requests, select a target branch of Beta, and remove the branch start condition
      • Create a new Action and select a Test target
        • Add multiple devices to cover an appropriate mix for simple testing
      • With a successful build on this workflow – a developer can deliver their code to the Beta branch
    • Beta workflow
      • This delivers your code to the QA team – it will run every time a change lands on the Beta branch
      • Change the start condition’s branch from Main to Beta
      • Add archive action and select TestFlight (Internal Testing Only)
      • Add a post-action to upload to App Store Connect and deliver to the internal testers
      • Also add a test action to ensure you don’t deliver a broken build to the QA team.
      • You can add a custom icon via custom scripts so testers know it is a beta build; you do this with a “pre-build” script
    • Release workflow
      • A developer must merge the Beta branch into the Main branch, and a “release candidate” tag is added
      • The steps are basically the same as the Beta workflow – you can duplicate the workflow and then update the start condition. In this case we use “Tag changes” and add a value of “tags beginning with release/“
      • Change the Archive action to TestFlight and App Store
      • In the post-action, change the TestFlight internal group to add the “Executive Group”, and also add a second TestFlight post-action for the external team
      • You can now add a Notify post action to add a notification to Slack 

Large team

  • This is a very large team, similar to the medium-sized team. Their app has been around since the beginning of the App Store. There are tons of tests and many TestFlight testers, both internal and external. They use Slack, and they already have CI/CD via an in-house system that is maintained with limited knowledge.
  • They also use a project management (PM) tool with various dashboards that allow others to view status
  • Migration to Xcode Cloud should be done in chunks, to allow a successful migration over time.
  • First, create a workflow that can release – start here, leveraging the flow from the medium-sized team above. Once it works you can integrate it into your process
  • Next, create a workflow for testing, then migrate your tests. Tests may fail more than before, so you can set Xcode Cloud to progress with “Not required to pass”; you can see the results, but failures won’t stop the code from being delivered. This lets you fix your tests off your critical path. You can create multiple test plans, migrating reliable plans over time and making them required.
  • Finally, create integrations with your existing processes – look at integration with the existing PM system and reporting. Some of this will be similar to the prior steps; you can create powerful scripts, use the Xcode Cloud APIs, and/or use webhooks. There is more about this in Apple’s documentation.

Create a great spatial playback experience

Creating a great video playback experience in your app

Media Experience

  • This builds on the same APIs as other AVFoundation apps on iOS and iPadOS; AVKit builds on them to create an appropriate experience on each platform
  • AVFoundation can render performantly in RealityKit
  • You must use the matching SDK for visionOS (called xrOS in early betas)
  • Use AVKit and AVFoundation
    • Adding the item after creating the player may improve performance (see the sketch after this list)
  • Use a UIViewControllerRepresentable and then attach it to your window view
  • Screen and audio are anchored to the screen, so they stay consistent while you move around. The basics of the controls will be consistent. The video player controls appear if you look at the screen and tap it; they disappear after a time, or you can look and tap again.
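
A minimal sketch of the player setup described above, wrapping AVPlayerViewController for SwiftUI; the URL is a placeholder:

```swift
import AVKit
import SwiftUI

struct PlayerView: UIViewControllerRepresentable {
    let url: URL

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        // Create the player first, then add the item, which
        // (per the note above) may improve performance.
        let player = AVPlayer()
        controller.player = player
        player.replaceCurrentItem(with: AVPlayerItem(url: url))
        player.play()
        return controller
    }

    func updateUIViewController(_ controller: AVPlayerViewController, context: Context) {}
}
```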

Advanced Features

  • Thumbnail scrubbing allows for seamless visual scrubbing. If a trick play track is available, it will be used.
  • You can use the interstitial UI programmatically to identify ads or other key points in the scrubber
  • Contextual actions allow you to add buttons
  • Custom info view controllers allow you to show metadata and/or related content
  • Immersive spaces – you can add other 3D assets; your viewing screen will automatically anchor and position itself, moving the controls closer to the viewer
  • Provide feedback to Apple about custom UI controls. Apple is working hard to improve this new player

Other use cases

  • Inline media can be done in a window. This is just an AVPlayerViewController, and it happens automatically if the player doesn’t cover the whole window. It won’t be able to be 3D because of the other content in the window
  • For splash screens, etc., use RealityKit’s VideoPlayerComponent.
  • This gives an aspect-ratio-correct mesh with captions and better performance.
  • If you want video as an effect on your own geometry, use RealityKit’s VideoMaterial

Conclusion

  • The session includes a table to help you decide which type of player you need
  • Check out the DestinationVideo sample project

Work with Reality Composer Pro content in Xcode

This is a continuation of a prior session on Reality Composer Pro, where you created the scene and materials.

Load 3D Content 

  • You can load a full scene via an Entity async initializer – just give it the bundle you created in Reality Composer Pro, then create a RealityView (see the sketch after this list)
  • It’s easiest if you use a Swift package for the content package; it is much easier to load and manage in Xcode
  • ECS – Entity Component System – this is the term and approach that is used by RealityKit and Reality Composer Pro
    • Entity is a thing (and can be invisible)
    • Components are attributes and data
    • System – where behavior lives, updating once per frame
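
A minimal sketch of loading a Reality Composer Pro scene; the package, bundle, and scene names are assumptions based on the common project template:

```swift
import RealityKit
import RealityKitContent  // assumed name of the Reality Composer Pro package
import SwiftUI

struct DioramaView: View {
    var body: some View {
        RealityView { content in
            // Load the scene from the Reality Composer Pro bundle.
            if let scene = try? await Entity(named: "DioramaScene", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
    }
}
```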

Components 

  • You can add components in Swift or Reality Composer Pro –
    • You can add as many as you want, but only one of each type
    • You can create your own components too (a sketch follows this list)
      • The sample here creates a PointOfInterest component
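
A minimal sketch of a custom component like the session’s PointOfInterest example; the stored property is an assumption:

```swift
import RealityKit

struct PointOfInterestComponent: Component, Codable {
    var name: String = ""
}

// Register the component once at app launch so RealityKit
// can decode it from Reality Composer Pro content:
// PointOfInterestComponent.registerComponent()
```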

User interface

  • To put SwiftUI content in a RealityKit scene – use the Attachments API
    • They are part of RealityView: the make closure runs only once, the update closure is called only when the SwiftUI view changes, and the attachments view builder supplies the views
    • The attachments view builder is just normal SwiftUI; each attachment is identified by a hashable value, so you can retrieve its entity with attachments.entity(for:) and add it with content.add(...) (see the sketch after this list)
    • To make this data-driven, we need to create the attachment views dynamically. We can do this in code by creating invisible entities in the Reality Composer Pro scene, querying for them, and creating a view for each one. Using @State lets us know when new buttons are created, so we can save them in the view builder and add them as entities.
    • We can query for all entities with the specific component attached, in our case the PointOfInterest component.
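
A minimal sketch of the attachments API using the released shape, where each attachment carries a hashable identifier:

```swift
import RealityKit
import SwiftUI

struct POIView: View {
    var body: some View {
        RealityView { content, attachments in
            // The make closure runs only once.
            if let label = attachments.entity(for: "label") {
                content.add(label)
            }
        } update: { content, attachments in
            // Called only when the SwiftUI view changes.
        } attachments: {
            // Any hashable value works as the identifier.
            Attachment(id: "label") {
                Text("Point of Interest").padding()
            }
        }
    }
}
```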

Play audio

  • Add an Audio Emitter with the audio component and preview it in the editor. In the app you have to load the source and tell it to play
  • Use the AudioFileResource API and pull the audio from the .usda file. Prepare the audio and then call play (see the sketch after this list)
  • You can introduce faders to morph between sounds and terrains
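
A minimal sketch of loading and playing audio from the scene file; the resource path, file name, and bundle are assumptions:

```swift
import RealityKit
import RealityKitContent  // assumed package name

func playAmbience(on entity: Entity) async {
    if let resource = try? await AudioFileResource(
        named: "/Root/AmbientAudio",
        from: "DioramaScene.usda",
        in: realityKitContentBundle
    ) {
        // Prepare the audio on the emitting entity, then play it.
        entity.prepareAudio(resource).play()
    }
}
```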

Material properties

  • The shader graph was created to allow morphing between the two maps. Look at the node map in the shader section of Reality Composer Pro
  • You can modify parameter values in code (see the sketch below)
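
A minimal sketch of changing a shader graph parameter from code; “Progress” is an assumed parameter name for the morph:

```swift
import RealityKit

func setMorphProgress(_ progress: Float, on entity: Entity) {
    guard var model = entity.components[ModelComponent.self],
          var material = model.materials.first as? ShaderGraphMaterial
    else { return }

    // Update the named parameter and write the material back.
    try? material.setParameter(name: "Progress", value: .float(progress))
    model.materials = [material]
    entity.components.set(model)
}
```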

This session is very deep and should be watched. This blog post should help you understand why you should go and watch the session at https://developer.apple.com/wwdc23/10273

What’s new in privacy

Understanding, and allowing users to control, what you collect is key for your users. Apple’s privacy pillars align with GDPR and extend into on-device processing and security protections.

New Tools

  • There are new APIs: the embedded Photos picker, the screen capture picker, write-only calendar access, Oblivious HTTP, and Communication Safety
  • The Photos picker API allows access to only a subset of pictures; you can embed this picker into your app in iOS 17 and macOS Sonoma
    • If you use the new embedded picker, you will not have to request permissions (see the sketch after this list)
    • Check out the Embed the Photos Picker in your app session
    • The new permissions dialog makes it very obvious what can be shared, and will periodically remind users what your app has access to
  • Screen Capture Picker
    • Prior to Sonoma you had to share the whole screen; in Sonoma the system will present a window picker on your behalf
    • You will be able to record the selected content for the duration of the capture.
    • A screen sharing menu bar item will be displayed.
    • What’s new in ScreenCaptureKit – session
  • Calendar access can now be limited to only adding items, since calendars hold a lot of private information from a user’s perspective
    • EventKit UI will allow creating events by default, without a permission prompt.
    • If you create your own UI for creating entries, there is a new prompt to request write-only permission
    • If you need full access you can ask once for an upgrade – you will not be able to ask again
    • Apps will be transitioned to write-only access by default, and older versions of EventKit will only ask for write access
    • Check out the Discover Calendar and EventKit session
  • The Oblivious HTTP API hides client IP addresses from your server
    • This will also hide data from network operators 
    • This may add additional challenges for your app, so you can now use OHTTP to help protect app usage by separating the who from the what. This lightweight standard protocol allows the network operator to see only the data going to the relay, not the end point.
    • This is already used by Private Relay
  • There are additional considerations if you use this, depending on your app’s architecture
  • Communication Safety – to address sensitive content
    • This is the protection that hides nudity from children. It has been expanded beyond Messages to AirDrop, Contact Posters, the Photos picker, and the Phone app.
    • It is available for all users, not just kids
    • This on-device technology is available for apps to deal with sensitive content.
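
A minimal sketch of the embedded Photos picker mentioned above; because the picker runs out of process, no photo library permission is required:

```swift
import PhotosUI
import SwiftUI

struct AvatarPicker: View {
    @State private var selection: [PhotosPickerItem] = []

    var body: some View {
        PhotosPicker(selection: $selection, matching: .images) {
            Text("Choose a photo")
        }
        // iOS 17: embed the picker inline instead of presenting a sheet.
        .photosPickerStyle(.inline)
    }
}
```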

Platform Changes

  • Mac app data protection, Advanced Data Protection, Safari Private Browsing, and Safari app extensions
    • Locations on disk have system-managed permissions – like Desktop, Documents, and Downloads; some apps keep data in other locations like ~/Library or ~/Library/Containers
      • A user must now give permission for an app to access data in a container from a different developer
      • Use App Sandbox to protect your users’ data
      • If you make no changes, permission is asked for by default; it is valid for as long as your app is open, after which the permission resets. You should provide a meaningful purpose string.
        • Use NSOpenPanel, which runs outside of your process, to let a user locate the data without being prompted
        • Backup or disk management tools that have already been granted Full Disk Access will not need to ask
        • If you use the same signature across apps, you will have access to the data from your own other apps. You can specify an NSDataAccessSecurityPolicy to change from “Same Team” access.
    • Advanced Data Protection (added in 2022) provides E2E encryption. If you use CloudKit you get ADP for your app when the user enables ADP for their iCloud account
      • Use CKAsset and Encrypted variants for all data types in your app.
      • Use the encryptedValues API to simplify the impact on your app (see the sketch after this list)
    • Safari Private Browsing enables protection from fingerprinting and advanced tracking (you can also turn this on for normal browsing)
      • By default in private mode, known tracking methods are not allowed, you can see the blocking in the web inspector
      • Tracking information on links is automatically stripped away
    • There is a new permissions model that lets users decide on a per-site basis whether an extension can run, and whether it runs in Private Browsing mode.
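
A minimal sketch of the encryptedValues API; the record type and field name are assumptions:

```swift
import CloudKit

func makeEncryptedRecord() -> CKRecord {
    let record = CKRecord(recordType: "JournalEntry")
    // Fields set via encryptedValues are end-to-end encrypted
    // when the user has Advanced Data Protection enabled.
    record.encryptedValues["note"] = "Private text"
    return record
}
```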

Spatial Input Model

  • To achieve these goals, hands and eyes are processed by an internal system component, so your app only gets the tap or touch event.

What’s new in App Store Connect

Monetize your app

  • Think about pricing and in app purchases
  • StoreKit views for SwiftUI let you simplify adding in-app subscriptions to your app (see the sketch after this list)
  • Setting prices needs to consider international pricing; there are 900 price points, based on a base region. You can have the store automatically adjust prices to manage currency exchange changes.
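
A minimal sketch of the StoreKit SwiftUI subscription view; the subscription group ID is a placeholder:

```swift
import StoreKit
import SwiftUI

struct PaywallView: View {
    var body: some View {
        // Renders the subscription options for the given group.
        SubscriptionStoreView(groupID: "ABC123")
    }
}
```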

Manage testers

  • TestFlight makes it easy to test on all platforms and manage testers on each platform. You get data about your testers to see how engaged they are: whether they installed, sessions, crashes, and feedback, and now which device they installed on.
  • You can put release controls in place to send specific builds to internal teams vs. external teams.
  • Using Xcode Cloud you can easily upload TestFlight “what to test” notes, either from a custom build script or directly from a local text file
  • Sandbox testing can now support Family Sharing, by combining test accounts into a family group. Up to 6 accounts can be in a single family
  • On-device, you can also modify renewal rates, test interrupted purchases, clear purchase history, and see the family grouping

Build your store presence

  • You can configure your product page with privacy nutrition labels, and you can now add information related to visionOS, covering surroundings, hands, and head movement.
  • The app’s release schedule can support pre-orders on a regional basis via soft launches. This is on the availability page.
  • Product page optimization lets you see which page variants your users like best; these tests now persist across app releases.

Automate with APIs

  • The APIs allow for customizing workflows, syncing with internal systems, and dealing with customer reviews and responses. Support for Game Center is being added later this year
    • This is for leaderboards and achievements
    • You can do scores and achievement unlocks
    • Manage fraudulent activities
    • Match based on custom rules
  • For API authentication
    • You can generate marketing and customer service API keys
    • You can also create a user-based key 

Verify app dependencies with digital signatures

Automatically verify the integrity of your dependencies through signature verification. This addresses the supply chain security attacks that have been happening.

Just like the discussion on privacy reporting, it is ultimately each developer’s responsibility to ensure all signatures are correct and that you are not using maliciously modified code.

Dependency Signatures

  • Code signing links the binary, Info.plist, and privacy manifest to your developer identity. This is done via a CDHash and your developer identity from your certificate. This hash allows validation that the object has not been modified and was signed by your key.
  • New Signature verification will automatically handle the dependencies in your project
    • It tracks identities of frameworks in your project.  
    • Note the chain of trust will help across changes in certificates, etc.
  • A revoked certificate will trigger an alert to allow you to resolve or remove the framework from your application.

App developers

  • Demo with the Backyard Birds app – for any XCFrameworks you can see the signing information
  • Notice the Kind and Status. When you build the app, all signatures are validated.
  • You will get an error if they don’t match, with an alert giving the option to accept the change or trash the framework. Be careful with any self-signed certificate.
  • You should start using only signed SDKs

SDK Authors

  • For SDK authors, it becomes important to sign your SDKs so that developers can confirm the code has not been manipulated
  • You can use either the Apple Developer Program (ADP) or self-signed certificates.
  • For ADP you should use Apple Distribution and Development Certificates
    • Attestation is provided by Apple for these certificates
    • Validation is handled automatically
  • You can sign your code with codesign --timestamp -v --sign "YOUR CERTIFICATE Apple Distribution: (XXXX)" App.xcframework
    • Use a self-signed certificate instead if you are not an ADP member
  • You should start signing right away either with ADP or self-signed

Update your app for watchOS 10

While this is a code-along session, at the time of my viewing there was no link provided. I did search the forums and a link was provided there, but it gave me a page not found. Hopefully by the time others see this blog post the link will work.

Check out the Meet watchOS 10 and Design and build apps for watchOS 10 sessions.

If you simply recompile a watchOS 9 app against watchOS 10, you will get many features automatically, provided you were using standard controls and options. Shifting to NavigationSplitView can be achieved by switching from NavigationStack, moving your content into the detail view builder, and adding a list view for navigation; a minimal sketch follows.
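
A minimal sketch of that NavigationSplitView shape, assuming placeholder list items and detail content:

```swift
import SwiftUI

struct RootView: View {
    @State private var selection: String?

    var body: some View {
        NavigationSplitView {
            // The list drives navigation; selection shows the detail.
            List(["Activity", "Sessions"], id: \.self, selection: $selection) { item in
                Text(item)
            }
        } detail: {
            Text(selection ?? "Select an item")
        }
    }
}
```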

Breaking up long views into vertical TabViews makes it easier to organize content into logical sections (see the sketch below).
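
A minimal sketch of the watchOS 10 vertical paging style; the page contents are placeholders:

```swift
import SwiftUI

struct SessionView: View {
    var body: some View {
        TabView {
            Text("Summary")
            Text("Details")
            Text("Charts")
        }
        // watchOS 10: pages stack vertically with Digital Crown paging.
        .tabViewStyle(.verticalPage)
    }
}
```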

Adding a toolbar and setting materials to provide additional information makes it easy to help users.

Take SwiftUI to the next dimension

This will review how to use a handful of APIs for Windows, Volumes, and Full Spaces.

Volumes

  • Volumes are used to emphasize 3D content. You get a fixed-scale container, which maintains the same size regardless of distance.
  • You just use .windowStyle(.volumetric) to get started – it’s that simple (see the sketch below)
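
A minimal sketch of a volumetric window; GlobeView is a placeholder (a Model3D sketch for it appears later), and the default size values are assumptions:

```swift
import SwiftUI

@main
struct GlobeApp: App {
    var body: some Scene {
        WindowGroup {
            GlobeView()
        }
        // A fixed-scale 3D container instead of a flat window.
        .windowStyle(.volumetric)
        .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
    }
}
```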

3D views and layout

  • The Model3D API is used to load USDZ objects and more. It is similar to AsyncImage
  • There are phases when loading a Model3D
    • These include .empty, .failure, and .success
  • You should use the .scaledToFit() modifier so that the model fits within the volume
  • By default, alignment is back-flush, so you want to use a .frame(depth:alignment:) modifier
  • And then use .overlay to attach labels
  • Using a TimelineView you can add .animation to the 3D view so it has motion (a Model3D sketch follows this list)
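
A minimal sketch of Model3D with its loading phases; “Globe” is an assumed asset name:

```swift
import RealityKit
import SwiftUI

struct GlobeView: View {
    var body: some View {
        Model3D(named: "Globe") { phase in
            switch phase {
            case .success(let model):
                model
                    .resizable()
                    .scaledToFit()   // fit the model within the volume
            case .failure:
                Text("Unable to load the model")
            default:
                ProgressView()       // the .empty phase while loading
            }
        }
    }
}
```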

RealityView

  • For more complex scenes or experiences you should use RealityView instead of Model3D
  • It uses the Entity Component System – check out “Build spatial experiences with RealityKit” and “Enhance your spatial computing app with RealityKit”
  • Adding attachments allows you to provide annotations relative to entities. You need to use a hashable value for the tag assigned to each attachment
    • .look(at:) allows you to position and orient items

3D gestures

  • To interact with these entities you need to configure them for input; this is achieved by adding both an InputTargetComponent and a CollisionComponent so that you can add spatial tap gestures (see the sketch after this list)
  • There is also .targetedToEntity, so that gestures fail if they are not on the specific entity. Use .onEnded to retrieve the coordinates of the gesture.
  • You can add a Model3D as an attachment to a RealityView entity – don’t forget to add a tag
  • You can combine gestures with a gesture modifier to provide more functionality
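
A minimal sketch tying those pieces together: an entity configured for input plus a spatial tap gesture; the sphere is illustrative:

```swift
import RealityKit
import SwiftUI

struct TapDemoView: View {
    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
            // Both components are required for the entity to receive input.
            sphere.components.set(InputTargetComponent())
            sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
            content.add(sphere)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()  // or .targetedToEntity(_:) for one entity
                .onEnded { value in
                    print("Tapped \(value.entity.name) at \(value.location3D)")
                }
        )
    }
}
```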