WWDC 2022 – Day Four – The nitty gritty

Yesterday was a really productive day. As always, there was too much content to get to all of it, but I learned a ton of new things that I want to go back and explore over the summer.  Multiple sessions have led me to rethink some of my existing code in both Wasted Time and my Card Tracker app.  Today’s set of items goes a bit deeper on specifics that I believe will have a direct impact on my Card Tracker app, starting with how I manage photos.

What’s new in Photos Picker

The system photo picker has been updated to no longer require special permissions.  There are sessions from the last two years that I should review, including Improve access to Photos in your app (WWDC21) and Meet the new Photos picker (WWDC20).  Check out the links to those sessions. Documentation –  PhotoKit

  • New Features
    • Added new types of image filters, like .screenshots, .screenRecordings, .slomoVideos, etc.  These have been backported too.
    • You can also combine filters with .any, .all, and .not.  I will certainly want to use these new filters in my app, which should only include .images and .screenshots.  Examples:
      • .filter = .any(of: [.videos, .livePhotos])
      • .filter = .screenshots
      • .filter = .all(of: [.images, .not(.screenshots)])
    • Sheet presentation improvements – you can now create half-height mode.
    • You can also use .deselectAssets(withIdentifiers: [identifier])
    • You can also reorder selected items via the moveAsset method
  • Platform Support
    • It is now also available on macOS and watchOS, so it is supported on iOS, iPadOS, macOS, and watchOS.
    • On the iPad, the sidebar is available.
    • Both pickers will also show assets in iCloud Photos.
    • On macOS, for simple picks of images or videos, the NSOpenPanel API may be enough for most apps; media-centric apps should use PHPicker.
    • On watchOS, only images will show.
  • Frameworks
    • Available in AppKit and SwiftUI.  Since I am focused on SwiftUI for my apps, I will focus on that side only.
    • SwiftUI API
    • You present the picker via a @Binding selection: [PhotosPickerItem]
    • And by using the PhotosPicker(selection:matching:) { } view
    • Will pick best layout based on platform, configuration, and screen space
    • Loading selected photos and videos may be delayed (e.g., iCloud Photos), so show a per-item loading UI
    • It uses Transferable and can load directly into your objects via this method.  Check out yesterday’s “Meet Transferable” session.
    • Use FileTransferRepresentation to reduce memory footprint
    • Sample code – you will need to update the image and add a didSet in the model; see the sketch after this list.
  • Note: on watchOS you should design for small, short interactions
  • Family Setup
    • You can also use Images stored in iCloud Photos
    • This will show a loading UI before closing
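Based on the session, here is a minimal sketch of how I expect to wire the picker up in SwiftUI with a didSet in the model.  The model and property names are placeholders of mine, not from the session:

    import SwiftUI
    import PhotosUI

    @MainActor
    final class ProfileModel: ObservableObject {
        // The picker writes the selection here; didSet kicks off loading the data.
        @Published var selection: PhotosPickerItem? {
            didSet {
                if let item = selection {
                    loadImage(from: item)
                }
            }
        }
        @Published var imageData: Data?

        private func loadImage(from item: PhotosPickerItem) {
            Task {
                // Transferable-based loading; iCloud Photos assets may be slow,
                // so real code should show a per-item loading UI.
                imageData = try? await item.loadTransferable(type: Data.self)
            }
        }
    }

    struct ProfilePicker: View {
        @StateObject private var model = ProfileModel()

        var body: some View {
            PhotosPicker(selection: $model.selection,
                         matching: .any(of: [.images, .screenshots])) {
                Label("Select a photo", systemImage: "photo")
            }
        }
    }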

Discover PhotoKit change history

Accessing photo change history allows you to get information about edits, deletions, and more.  PhotoKit allows for a deep understanding of the images in your library, and it can notify you of updates to and deletions of images.

  • New Change History API
    • This uses a persistent change token that can be persisted across app launches.  It represents library state.
    • It is local to the device and matches the selected library.
    • Supported on all platforms that support PhotoKit
    • For each change you can get details on three types of objects: Asset, Asset Collection, and Collection List
  • At the end you have a new token.
  • Using the persistent change API you get back an identifier for each change.  You can use that identifier in your app to store access to specific images without having to store the images themselves; see the sketch after this list.
  • If an asset returns .hasAdjustments, you can update the image view in your app to reflect that it has been edited.
  • Considerations
    • Determine which changes are important to your app and only handle those.
    • Make sure your change processing runs on a background thread, since there may be many changes
  • Handling Errors
    • Expired change token – the token is older than the available change history
    • Change details unavailable.
    • In both cases, refetch the data via the API
  • Cinematic Video Access
  • New Error Codes
    • File provider sync root 
    • Network error
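Here is a sketch of how I read the new change history API from the session.  The token persistence is simplified and the method spellings are as I noted them during the talk, so treat this as an approximation rather than a definitive implementation:

    import Photos

    func processLibraryChanges(since lastToken: PHPersistentChangeToken) throws -> PHPersistentChangeToken {
        let library = PHPhotoLibrary.shared()

        // Fetch everything recorded since the token we persisted at the last launch.
        let changes = try library.fetchPersistentChanges(since: lastToken)
        for change in changes {
            // Details are available per object type: .asset, .assetCollection, .collectionList.
            let details = try change.changeDetails(for: .asset)
            handle(inserted: details.insertedLocalIdentifiers,
                   updated: details.updatedLocalIdentifiers,
                   deleted: details.deletedLocalIdentifiers)
        }

        // Persist this token for the next launch.
        return library.currentChangeToken
    }

    // Placeholder: update only what matters to your app, e.g. refresh views for
    // updated assets and drop references for deleted ones.
    func handle(inserted: Set<String>, updated: Set<String>, deleted: Set<String>) { }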

What’s New in App Store Connect

App Store Connect is used to manage the apps I have on the App Store.  It allows me to set up TestFlight builds and check the status of new users and updates.

Key Links: App Store Connect and App Store Connect API

  • Last year we got in-app events, TestFlight for Mac, and more.
  • Enhanced submission experience
    • Can group multiple items into a single submission
      • Add multiple review items to a submission (reviews are typically completed within 24 hours)
      • Items can be resolved independently – but all items in a submission must be approved (or removed) before the submission can go forward.
      • Review items can be App Versions, in-App events, Custom Product Pages, or Product Page Optimization Tests
    • You can submit without needing a new app version
      • Each submission has an associated platform with its own review items. For example:
  • You can have one “in progress” submission per platform
  • If you don’t have a version in the submission the other items will be reviewed against a previously submitted version of your app.
  • There is a dedicated app review page
    • This is now available as part of the iOS and iPadOS app (previously only on the web portal)
  • App Store Connect API
    • Last year Xcode Cloud, App Clips, and many other features were added
    • With 2.0 there is
      • In app purchases and subscriptions
        • Can create, edit, and delete them
        • Manage pricing
        • Submit for review
        • Create special offers and promo codes
      • Customer reviews and developer responses
        • Build your own workflows to manage feedback and review
      • App Hang diagnostics
        • Previously this only showed the number of occurrences
        • Now it will include stack traces, logs, and more
    • They are starting to decommission the XML feed in favor of the supporting REST APIs
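As a sketch of what the new customer reviews workflow might look like against the REST API – the endpoint path is my assumption based on the session’s mention of customer review APIs, JWT generation is omitted, and the app ID is a placeholder:

    import Foundation

    // Fetch customer reviews for an app via the App Store Connect API.
    // Assumes you have already generated a signed JWT for your API key.
    func fetchCustomerReviews(appID: String, jwt: String) async throws -> Data {
        let url = URL(string: "https://api.appstoreconnect.apple.com/v1/apps/\(appID)/customerReviews")!
        var request = URLRequest(url: url)
        request.setValue("Bearer \(jwt)", forHTTPHeaderField: "Authorization")

        let (data, response) = try await URLSession.shared.data(for: request)
        guard (response as? HTTPURLResponse)?.statusCode == 200 else {
            throw URLError(.badServerResponse)
        }
        return data // JSON – decode into your own Codable models
    }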

Go further with Complications in WidgetKit

A few years back I added complications to my Watch App and Widgets to my iOS and macOS version of Wasted Time.  Apple has now merged this by making complications part of WidgetKit.  This gives me an opportunity to update my Complications and also make them available as widgets on the new iOS Lock Screen.

Links –

  1. Adding widgets to the Lock Screen and watch faces
  2. Creating Lock Screen Widgets and Watch Complications
  3. WidgetKit

Check out the “Complications and widgets: Reloaded” talk from earlier this week if you have not seen it already.

  • Unique to watchOS
    • Watch Specific Family
      • .accessoryCorner
      • It uses the larger circular content style
      • The .widgetLabel modifier will draw the text, gauge, or progress view in the corner.
    • These are available across all platforms:
      • .accessoryRectangular (no widget label)
      • .accessoryInline (already has its own label)
      • .accessoryCircular
        • .widgetLabel can also be used here to provide text (or other information).  You may need to look at the environment to decide what to show based on the label; see the sketch at the end of this section.
  • The larger text watch face will auto scale up one complication to fit.
  • Auxiliary content
  • Multiple representation
  • Migration of existing code
    • Adopt WidgetKit
      • All faces now use rich complications; the ClockKit families map from 12 down to the 4 WidgetKit accessory families
  • Views are used instead of templates
  • Timelines are also used.
  • Upgrade existing installed complications
    • To do this, the app will run automatically on an existing watch.
    • There is a new API, CLKComplicationWidgetMigrator on CLKComplicationDataSource, that you should implement to handle this in your app.  See more in the WidgetKit documentation listed above.
    • My approach will be to completely rewrite my code to use the four accessory families and remove support for watches not running watchOS 9 (a sketch of the WidgetKit side follows).
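To see what that rewrite might look like, here is a minimal watchOS-targeted sketch of a WidgetKit complication.  The widget kind, text, and values are placeholders of mine; it uses the showsWidgetLabel environment value to decide what to render:

    import SwiftUI
    import WidgetKit

    struct WasteEntry: TimelineEntry {
        let date: Date
        let minutesWasted: Int
    }

    struct WasteProvider: TimelineProvider {
        func placeholder(in context: Context) -> WasteEntry {
            WasteEntry(date: .now, minutesWasted: 42)
        }
        func getSnapshot(in context: Context, completion: @escaping (WasteEntry) -> Void) {
            completion(placeholder(in: context))
        }
        func getTimeline(in context: Context, completion: @escaping (Timeline<WasteEntry>) -> Void) {
            completion(Timeline(entries: [placeholder(in: context)], policy: .never))
        }
    }

    struct WasteComplicationView: View {
        @Environment(\.showsWidgetLabel) var showsWidgetLabel
        var entry: WasteEntry

        var body: some View {
            if showsWidgetLabel {
                // The face draws our label; keep the main content minimal.
                Text("\(entry.minutesWasted)")
                    .widgetLabel("Minutes wasted")
            } else {
                Gauge(value: Double(entry.minutesWasted), in: 0...120) {
                    Text("min")
                }
                .gaugeStyle(.accessoryCircular)
            }
        }
    }

    struct WasteComplication: Widget {
        var body: some WidgetConfiguration {
            StaticConfiguration(kind: "WasteComplication", provider: WasteProvider()) { entry in
                WasteComplicationView(entry: entry)
            }
            .supportedFamilies([.accessoryCircular, .accessoryCorner, .accessoryInline])
        }
    }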

Discover ARKit 6

I was really hoping for new hardware this WWDC – not a new laptop, but the AR/VR dev kit from Apple.  Well, it didn’t happen.  However, the new ARKit 6 API may hold hints of what may come in the future.  My guess is the new ear joint information would definitely need to be available if you had a headset!

Links:

  1. Tracking Geographic Locations in AR
  2. ARKit
  3. Human Interface Guidelines: Augmented Reality
  4. Qualities of great AR experiences
  5. Bring your world into augmented reality
  • 4K Video
  • Note that the wide camera has special value for AR work
  • 3840×2880 is the capture resolution on the iPhone 13 Pro.  The frame is then simplified by binning down to 1920×1440, which also helps in low-light environments.  Roughly every 17 ms you get a new image.
  • With new hardware you can now get access to the full 4K stream by skipping the binning step above.  A frame arrives every 33 ms, or 30 frames per second.  RealityKit will scale, crop, and render for you.
  • This is available on iPhone 11 and up and any M1 iPad Pro or higher
  • Camera Enhancements
    • High Resolution Background Photos
      • In an AR session, you can capture a single high-resolution photo in the background while continuing to stream the video feed
      • They created a sample app that allows you to see where a picture was actually taken.
      • Creating 3D models with Object Capture will benefit from this feature, as you can overlay a 3D UI to provide capture guidance while taking pictures at the higher resolution.  There is a convenience function on the session to capture this, captureHighResolutionFrame; see the sketch after this list.
    • HDR mode
      • Another convenience: check .isVideoHDRSupported on the video format, then set videoHDRAllowed = true on your session’s configuration
    • AVCaptureDevice access for finer control
      • You can do this dynamically as you need it
    • EXIF Tags
      • These are now available for every AR frame.
  • Plane Anchors
    • The plane anchor is now fully decoupled from its geometry
    • Information is contained in ARPlaneExtent, which holds .rotationOnYAxis along with the width and height
  • Motion Capture
    • Both skeletons and joints are detected
    • Added Ear Joint Tracking (2D)
    • And better occlusion handling
  • Location Anchors
    • New cities and countries are supported for Location Anchors
    • London and many US cities
    • Added 3 in Canada, plus Singapore, 7 in Japan, and 2 in Australia
    • More coming later this year 
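Here is a sketch of the high-resolution capture convenience as I understood it from the session; the configuration details around it are assumptions on my part:

    import ARKit

    func enableHiResCapture(on session: ARSession) {
        let configuration = ARWorldTrackingConfiguration()

        // Prefer the recommended format for high-resolution frame capture, if present.
        if let format = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
            configuration.videoFormat = format
        }

        // Opt in to HDR when the chosen format supports it.
        if configuration.videoFormat.isVideoHDRSupported {
            configuration.videoHDRAllowed = true
        }

        session.run(configuration)

        // Later: capture a single high-resolution photo without stopping the stream.
        session.captureHighResolutionFrame { frame, error in
            if let frame {
                // frame.capturedImage holds the high-resolution pixel buffer.
                print("Captured frame \(CVPixelBufferGetWidth(frame.capturedImage)) px wide")
            } else if let error {
                print("Capture failed: \(error)")
            }
        }
    }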

Evolve your Core Data schema

One thing that my card tracking app doesn’t do is allow you to pick an event and show all the cards based on that event.  I have the data, but need to think through how I would enable this feature.  This session may help me out… Let’s go!

Link – Using Lightweight Migration

  • What is schema migration
    • Changing your data model means you need to materialize the change in the data store.
    • If the model doesn’t match the store, you won’t be able to open your data store
  • Strategies for migration
    • There are built-in tools to migrate your data model, referred to as lightweight migration.
    • It automatically analyzes and infers the necessary migration changes
    • This happens at runtime and maps old data to new data
      • Supported attribute changes include adding, removing, and renaming attributes, making a non-optional attribute optional, and making an optional attribute non-optional with a default value.
      • It also addresses adding and removing relationships, changing cardinality, and renaming relationships
      • Entity changes are also available for lightweight migration: add, remove, rename, create a new parent or child, or move an entity up or down in the hierarchy.  You CANNOT merge hierarchies.
    • Migration is controlled by two keys
      • NSMigratePersistentStoresAutomaticallyOption
      • NSInferMappingModelAutomaticallyOption
      • If you use NSPersistentContainer or NSPersistentStore it happens for you automatically
    • Let’s see it in code (a sketch follows this list):
  • You don’t need to make a new model to make changes.  
  • A discussion on how to address non-lightweight migrations is covered in this session.  Basically, you decompose the migration into steps that lightweight migration can handle – this way you can step through multiple migrations to get to your desired end state.
  • CloudKit schema Migration
    • If you use Core Data and CloudKit, keep in mind the two need a shared understanding of the schema
    • CloudKit doesn’t support all the features of the Core Data model:
    • Unique constraints are not supported
    • Undefined and ObjectID are unavailable
    • All relationships are optional and must have an inverse
    • You cannot modify or delete existing record types or fields
    • You can add new fields or record types
    • It is essentially additive, so consider effects on older versions of the app
    • Approaches to address
      • Incrementally add new files to existing record types
      • Version your entities
      • Create a new container to associate the new store with a new container; it may take an extended period of time for users to upload their data to this new store.
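Here is a minimal sketch of setting the two migration options manually when adding a store (the function and URL are placeholders of mine; NSPersistentContainer sets these for you):

    import CoreData

    func openStore(model: NSManagedObjectModel, at storeURL: URL) throws -> NSPersistentStoreCoordinator {
        let coordinator = NSPersistentStoreCoordinator(managedObjectModel: model)

        // The two keys that control lightweight migration.
        let options: [AnyHashable: Any] = [
            NSMigratePersistentStoresAutomaticallyOption: true, // perform the migration
            NSInferMappingModelAutomaticallyOption: true        // infer the mapping model at runtime
        ]

        _ = try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                               configurationName: nil,
                                               at: storeURL,
                                               options: options)
        return coordinator
    }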

Writing for interfaces

Sometimes a session title looks interesting but I don’t spend a lot of time on the description.  This is one of those titles.  My guess was API interfaces, but it is really about how to write clear and concise text in your app (something I know I need to work on), so this session is a pleasant surprise.

Links:

  1. Apple Design Resources
  2. Human Interface Guidelines
  • From the early days, the focus has been on interfaces that are easy, clear, and conversational.
  • Purpose
    • Think about what is the most important thing to know at the moment of the screen
    • Consider how you order things on the screen.
    • Headers and Buttons should be clear as people may skip other information
    • Know what to leave out.  Don’t overload the screen with data that could be placed elsewhere or not at all
    • When introducing a new feature, tell people why it’s there and why it’s important.
    • Every screen should have a purpose, as should the entire flow.
  • Anticipation
    • Think of your app as a conversation with the user.
    • Develop a voice for your app, and vary tone based on the interaction
    • Think about what comes next in the app flow.  This will help you design the interaction
  • Context
    • Think outside the app, when will people use your app.  Will they be distracted
    • Write helpful alerts – these are interruptions so make sure they are helpful and clear.  Give context, make sure the choices are clear.
    • Create useful empty states, i.e. show what the user can do.  Try not to use idioms.
  • Empathy
    • Write for everyone, regardless of who your audience is, so you don’t leave out people who may be casually interested in your app
    • Deal with localization – when translating, be aware of the impact on your UI.
    • Design for accessibility – consider type sizes and VoiceOver.  Well-designed language makes your app welcoming.
  • Check out the above Human Interface Guidelines to make your app accessible by as many people as possible
  • Read your writing out loud – it really helps

SwiftUI on iPad: Organize your interface

The next few sessions are all about SwiftUI and the iPad. My own apps run on multiple platforms and I am really looking forward to making them even better on the iPad.  

This is part 1 of 2 sessions.  Links:

  1. contextMenu(menuItems:preview:)
  2. EditMode
  3. List
  4. NavigationSplitView
  5. NavigationSplitViewStyle
  6. Tables
  • Lists and Tables
    • Many of the APIs shown also work on the Mac.
    • Multi-column tables should be used for dense lists
      • You now get sections on both Mac and iPadOS – check out the session SwiftUI on the Mac: Build the fundamentals (WWDC22)
      • You use a Column Builder instead of a ViewBuilder.
      • In a compact size class you only get the first column
      • There’s a convenience initializer that takes just a string, without a ViewBuilder
      • If a column’s value is Comparable, then the column becomes sortable (but you have to handle the sorting yourself)
      • On iPad, tables don’t scroll horizontally, so limit your columns.  On the Mac you can scroll horizontally
  • Selection and menus
    • Each row has a tag, and some state holds the tag selection
      • The list will coordinate via a selection binding
      • Tags are a value for a view in a selectable container.  In many cases they can be auto-synthesized for you
      • To manually tag a view use View.tag(_:) – but be careful, the tag type is important.
    • Selection State
  • This can be single selection, required selection, or multiple selection, along with lightweight multiple selection
  • List selection no longer requires edit mode 
  • The next session will talk about toolbar buttons
  • You can also add a multi-select context menu.  This will work on multiple items, a single item, or an empty area (see the sketch after this list)
    • If you use forSelectionType, it should match the selection type
  • Split Views
    • NavigationSplitView allows for two- or three-column views – for details, go to the navigation cookbook session from a few days ago
    • A standard split view has a sidebar and a detail view – in landscape both show by default, while in portrait the sidebar is hidden.
    • In three-column mode you get a content view between the sidebar and the detail view.  The automatic style is recommended for three-column views.
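Pulling the table, selection, and context-menu pieces together, here is a minimal sketch; the Person model and the actions are placeholders of mine:

    import SwiftUI

    struct Person: Identifiable {
        let id = UUID()
        var name: String
        var city: String
    }

    struct PeopleTable: View {
        @State private var people = [
            Person(name: "Ada", city: "London"),
            Person(name: "Grace", city: "New York")
        ]
        @State private var selection = Set<Person.ID>()                  // multiple selection, no edit mode needed
        @State private var sortOrder = [KeyPathComparator(\Person.name)]

        var body: some View {
            Table(people, selection: $selection, sortOrder: $sortOrder) {
                // String key paths get the convenience initializer and make the column sortable.
                TableColumn("Name", value: \.name)
                TableColumn("City", value: \.city)
            }
            .onChange(of: sortOrder) { newOrder in
                people.sort(using: newOrder)                             // the sorting itself is up to you
            }
            .contextMenu(forSelectionType: Person.ID.self) { items in
                if items.isEmpty {
                    Button("New Person") { /* placeholder */ }           // empty-area menu
                } else {
                    Button("Delete \(items.count) selected") { /* placeholder */ }
                }
            }
        }
    }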

SwiftUI on iPad: Add toolbars, titles, and more

This is the second part of SwiftUI on iPad.  If you skipped the prior session – go back and watch it.

Links:

  1. Configure Your Apps Navigation Titles
  2. ControlGroup
  3. ShareLink
  4. ToolbarItem
  5. ToolbarRole
  • Toolbars – provide quick actions for common features
    • You can customize toolbars and provide many features that used to be available only on the Mac.
    • Overflow menus can be handled for you.  Change items to a ToolbarItemGroup, which will insert individual items into the menu and auto-place the overflow indicator if needed.
    • There are three areas: leading, trailing, and center.  Primary actions end up in the trailing area.  Secondary actions are in the overflow menu by default, but you can override that behavior with the ToolbarRole modifier.
    • The editor role will move the title to the leading location and will move secondary items into the center area.
    • User customization (an API that came from macOS) can be adopted; only toolbar items are customizable, and each must have a unique identifier (see the sketch at the end of this section).
    • Customizations will automatically be persisted across launches.
    • You can model control groups so that items which logically belong together are added as one unit.
    • You can also give a ToolbarItem placement: .primaryAction to make sure it is always presented.  It will be in the trailing area and is not customizable.
  • Titles and documents
    • You can now define your own document types with properties, etc., and then share those documents with others via Transferable
    • You can create a menu attached to the title via .navigationTitle, which can then act across the document – Rename, Print, etc.  If you provide a document, you will get a special preview view and a Share icon for drag and drop.
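Pulling the toolbar pieces together, here is a minimal sketch of a customizable editor toolbar; the item names and actions are placeholders of mine:

    import SwiftUI

    struct EditorView: View {
        var body: some View {
            NavigationStack {
                Text("Document body")
                    .navigationTitle("My Document")
                    .toolbarRole(.editor)              // moves the title to the leading edge
                    .toolbar(id: "editor-tools") {     // a unique ID enables user customization
                        ToolbarItem(id: "bold", placement: .secondaryAction) {
                            Button("Bold") { /* placeholder */ }
                        }
                        ToolbarItem(id: "italic", placement: .secondaryAction) {
                            Button("Italic") { /* placeholder */ }
                        }
                        ToolbarItem(id: "share", placement: .primaryAction) {
                            Button("Share") { /* placeholder */ } // always shown, trailing, not customizable
                        }
                    }
            }
        }
    }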

The craft of SwiftUI API Design: progressive disclosure

My final planned session for the day is about the API design of SwiftUI.  During my day job I focus on API discovery and usability.  The application I work on has a long history and tons of APIs, but it assumes a lot of preexisting knowledge from potential users.  Getting a better view of SwiftUI’s API design will hopefully help me in my day job too.

  • Progressive Disclosure is a base design principle.  
    • This is not unique to the design of APIs
    • The Save dialog is a great example of this principle.  It shows defaults and common values, but you can always expand the dialog to add complexity.
  • Making code feel great to use means the complexity at the call site progressively exposes functionality as it is needed.
  • Benefit
    • Lowers learning curve
    • Minimizes time to first build
    • Creates a tight feedback loop
  • Consider common use cases
    • Label is a great example of this.  The simple case is just text.
    • You can use an overload to provide custom views for the label’s title and icon.
    • This same pattern is used across the framework
  • Provide intelligent defaults
    • To streamline common use cases, intelligent defaults are provided for all the things that are not specified
    • A great example is Text(“hello world”) – with just this code it will localize the string, adapt to dark mode, and scale based on accessibility settings, but you don’t need to provide any values.
    • Line spacing is automatic too, but it can also be set manually for your use case.
  • Optimize the call site
  • Looking at Table:
  • The session’s example is fairly complex: it shows how to create a table, but with the added complexity of sorting and grouping of data.
  • For a simple example with just a list of rows, the call site can be optimized to be far simpler and much easier to read.
  • Compose, don’t enumerate
    • HStack is a great example: it only needs two things, the content and how to arrange it.
    • The most common use case is simply items next to each other; alignment covers the remaining variations.
    • What if you want custom spacing?  You could go crazy with an enum for every behavior, but if you find yourself enumerating common cases, try breaking them apart into composable pieces.
    • For example, you can now use Spacer() in a stack (see the sketch below).
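To close out, a tiny sketch of the compose-don’t-enumerate idea: instead of an enum case for a “spread out” spacing behavior, you compose Spacer into the stack:

    import SwiftUI

    struct ToolbarRow: View {
        var body: some View {
            HStack {
                Button("Back") { }
                Spacer()          // composition replaces a hypothetical .spreadOut enum case
                Button("Forward") { }
            }
        }
    }

D20 for the win!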