Completed two major classes

I like learning. It’s that simple. I like to take classes, learn new things, read about new things, and play with new things. It’s one of the reasons I like being in technology: there are always new things to learn.

Early last year I decided to start the class 100 Days of Swift from the website Hacking with Swift. Paul Hudson, AKA TwoStraws, does a fantastic job of providing daily lessons for 100 days, along with challenges and exams, to help you get a good understanding of Swift in general.

While the class was fun, I really wanted to improve my skills in SwiftUI, so I took a break and then began the class 100 Days of SwiftUI. Of course, you don’t need to take the first class, as this one does a great job of getting you familiar with many of the same Swift concepts.

I finally completed 100 Days of SwiftUI early this year; it took longer than planned, mainly because of day job interruptions and personal issues that came up while taking the class.

I do feel that I have a much better understanding of Swift now, but there is still so much to learn. If I were able to spend 1-2 hours a day on iOS and Swift, I would be in a much better place with the language. But since I don’t use it in my day-to-day work and only get to dabble with it when I have time, I will have to continue to move along at a slower pace than I’d like.

Updated my Watch and wow!

Screenshot from the Apple Watch Ultra.
Friday night screenshot

Last Monday Apple released the first software update for the new version of watchOS (version 9.1). The big thing that was promised was an improved Low Power Mode for the Ultra, giving it at least 60 hours of continuous use.

Of course, I had to test it. Wednesday morning I put it on my wrist and kept my normal usage pattern. This includes tracking sleep, taking a 30 minute walk each day, handling mail, reviewing my calendar, and playing some podcasts.

When I finally took it off on Friday evening, I was down to 12% battery. Not bad for 63 hours of continuous use. I think they may have something here.

100 Days of Swift – Notes App

As I continue to work on the 100 Days of Swift projects from https://www.hackingwithswift.com/100 by Paul Hudson, I am really enjoying how the consolidation days pull together things you’ve learned up to that point. Today, Day 74, was a really cool project. Basically, I had to recreate a version of the Apple Notes app.

As you can see, it’s not a direct copy; for example, I don’t handle all the formatting, but it is a passing simulation of the app.

I have posted my code on GitHub at https://github.com/TheApApp/ChallangeDay74/ in order to see if I can improve it. Please take a look and provide any feedback you’d like. I am sure I have made it a bit overly complex in some areas.

While I don’t like using Interface Builder, that was part of the assignment.

I do like how easy it is to add the share sheet feature in Swift, so you can take the simple text note and share the contents with others.
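
To give a sense of how little code that takes, here is a minimal sketch (assuming a UIKit view controller, since the assignment used Interface Builder) of presenting the share sheet for a note’s text; the function and parameter names are mine, not from the assignment:

import UIKit

// A minimal sketch: share a note's text via the standard share sheet.
func shareNote(text: String, from viewController: UIViewController) {
    let activityVC = UIActivityViewController(activityItems: [text], applicationActivities: nil)
    viewController.present(activityVC, animated: true)
}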

Let’s talk Open-Services (OSLC)

ELM Server – Registered Applications

It’s been a few months since my last post. Since then I was able to give a talk at the IBM ELM Users Conference, pretty much taking people through the prior series of blog posts. It was exciting to hear the reception of these blog posts.

The prior post showed how “easy” it can be to create a POST based on the OSLC discovery process and the resource shape. While my goal is to make the consumption of the ELM APIs as easy as possible, there are some very powerful and complex concepts that I think I need to address at this time.

What is a Resource Shape?

Stated simply, a resource shape is a set of constraints that are applied to a resource. These constraints consist of assertions in the form of triples. A triple is simply a subject, predicate and object. This is foundational to how RDF is represented. RDF is key to the semantic web, which is all about linked data. The ability to make any data machine accessible through this approach allows applications to take an open approach to collecting, analyzing and reporting any data. The ELM applications are based on this concept.

Wow, that’s a meaty paragraph.

Let’s look deeper at the triple and what it means when trying to interpret the resource shape. I will do it based on our POST example for generating a Test Plan. I am only going to show the body, which is an rdf/xml representation of the Test Plan we wanted to generate.

<rdf:RDF 
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:dcterms="http://purl.org/dc/terms/" 
    xmlns:oslc_qm="http://open-services.net/ns/qm#"
    xmlns:oslc_rm="http://open-services.net/ns/rm#"
    xmlns:rqm_qm="http://jazz.net/ns/qm/rqm#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:oslc="http://open-services.net/ns/core#"
    xmlns:process="http://jazz.net/ns/process#"
    xmlns:rqm_process="http://jazz.net/xmlns/prod/jazz/rqm/process/1.0/"
    xmlns:calm="http://jazz.net/xmlns/prod/jazz/calm/1.0/"
    >
    <oslc_qm:TestPlan>
        <dcterms:title>Test plan created from API</dcterms:title>
        <dcterms:description>Here's a really long description that was created by typing a bunch of words.</dcterms:description>
        <oslc:formalReview/>
        <oslc:hasChildPlan/>
        <rqm_qm:catagory/>
        <oslc:hasPriority/>
        <foaf:contributor/>
        <oslc:template/>
        <oslc:relatedChangeRequest/>
        <process:iteration/>
        <oslc:testSchedule/>
        <process:teamArea/>
        <oslc:hasWorkflowState/>
        <oslc:runsOnTestEnvironment/>
        <oslc:usesTestCase/>
        <oslc:keyDate/>
        <oslc_qm:testsDevelopmentPlan/>
        <oslc:attachment/>
        <rqm_qm:objectiveStatusGroup/>
        <oslc:risk/>
        <oslc:containsTestSuite/>
        <rqm_qm:executionEffort>42.0</rqm_qm:executionEffort>
        <oslc:category_PML_F4kaEeynq4H4YH03kw/>
        <oslc:category_PMRep4kaEeynq4H4YH03kw/>
        <oslc:category_PL_KwIkaEeynq4H4YH03kw/>
        <oslc_rm:validatesRequirementCollection/>
        <rqm_qm:planningEffort>42.0</rqm_qm:planningEffort>
    </oslc_qm:TestPlan>
</rdf:RDF>

The first thing I’ll do is use the handy “RDF Validator” made available on IBM’s cloud to see what this would look like once validated.

This validator will look at all the triples and allow me to convert between rdf/xml and either json or text/turtle. This will also allow us to see how the system will interpret our resource.

Converting the above rdf/xml to text/turtle provides the following output:

@prefix oslc_qm: <http://open-services.net/ns/qm#> .
@prefix calm:  <http://jazz.net/xmlns/prod/jazz/calm/1.0/> .
@prefix process: <http://jazz.net/ns/process#> .
@prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix rqm_process: <http://jazz.net/xmlns/prod/jazz/rqm/process/1.0/> .
@prefix oslc_rm: <http://open-services.net/ns/rm#> .
@prefix rqm_qm: <http://jazz.net/ns/qm/rqm#> .
@prefix foaf:  <http://xmlns.com/foaf/0.1/> .
@prefix oslc:  <http://open-services.net/ns/core#> .

[ a                            oslc_qm:TestPlan ;
  process:iteration            "" ;
  process:teamArea             "" ;
  rqm_qm:catagory              "" ;
  rqm_qm:executionEffort       "42.0" ;
  rqm_qm:objectiveStatusGroup  "" ;
  rqm_qm:planningEffort        "42.0" ;
  oslc:attachment              "" ;
  oslc:category_PL_KwIkaEeynq4H4YH03kw
          "" ;
  oslc:category_PML_F4kaEeynq4H4YH03kw
          "" ;
  oslc:category_PMRep4kaEeynq4H4YH03kw
          "" ;
  oslc:containsTestSuite       "" ;
  oslc:formalReview            "" ;
  oslc:hasChildPlan            "" ;
  oslc:hasPriority             "" ;
  oslc:hasWorkflowState        "" ;
  oslc:keyDate                 "" ;
  oslc:relatedChangeRequest    "" ;
  oslc:risk                    "" ;
  oslc:runsOnTestEnvironment   "" ;
  oslc:template                "" ;
  oslc:testSchedule            "" ;
  oslc:testsDevelopmentPlan    "" ;
  oslc_qm:usesTestCase         "" ;
  oslc_rm:validatesRequirementCollection
          "" ;
  dcterms:description          "Here's a really long description that was created by typing a bunch of words." ;
  dcterms:title                "Test plan created from API" ;
  foaf:contributor             ""
]

The second thing we can do is look at whether our resource creation is valid, based on the constraints in the resource shape. Let’s do that:

First off, we see that all of the empty tags passed in the above API call are converted from a null value to empty strings. Using the same triple, let’s go back and look at the resource shape defined for process:iteration from our GET on the Test Plan’s resource shape.

As you can see, the valueType should be an Iteration-shaped resource, not an empty string.

[ a                        oslc:Property ;
  oslc:hidden              false ;
  oslc:isMemberProperty    false ;
  oslc:name                "iteration"^^<http://www.w3.org/2001/XMLSchema#string> ;
  oslc:occurs              oslc:Zero-or-one ;
  oslc:propertyDefinition  process:iteration ;
  oslc:range               process:Iteration ;
  oslc:readOnly            false ;
  oslc:representation      oslc:Reference ;
  oslc:valueShape          <http://jazz.net/ns/process/shapes/Iteration> ;
  oslc:valueType           oslc:Resource ;
  dcterms:description      "The development iteration associated with the Test Plan."^^<http://www.w3.org/2001/XMLSchema#string> ;
  dcterms:title            "Iteration"^^<http://www.w3.org/2001/XMLSchema#string>
] .

Each triple is made up of a subject (URI), a predicate (URI) and an object (which can be either a simple or complex object). Simple objects are primitive data types (String, Int, etc.), while complex objects are further defined as URIs. So any triple will consist of URI, URI, and either a URI or a primitive data type.

And if we look at a few of the triples in the Iteration resource we see the following:

  • An oslc:Property (subject), with oslc:propertyDefinition (predicate), contains process:iteration (object).
  • An oslc:Property (subject), with oslc:occurs (predicate), has oslc:Zero-or-one (object).
  • An oslc:Property (subject), with oslc:valueType (predicate), contains oslc:Resource (object).
  • An oslc:Property (subject), with oslc:valueShape (predicate), refers to http://jazz.net/ns/process/shapes/Iteration (object).

The last one tells us that the constraints of this specific property (process:iteration) are defined at http://jazz.net/ns/process/shapes/Iteration. The definition is not part of OSLC, but it is defined by the Jazz process implemented on this server. This is a complex object type, not a simple XMLLiteral like String, Int, or Date. As such, if we provide a value for this property it will be constrained by the resource shape’s assertions.

We also see this because our valueType is an oslc:Resource (which can be found at http://open-services.net/ns/core#Resource), which states it must resolve to a URI.

We can continue to resolve each of the URIs in the triple.

Looking at the second item, we see that the property occurs zero or one times. If we look at the URI for oslc:Zero-or-one (http://open-services.net/ns/core#Zero-or-one), we see that it defines the iteration property as optional and single-valued. If we provide a value, it must be valid as defined by the valueType (an OSLC resource, defined as an object) and it will be constrained further by the resource shape defined at http://jazz.net/ns/process/shapes/Iteration.

Given all this, we can easily see now that our original assertion was incorrect. We cannot just use process:iteration with an empty string.

Let’s look at one other property in the TestPlan resource shape to see a simple object. The rqm_qm:executionEffort is constrained in text/turtle as follows:

[ a                        oslc:Property ;
  oslc:hidden              false ;
  oslc:isMemberProperty    false ;
  oslc:name                "executionEffort"^^xsd:string ;
  oslc:occurs              oslc:Zero-or-one ;
  oslc:propertyDefinition  rqm_qm:executionEffort ;
  oslc:readOnly            false ;
  oslc:valueType           xsd:float ;
  dcterms:description      "The execution effort that the Test Plan defined in person hour."^^xsd:string ;
  dcterms:title            "Execution Effort"^^xsd:string , "Execution Effort"@en
] .

As we can see, this is much simpler – an optional field that is constrained with the following triples:

  • An oslc:Property (subject), with oslc:valueType (predicate), is of type xsd:float (object).
  • An oslc:Property (subject), with oslc:occurs (predicate), has oslc:Zero-or-one (object).

So we assert a simple float that is an optional value.

What did we learn?


Sometimes a rule of thumb is just that: simple guidance to get you started.

But in OSLC you need to be more precise. Look at the details of the assertions and ensure that you are fully compliant. You may or may not generate an error on your POST depending on the assertions.

For example, in the above POST, I am using “oslc:usesTestCase”. This is not the correct property; it should be “oslc_qm:usesTestCase”. However, since “oslc:usesTestCase” has no assertions defined for it, it will not cause the POST to fail.

WWDC 2022 – Day Five – Smooth Landing

Wow, this week has flown by and I’ve learned a lot, but I still have tons more to learn.  While I have fewer sessions planned today, I am sure I can find other sessions to watch.  So let’s get started.

Creating accessible Single App Mode Experiences

Single App Mode locks the system to a single app.  This is great for a kiosk, but you should make sure that accessibility is available.  Other scenarios include medical offices and testing, and you can actually enable it via Guided Access.

Links:

  • Guided access
    • Turn it on in Accessibility settings.  Triple-tap the side button and you can set restrictions and enable the ones you want in place.
    • I never actually knew about this setting, so I find it a great way to help set up a device for just email, as an example.  Will have to test if you can launch a browser from an email when in this mode.
    • As a developer you can create your own custom experiences in your app.
    • For cognitive disability:
      • Be forgiving of errors,
      • Warn users before irreversible actions
      • Reduce dependence on timing
      • Always confirm payments
      • These will promote independence for people with cognitive issues
  • Code example (a minimal sketch follows this list)
  • Adopt UIGuidedAccessRestrictionDelegate in your app to enable this feature.
  • You must provide an array of restriction identifiers which can be addressed by Guided Access.
  • You should also provide a simple user-facing title and additional detail information.
  • Then implement the guidedAccessRestriction(withIdentifier:didChange:) method to get notified when a restriction is toggled.  You can then post a Notification in your app to turn off user access to that feature.
  • This feature was enabled in iOS 12.2 and you can check the status of custom restrictions by calling guidedAccessRestrictionState(forIdentifier:) at any time.
  • Single App Modes
    • You can programmatically enable this mode on device programmatically, and they all benefit from the above features.
    • For Single App Mode – this is for when you want the device to run one app always.  It will go back to that state on a reboot, but you need to make sure the device is supervised – use the Apple Configurator to set up the device in this mode.  You can only exit via the Configurator.
    • Autonomous Single App Mode – for when the app goes in and out of the mode.
  • Use a single API method to do it. – UIAccessibility.requestGuidedAccessSession(enabled: true)
  • Must be device supervised.
  • You can check for this mode on any iOS 6.0 device or higher.
  • Assessment Mode – this is for testing type applications.
    • This has been unified for iOS and macOS
    • Doesn’t need to be supervised, but you have to get an entitlement
  • Accessibility API designed for Single App Mode
    • You may need to address additional items for people who are using assistive technologies.  In the Configurator you can enable a handful to be always available.  You can also add items in the Accessibility Shortcut menu.
      • But they need to be configured before single app mode
    • You can enable some of these via APIs
      • Use UIAccessibility.configureForGuidedAccess(features: enabled:) API for .zoom, .voiceOver, .invertColors, .assistiveTouch, and .grayscaleDisplay
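
Here is a minimal sketch of the custom restriction flow described above, assuming an AppDelegate class and a hypothetical “compose” feature; the identifier, notification name and strings are my own placeholders:

import UIKit

// A minimal sketch: expose one custom Guided Access restriction and react when it changes.
extension AppDelegate: UIGuidedAccessRestrictionDelegate {
    // Identifiers for the custom restrictions this app offers to Guided Access.
    var guidedAccessRestrictionIdentifiers: [String]? {
        ["com.example.restriction.compose"] // hypothetical identifier
    }

    // Short, user-facing title shown in the Guided Access options UI.
    func textForGuidedAccessRestriction(withIdentifier restrictionIdentifier: String) -> String? {
        "Compose"
    }

    // Additional detail text for the same restriction.
    func detailTextForGuidedAccessRestriction(withIdentifier restrictionIdentifier: String) -> String? {
        "Prevents creating new items while in Guided Access."
    }

    // Called when the restriction is toggled; post a notification so the UI can react.
    func guidedAccessRestriction(withIdentifier restrictionIdentifier: String,
                                 didChange newRestrictionState: UIAccessibility.GuidedAccessRestrictionState) {
        NotificationCenter.default.post(name: .composeRestrictionDidChange, object: nil)
    }
}

extension Notification.Name {
    static let composeRestrictionDidChange = Notification.Name("composeRestrictionDidChange")
}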

Explore Apple Business Essentials

I wanted to learn more about how Apple is targeting SMB (small and medium businesses) customers for their services.  I also wonder if they enable APIs to let you add value added services on top of business essentials. 

Links:

  • This is a subscription based service with Cloud Storage, Devices Management, and other services for SMB
  • Signing in with a managed Apple ID, the device is automatically set up and configured, and all of the required software is installed.
  • Apple Business Manager
    • This is required for setup.
    • You can add individuals or sync them in from MS Active Directory or Google Workspace
    • You can setup Groups
  • Subscription
    • Then you can enroll either an employee plan or a device plan (for loaners, kiosks, etc.). Each employee can have up to three devices.  A fully loaded account appears to be about $25 per employee.
  • Settings and Apps
    • There is a security tab to bring together all the defaults for their devices. You can pre-configure Wi-Fi settings and much more so that devices will be configured automatically for a user and they won’t have to set up Wi-Fi.
    • Managed Apps are auto updated and uninstalled if the user signs out of the device.
    • You provide a managed ID here for the users.
  • Employee experience
    • By signing in via their managed ID the device is already setup.
    • On a personal device you go to Settings > VPN & Device Management, which allows you to sign in with a separate work account. The accounts will be cryptographically separated on the device.  The Essentials app will be on the machine, which will provide access to services, like device repair, apps, etc.

There were no APIs discussed in this session, but it certainly makes sense for a small firm.  At one level, I would do this for my family as a means to manage their devices, but on another, I think the family sharing I have setup is good enough.

What’s new in Screen Time API

I use Screen Time myself to help balance how much time I sit working on my machines.  Setting this up has been one way that I’ve improved my balance between computers and physical activity.  I have not looked at the APIs yet to see if it makes sense to add any to my apps.

Links:

This API was introduced last year to help apps manage time for users and kids.  There was a set of apps called out on this screen.

To be honest, I don’t recognize any of them. I may have to look them up to see what they do.

  • Highlights from 15
    • Family controls
      • Gateway to the Screen Time API
      • Prevents removal
      • Provides privacy tokens 
    • Managed settings
      • Allows your app to brand a similar feature to screen time
    • Device Activity 
      • Tracks activities and identifies if you exceed a threshold.
  • New Items
    • Sample app – Workload
    • In Family Controls in iOS 16
      • Can authorize independent users from their own device
      • This allows for non-parental control use cases
      • You do a simple request on app launch (a small sketch follows this list)
  • Once successfully authenticated it won’t prompt again.
  • Managed Settings:
    • Revamped to make it easier to use.  Especially in the data store.
    • In iOS 15 you could only have one per process. Now you can create up to 15 stores per process and they are shared between the app and all of its extensions
  • The most restrictive settings always win.
  • Device activity 
    • Has new reporting services to create custom usage reports via SwiftUI
    • Will address privacy too
  • The above code is a sample of a report, using the new Swift Charts API
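
As a rough idea of what that launch-time request looks like, here is a minimal sketch of asking for individual (non-parental) Screen Time authorization with the iOS 16 Family Controls API; treat the exact call shape as my assumption from the session:

import FamilyControls

// A minimal sketch: request Screen Time authorization for an individual user.
@MainActor
func requestScreenTimeAuthorization() async {
    do {
        // .individual enables the new non-parental-control use case.
        try await AuthorizationCenter.shared.requestAuthorization(for: .individual)
        print("Screen Time authorization granted")
    } catch {
        print("Screen Time authorization failed: \(error)")
    }
}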

What’s new in PDFKit

I’ve never looked at PDFKit; however, I am thinking about adding printing features to my card tracking application.  My assumption for this session is that I will be able to create a PDF for the reports I am thinking about.  By creating PDFs, it should become easier to create the reports, as well as to share them.

Links:

  • PDFKit Review
    • Full featured framework for viewing, editing, and writing PDF files,  available on iOS, macOS and Catalyst
    • Four core classes 
      • PDFView (included in your layout)
      • PDFDocument (the file or root of the object graph)
      • PDFPage (one or more are contained in the Document)
      • PDFAnnotations (optional leaves of a Page – these are editable)
      • Check out Introducing PDFKit
  • Live text and Forms
    • In PDFs this is different than in Photos; in a PDF, if you see text it is expected to be real text.
    • This allows Text selection and search
    • OCR scanning happens on demand and is done in the document itself
    • Forms are automatically recognized and you can tab thru them as you would expect
  • Create PDFs from images
    • This creates Pages from images. This is a new API (a small sketch using the existing image initializer follows this list)
    • public convenience init(image: CGImage, andOptions options: [PDFPageInitWithImageOption : Any] = [:] )
    • This is auto compressed by default
    • Options include – 
      • mediaBox (like Letter or A4)
      • rotation (portrait or landscape)
      • upscaleIfSmaller – by default, if the image is larger than the media box, it will scale down to fit
  • Overlay Views
    • To draw on a page with PencilKit
    • You can now use Overlay view on each PDF page
      • Install your overlay view 
        • Since you can have 1000s of pages in a PDF –  PDFKit will intelligently load via a new protocol
        • You must implement overlayViewFor page:; the others are optional
        • A detailed walkthrough of an implementation of this code is included in the video
  • Save your content in the PDF
    • Use the PDF Annotations as the model –
      • The appearance stream can be recorded and will work across other readers
      • PDF annotations are stored in a dictionary – so you can put your own custom data in private objects
    • Override content for saving.
    • Images and PDFs are saved by default with maximum resolution. 
  • Best practices when saving
    • You can override this with .saveAllImagesAsJPEG and/or .optimizationImagesForScreen
    • .createLinearizedPDF – is optimized for internet-based reading, loading the first page first.  By default PDFs have always loaded from the last page
  • This is another session that I recommend going thru the video multiple times to get all the code examples.  Or you can get the transcript: https://developer.apple.com/videos/play/wwdc2022/10089/
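
To make the image-to-PDF idea concrete, here is a minimal sketch using PDFKit’s long-standing PDFPage(image:) initializer (the new options-based initializer mentioned above adds compression and mediaBox control on top of this); the function name is mine:

import PDFKit
import UIKit

// A minimal sketch: build a PDF document from an array of images, one page per image.
func makePDF(from images: [UIImage]) -> PDFDocument {
    let document = PDFDocument()
    for (index, image) in images.enumerated() {
        if let page = PDFPage(image: image) {
            document.insert(page, at: index)
        }
    }
    return document
}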

Use Xcode for server-side development

While Swift has already been made available on platforms like IBM System-Z and Linux, most people still think of it as a client-side application language.  While viewing this session I looked up whether Swift supported WebSphere and was disappointed to see this article from 2020 – IBM Stops working on Swift – https://www.infoq.com/news/2020/01/ibm-stop-work-swift-server/. While I was disappointed to see this, I figured I could still learn how the technology works.  This session shows how to do server-side code in Xcode.  I have used multiple IDEs over the years and find that, even with its nuances, Xcode is my favorite IDE.

Links:

Extending an iOS application into the cloud is an example of why you may care about server-side development.  Server components tend to be created using different technologies, but if you can use Swift, it simplifies this a lot.  Server apps are modeled as Swift packages.  Making one a web server requires you to define a package dependency on the appropriate technology. The example talked about using Vapor, which is an open source server framework.  The sample code was a simple server that echoes back data sent to it.
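
For reference, here is a minimal sketch of that kind of echo server, assuming Vapor 4 as the package dependency (the route name is my own). You could then test it from a terminal with curl, as the session describes.

import Vapor

// A minimal sketch: a Vapor app with a single route that echoes back the request body.
var env = try Environment.detect()
let app = Application(env)
defer { app.shutdown() }

app.post("echo") { req -> String in
    // Return whatever text was POSTed to /echo.
    req.body.string ?? ""
}

try app.run()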

By using Xcode you can not only test in a terminal window via curl, you can also write a simple iOS app to test the interaction via the simulator.  Enabling simple server deploys allows for quick testing via the cloud.  Many of these use cases are very much like what my day job application does via its own infrastructure and languages.   The pattern here is traditional server-based development.  From a server database perspective, Swift has drivers for FoundationDB, Redis, Cassandra, Postgres, DynamoDB, MongoDB, SQLite and more.  For more information you can look at the Swift server documentation page linked above.

So allowing a Swift development shop to extend its reach to server-based apps via Swift is a great story.  Glad to see this one.

Bring multiple windows to your SwiftUI App

Okay, going to wrap up the day with two more sessions on SwiftUI.  The number of links that these sessions are driving shows that they are really bringing together a lot of the ideas presented over the course of the week.  SwiftUI has really made it easier to develop for Apple’s platforms.  As I mentioned elsewhere, I was really hoping that this year Apple would make a SwiftData framework to simplify the usage of things like Core Data and CloudKit.  Unfortunately, they did not do that; there’s always WWDC23.  Well, let’s dive into the last two sessions.

Links:

One thing that amazes me about Friday sessions at WWDC is how dense they are.  Each of my planned sessions today is only about 15 minutes long, but they pack a ton of great content.

  • Scene basics
    • Scenes commonly represent a window on screen. 
      • Window Group (All platforms), for data drive applications 
      • Document Group (iOS and macOS), for document driven applications 
      • Settings defines an interface for in app settings on macOS
    • You can compose them together to extend your apps functionality.
    • Two new additions:
      • Window – a single unique window on all platforms (this is great for things like games)
      • MenuBarExtra – macOS only – a control in the menu bar (it is persistent in the menu bar, available as long as the app is running) – has two rendering styles: a default style, and a chrome-less window attached to the menu bar.
  • Auxiliary scenes
    • You can add an additional scene to your scene group – this will take its title and add it to the Window menu item on the Mac
  • Scene Navigation
  • There are new callable types via @Environment for example:
    • \.openWindow – can present windows for either a WindowGroup or Window (a small sketch follows this list)
    • \.newDocument – can create a new document window for both FileDocument and referenceFileDocument
    • \.openDocument – can present a document window with contents from disk (using a url)
  • Prefer your model’s identifier rather than the model itself. The value should be Hashable and Codable.
  • You need to create a scene for a data type.  The above code shows the button and the scene definitions.
  • Scene customizations
    • By default you will get a menu item for each group in the file menu.  You can override this with the .commandsRemoved() scene modifier.
    • By default new windows are placed in the center of the screen – you can override this with .defaultPosition(.topTrailing), which will be used if no position has been provided before
    • Also .defaultSize(width: , height:) modifier 
    • And .keyboardShortcut(“0”, modifiers: [.option, .command]) – at the scene level this allows you to open a new scene with that keyboard shortcut
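
Here is a minimal sketch of the openWindow flow, targeting macOS 13; the Note model, window title, and identifiers are my own placeholders:

import SwiftUI

// A minimal sketch: a second WindowGroup keyed to a value type, opened via the
// new openWindow environment action. Prefer passing an identifier, not the model.
struct Note: Identifiable, Hashable, Codable {
    var id = UUID()
    var title: String
}

@main
struct NotesApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }

        // Presents a single note's identifier in its own window.
        WindowGroup("Note", for: Note.ID.self) { $noteID in
            Text("Note \(noteID?.uuidString ?? "none")")
        }
        .defaultPosition(.topTrailing)
        .defaultSize(width: 400, height: 300)
    }
}

struct ContentView: View {
    @Environment(\.openWindow) private var openWindow
    private let note = Note(title: "Hello")

    var body: some View {
        Button("Open Note Window") {
            openWindow(value: note.id)   // opens the "Note" scene for this identifier
        }
    }
}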

Efficiency awaits: Background tasks in SwiftUI 

And finally, a follow-up to last year’s async updates.  This should improve the UI responsiveness of my card tracker app.  While I have loaded a lazy grid for the cards view, it is still pretty slow.  I think I need to put up placeholder images.

  • Stormy: a sample app – that uses background tasks
    • This API is available on watchOS, iOS, tvOS, Mac Catalyst and Widgets.  It is also supported for iOS apps running on Apple Silicon Macs
    • This app prompts users to take pictures at noon if it is stormy outside.  You can see that the system can schedule the refresh for noon, and then run multiple tasks in the background, awaiting results via async/await before notifying the user whether they should take the picture.
  • Background on Background Tasks
    • These happen during the App Refresh period for background processing.  If the app is running out of time, the system will notify it to gracefully handle processing.
    • By setting a network request as a background network request, it can be put back into a wait state and woken back up for more processing when the network responds.
  • SwiftUI API in practice
  • Creating the schedule request, we will call this function and register a new .backgroundTask() scene modifier.  In this case we use .appRefresh(“Name”) (a small sketch follows this list)
    • Note this code allows for a periodic check and 
  • Swift Concurrency
    • URLSession has adopted concurrency, so now you can use try? await on URLSession.  If you want to set it up as a background session, you should change from URLSession.shared to URLSessionConfiguration.background(withIdentifier: “App specific identifier”), and use that configuration in your URLSession(configuration: config); don’t forget to set sessionSendsLaunchEvents = true for your config object.
    • This is really important on watchOS, as all network requests must be done as background sessions.
    • Since a background task may be expiring, add an onCancel: handler to your await via withTaskCancellationHandler { } onCancel: { }
    • The second closure runs only when the task is cancelled, and in this code it promotes the request from a background task to a background download.
  • Update the app to be able to launch in the background with a .urlSession task type using the same identifier we created earlier.  This will only launch when the specific task requests it.
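
Here is a minimal sketch of the scheduling-plus-handling pattern described above; the “StormPhoto” identifier and function names are my own placeholders, and the identifier would also need to be declared in the app’s Info.plist:

import SwiftUI
import BackgroundTasks

// A minimal sketch: schedule an app-refresh task for noon and handle it with
// the new .backgroundTask scene modifier.
@main
struct StormyApp: App {
    var body: some Scene {
        WindowGroup {
            Text("Stormy")
        }
        // Runs when the system wakes the app for the scheduled refresh.
        .backgroundTask(.appRefresh("StormPhoto")) {
            await checkForStorm()
        }
    }
}

func scheduleStormCheck() {
    // Ask the system to refresh the app around noon.
    let request = BGAppRefreshTaskRequest(identifier: "StormPhoto")
    request.earliestBeginDate = Calendar.current.date(bySettingHour: 12, minute: 0, second: 0, of: .now)
    try? BGTaskScheduler.shared.submit(request)
}

func checkForStorm() async {
    // Placeholder for the network request and notification logic described above.
}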

WWDC 2022 – Day Four – The nitty gritty

Yesterday was a really productive day. As always, there was too much content to get to it all, but I learned a ton of new things that I want to go back and learn more about over the summer.  There were multiple sessions that have led me to rethink some of my existing code in both Wasted Time and my Card Tracker app.  Today’s set of items goes a bit deeper on specifics that I believe will have a direct impact on my Card Tracker app, starting with how I manage photos.

What’s new in Photos Picker

The system photo picker has been updated to not require special permissions.  There were sessions over the last two years that I should review, including Improve access to Photos in your app (WWDC21) and Meet the new Photos picker (WWDC20).  Check out the links to those sessions. Documentation – PhotoKit

  • New Features
    • Added new types of images for filters, like .screenshots, .screenRecordings, .slomoVideos, etc.  These have been backported too.
    • You can also use .any, .not, and .or – examples include the following (I will certainly want to use these new filters in my app, which should only include .images and .screenshots):
      • .filter = .any(of: [.videos, .livePhotos])
      • .filter = .screenshots
      • .filter = .all(of: [.images, .not(.screenshots)])
    • Sheet presentation improvements – you can now create half-height mode.
    • You can also use .deselectAssets(withIdentifiers: [identifier])
    • You can also reorder via the moveAsset
  • Platform Support
    • It is now also available on macOS and watchOS, so it is supported on iOS, iPadOS, macOS and watchOS.
    • On the iPad you have the sidebar available:
  • On macOS
  • Both pickers will also show assets in iCloudPhotos
  • On macOS, for simple picks of images or videos, the NSOpenPanel API may be enough for most apps.
  • Media Centric apps should use PHPicker
  • On watchOS it looks like this
  • However only images will show
  • Frameworks
    • Available in AppKit and SwiftUI, since I am focused on SwiftUI for my apps, I will focus on that side only
    • SwiftUI API
    • You can present via a @Binding selection: [PhotosPickerItem]
    • And using the PhotosPicker(selection: matching:) {} Item
    • Will pick best layout based on platform, configuration, and screen space
    • Loading selected photos and videos; note some will be delayed (i.e. iCloud Photos), so show a per-item loading UI
    • It uses Transferable and can load directly into your objects via this method.  Check out yesterday’s “Meet Transferable” session.
    • Use FileTransferRepresentation to reduce memory footprint
    • Sample code (a small sketch of the SwiftUI picker follows this list)
  • You will need to update the image and add a didSet in the model as you see here:
  • Note on watchOS you should consider small short interactions
  • Family Setup
    • You can also use Images stored in iCloud Photos
    • This will show a loading UI before closing
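
Here is a minimal sketch of the SwiftUI picker described above, filtered the way I plan to for my card app; the view name and the choice to load the selection as Data are mine:

import SwiftUI
import PhotosUI

// A minimal sketch: pick photos or screenshots and load the first selection as Data.
struct CardPhotoPicker: View {
    @State private var selection: [PhotosPickerItem] = []
    @State private var imageData: Data?

    var body: some View {
        PhotosPicker(selection: $selection,
                     matching: .any(of: [.images, .screenshots])) {
            Label("Pick a card photo", systemImage: "photo")
        }
        .onChange(of: selection) { items in
            guard let item = items.first else { return }
            Task {
                // Loading can be slow for iCloud Photos, so show a loading UI in a real app.
                imageData = try? await item.loadTransferable(type: Data.self)
            }
        }
    }
}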

Discover PhotoKit change history

Accessing photo change history allows you to get information about edits, deletions, etc.  PhotoKit allows for a deep understanding of images in your library. It will also allow you to be notified of updates and deletion of images.

  • New Change History API
    • This uses a persistent change token that can be persisted across app launches.  It represents library state.
    • It is local to the device and matches the selected library.
    • Supported on all platforms that support PhotoKit
    • For each change you can get details on three types of objects, Asset, Asset Collection, and Collection List
  • At the end you have a new token.
  • To look at the persistent change API, you will get back an identifier for each change.  You would use that identifier in your app to store access to specific images, without having to store the image in your app.
  • If an asset returns .hasAdjustments – you can update the image view in your app to address if they’ve been edited.
  • Considerations
    • Determine what is important to your app and only address them.
    • Make sure your changes run in a background thread since there may be many changes 
  • Handling Errors
    • Expired change token – older than histories
    • Change details unavailable.
    • In both cases refetch data in API
  • Cinematic Video Access
  • New Error Codes
    • File provider sync root 
    • Network error

What’s New in App Store Connect

App Store Connect is used to manage the apps I have on the App Store.  It allows me to setup TestFlights and check the status of new users and updates.

Key Links: App Store Connect and App Store Connect API

  • Last year we got in app Events, TestFlight for Mac and more.
  • Enhanced submission experience
    • Can group multiple items into a single submission
      • Add multiple Review Items to a submission (typically in 24 hours)
      • Items can be resolved independently – but all items in a submission must be approved (or removed) before the submission can go forward.
      • Review items can be App Versions, in-App events, Custom Product Pages, or Product Page Optimization Tests
    • You can submit without needing a new app version
      • Each submission has an associated platform with its own review items. For example:
  • You can have one “in progress” submission per platform 
  • If you don’t have a version in the submission the other items will be reviewed against a previously submitted version of your app.
  • There is a dedicated app review page 
    • This is now available as part of the iOS and iPadOS app (previously only on the web portal)
  • App Store Connect API
    • Last year Xcode cloud, app clips and many other features were added
    • With 2.0 there is
      • In app purchases and subscriptions
        • Can create, edit and delete them
        • Manage pricing
        • Submit for review
        • Create special offers and promo codes
      • Customer reviews and developer responses
        • Build your own workflows to manage feedback and review
      • App Hang diagnostics
        • Used to only show # of changes
        • Now will include stack traces, logs, and more
    • Starting to decommission the XML feed in favor of supporting REST APIs for access

Go further with Complications in WidgetKit

A few years back I added complications to my Watch App and Widgets to my iOS and macOS version of Wasted Time.  Apple has now merged this by making complications part of WidgetKit.  This gives me an opportunity to update my Complications and also make them available as widgets on the new iOS Lock Screen.

Links –

  1. Adding widgets to the Lock Screen and watch faces
  2. Creating Lock Screen Widgets and Watch Complications
  3. WidgetKit

Check out the Reloaded talk from earlier this week If you have not seen it already.

  • Unique to WatchOS
    • Watch Specific Family
      • .accessoryCorner
      • Add the larger circular content style, it will be 
      • The .widgetLabel modifier will draw controls for the text, gauge or progress view in the corner.
    • These are available across all platforms
      • .accessoryRectangular (no widget label)
      • .accessoryInline (already has its own label)
      • .accessoryCircular
        • .widgetLabel can also be used here to provide text (or other information); you may need to look at the environment to decide what to show based on the label.  See below:
  • The larger text watch face will auto scale up one complication to fit.
  • Auxiliary content
  • Multiple representation
  • Migration of existing code
    • Adopt WidgetKit
      • All faces now use rich complications from 12 to 4 
  • Views are used instead of templates
  • Timelines are also used.
  • Upgrade existing installed complications
    • To do this, the app will run automatically on an existing watch.
    • There is a new API called CLKComplicationDataSource with a CLKComplicationWidgetMigrator that you should implement to handle this in your app.  See more in the WidgetKit API documentation listed above.
    • My approach will be to completely rewrite my code to use the four accessory families above and remove support for watches not running watchOS 9 (a small widget sketch follows this list)
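
Here is a minimal sketch of a WidgetKit complication that opts into the new accessory families; the entry type, provider and strings are my own placeholders, and on watchOS you could also add .accessoryCorner and a .widgetLabel:

import WidgetKit
import SwiftUI

// A minimal sketch: a static widget that can appear as a watch complication
// or an iOS 16 Lock Screen widget via the accessory families.
struct WastedTimeEntry: TimelineEntry {
    let date: Date
    let minutesWasted: Int
}

struct WastedTimeProvider: TimelineProvider {
    func placeholder(in context: Context) -> WastedTimeEntry {
        WastedTimeEntry(date: .now, minutesWasted: 0)
    }
    func getSnapshot(in context: Context, completion: @escaping (WastedTimeEntry) -> Void) {
        completion(WastedTimeEntry(date: .now, minutesWasted: 42))
    }
    func getTimeline(in context: Context, completion: @escaping (Timeline<WastedTimeEntry>) -> Void) {
        let timeline = Timeline(entries: [WastedTimeEntry(date: .now, minutesWasted: 42)], policy: .atEnd)
        completion(timeline)
    }
}

struct WastedTimeComplication: Widget {
    var body: some WidgetConfiguration {
        StaticConfiguration(kind: "WastedTimeComplication", provider: WastedTimeProvider()) { entry in
            Text("\(entry.minutesWasted)m")
        }
        .configurationDisplayName("Wasted Time")
        .supportedFamilies([.accessoryCircular, .accessoryRectangular, .accessoryInline])
    }
}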

Discover ARKit 6

I was really hoping for new hardware this WWDC, and not a new laptop… I wanted the AR/VR dev kit from Apple.  Well, it didn’t happen.  However, the new ARKit 6 API may hold hints about what may come in the future.  My guess is the new ear joint information would definitely need to be available if you had a headset!

Links: 

  1. Tracking Geographic Locations in AR
  2. ARKit
  3. Human Interface Guidelines: Augmented Reality
  4. Qualities of great AR experiences
  5. Bring your world into augmented reality
  • 4K Video
  • Note that the wide camera has special value for AR work
  • 3840×2840 is the capture pixel resolution on the 13 Pro.  The frame is then simplified by binning down to 1920×1440, which is also used in low-light environments.  Roughly every 17ms you get a new image.
  • With newer hardware you can now get access to the full 4K by skipping the binning step above.  It will be every 33ms, or 30 frames per second.  RealityKit will scale, crop and render it for you (a small configuration sketch follows this list).
  • This is available on iPhone 11 and up and any M1 iPad Pro or higher
  • Camera Enhancements
    • High Resolution Background Photos
      • In an AR session, you can also capture a single photo in the background while continuing to stream 
      • Created a sample app that allows you to see where a picture was actually taken.
      • Creating 3D models using Object Capture will benefit from this feature, as you can overlay a 3D UI to provide capture guidance and take pictures at the higher resolution.  There is a convenience function to allow your session to capture this via captureHighResolutionFrame
    • HDR mode
      • Another convenience feature, .isVideoHDRSupported, lets you check support and then set .videoHDRAllowed = true on your session’s config
    • AVCaptureDevice access for more fine-grained control 
      • You can do this dynamically as you need it
    • Exif Tags
      • These are now available for every AR frame.
  • Plane Anchors
    • Fully decoupled plane anchor and geometry anchor
    • Information is contained in ARPlaneExtent, and hold .rotationOnYAxis defined by width, height and center 
  • Motion Capture
    • Both skeleton and Joints are detected
    • Added Ear Joint Tracking (2D)
    • And better occlusion handling (3)
  • Location Anchors
    • New cities and countries are supported for Location Anchors
    • London and many US states
    • Added 3 in Canada, Singapore, 7 in Japan, and 2 in Australia 
    • More coming later this year 
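
Here is a minimal sketch of opting into the 4K video format when the hardware supports it; this is my reading of the API surface from the session, so treat the property name as an assumption:

import ARKit

// A minimal sketch: prefer the recommended 4K video format when available.
func makeConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    if let fourKFormat = ARWorldTrackingConfiguration.recommendedVideoFormatFor4KResolution {
        configuration.videoFormat = fourKFormat
    }
    return configuration
}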

Evolve your Core Data schema

One thing that my card tracking app doesn’t do is allow you to pick an event and show all the cards based on that event.  I have the data, but need to think through how I would enable this feature.  This session may help me out… Let’s go!

Link – Using Lightweight Migration

  • What is schema migration
    • Changing your data model means you need to materialize that change in the data store.
    • If you change the model without migrating, you won’t be able to open your data store 
  • Strategies for migration
    • There are built in tools to migrate your data model.  They are referred to as Lightweight migration.
    • It automatically analyzes and infers the necessary migration changes
    • This happens at runtime and maps old data to new data
      • Supports adding and removing attributes, renaming them, making a non-optional attribute optional, and making an optional attribute non-optional by providing a default value.
      • This also addresses adding and removing relationships, changing cardinality, and renaming relationships
      • Entities are also covered by lightweight migration: add, remove, rename, create a new parent or child, move an entity up or down in the hierarchy; you CANNOT merge hierarchies 
    • Migration is controlled by two keys
      • NSMigratePersistentStoresAutomaticallyOption
      • NSInferMappingModelAutomaticallyOption
      • If you use NSPersistentContainer or NSPersistentStore it happens for you automatically
    • Let’s see it in code (a small sketch follows this list):
  • You don’t need to make a new model to make changes.  
  • A discussion on how to address non-lightweight migrations is covered in this session.  Basically you decompose the migration into steps that are available for lightweight migration – this way you can step through multiple migrations to get to your desired end state.
  • CloudKit schema Migration
    • If you use Core Data and CloudKit keep in mind you need to have a shared understanding
    • CloudKit doesn’t support all the features of the Core Data model
    • Unique constraints are not supported
    • Undefined and ObjectID are unavailable
    • All relationships are optional and must have an inverse
    • You cannot modify or delete existing record types or fields
    • You can add new fields or record types
    • It is essentially additive, so consider effects on older versions of the app
    • Approaches to address
      • Incrementally add new fields to existing record types
      • Version your entities
      • Create a new container to associate the new store with a new container; it may take an extended period of time for users to upload their data to this new store.
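
Here is a minimal sketch of the lightweight migration setup named above, assuming a model called "CardTracker"; note that NSPersistentContainer sets these options for you automatically:

import CoreData

// A minimal sketch: open a store with lightweight migration explicitly enabled.
let container = NSPersistentContainer(name: "CardTracker")
if let description = container.persistentStoreDescriptions.first {
    description.setOption(true as NSNumber, forKey: NSMigratePersistentStoresAutomaticallyOption)
    description.setOption(true as NSNumber, forKey: NSInferMappingModelAutomaticallyOption)
}
container.loadPersistentStores { _, error in
    if let error {
        fatalError("Failed to load store: \(error)")
    }
}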

Writing for interfaces

Sometimes a session title looks interesting but I don’t spend a lot of time on the description.  This is one of those titles.  My guess was API interfaces, but it is really about how to build out clear and concise text in your app (something I know I need to work on), so this was a pleasant surprise of a session.

Links:

  1. Apple Design Resources
  2. Human Interface Guidelines
  • Early days focused on being easy and clear, and conversational with interfaces.
  • Purpose
    • Think about what is the most important thing to know at the moment of the screen
    • Consider how you order things on the screen.
    • Headers and Buttons should be clear as people may skip other information
    • Know what to leave out.  Don’t overload the screen with data that could be placed elsewhere or not at all
    • When introducing a new feature, tell people why it’s there and why it’s important.
    • Every screen should have a purpose, and for the entire flow.
  • Anticipation
    • Think of your app as a conversation with the user.
    • Develop a voice for your app, and vary tone based on the interaction
    • Think about what comes next in the app flow.  This will help you in the interaction 
  • Context
    • Think outside the app, when will people use your app.  Will they be distracted
    • Write helpful alerts – these are interruptions so make sure they are helpful and clear.  Give context, make sure the choices are clear.
    • Create useful empty states, i.e. show what the user can do.  Try not to use idioms.
  • Empathy
    • Write for everyone, regardless of who your audience is, so you don’t leave out people who may be casually interested in your app
    • Deal with Localization – when doing translation be aware of the impact to your UI.
    • Design for accessibility – consider size and VoiceOver.  Your language should be well designed to make your app welcoming.
  • Check out the above Human Interface Guidelines to make your app accessible by as many people as possible
  • Read your writing out loud – it really helps

SwiftUI on iPad: Organize your interface

The next few sessions are all about SwiftUI and the iPad. My own apps run on multiple platforms and I am really looking forward to making them even better on the iPad.  

This is part 1 of 2 sessions.  Links:

  1. contextMenu(menuItems:preview:)
  2. EditMode
  3. List
  4. NavigationSplitView
  5. NavigationSplitViewStyle
  6. Tables
  • Lists and Tables
    • Many of the APIs shown also work on the Mac.
    • Multi-column tables should be used for dense lists
      • You now get sections on both Mac and iPadOS – check out the session SwiftUI on the Mac: Build the fundamentals (WWDC22)
      • You use a Column Builder instead of a ViewBuilder.
      • In compact view you only get the first column
      • There’s a convenience modifier to allow just a string without a viewBuilder
      • If you have a comparable field then the column becomes sortable (but you have to handle the sorting yourself) – a small Table sketch follows this list
      • On iPad tables don’t scroll horizontally, so limit your columns.  On Mac you can scroll horizontally
  • Selection and menus
    • Each row has a tag, and some state to hold the tag selection 
      • The list will coordinate via a selection binding
      • Tags are a value for a view in a selectable container. In many cases it can be auto-synthesized for you
      • To manually tag a view use View.tag(_:) – but be careful, the tag type is important.
    • Selection State
  • Could be single selection, required selection, or multiple selection, along with lightweight multiple selection 
  • List selection no longer requires edit mode 
  • The next session will talk about toolbar buttons
  • You can also add a multiple select Context Menu.  This will work on multiple items, single item or empty area
    • If you use forSelectionType it should match the selection type
  • Split Views
    • NavigationSplitView allows for two or three column views – for details go to the CookBook session from a few days ago
    • Standard Split View has a Sidebar and a Detailed view – in landscape they both show by default. In portrait the Sidebar is hidden.
    • In three column mode you get a Content View between the sidebar and the detail view. Recommended to use automatic style in three column view.
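
Here is a minimal sketch of the sortable, multi-select table described above, using a simple Card model of my own; the sorting is applied manually, as the session notes:

import SwiftUI

// A minimal sketch: a multi-column, multi-select, sortable Table.
// On compact widths only the first column is shown.
struct Card: Identifiable {
    let id = UUID()
    var event: String
    var recipient: String
}

struct CardTable: View {
    @State private var cards = [
        Card(event: "Birthday", recipient: "Alice"),
        Card(event: "Holiday", recipient: "Bob")
    ]
    @State private var selection = Set<Card.ID>()
    @State private var sortOrder = [KeyPathComparator(\Card.event)]

    var body: some View {
        Table(cards, selection: $selection, sortOrder: $sortOrder) {
            TableColumn("Event", value: \.event)
            TableColumn("Recipient", value: \.recipient)
        }
        .onChange(of: sortOrder) { newOrder in
            // Sorting is up to us; apply the comparators to the data.
            cards.sort(using: newOrder)
        }
    }
}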

SwiftUI on iPad: add Toolbars, titles and more

This is the second part of SwiftUI on iPad.  If you skipped the prior session – go back and watch it.

Links:

  1. Configure Your Apps Navigation Titles
  2. ControlGroup
  3. ShareLink
  4. ToolbarItem
  5. ToolbarRole
  • Toolbars – provide quick action to common features
    • You can customize tool bars, and provide many features that used to be only available on the Mac.
    • Overflow menus can be handled for you.  Change items to a ToolbarItemGroup, which will insert individual items into the menu and auto-place them in the overflow indicator if needed.
    • There are three areas: leading, trailing and center.  Primary actions end up in the trailing area. Secondary actions are in the overflow menu by default.  But if you use the ToolbarRole modifier, you can override that behavior
    • The editor role will move the title to the leading location, and will move secondary items into the center area.
    • User customization (an API from macOS) can be adopted for this feature.  Only toolbar items are customizable, and each must have a unique identifier (a small sketch follows this list).
    • Customizations will automatically be persisted across launches.
    • You can model control groups so that items that are logically together can be added together as one unit.
  • You can also mark a ToolbarItem with placement: .primaryAction to make sure that it is always presented. It will be in the trailing area and is not customizable
  • Titles and documents
    • You can now define your own document types with properties, etc., and then share those documents with others via Transferable
    • You can create a menu attached to the .navigationTitle, which can then do things across the document, like Rename, Print, etc. If you provide a document, you will get a special preview view and a Share icon for drag and drop.
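
Here is a minimal sketch of a customizable toolbar along the lines described above; the view, identifiers and labels are my own placeholders:

import SwiftUI

// A minimal sketch: a customizable toolbar (unique ids per item) with one item
// pinned as a primary action, plus the editor toolbar role.
struct CardEditorView: View {
    var body: some View {
        Text("Card editor")
            .toolbar(id: "cardEditor") {   // the id enables user customization
                ToolbarItem(id: "tag", placement: .secondaryAction) {
                    Button(action: {}) { Label("Tag", systemImage: "tag") }
                }
                ToolbarItem(id: "favorite", placement: .secondaryAction) {
                    Button(action: {}) { Label("Favorite", systemImage: "star") }
                }
                ToolbarItem(id: "share", placement: .primaryAction) {
                    Button(action: {}) { Label("Share", systemImage: "square.and.arrow.up") }
                }
            }
            .toolbarRole(.editor)          // moves the title to the leading edge
    }
}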

The craft of SwiftUI API Design: progressive disclosure

My final planned session for the day is about the API design for SwiftUI.  During my day job I focus on API discovery and usability.  The application I work on has a long history and tons of APIs, but it assumes a lot of preexisting knowledge by potential users.  Getting a better view of how to understand Swift’s API design will hopefully help me in my day job too.

  • Progressive Disclosure is a base design principle.  
    • This is not unique to the design of APIs
    • The Save dialog is a great example of this principle.  It shows defaults and common values, but you can always expand the dialog to add complexity.
  • Making code feel great to use means, the complexity at the call site progressively exposes functionality as it is needed.
  • Benefit
    • Lowers learning curve
    • Minimizes time to first build
    • Creates Tight feedback loop
  • Consider common use cases
    • Label is a great example of this.  The simple case is just text.
    • You can provide an overload to create a View for the Label
    • This same pattern is used across the framework
  • Provide intelligent defaults
    • To streamline common use cases, provide intelligent defaults for all the things that are not specified 
    • A great example is Text(“hello world”): with this code it will localize the string, adapt to dark mode, and scale based on accessibility, but you don’t need to provide any values.
    • Line spacing is automatic too, but it can also be manually set for your use case.
  • Optimize the call site
  • Looking at Table:
  • The above image is a fairly complex example that shows how to create a table, with the added complexity of sorting and grouping of data.
  • For a simple example with just the list 
  • We can optimize the call site to make it easier. Take a look at this code, note how simple it is.
  • Compose, don’t enumerate
    • HStack as an example: it only needs two things, the content and how to arrange it.
    • So most common use cases are simple items next to each other.  Alignment may be needed to address all three cases (leading, trailing, center).
    • What if you want to do spacing? You could go crazy with enums for every behavior.  If you start enumerating common cases, try breaking them apart.
    • An example you can now use Spacer() in a Stack
  • D20 for the win!

WWDC Day three – Catching up

As usual, I was a bit overly ambitious yesterday and ran out of time.  So I have a few more sessions to watch this morning before things get going.  The first of which is:

The SwiftUI cookbook for navigation

A review of the new APIs for navigation across SwiftUI.  This will provide simple recipes to get started:

  • Quick review of existing APIs
    • Based on passing Links around.
    • You basically add a binding to a link to persist state, as in – NavigationLink(“Details”, isActive: $item.showDetail) { DetailView() }
    • In the new API you move the binding up to the top.
    • NavigationStack(path: $path) { NavigationLink(“Details”, value: value) }
    • Much simpler than above.
  • New navigation APIs
    • Basic recipes app is used as an example with a three category app.
    • New container types:
      • Navigation Stack – this is a push / pop interface:
        • NavigationStack(path: $path) { Details() } – this is settings on macOS, Find my on the Watch, etc.
      • Navigation Split View: 
        • NavigationSplitView { Categories() } detail: { Grid() } // two column example
        • Perfect for multi column apps like Mail or Notes and will collapse to Stack in smaller views
        • There are two sets of initializers.  Above is a two column 
        • Three column is via NavigationSplitView { Categories() } content: { List() } detail: { Detail() } // think of the Notes app with the folders listed
        • There’s a great session in 2021 on this
      • Navigation Link shows a Title and the info to present:
        • NavigationLink(“Show Detail”) { Details() }
        • The new one presents a value, not a view:  NavigationLink(“Show information”, value: informationShown)
        • That value will change based on the type of item it is presented in
  • Recipes for navigation
    • Basic stack of views, with sections for each item, and within each item you can get details (a small sketch of this recipe follows this list)
      • You will need a NavigationStack, NavigationLink, and the modifier .navigationDestination(for:)
      • This will push items onto the stack, so when you go back you will go back up the stack
  • You can bind the path to the view via a state binding.  This recipe works on all platforms.
  • Multi-column presentation without stacks:
    • Uses NavigationSplitView, NavigationLink, and List ( great on larger devices)
  • Now we have full programmatic control over the view
  • This is auto-translated to a single stack on watchOS and tvOS, so this is also available on all platforms
  • Adding them together for a complex view navigation: Two column navigation experience (This is what I need for my Card Tracker app).
    • Allows for navigation between related information, using NavigationSplitView, NavigationStack, NavigationLink, .navigationDestination(for:) and List
    • Start with a standard two column display but add a navigationStack into the details:
  • The root view is the RecipeGrid.  If we go to it to show details
  • We now have a link for each grid entry. We will need to add a .navigationDestination modifier – we don’t want it on each link, so we should place it next to the ScrollView, below .navigationTitle
  • And finally:
  • Persistent state:  To do this we need to address Codable and SceneStorage
    • This example requires us to move state to the class, make it Codable 
    • Take special note of the initializer as defined in the sample code with the session; they are using JSON to save and restore the state.  This uses @SceneStorage.  Adding the .task modifier to a view runs asynchronously, beginning when the view appears and ending when the view disappears. I am sure this code will be used to easily extend a lot of views
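
Here is a minimal sketch of the basic stack recipe, using a simple Recipe model of my own; a single navigationDestination handles every value pushed onto the path:

import SwiftUI

// A minimal sketch: value-based navigation with NavigationStack,
// NavigationLink(value:), and navigationDestination(for:).
struct Recipe: Identifiable, Hashable {
    let id = UUID()
    var name: String
}

struct RecipeStack: View {
    @State private var path: [Recipe] = []
    private let recipes = [Recipe(name: "Apple Pie"), Recipe(name: "Baklava")]

    var body: some View {
        NavigationStack(path: $path) {
            List(recipes) { recipe in
                NavigationLink(recipe.name, value: recipe)
            }
            .navigationTitle("Recipes")
            .navigationDestination(for: Recipe.self) { recipe in
                Text(recipe.name)   // the detail view for any pushed Recipe
            }
        }
    }
}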

Dive into App Intents

The last of the sessions I wanted to get to on Day 2, this is another meaty one.  I’ve been playing with App Intents in Wasted Time, though I never did get them working the way I wanted, so this session is of great interest to me.  To quote Apple Fitness+ – “Let’s Go!”

  • Introducing AppIntents
    • Introduced in IOS10, now there is a new Framework called “AppIntents”.  
    • By using Intents you app becomes available in Siri, it also makes it show in Spotlight, you can now address Focus Filters, and of course shortcuts (this makes app available across all parts of Apple’s ecosystem).
  • Intents and Parameters
    • Single unit of functionality – run by request (or automatically)
    • Include: Metadata, Parameter and Perform Method
    • You need to create an intent for each action a user could expect.  Then you create a struct conforming to AppIntent that includes a @MainActor perform() method.  Include a title as a static property.
  • And that’s it for a simple intent that takes a user to a specific screen in your app. (I should create an intent that shows a user the metrics screen in Wasted Time – a small sketch follows this list)
  • If you add a little code you a make your Intent show up in the Shortcuts app.  Will want to check out Implement App Shortcuts with App Intents 
  • Phrases is used by Siri
  • Additional example was shown to help you take the user to any screen.
  • Entities, Queries, and Results
    • Use an entity when values are more unbounded, like a list of books versus the fixed screens in a reading app. Your app then enables queries and returns these entities as a Result
    • The entity should be Identifiable and have a type name and a DisplayRepresentation, which could be a string of text
    • The Query struct provides an interface to retrieve the entity from your app. It must support ID lookup, can also provide suggested entities and more, and it must conform to the EntityQuery protocol.
    • Implement the defaultQuery type on an entity pointing to the Query. 
    • With additional work you can allow for stringing intents together.  
    • The Open Intent is used to open your app with an intents value passed in.
  • Properties, Finding, and Filtering
    • Entities expose properties which can be used by other parts of the system, in intents, and in Shortcuts – for example:
  • Here in our Book example from the session, we can export more information about the book. You use the @Property wrapper in your entity to export them.
  • By combining Properties with Queries you now have the ability to find and filter, to be used in Shortcut building blocks.
  • You will need to adopt Property Query in your application.  You define how you want to allow for the query to perform.
  • You use these to define NSPredicates to be used in query and similar in sorting
  • Implementing matching then allows the system to string the predicates together to provide the resultant list
  • User Interactions
    • You can enable Dialog (for Siri) and Snippets (for visual feedback), plus Request Value, Disambiguation, and Confirmation to help clarify
    • You can think of Snippets as visual version of Dialog
    • Since this is done as a closure, it is increasingly important to keep your views small and self-contained.
  • Architecture and Lifecycle
    • There are two ways to build the intents: in the app or via a separate extension
    • Simpler to do it directly in your App
    • If you enable openAppWhenRun it will be able to run in the foreground
    • An extension is lighter weight, and if you use focus intents, it will run immediately
    • With App Intents your code is the source of truth.  Xcode will create the metadata file needed for your intents
    • To upgrade from SiriKit apps you can click Convert to App Intent in the definitions file.
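As a concrete anchor for these notes, here is a minimal sketch of a simple intent that opens a screen; ShowMetricsIntent and the AppNavigator helper are hypothetical names of my own, not from the session.

```swift
import AppIntents

// Hypothetical navigation helper standing in for app-specific code.
final class AppNavigator {
    static let shared = AppNavigator()
    @MainActor func showMetricsScreen() { /* present the metrics screen */ }
}

struct ShowMetricsIntent: AppIntent {
    // Title shown in Shortcuts, Spotlight, and Siri.
    static var title: LocalizedStringResource = "Show Metrics"

    // Run in the foreground so the app can present its screen.
    static var openAppWhenRun: Bool = true

    @MainActor
    func perform() async throws -> some IntentResult {
        AppNavigator.shared.showMetricsScreen()
        return .result()
    }
}
```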

And now we get to the content I was planning to get to for today:

What’s New in Privacy

  • Privacy is a fundamental human right.  I still appreciate this statement from Apple and it is one of the reasons I use their products and spend my free time developing for their platforms.
  • There are four patterns (or pillars) that Apple uses for their approach:
    • Data minimization – only use the data needed to build the feature
    • On-device processing – use the device to address any sensitive data
    • Transparency and Control – If data is sent off device, let users know what it is, how it is used, and give them control over its access
    • Security Protections – keep data safe at rest and in flight
  • Platform Updates
    • Device name entitlement, which restricts access to the device name. Location now shows an app attribution in Control Center, Gatekeeper has improved to verify the integrity of apps in more places, launching Mac apps at login notifies people of additions, and Pasteboard access now requires permission.
    • The user’s name is part of the device name, so the API will now return just “iPhone” if you do not request the entitlement.
    • Location use will now also show the app (not just the arrow) in the status bar.
    • Gatekeeper on the Mac checks the integrity of all notarized apps. You should make sure your signatures stay valid. It will also prevent the app from being modified in certain ways. You need to include Info.plist changes to allow other apps to update your app. This will have implications for Eclipse-based apps.
      • I like this feature for security purposes, but I can see this being an issue in an enterprise setting
      • Will certainly have to ask my day job to look into this one in more detail
    • To enable apps to run at login (a minimal sketch appears after this list):
      • Single API to launch an app, launch agent, or daemon
      • The SMAppService API is used for this. All items should be inside your bundle and they should run fine.
    • Pasteboard access – you will now get a confirmation prompt; you can use edit options, a keyboard shortcut, or UIKit paste controls to avoid the prompt.
  • New Feature to adopt
    • UIKit Paste Controls – adding these to your app lets users paste without any edit menu, keyboard shortcut, etc. The control is system-validated and must be visible to work.
    • Media device discovery – permission to access the local network and Bluetooth used to be required, which poses a fingerprinting risk. With discovery, devices show up in the same area as AirPlay and the app only sees the sandboxed results being used. The app won’t see the whole network but will get the features it needs from the network. Create an app extension to enable discovery if you are a protocol provider.
    • PHPicker – The new photo picker will provide access to the photos without prompt.  
    • Private Access Tokens – these replace CAPTCHAs and use blinded tokens, reducing the ability to track users. This is based on the IETF Privacy Pass standard.  (Check out Replace CAPTCHAs with Private Access Tokens.)
    • PassKeys – check out the prior session from Day 2.
  • Safety Check
    • I ran this feature today and hadn’t realized that I had given some people access to my photo library… let’s just say, I fixed that 🙂
    • This allows for single-button changes to help people in domestic or other abusive relationships – it stops sharing data with people and apps, signposts features on other devices, changes passwords, changes trusted phone numbers, and manages emergency contacts.
    • Emergency Reset applies across people and apps immediately.  The only issue that I can think of is that if someone is kidnapped they could be forced to run this option and “disappear” from those who may be searching for them.
    • Manage Sharing and Access – allows you to review and confirm; this can help you identify if someone has added an app to your device to track you. Quick Exit takes you back to the Home Screen immediately, and returning to Settings takes you back to the root of Settings.
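For the login item API mentioned above, here is a minimal sketch of registering a bundled helper with SMAppService on macOS 13; the bundle identifier is made up for illustration.

```swift
import ServiceManagement

// "com.example.MyHelper" is a hypothetical login item inside the app bundle.
let helper = SMAppService.loginItem(identifier: "com.example.MyHelper")

do {
    // Registers the helper so it launches at login; the system notifies the user.
    try helper.register()
    print("Login item status: \(helper.status)")
} catch {
    print("Failed to register login item: \(error)")
}
```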

What’s new in HealthKit

I love my Apple Watch and Fitness+; the additions of medication and sleep tracking are exciting.

  • Sleep Analysis
    • The Apple Watch can be used to manage your sleep schedule; this year we get Sleep Stages
    • Stages are stored in HealthKit, represented by category samples of HKCategoryType.sleepAnalysis with values for REM, Core, and Deep.
    • Create one sample for each continuous period of time.
    • Core sleep corresponds to stages 1 and 2 of the AASM model
    • Deep is Stage 3
    • REM corresponds to Rapid Eye Movement
    • “Sleep” state will be deprecated.
    • You can read sleep samples for a given stage using a predicate that matches the value (for example .asleepREM); you will receive an array of samples in that state (a minimal sketch appears after this list).
  • Swift Async
    • They have updated the queries using async – making queries simpler 
    • They are all based on HKQuery – for example HKStatisticsCollectionQuery
    • You can use the HKStatisticsCollectionQueryDescriptor, as an example to get all the calories burned in a week by using a predicate of caloriesPredicate
  • Workouts
    • Great for capturing workouts and all day metrics.  API is being updated to identify Triathlon and other interval based activities.
  • Activities cannot overlap in time, and are not required to be continuous.
  • You assign the type of data you want to collect in a work out by starting a new activity .beginNewActivity
  • There are new ways of reading metrics that allow you to pick the statistic for a specific activity type. Older methods are being deprecated in favor of this approach. There is a new set of predicates for search and statistics.
  • By using workout activities to track intervals you can now get a more comprehensive view of the workout.  SWOLF (strokes in a given length and the time it took to swim that length) is being added as a unique metric that will be captured.
  • Heart rate recovery is also being captured.  This cardio recovery data type will be available in the Health app: .heartRateRecoveryOneMinute
  • Vision Prescriptions including digital copy of physical prescription
    • You can now add in vision correction information – 75% of US adults require prescription for vision correction
    • Can save both glasses and contacts prescriptions – the start date equals the prescription date
    • You can include a digital copy of the prescription 
    • (I wonder if Apple will use this data to improve their AR/VR experience going forward)
    • Prescriptions have a new permission authorization – it is unique for each prescription.
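Here is a minimal sketch of reading REM sleep samples with the new async query descriptors, assuming HealthKit authorization has already been granted; the function name is mine and the exact predicate helper may differ slightly from the session’s code.

```swift
import HealthKit

// Fetch REM sleep stage samples, newest first (a sketch, not the session's code).
func fetchREMSleepSamples(store: HKHealthStore) async throws -> [HKCategorySample] {
    let sleepType = HKCategoryType(.sleepAnalysis)

    // Predicate limiting results to samples whose value is the REM stage.
    let remPredicate = HKCategoryValueSleepAnalysis.predicateForSamples(equalTo: [.asleepREM])

    let descriptor = HKSampleQueryDescriptor(
        predicates: [.categorySample(type: sleepType, predicate: remPredicate)],
        sortDescriptors: [SortDescriptor(\.startDate, order: .reverse)]
    )
    // The new async API replaces the callback-based HKSampleQuery.
    return try await descriptor.result(for: store)
}
```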

What’s new in the CloudKit Console

I’ve been using CloudKit for an app I’ve been working on for the last few years.  Looking at what’s new in the console will help me understand how this may impact my desire to release this app.  This is used to synchronize data between devices.

  • Hidden Containers
    • In the console you can now choose which are hidden or available – this is at the team level so you don’t see them in the various tools.  Keeping things cleaner for views and workflows.
  • Act As iCloud
    • In the Console you can now view as an iCloud account versus as your developer account.
    • So you can sign in as a separate iCloud Account to look at the data from that account.
    • I can use that to see how the data may be stored in my app
    • You can’t perform schema operations, but you can do queries.
    • Great for debugging and seeing how the data is used in production.
    • Data is still encrypted and only viewable by the original account
  • Zone sharing
    • This is used to share data securely across users
    • I am not using this feature, but it seems like a key thing for games and social activities.  There are both Public and Private shared zones to help separate the data
    • You can take an existing zone and then decide if it is shared.  I don’t believe it moves over the data from that zone.  But people who join later will have access to shared data.

What’s New in Swift-DocC

Since I am focusing on APIs in my day job and how to make our APIs more discoverable and consumable, I am really interested in what is going on with DocC.  This and the next session will provide me insights on what we may need to do.  The only issue is, my day job team is focused on Java-based development, so the patterns are important to me, but the specific implementation details may not apply.

  • Allows you to write reference docs, articles, and tutorials, adding additional workflows for frameworks and app projects; it can also document Objective-C and C APIs
  • Supports hosting providers like GitHub Pages
  • Have moved to Open Source for the project.

Focus is on:

  • Document
    • You document right in the code; even with no documentation written, the Build Documentation command will create stubs automatically.
    • API
      • Add three slashes for documentation – ///
      • Symbol link syntax uses double backticks, as in ``SymbolName``
      • Start by teaching how each API works individually and then build up to higher level content 
      • Use Markdown code syntax to provide code examples (a minimal sketch appears after this list)
      • For initializers you should describe each parameter
      • There’s a language toggle to help people see examples in appropriate syntax
    • Top level page you can add a new file of type documentation catalog, this allows you to build out summary and overview
      • You can add images, etc.
      • And links to each page for the APIs
  • Publish
    • Builds a static bundle called a DocC archive, it can be shared by Export.
    • It also produces a standard website out of the box that is compatible with most web servers.
    • You can auto-publish to GitHub Pages; use a base path to make it compatible with certain hosting setups, like your own domain
    • I tried this for Wasted Time and am currently getting a build error for the Build Documentation command
    • To create the Base Path – you need to add in the build settings.
    • The workflow is very simple for those using Git and you can make it automated via the Swift-DocC Plugin 
  • Browse
    • The new browser and navigation function makes it much easier to explore the API and Documentation
    • It calls out Methods, Protocols, Events, Properties, and Structs.
    • It also supports searching via the Filter bar
  • Examples and more details are available at:
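To make the documentation-comment workflow concrete, here is a minimal sketch of /// comments with a symbol link; the Sloth type is a stand-in for illustration, not code from my projects.

```swift
/// A sloth, the model type for this hypothetical example.
///
/// Most of a sloth's power comes from its ``energyLevel``.
public struct Sloth {
    /// The sloth's current energy level.
    public var energyLevel: Int

    /// Creates a sloth with the given energy level.
    ///
    /// - Parameter energyLevel: The amount of energy the sloth starts with.
    public init(energyLevel: Int) {
        self.energyLevel = energyLevel
    }
}
```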

Improve the discoverability of your Swift-DocC content

The second session on DocC is about the design of your documentation

  • Web navigation
    • This has both a navigator and a filter bar
      • Disclosures allow you to show relationships of objects, properties, etc.
      • The filter bar includes type tags to reduce the number of things that are included
    • Content view
  • Optimize your content 
    • There is automatic organization which means your documentation is organized by type
    • But you can override this default organization by
      • Define the high-level themes (limit to a few key things here, sort them by importance or adoption)
        • Essentials – getting started, etc.
          • Great to add code examples and introductory articles, etc.
        • What can be done with your framework 
        • How can you get data or visualize your framework’s data
        • What management tasks are available
      • Organize by importance and specificity 
        • Use this to guide thru the content 
      • Optimize titles to make it helpful for developers 
        • Update titles so it is clear and descriptive – 
        • It should make sense on its own
        • They should be mutually exclusive 
  • By improving your documentation and making it easy to understand, you can help developers discover things serendipitously

What’s new in iPad Design

There are a lot of changes to expand the iPad to a more desktop like experience, especially with external displays.  These new design patterns will improve the overall experience with your app.

  • Toolbars
    • They organize your app’s functionality
    • If we compare Pages between iPadOS 15 and 16, notice the difference at the top.  The iPadOS 16 view provides a lot more functionality and discoverability of functions:
  • The most common workflows in your app should be placed in the center options.  If you have too many, you end up with an overflow indicator.  You can add customization if you have a lot of items.
  • You can group or collapse toolbar options.  When there’s not enough room, they will collapse to a + button
  • Important items that must always show should be on the trailing edge.
  • Document Menus – 
    • Documents should use this new menu, and you will get back a < to go to a browser view.  
    • Standard item like Print, or Duplicate can show in this menu.
  • Editing menus
    • These are now optimized for touch, pointer and other modalities.  So you get horizontal for touch and vertical popup for Pointer
    • You should leave the default values and group your custom objects
  • Find and replace
    • You now have a system Keyboard option for find and replace.
    • When attached to a hardware keyboard it shows up at the bottom of the screen
  • Navigation
    • Content browsing experiences are everywhere.
    • New style – browser style navigation (using back and forward buttons)
  • It’s up to your app to decide what those buttons should navigate to.
  • Note you don’t need this mode if your app is fairly flat in content.
  • If you use this, then you can enable search in the top right of the browser window. Great for filtering the details.  It supports suggestions, filters and other features..
  • If you want to search across all of your app, you should have search in a Search Tab (like the left hand navigator list)
  • Selection and Menus
    • iPadOS15 had Drag select (aka Band Selection)
    • In iPadOS16 you can now use keyboard modifier to select and deselect. It also doesn’t go into editing mode.  
    • You can now long press or click again to get a menu pop-up to execute commands against the selection
    • You can also use context menus in empty area.
    • Your app should support
      • Keyboard Focus
      • Band Selection
      • Multi-Select without modes
      • Menus for multiple selection
      • Empty Area Menus
    • Submenus – on iPhone you should only use them when you really need one. (They are vertical)
    • On iPadOS they go horizontally, they are quick 
    • You also now have a new control for popup buttons in list, which show a menu to choose an option.
      • This will keep you in context and reduces the number of steps
      • Use for well defined list of things
  • Tables 
    • New component in iPadOS16. – different than the old Table control
    • Think of this like a spreadsheet.  It supports sorting by tapping a header and swapping columns (to show only the most important). They support multi-selection features.
    • Think of them as an extended list view.  Tables switch back to a single-column list if size is limited; if that happens, you should take the additional columns and add them as subtitles.  Sorts should then be moved to a ToolbarItem

Building a Desktop-class iPad App

Given that my Card Tracking app is really a task specific database tool, I think understanding how to make the Desktop and iPad versions as equal as possible is key.  The new features of iPadOS16 will make this more possible.  So this session should provide me with guidance on how to improve many of my features.

  • This is a deep dive session to update an existing App and will focus on three high-level changes.
  • UI organization:
  • Start by changing from the standard navigation view and move many options up into the navigation bar, using the Browser or Editor styles. I would need to switch to Editor mode for my Card Tracker
  • Will want to customize the built in Back action to match our needs
  • Update the Title Menu with document info
  • Enable Document Renaming
  • And expose features into the Bar.
  • Start by setting navigationItem.style = .editor (a minimal sketch appears after this list)
  • Change out the old Done button with navigationItem.backAction = UIAction(…)
  • If appropriate we could do title information, like document properties.
  • You can now enable new features based on UIDocumentProperties, which enables things like drag and share based on the document.  If you go through the prior session on desktop-class iPad apps, you will learn about the system menu items and adding your custom items to the document menu.
  • For system functions you will need to override the system functions for any app specific activities, like actually renaming the file.  On Mac Catalyst you will need to manually expose the Title options to the menu builder.
  • Quick actions
    • Adding actions:
  • Fixed groups are not customizable or able to be moved by users, so they are present on the leading edge of the view.
  • To handle different sizes between iPads you want to be able to define multiple actions; the preferredElementSize property means you can address the menu size, and the .keepsMenuPresented attribute allows multiple selections without dismissing the menu
  • There’s a ton of stuff here; I highly recommend multiple viewings and analysis of the posted code.
  • Text experience 
    • Prior to iOS 16 you’d have to develop your own bulk-actions edit mode with new menu items.
    • You can now enable this design with a lightweight selection style by enabling multiple properties: .allowsMultipleSelection = true, keyboard focus via .allowsFocus = true, and then letting the keyboard drive the selection with .selectionFollowsFocus = true
    • If you do these changes you want to make sure to adjust via performPrimaryActionForItemAt indexPath on the collectionView.
    • You then enable collectionView(_:contextMenuConfigurationForItemsAt:point:) https://developer.apple.com/documentation/uikit/uicollectionviewdelegate/4002186-collectionview   If the array is empty then you are in an empty space and invoke the Add or similar function; otherwise check for a single item or loop through the multiple selections.  On an iPad you use a two-finger tap to get the menu
    • Find and replace is enabled via setting a property.
    • You can define your own custom items for selected text.  Check out adopt desktop class editing interactions for much more.
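Here is a minimal sketch of the editor-style setup described above, in a hypothetical view controller; the class name and file URL are mine, not from the session’s sample code.

```swift
import UIKit

class CardEditorViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // Use the editor navigation item style for desktop-class iPad apps (iOS 16).
        navigationItem.style = .editor

        // Replace the old Done button with a system-provided back action.
        navigationItem.backAction = UIAction(title: "Back") { [weak self] _ in
            self?.dismiss(animated: true)
        }

        // Surface document info (drag, share, rename) through the title menu.
        // The URL here is a hypothetical placeholder.
        let documentURL = URL(fileURLWithPath: "/tmp/Example.card")
        navigationItem.documentProperties = UIDocumentProperties(url: documentURL)
    }
}
```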

Meet Transferable

Support drag and drop, cut and paste, and also other interactions across your app.  Check out the sample app for this one; it’s great to see a catalog of female inventors and scientists.  The sharing interface is also available via this (ShareLink).

  • This is a declarative way, built on Swift First:
  • Anatomy of Transferable
    • To transfer data between two applications there is always binary data that goes across, and you need to understand what it corresponds to. All items need to provide ways to convert to and from the binary type, plus the content type. This is an Apple-specific technology: content types, also known as uniform type identifiers (UTIs).
    • To create your own UTI – 
      • Add a declaration to your Info.plist file
      • Add a file extension and add it to your Info.plist
      • Declare it in code – check out the “Uniform Type Identifiers – a reintroduction” video
    • Many standard types already conform to transferable.  You will use ShareLink and new PasteItem API
  • Conforming custom types
    • You add Transferable conformance to your type and then define the transfer representation, as in static var transferRepresentation: some TransferRepresentation { … } (a minimal sketch appears after this list)
    • Three key items CodableRepresentation, DataRepresentation and FileRepresentation
    • CodableRepresentation uses an encoder and decoder and by default it uses JSON – to learn more – watch WWDC18 session on Data you can trust. Use this for simple items.
    • DataRepresentation allows conversion via only two modes – this is for in memory binary information.
    • FileRepresentation should be used for very large items, like video and audio files.
  • Advanced Tips and Tricks
    • The order of transfer representations is important.  The system will use the first one that matches.
    • ProxyRepresentation – should read up more on this one.
    • You will get very different behavior between FileRepresentation vs. ProxyRepresentation: one is the data itself vs. a link to the data.
  • Key links:
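As a concrete version of the custom conformance described above, here is a minimal sketch; the Inventor type and the com.example.inventor identifier are made up, and the identifier would also need to be declared in the Info.plist.

```swift
import SwiftUI
import UniformTypeIdentifiers

// Hypothetical custom content type; it must also be declared in Info.plist.
extension UTType {
    static let inventor = UTType(exportedAs: "com.example.inventor")
}

// Hypothetical model type conforming to Transferable.
struct Inventor: Codable, Transferable {
    var name: String
    var field: String

    static var transferRepresentation: some TransferRepresentation {
        // Order matters: the system uses the first representation that matches.
        CodableRepresentation(contentType: .inventor)
        // Fall back to plain text for receivers that only accept strings.
        ProxyRepresentation(exporting: { $0.name })
    }
}
```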

Optimize your use of Core data and CloudKit

A few key links to start with:

As mentioned earlier in my posts, I have been working on an app that tracks greeting cards.  It uses Core Data and CloudKit to synchronize across devices.  I am currently seeing long delays in that synchronization and hope I can gain some insights here. I like that he calls this the water cycle; I think of this as agile at one level, and waterfall at another.

  • Exploration (Via a data generator)
    • Goal is to learn, challenge and verify all assumptions, will the store sync when I expect it, how does the system behave with very large data
    • The focus is on the shape, structure, and variance of the data.
    • Using algorithmic data generators allows you to test; you should be able to store and analyze this data.  Building these to create larger and more complex datasets is great practice
    • You can then build tests to validate
    • You should also create a test for syncing this data, since we are talking about CloudKit
    • By binding the generator to a user interface you can see how the data behaves from a user’s perspective.
  • Analysis 
    • Focus on Instruments on time and allocations
      • You can use profile and it will create tests to analyze the steps that are running.  This will help you understand where time is spent on the application
      • Using Allocations in Instruments you can see if you are either leaking memory or keeping data around after use for too long. Use this to uncover retain issues.
    • Logs provide information from CloudKit, your application, and push notifications.
      • Looking at these you can see what system services are being used
      • dasd is the process you want to look at in the logs for scheduling decisions
        • It will show any policies that affect how the process runs
      • Cloudd and apsd processes should also be reviewed.
      • In Terminal you can use log stream --predicate 'process = "<your app name>" AND (sender = "CoreData" OR sender = "CloudKit")' to get a window with real-time log streams
  • Feedback
    • Collecting diagnostic information from devices allows for actionable and specific feedback
    • Need to install CloudKit Profile on the device
      • This is from the developer portal and it will try and install on the device.
      • Reboot the device to take effect
      • Reproduce the behavior and then create a sysdiagnose
    • Collect a Sysdiagnose
      • On current iPhones press both volume buttons and the power button for a few seconds.
      • You will see a sysdiagnose in logs and you can press share to send via airdrop or other method.
      • With the sysdiagnose you use log show --info --debug --predicate 'process = "dasd" AND message contains[cd] "appidentifier"' system_logs.logarchive to get the data in Terminal
      • See more details in the video transcript.
    • If you have access to the device you can collect the Store Files via Xcode

Compose custom layouts in SwiftUI

This should be another really meaty session, as Custom views are a key feature with this release.  A sample project and detailed documentation can be found at https://developer.apple.com/documentation/swiftui/composing_custom_layouts_with_swiftui

Combining built-in views into more complex views. Modifiers provide conditional controls:

  • Grid (Static 2 dimensional views – you use lazy grids for scrollable views.)
    • Swift UI will figure out size (columns and rows) automatically.
    • You should always create sample data in your structs so you can use it in Previews
    • You use the .gridColumnAlignment modifier to adjust a single column in a grid
    • You can add the .gridCellColumns modifier to span multiple columns
  • Layout (create a custom one)
    • To adjust button widths to equal the widest button in a stack you can create a custom Layout, so you can change from an HStack to a custom stack. In this case you make a type that conforms to Layout.
    • A sample of a custom stack appears after this list
  • Note the inout Void cache to share values across instances (more info in the documentation)
  • All views have spacing preferences that indicate the ideal spacing between themselves and items on any side (and each side can be different); it can vary by type and by platform. If edge preferences don’t match, the system will use the larger value by default.
  • To see the details of the commented areas above, watch the session, as those are details you need to understand.
  • I really like this feature, since I had tried to use GeometryReader to solve this problem; its intended use is one-directional.
  • ViewThatFits (auto selects the view that fits)
    • You can use this to pick the item that fits, in the available space.  
    • Read the documentation on this one.
  • AnyLayout
    • Lets you provide different layouts based on some condition. It transitions when the value changes, so the existing view is updated instead of being replaced with a new view.
    • This allows for smooth animation and reduced impact to your users
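Here is a minimal sketch of a custom Layout in the spirit of the session’s equal-width stack; the implementation details are my own simplification, not the session’s exact code.

```swift
import SwiftUI

// Gives every subview the width of the widest one, laid out horizontally.
struct EqualWidthHStack: Layout {
    var spacing: CGFloat = 8

    func sizeThatFits(proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) -> CGSize {
        let sizes = subviews.map { $0.sizeThatFits(.unspecified) }
        let maxWidth = sizes.map(\.width).max() ?? 0
        let maxHeight = sizes.map(\.height).max() ?? 0
        let totalWidth = maxWidth * CGFloat(subviews.count)
            + spacing * CGFloat(max(subviews.count - 1, 0))
        return CGSize(width: totalWidth, height: maxHeight)
    }

    func placeSubviews(in bounds: CGRect, proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) {
        let sizes = subviews.map { $0.sizeThatFits(.unspecified) }
        let maxWidth = sizes.map(\.width).max() ?? 0
        var x = bounds.minX
        for subview in subviews {
            // Propose the same (widest) width to every subview.
            subview.place(
                at: CGPoint(x: x, y: bounds.midY),
                anchor: .leading,
                proposal: ProposedViewSize(width: maxWidth, height: bounds.height)
            )
            x += maxWidth + spacing
        }
    }
}
```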

Build a productive app for Apple Watch

I have had an Apple Watch app on the store since 2019.  So this session is always interesting to me.  Let’s see what nuggets I can glean to improve my app.  Check out the details at https://developer.apple.com/documentation/watchOSApps

  • Each year more capabilities are added to watchOS, so this session covers how to add them to your apps.
  • Create A watch App
    • You should decide if you need to make a companion app on the phone.
    • Great apps focus on the essentials.
    • The dual targets are a holdover from earlier days; with the new model you only need one.
    • You can convert your app using the validate setting to move the app to the new SwiftUI lifecycle.  I absolutely need to do this for my app.
    • You can now use a single 1024×1024 app icon image.
      • If you have details that get lost you can create specific items
  • Add a simple list
    • Create a data model
  • Enable list updates
    • The preview should show an empty list, so now you want to add a button to add a new item
    • TextFieldLink can be used to allow for text input
    • For placement, you should put the primary action at the end of the list for a short list
    • On a long list, you may want to add it as a ToolbarItem – this will show at the top of the list when the user pulls it down.
  • App navigation structure
    • On Watch you can do Hierarchical so use NavigationStack
    • Page-based you use TabView – this is where all items are peers – my app uses this
    • A full-screen app uses the full screen with a single view. You want the .ignoresSafeArea modifier and the toolbar modifier to hide the navigation view
    • A modal sheet should be for an important task that covers the entire view.  Deciding between this and NavigationStack should be considered based on the details.
    • Use a property to control the display of the modal sheet.
    • In watchOS 9 you can use Steppers for sequential values (they don’t have to be numbers, just logically sequential).  A stepper is huge for my app; I may have to switch to it.
  • Share with a friend
    • ShareLink is also supported on WatchOS 9.  This will allow you to easily share items via standard sharing methods.
    • Check out the Transferable session earlier today.
  • Add a chart
    • You can use the new Swift Charts framework, and charts can be reused across all platforms.
    • Definitely check out Hello Swift Charts
  • Scroll with the Digital Crown
    • There is a new Digital Crown rotation modifier which allows a callback (a minimal sketch appears after this list).
    • You use @State variables to track the .digitalCrownRotation modifier so that you can react to it.
    • A detent, in this context, is a resting notch of the crown.
    • You can apply the modifier on a chart in order to display a RuleMark() 
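Here is a minimal sketch of reacting to Digital Crown rotation, assuming a short list of made-up values to scrub through; the view and property names are mine, not the session’s.

```swift
import SwiftUI

struct CrownScrubberView: View {
    @State private var crownValue = 0.0
    private let values = [10.0, 25.0, 42.0, 57.0, 80.0]

    private var selectedIndex: Int {
        // Clamp in case the bound value sits between detents mid-rotation.
        min(max(Int(crownValue.rounded()), 0), values.count - 1)
    }

    var body: some View {
        Text("Selected: \(values[selectedIndex], specifier: "%.0f")")
            // The view must be focusable to receive crown input.
            .focusable()
            .digitalCrownRotation(
                $crownValue,
                from: 0,
                through: Double(values.count - 1),
                by: 1,                        // each whole number is a detent
                sensitivity: .medium,
                isContinuous: false,
                isHapticFeedbackEnabled: true
            )
    }
}
```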

Embrace Swift Generics

I’ve been taking a class lately – #100DaysOfSwift by Paul Hudson over at https://hackingwithswift.com … I’ve been using Swift for many years, but it has all been by discovery only, so this class is really helping me fill in some gaps. One topic that comes up on the site is Generics.  Having a good understanding of Generics is key, so I am looking forward to this session.

  • Generics are a fundamental tool for writing abstract code in swift.
  • In swift you can abstract away complex Types.  This is a Generic in swift.
  • Model with concrete types
    • You could overload methods to address this, but if you do it a lot and it becomes very repetitive, you should think about generics
  • Identify common capabilities
    • Looking at the types you created you should see where things are common.  And how they behave differently based on different types.. this is Polymorphism – allowing one piece of code to have different behavior based on how it is used.
    • You can use Overloading 
    • Subtype Polymorphism – 
      • This could be done via class hierarchy with type specific overrides by sub class.
      • You’d be forced into reference semantics
      • Also requires methods to be overridden, forgetting to do so, won’t be caught until runtime.
      • The more things you try to fix, the more boiler plate you will have to add.
    • Parametric Polymorphism – achieved via Generics
  • Build an interface
    • You can build an interface to represent capabilities 
    • This is a protocol, allowing you to separate the idea of what it does from implementation details
    • You add an associatedtype to a protocol
    • You then use a method to deal with the operation that the protocol needs.  Concrete types have to implement that function.
    • You can annotate at the protocol type or as an extension
    • You have to implement the methods.
  • Write Generic Code
    • We can now write generic code via parametric polymorphism
    • The type parameter can be named whatever you want, and used across the signature. We then annotate the type parameter with the protocol it conforms to
    • There is a simpler declaration of func functionName(_ name: some PROTOCOL)
    • Some means there is a specific type you are working with and is followed by a protocol for conformance.
    • This is used by “some View” 
    • Stepping back to understand concept of a specific abstract type.
    • The abstract type is called an opaque type
    • The specific type is the Underlying type
    • For values with opaque type, the underlying type is fixed for the scope of the value.
    • You can use opaque types for both input and output both as Parameters or Results
    • Named Type parameters are always on the input side
    • The underlying type always comes from the same place as the value.  So local values must always have an initial value.
    • It must be fixed for the scope of the variable
    • It is key to understand some and any, and how they behave differently (a minimal sketch appears after this list)
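Here is a minimal sketch of parametric polymorphism with protocols, some, and any, using hypothetical farm types in the spirit of the session’s examples (not its exact code).

```swift
protocol Animal {
    associatedtype Feed
    func eat(_ food: Feed)
}

struct Hay { }
struct Carrot { }

struct Cow: Animal {
    func eat(_ food: Hay) { print("The cow eats hay") }
}

struct Horse: Animal {
    func eat(_ food: Carrot) { print("The horse eats a carrot") }
}

// Classic generic signature: a named type parameter lets us refer to A.Feed.
func feed<A: Animal>(_ animal: A, _ food: A.Feed) {
    animal.eat(food)
}

// When the associated type isn't needed in the signature, `some` is simpler:
// one specific, opaque underlying type per call.
func describe(_ animal: some Animal) {
    print("This is a \(type(of: animal))")
}

// `any Animal` is an existential box; the underlying type can vary at runtime.
let barn: [any Animal] = [Cow(), Horse()]
```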

Recommend reading the transcript many times for deeper understanding

Wrap up for the day

Wow, what a day; there is so much for my brain to digest tonight.  The one thing I need to do next year is come up with a better plan for the specific things I am looking for.  Of course, announcing new major frameworks breaks this planning.  This year I was hoping for SwiftData, which did not appear, so I should have multiple plans.  The great thing about today is I discovered multiple techniques and frameworks to rewrite major portions of my apps, hopefully making them simpler and easier to maintain.

WWDC Day two – Into the meat of the matter

As always I am going to give my first impressions for all the sessions I go through. Today is always the busiest day and I may have some spillover into tomorrow.  But I am excited to get started:

Let’s start with what is new in Xcode – 

  • Xcode is 30% smaller – I noticed this by the initial download and installation.  
  • The first thing I did was download the Foodtruck app that will be used in many of the demos this year.
  • Nice to see Swift Plugins to improve development environment.
  • Dynamic type variants should help in improving my various Apps on different devices and resolutions.
Xcode
  • Improvements in TestFlight Feedback allows you to quickly and easily see the feedback from your beta testers
  • The new Hangs view shows you issues of code in production.
  • Different sized images can be automatically generated via the “Single Size” option in the inspector.

Next – What’s New in Swift:

  • Community update
    • DocC and Swift Setter were open sourced last year.
    • C++ interoperability and website design were started as two new community workgroups
    • Swift Mentorship was started last year.  I signed up for this to see what I can learn and contribute 
    • Added support for Native Toolchain for CentOS7 and Amazon Linux 2 – RPMs are available but experimental at this time.
    • Swift is now being used in Apple’s Secure Enclave
  • Swift Packages
    • TOFU – Trust on first Use – fingerprint is being recorded on first download.  Will validate if it changes and report error.
    • Command plugins – These include doc generation, source code reformatting, and other tools.  By embracing developer tools this way, should expand swift uptake
    • Creating a plug in with DocC was shown as a demo – you can now run from Xcode. (Directly executed at any time)
    • Build Tool plugins – allow you to inject steps in the build process.  Think of this as extending activities during a complex build environment.
    • You can use aliases to deal with module collisions
  • Performance improvements
    • Drivers can now be used as a framework within the build system.  So you can improve parallelization in build process.
    • Type Checking has sped up due to how they deal with generics
    • Runtime – improved launch time due to how protocols are being handled.  They can now be cached
  • Concurrency updates
    • Async/await has been further fleshed out and is now available to be back-deployed to earlier operating systems
    • Added data race avoidance at the thread level; this should address many bugs related to data race conditions
Journey to Swift 6
  • The goal is to get to full thread safety by Swift 6.   The current release is Swift 5.7
  • You can enable stricter checking in the build settings.
  • The distributed keyword lets the compiler know that an actor can be on a different machine, even across the network. You will need await and try to handle network errors if they occur.
  • There is a new set of Open source Algorithms released in Swift 5.5
  • Actor prioritization will allow for improved performance 
  • There is a new concurrency view in instruments so you can visualize and optimize your concurrency code.
  • Expressive Swift
    • New shorthand for if let and guard: you can drop the right-hand side of the = and just use the optional (a minimal sketch appears after this list).
    • Swift is very type specific, unlike C which allows automatic conversions. Now if you are passing pointers to C from Swift, you won’t get Swift errors for valid pointer conversions.
    • New string tools improve parsing. There’s a new declarative approach via Regex to do string parsing; you can now use words instead of symbols by importing the RegexBuilder library. You can make it reusable and recursive, and if you want you can still use regex literals in the builder.
    • Compatible with UTS#18 with extensions.
  • Generic code clarity 
    • This will require you to add the any keyword ahead of any use of a protocol as an existential type, versus as a generic constraint. (Not fully sure I understand this one yet, but will dig into it more over the next 12 months.)
    • Primary associated types can now be constrained
    • Generics are improved by the some keyword, which minimizes the amount of boilerplate you need when they are created.
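Here is a minimal sketch of two Swift 5.7 conveniences noted above, the if-let shorthand and a RegexBuilder pattern; the order string and names are made up for illustration.

```swift
import RegexBuilder

let nickname: String? = "TwoStraws"

// New shorthand: no right-hand side needed when unwrapping a same-named optional.
if let nickname {
    print("Hello, \(nickname)")
}

// Declarative regex that captures a quantity and a flavor word.
let order = "12 vanilla donuts"
let pattern = Regex {
    Capture { OneOrMore(.digit) }
    OneOrMore(.whitespace)
    Capture { OneOrMore(.word) }
}

if let match = order.firstMatch(of: pattern) {
    let (_, quantity, flavor) = match.output
    print("Quantity: \(quantity), flavor: \(flavor)")
}
```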

Next – What’s new in SwiftUI:

SwiftUI Enhancements
  • New SwiftUI app structure
  • Swift Charts (new Framework)
    • State driven charts in SwiftUI for data visualization 
    • A chart is just “Some View”
    • It handles localization, dynamic type and dark mode automatically 
  • Navigation and Windows
    • Stacks – Push / Pop
      • New Container view – NavigationStack – wraps a root content view
      • Improves handling of stack 
    • Split Views
      • New NavigationSplitView allows 2 and 3 column layout
      • Works with ValueLinks, automatically collapses to a stack for smaller width environments
      • Look at The SwiftUI Cookbook for Navigation
    • Scenes – Multi-window views
      • WindowGroup is the base way for app …
      • Now you can create a “Window” scene, which allows a single window for an app, and you can add a keyboard shortcut for that window to be displayed.  You can use openWindow() to open the window.
      • SwiftUI will remember a user’s changes across launches
      • And .presentationDetents modifier can be used on iOS to provide sheets from the bottom.
    • MenuBar extras are now available in SwiftUI
      • MenuBarExtra()
      • You can use this to build an entire app in the menu bar, or add a menu bar extra to be available.
    • Added updates for each of them.
  • Advanced Controls
    • Forms
      • System Settings  uses this
      • Forms allow you to create consistent and well formed designs for the detail view
      • You use Form {} and Section {} and apply the .formStyle(.grouped) modifier
      • You can also use the LabeledContent() view to provide simple text and an item
      • Titles and subtitles can have updated text based on state 
    • Controls
      • Can configure TextField to expand based on axis: .vertical and you can add a limit for the expansion
      • The date picker now supports non-contiguous selection
      • Aggregate toggles are also available now.
      • Steppers can now have a format for their value (and are available on watchOS – I should change Wasted Time to use a stepper)
    • Tables
      • Now supported on iPadOS – this is using the same code as macOS – Table(){ TableColumn() } and will render on compact devices like iPhone
      • You can add a contextMenu(forSelectionType:) to add a selection of items in a table.  Will work on single, multiple or no selected row.
      • You now have a new toolbar design on iPadOS – they can be customized by users and re-ordered.  Which will be saved across app launches.
      • ToolbarItem(placement: .secondaryAction)
      • Basic search is available with .searchable modifier. Now can add tokens for customized search… can also add scope.  Lots of new stuff here.
  • Sharing (transferable)
    • Photos
      • The new picker is privacy preserving 
      • Can be placed anywhere in your app
      • Need to check out PhotosPicker; you add a binding for the selection (looks very easy to use)
    • Sharing
      • Standard sharing view just use ShareLink to enable it within your app
      • Provide content and preview
      • I should add this to my card app to share cool cards I find (a minimal sketch appears after this list)
    • Transferable
      • New swift first way of transferring across applications
      • Supports drag and drop and uses .dropDestination() 
      • You need to use Codable and custom Content Type
  • Graphics and layout 
    • Shape Styles
      • Color has new gradient properties based on the color you use
      • Shape has a new Shadow modifier 
      • Previews in Xcode 14 support variants so you can easily see multiple configurations at the same time, without writing configuration code.
    • Layouts
      • Applied Geometry 
      • Grid is a new container view to allow for improved layouts
      • New Layout protocol allows you to build really complex abstractions.
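As a concrete note for the sharing piece, here is a minimal sketch of ShareLink with a custom label; the card name and URL are hypothetical, not from my app.

```swift
import SwiftUI

struct CardShareView: View {
    let cardName = "Birthday Card"
    let cardURL = URL(string: "https://example.com/cards/birthday")!

    var body: some View {
        // ShareLink presents the standard share sheet for the given item.
        ShareLink(
            item: cardURL,
            subject: Text(cardName),
            message: Text("Check out this card I found")
        ) {
            Label("Share Card", systemImage: "square.and.arrow.up")
        }
    }
}
```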

On to “Get to know Developer Mode” –

I noticed this right away when some of my local apps I’ve been working on wouldn’t run.  So I had to figure out how to turn it on for my iOS and iPadOS devices.  Seems like Apple is working to lock down more things on the device to help increase security on devices.

  • What is Developer Mode?
    • iOS 16 and watchOS 9 – disabled by default
    • It has to be enabled per device
    • Persists across reboots and system updates
    • Most common distribution methods do not require Developer Mode: TestFlight, App Store, enterprise.
  • Using Developer Mode
    • If you want to run and install development signed applications,
    • Use testing automation
    • Beta releases will have the option visible
    • It is under Settings -> Privacy & Security
  • Automation Flow
    • There are tools to automate multiple devices, but you need to have passcode turned off
    • You can use devmodectl on macOS Ventura to enable this by default

Now on to security with Meet Passkey:

I wonder if 1Password and LastPass are working with Apple to enable this in their password managers.

  • Passkeys are now available and should be adopted by everyone
  • With Autofill and password managers – you currently have 2-3 steps just to log in and it’s longer if you use second factor
  • Passkeys change this to a one-step login – the data is stored in your keychain and is unique per account and strong.
  • The process will use a QR code if you try to log in from an unknown (or non-Apple) device – you can then scan it from your phone and use it to log in.
  • Shared accounts are supported: you can take a trusted account in the keychain and use the Share button to allow another person to use the same key.
  • Designing for Passkeys
    • 1st they are replacements for passwords – faster, easier and much more secure
    • Common phrase is “passkey”
    • In SF Symbols person.key.badge and person.key.badge.fill
    • You don’t need to design new UI for login.. just keep User Name field 
    • You can present them as a first class entry via AutoFill
    • There are additional UI options
  • Passkeys and AutoFill
    • Use WebAuthn on backend to allow this to work
    • On Apple platforms the ASAuthorization Family of APIs are used
    • You need to setup associated domains in webcredentials (2017 session will help you understand)
    • Make sure you use username type in your UI
      • Fetch challenge from server
      • Create provider and request via ASAuthorizationPlatformPublicKeyCredentialProvider and createCredentialAssertionRequest
      • Pass the request to the ASAuthorizationController
      • And start the request via .performAutoFillAssistedRequests(). If you use .performRequests() you will get a modal request (a minimal sketch appears after this list).
      • You will get a didCompleteWithAuthorization callback; it will have an assertion, and you should read the values, confirm with your server, and complete the sign-in
      • This will also support Signin With Nearby Device
      • This also works on the Web via WebAuthN API. – this session shows typical JavaScript example for people who are working on websites.
      • Note Passkeys are replacing Safari’s legacy platform authenticator 
  • Streamlining Sign-in
    • Supports using passkey allow Lists – this will show a list of potential passkeys by default for a specific site, you can also use the username to restrict the list to appropriate values.
    • Silent fallback requests – by default if there are no matching passkeys it will show a QR code to sign in from a nearby device.  But you can fall back to show traditional login features, like username and password; you must handle error.code == .canceled to show that form (or other logic)
    • Combined credential requests – this will allow the picker to show that you have a passkey and another account with password 
  • How Passkeys Work
    • Current technology requires a server to store the hashed and salted value, which if someone else gets it – allows them to get into your account
    • Passkeys use public/private key pairs.  The public key is stored on the server and available to anyone.  It is used in a single-use challenge sent to your device, where the private key is stored in the secure hardware enclave.  The device uses it to produce a solution, which is sent back to the server. If the server can validate the solution with your public key, you are logged in.  The server can only say the solution is valid; it CANNOT create new solutions.  The public/private key approach makes it significantly harder for attackers.  Since you should NEVER provide your private key, it makes it near impossible for a hacker to phish.  However, like all other security, if you hand over your private key, you are giving someone else what they need to hack you.
    • Passkey also uses bluetooth via proximity exchange, so a hacked email or fake site can’t also provide the proximity challenge, again, they can’t hack.  (The proximity check is in the browser not from the server).
  • Multi-Factor Authentication
Passkey Authentication Method Comparison
  • Today, multi-factor authentication allows you to combine factors to improve security.
  • Passkeys eliminate the human factor from phishing, and can’t be leaked from the server, since it is secure on your hardware.
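Here is a minimal sketch of the AutoFill-assisted assertion flow described above, assuming a relying party of example.com and a challenge already fetched from your server; the function name is mine.

```swift
import AuthenticationServices

// Kick off a passkey sign-in request that surfaces above the keyboard.
func beginPasskeySignIn(challenge: Data, delegate: ASAuthorizationControllerDelegate) {
    let provider = ASAuthorizationPlatformPublicKeyCredentialProvider(
        relyingPartyIdentifier: "example.com"   // hypothetical relying party
    )
    let request = provider.createCredentialAssertionRequest(challenge: challenge)

    let controller = ASAuthorizationController(authorizationRequests: [request])
    controller.delegate = delegate
    // AutoFill-assisted requests show the passkey inline;
    // performRequests() would instead present a modal sheet.
    controller.performAutoFillAssistedRequests()
}
```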

Meet Desktop Class iPad

Focus on Navigation Bar Updates:

  • Styling and controls
    • UINavigationBar allows for new optimized UI for iPad, styles like Navigator, Browser, and Editor 
Navigation Styles
  • Default style is Navigator – this is a traditional UINavigation Bar,
  • Browser – changes to address history and location.  Title is leading
  • Editor – Leading aligned title – great for destination activities
  • These last two have lots of space in the center, which allows you to present new buttons for the user.  Overflow is supported in all modes (…)
  • UIBarButtonItems are now grouped – fixed groups always appear first and cannot be removed or moved by customization
  • Movable groups cannot be removed but can be moved.
  • Optional groups are both movable and removable, and will be collapsed when the UI needs to save space
  • Customization will apply these rules… and then allow the user to make the changes.
  • Overflow contains any item that won’t fit along with the item to customize the bar.
  • On MacOS this all becomes NSToolBar 
  • Document interactions
    • UINavigationBar supports adding a menu to the title group – to add actions on the content as a whole
    • By default they are duplicate, move, rename, export, and print
    • You can add your own items on the menu, filter away some default, etc.
    • On Mac Catalyst these items exist in the File Menu, and you add, you have to use UIMenuBuilder to add them to an appropriate menu
  • Search updates
    • In iOS16 Search now takes up less space – it is inline in iOS
    • Suggestions appear and can be updated along with the query
    • Conform to UISearchResultsUpdating and use UISearchSuggestionItems

Getting the most out of Xcode Cloud

  • Xcode Cloud Review:
    • Introduced last year for CI/CD 
    • Can build, run automated tests in parallel and distribute to users
  • Review an existing workflow
    • First check that current workflows are working and look for optimization opportunities
    • The build details overview provides info
    • Build duration is wall-clock time; usage is compute effort and is what is used to calculate your usage
  • Xcode Cloud usage dashboard
    • This shows you usage, trends, and available remaining compute.
  • Best Practices
    • Avoid unintended builds (based on start conditions)
    • Don’t build duplicate commits
    • Can also use the files and folders options to not start a build if only docs or other excluded folders changed
    • Predefined actions, Analyze, Archive, Build, Test, etc.
    • Tests should be based on a concise set of devices (don’t need to do all devices), there is an alias for recommended devices.
    • To skip a build based on the type of change, just add [ci skip] to the end of the commit message and this will skip running a build.
  • Revisit Optimized Build
    • The usage dashboard shows you the impacts of your changes
    • Given that Apple is starting to charge for this, I will monitor my usage on my current app I am testing to see if it is still worth it

Bring your world into augmented reality

Using the Object Capture API and RealityKit to create 3D models of real-world objects

  • Object Capture recap
Object Capture Flow
  • The mesh and textures are combined into the model
  • Last year’s session goes through the details
  • There are some great models and tools available.  Check out the Go app to try on shoes via AR.
  • ARKit camera enhancements
    • Take good photos from all sides; if you use the camera in an iPhone or iPad app, it will capture scale and orientation
    • Leverage ARKit for 3D Guidance UI – this is built into ARKit
    • The higher the image resolution the better the model quality.
    • New High Resolution API at native camera resolution – while still using other items.  On iPhone 13 (so far) – it does not interrupt the ARSession.
    • EXIF metadata is available in the photos – just use ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing
      • Then set config.videoFormat = hiResCaptureVideoFormat
      • session.run(config)
    • You then call session.captureHighResolutionFrame() (a minimal sketch appears after this list)
    • AVCaptureDevice allows you to capture the underling device properties directly
    • All of this is in Discover ARKit 6
  • Best practice guidelines
    • Characteristics – sufficient texture details (transparent is really hard)
    • Minimal reflective surfaces
    • Ensure the object is rigid if you flip it
    • Limited fine Structure – you will need to get close to address that
    • Ideal environment has diffuse even lighting
    • Lots of space around the object
    • Capture at various heights, and make sure that the object is large in the field of view.
    • Lots of overlap is important
    • (Around 80 photos is good)
    • Get the bottom (another 20 pictures)
    • Landscape mode for capturing a long object
    • Copy to a Mac and process them.  There are four detail levels of the models – Reduced, Medium, Full and RAW – iOS is limited to the first two.  The others are for Pro Workflows
  • End to end Workflow
    • Demo of the workflow
    • Use Photogrammetry session API (from 2021 WWDC)
    • Reality Converter allows you to change colors and textures on objects
    • Recommend checking out RealityKit session from 2021
    • I will certainly go over this section multiple times; it’s a great example of showing how to use 3D objects in a game
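Here is a minimal sketch of the high-resolution capture steps noted above, assuming an already-running ARSession; the function name is mine, not from the session.

```swift
import ARKit

func captureHiResFrame(session: ARSession) {
    let config = ARWorldTrackingConfiguration()

    // Opt in to the native-resolution video format when the device offers one.
    if let hiResFormat = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
        config.videoFormat = hiResFormat
    }
    session.run(config)

    // Request a single high-resolution frame without interrupting the session.
    session.captureHighResolutionFrame { frame, error in
        if let frame {
            print("Captured image size: \(frame.camera.imageResolution)")
        } else if let error {
            print("Capture failed: \(error)")
        }
    }
}
```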

Create parametric 3D room scans with RoomPlan

Cool to see members of the prototyping and video team.  RoomPlan is a framework to scan a room to build a parametric model of the room.

This should allow for significant changes in interior design and other home remodeling tools.

  • Scanning experience API
Room Plan
  • A UIView subclass with world-space feedback, real-time model generation, and coaching & user guidance
  • To use it in your own app, it’s just four steps (a minimal sketch appears after this list):
RoomPlan Code
  • You can also add delegate classes to opt out of processing and / or export the USDZ model
  • Data API
    • Scan -> Process -> Export is the basic workflow
    • There’s a simple RealityKit app – I downloaded the code.. can’t run it tonight.. but hope to get to it later this week
  • Best Practices
    • Need at least 50 lux
    • Problematic things
      • Mirrors and Glass present problems
      • High Ceilings
      • Dark Surfaces
    • For high resolution 
      • Close doors
      • Open curtains
    • Provide feedback during the scan
    • Battery and thermals are things to consider – you don’t really want to scan for longer than 5 minutes
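Here is a minimal sketch of hosting the RoomPlan scanning experience in a view controller; this is a simplified outline under my own naming, not the session’s sample code (which also wires up delegates and USDZ export).

```swift
import UIKit
import RoomPlan

class RoomScanViewController: UIViewController {
    private var roomCaptureView: RoomCaptureView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // The capture view draws the real-time model and coaching UI.
        roomCaptureView = RoomCaptureView(frame: view.bounds)
        view.addSubview(roomCaptureView)
    }

    func startScan() {
        // Configure and start the underlying capture session.
        let configuration = RoomCaptureSession.Configuration()
        roomCaptureView.captureSession.run(configuration: configuration)
    }

    func stopScan() {
        // Stop when the user finishes; results arrive via the view's delegates.
        roomCaptureView.captureSession.stop()
    }
}
```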

Bringing Continuity Camera to your macOS App

  • What is continuity camera?
    • Just bring your iPhone close to your Mac and it works wirelessly
    • It will also show up as an external camera and microphone on your macOS
    • For wired use it needs USB; for wireless you need both Bluetooth and Wi-Fi
    • You get an onboarding dialog one time per app.
    • You now get the devices available 
    • You get additional video effects – In control center you then have the ability to add additional effects
    • With Continuity Camera you get these video effects even on non-Apple Silicon Macs
      • You can also combine video effects together
    • Desk View mode – portrait camera placement is best.  Also turn on Center Stage.
    • I gave this a try with my iPhone by holding it up behind my Mac.  It really was magical to see my external keyboard and fingers on the trackpad, as if from an overhead camera
    • All notifications will be paused when you use continuity camera 
  • Building a magical experience
    • New Camera Management APIs are available.
    • You should enable the APIs to automatically switch the camera instead of requiring the user to select from a drop-down.  You can set the user-preferred camera property; it is key-value observable (KVO) so you can intelligently switch to the most preferred camera (a minimal sketch appears after this list).
    • systemPreferredCamera is read only – to present best choice on the system.
    • Recommend adding a new UI element to enable and disable auto selection mode.
    • Recipe:
      • Automatic Camera Selection is Off
        • Stop KVO systemPreferredCamera
        • Update session’s input device with user selection
        • Set userPreferredCamera when user picks a camera
    • Will need to check out the sample app – Continuity Camera Sample 
  • New APIs on macOS
    • Your Mac app can now use iPhone camera features
    • Support up to 12MP in AVCapturePhotoOutput
    • You can also prioritize photo quality vs. speed
    • Flash capture is also enabled
    • Along with metadata to capture whether there is a face or human body in the image from the iPhone
    • Video, movie, and other formats are now supported in Continuity Camera
    • The Desk View is exposed as a separate device in device discovery
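Here is a minimal sketch of the "set userPreferredCamera when the user picks a camera" step from the recipe, assuming an existing AVCaptureSession; the input-swapping code is my own simplification.

```swift
import AVFoundation

// Record the user's explicit choice and swap the session input to it.
func userDidPick(camera: AVCaptureDevice, session: AVCaptureSession) {
    // The system uses this to intelligently pick the best camera later.
    AVCaptureDevice.userPreferredCamera = camera

    session.beginConfiguration()
    // Remove existing video inputs (sketch only; real code may be more careful).
    session.inputs.compactMap { $0 as? AVCaptureDeviceInput }
        .filter { $0.device.hasMediaType(.video) }
        .forEach { session.removeInput($0) }
    if let input = try? AVCaptureDeviceInput(device: camera), session.canAddInput(input) {
        session.addInput(input)
    }
    session.commitConfiguration()
}
```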

Adopt Desktop class editing interactions 

  • Edit Menus
    • Representation is based on input mode used, you can get context menus with right click or secondary menu option.
    • Data detector integration does things like: if you select an address you will get Get Directions and Open in Maps.  No code adoption required.
    • Adding your own actions is done via UITextViewDelegate to customize, or return nil to get the standard system menu
    • UIMenuController is deprecated
    • Instead we now have UIEditMenuInteraction, which allows for context menu presentation on secondary click
    • This is contextual based on the location of the touch in the system.
    • The behavior on Mac will feel familiar for that platform
    • You now have Preferred element size – similar to how widgets are handled on iPadOS and iOS
  • Find and Replace
    • New UI component for an in-app find-and-replace editing feature. Works across macOS, iOS, and iPadOS; all you need to do to enable it is set .isFindInteractionEnabled = true for UITextView, WKWebView, or PDFView (Quick Look already has it set up) – see the sketch after this list.
    • With a hardware keyboard all shortcuts work as expected; just make sure the view can become first responder
    • You can make it available via a navigation bar
    • On a scroll view it will adapt to trait collection changes.
    • It also supports Regular Expression – Wow!
    • If you want to add it to another view, like a list view, you can add UIFindInteraction to any view.
    • After you add it to your custom view, just set up a UIFindInteractionDelegate
    • You can also handle everything yourself, like a manual find-and-replace feature you’ve written
  • Highly recommend watching this session – both for their dry humor and the amount of detail they provide about something that seems so easy to just use. They had me at this closing image.
Edit Menu and Search
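
A minimal sketch pulling the two pieces together on a UITextView – the system find/replace UI plus a custom edit menu action. The NotesViewController name and the “Highlight” action are hypothetical:

```swift
import UIKit

final class NotesViewController: UIViewController, UITextViewDelegate {
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        textView.delegate = self
        // Enable the system find-and-replace UI (Cmd-F works once the view is first responder).
        textView.isFindInteractionEnabled = true
        view.addSubview(textView)
    }

    // Add a custom action to the edit menu; returning nil falls back to the standard system menu.
    func textView(_ textView: UITextView, editMenuForTextIn range: NSRange,
                  suggestedActions: [UIMenuElement]) -> UIMenu? {
        let highlight = UIAction(title: "Highlight") { _ in
            // Apply your own formatting to the selected range here.
        }
        return UIMenu(children: suggestedActions + [highlight])
    }
}
```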

Complications and Widgets: Reloaded

Adding both watch complications and iPhone Lock Screen widgets

  • Complications timeline
    • Immediately accessible, high-value information; a tap takes you to the relevant area in the app
    • They are being remade in WidgetKit – I will have to recreate my complication from Wasted Time for the Watch
    • Accessory Corner family is specific to watchOS
    • Go further with WidgetKit
  • Colors
    • System controls the look in one of three modes: Full color, accented or vibrant
    • There is a WidgetRenderingMode
    • Accented mode keeps the original opacity
    • In vibrant rendering mode, your content is desaturated and then adapted to the environment it is in
    • Avoid using transparent colors in Vibrant mode
    • Using AccessoryWidgetBackground will fix a lot of issues
  • Project Setup
    • To update your existing widget app.
    • You can add a new Widget target if you want to add watch app to existing iOS widget
    • For watchOS you need to create a preconfigured list of intent providers
  • Making glanceable views
    • New auto-updating progress views, so you don’t have to do so many timeline updates
    • Make sure to use font styles – things like headline, body, title, and caption
    • Also look at ViewThatFits to provide better options for tough-to-fit elements
  • Privacy
    • You should have settings for your widgets and complications for when they are redacting content or are in a low luminance state. This is the first hint about an always on iPhone screen.
    • To test these states you can use the @Environment(\.isLuminanceReduced) variable.
    • Use .privacySensitive() to redact some values in your widgets (see the sketch below)
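
A minimal sketch of an accessory widget view that touches the rendering-mode, reduced-luminance, and privacy points above. The WastedTimeEntry type, the view name, and the color choice are my own assumptions for illustration:

```swift
import SwiftUI
import WidgetKit

// Hypothetical timeline entry for illustration.
struct WastedTimeEntry: TimelineEntry {
    let date: Date
    let minutesWasted: Int
}

struct AccessoryCircularView: View {
    @Environment(\.widgetRenderingMode) private var renderingMode
    @Environment(\.isLuminanceReduced) private var isLuminanceReduced
    let entry: WastedTimeEntry

    var body: some View {
        ZStack {
            // System-provided background that renders correctly in
            // full-color, accented, and vibrant modes.
            AccessoryWidgetBackground()
            VStack {
                Image(systemName: "clock")
                Text("\(entry.minutesWasted)m")
                    .font(.headline)          // use font styles, not fixed sizes
                    .privacySensitive()       // redacted when content is private
            }
            // Only use color when the system renders in full color.
            .foregroundStyle(renderingMode == .fullColor ? Color.orange : Color.primary)
            // Dim a bit in the low-luminance (always-on) state.
            .opacity(isLuminanceReduced ? 0.6 : 1.0)
        }
    }
}
```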

Apple Design Awards 2022

Looking at this year’s design finalists – I’ve only used three of them. Let’s see if they win. https://developer.apple.com/design/awards/

The three I’ve used or played are Wylde Flowers (an enjoyable game), Vectornator: Vector Design (for vector graphics editing), and Lego Star Wars: Castaways (another game).

Great quotes from the video:

Love the idea that design is creating art with code!

Design lets users instantly understand very complex ideas and features. Getting things simple is good design.

Failing is part of every creative process.

Solving your own problem – a real way to get started.

“Giggly feel”

Take all that power and make it usable and simple.

Making something simple is quite complex.

Challenge is to keep limits in place so you actually get something done.

Stick to your original idea.

If you only get inspiration from others games, you’ll just create another game.

Don’t chase the market, chase your passion.

Let’s keep on building it!

And the winners are: