GPT vs. Llama

Like many in tech, I've been looking at various Large Language Models lately. One of the most fascinating use cases is using ChatGPT to write code, which can be a real productivity enhancer for developers.

As part of my day job, I work with OSLC compliant applications. If you are not familiar with OSLC, check out this post in my ELM blog. OSLC is complex and powerful. The open nature of the standard means that there are very few constraints, which allows customers and application vendors to create very powerful integrations and extensions.

Getting started writing OSLC applications requires a lot of up-front work, which should make it a perfect opportunity for using an LLM to write the boilerplate. As such, I decided to try two different tools. Instead of starting with ChatGPT, I thought I would use Meta's LLaMA model. We had talked about it just yesterday on my weekly podcast. If you haven't listened to it yet, go check it out at GamesAtWork dot Biz.

After recording that episode, I used the instructions from one of our show links and set up llama.cpp on my Mac. Here's how the conversation went:

DevPro llama.cpp-master % ./main -m ./models/30B/ggml-model-q4_0.bin -n 256 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-bob.txt

main: seed = 1679443809

<skipping a bunch of loading information>

main: prompt: ‘ Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User’s requests immediately and with precision.

User: Hello, Bob.

Bob: Hello. How may I help you today?

User: Please tell me the largest city in Europe.

Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.

User:’

<More log skipping >

== Running in interactive mode. ==

 – Press Ctrl+C to interject at any time.

 – Press Return to return control to LLaMa.

 – If you want to submit another line, end your input in ‘\’.

 Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.

User: Hello, Bob.

Bob: Hello. How may I help you today?

User: Please tell me the largest city in Europe.

Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.

User:what is OSLC

Bob: OSLC is an open source collaborative project that enables sharing of information between systems and applications.

User:Write code to retrieve a requirement from a OSLC provider

Bob:OSLC is an open source collaborative project that enables sharing of information between systems and applications.

User:can you write a python program to retrieve a requirement from an OSLC Provider?

Bob:OSLC is an open source collaborative project that enables sharing of information between systems and applications.

User:Can you write any code to utilize OSLC?

Bob:OSLC is an open source collaborative project that enables sharing of information between systems and applications.

User:can you provide any more details

Bob:OSLC is an open source collaborative project that enables sharing of information between systems and applications.

conversation with llama

As you can see, Bob was not very helpful. To be honest, this is not a big surprise. I would assume that the Meta model is mainly trained on information it found on Facebook or Instagram, which is not really a hotbed for development tool vendors or users.

Next I tried the same prompts with ChatGPT – let’s see what happened:

ME: what is OSLC

ChatGPT: OSLC stands for “Open Services for Lifecycle Collaboration”. It is a set of open standards that define how different software tools can interoperate and exchange data with each other, particularly in the context of product development and lifecycle management.

OSLC provides a standardized way for different tools to communicate with each other, reducing the need for custom integrations between individual tools. This helps organizations to better manage their product development process, improve collaboration between teams, and ultimately accelerate time-to-market.

OSLC defines a set of REST-based APIs (Application Programming Interfaces) that enable tools to exchange data with each other in a standardized way. This includes APIs for accessing and updating artifacts such as requirements, designs, test cases, and defects, as well as APIs for managing links between these artifacts.

OSLC is widely used in industries such as aerospace, automotive, and software development, where there is a need for complex tools to work together seamlessly.

ME: Write code to retrieve a requirement from a OSLC provider

ChatGPT: To retrieve a requirement from an OSLC provider, you would typically use the OSLC REST APIs to make a GET request to the provider's endpoint for requirements. Here's some sample code in Python that demonstrates how you might do this:

import requests

# Define the URL of the OSLC provider's requirements endpoint
requirement_url = 'https://example.com/oslc/requirements'

# Define the ID of the requirement you want to retrieve
requirement_id = '12345'

# Define the OSLC query string to retrieve the requirement with the specified ID
oslc_query = '?oslc.where=dcterms.identifier="{}"'.format(requirement_id)

# Define the headers to send with the request (including authentication if required)
headers = {
    'Accept': 'application/rdf+xml',
    'Authorization': 'Bearer <access_token>'  # Replace <access_token> with your actual access token if required
}

# Send the GET request to the OSLC provider's requirements endpoint with the specified query string and headers
response = requests.get(requirement_url + oslc_query, headers=headers)

# Check the response status code to see if the request was successful
if response.status_code == 200:
    # If the response was successful, extract the requirement data from the response body (which may be in RDF format)
    requirement_data = response.text
    print('Requirement data: {}'.format(requirement_data))
else:
    # If the response was not successful, raise an exception or handle the error in some other way
    raise Exception('Error retrieving requirement: status code {}'.format(response.status_code))

Note that the exact code required to retrieve a requirement from an OSLC provider will depend on the specific APIs and authentication mechanisms provided by the provider. The above code should serve as a starting point and may need to be modified to work with your particular provider.

conversation with public version of ChatGPT

Now that’s a very different response. ChatGPT not only understood the context of OSLC that I was looking for, it also provided a much more meaningful description of OSLC.

While the code won't provide an out-of-the-box solution, as a Python developer it gives me enough of a framework to get started. I know that I will need some level of authentication, and that I need to accept the response back as an "rdf/xml" document. Knowing this, I thought: let's see what I would get back if I asked about the authentication code:

can you provide sample authentication code for this Oslo code

Sure, here’s an example of how you might authenticate with an OSLC provider using OAuth 2.0 and retrieve a requirement:

import requests
from requests_oauthlib import OAuth2Session

# Define the OAuth 2.0 credentials for the OSLC provider
client_id = 'my-client-id'
client_secret = 'my-client-secret'
authorization_url = 'https://example.com/oauth2/authorize'
token_url = 'https://example.com/oauth2/token'

# Define the URL of the OSLC provider's requirements endpoint
requirement_url = 'https://example.com/oslc/requirements'

# Define the ID of the requirement you want to retrieve
requirement_id = '12345'

# Define the OSLC query string to retrieve the requirement with the specified ID
oslc_query = '?oslc.where=dcterms.identifier="{}"'.format(requirement_id)

# Create an OAuth 2.0 session using the OAuth2Session class from the requests_oauthlib library
oauth_session = OAuth2Session(client_id, redirect_uri='http://localhost')
authorization_url, state = oauth_session.authorization_url(authorization_url)

# Send the user to the authorization URL and wait for them to grant authorization
print('Please go to this URL and authorize the application: {}'.format(authorization_url))
authorization_response = input('Enter the full callback URL after authorization: ')

# Fetch the access token using the token URL and the authorization response from the user
token = oauth_session.fetch_token(token_url, authorization_response=authorization_response, client_secret=client_secret)

# Define the headers to send with the request, including the access token
headers = {
    'Accept': 'application/rdf+xml',
    'Authorization': 'Bearer {}'.format(token['access_token'])
}

# Send the GET request to the OSLC provider's requirements endpoint with the specified query string and headers
response = requests.get(requirement_url + oslc_query, headers=headers)

# Check the response status code to see if the request was successful
if response.status_code == 200:
    # If the response was successful, extract the requirement data from the response body (which may be in RDF format)
    requirement_data = response.text
    print('Requirement data: {}'.format(requirement_data))
else:
    # If the response was not successful, raise an exception or handle the error in some other way
    raise Exception('Error retrieving requirement: status code {}'.format(response.status_code))

Note that the exact code required to authenticate with an OSLC provider will depend on the specific authentication mechanism used by the provider. The above code demonstrates how to authenticate using OAuth 2.0, but other mechanisms such as Basic Authentication or client certificates may also be used. Consult the provider’s documentation for details on how to authenticate with their APIs.

ChatGPT Response
Question on authentication (ChatGPT)

As you can see, knowing the context allowed ChatGPT to pull in more details about the OAuth 2.0 approach for accessing the API. Having written my own example on my ELM blog, I can tell you this is a good start to help you understand OSLC.

It's not 100% there, but it certainly will get you asking the right questions. I believe that is where AIs will start providing real value to development: not replacing developers, but helping new developers learn some basics, and letting experienced developers focus on the hard problems… for now.

Minor Wasted Time Update Submitted

Since completely rewriting WastedTime into SwiftUI and fixing it to run nearly the same code across macOS, watchOS, iOS, iPadOS, and tvOS, I’ve not been happy with the way I’ve handled the fonts.

Today I took a step back and made changes to better handle the needs of larger devices.

Ultimately the code is pretty easy. First, I added a new computed property on most screens. This computed property works on all platforms EXCEPT for watchOS. Here it is:

var isIpad: Bool {
    #if !os(watchOS)
        UIDevice.current.userInterfaceIdiom == .pad
    #else
        false
    #endif
}

All this does is define a simple boolean indicating whether you are on an iPad-like device. The reason there is a compiler directive checking for watchOS is that UIDevice is not available on watchOS. Luckily, we can just return false for isIpad if we are on the watch.

Second, I define a new extension on SwiftUI’s Font with two computed properties:

extension Font {
    static var systemSpecificBody: Font {
        #if !targetEnvironment(macCatalyst)
        #if os(iOS) || os(watchOS) || os(tvOS)
            .body
        #else
            .title
        #endif
        #else
            .title
        #endif
    }

    static var iPadSpecificBody: Font {
        .title
    }
}

These two computed properties return the font I'd like to use for text. The reason I have two is so that I can use my previously defined boolean (isIpad) to pick between them. Again we uncover a difference between Apple's platforms: there is no compiler directive to say you are compiling for iPadOS. iPadOS is evidently more marketing than a truly separate operating system.

Additionally, since I am using macCatalyst for my macOS version of WastedTime, I have to capture the targetEnvironment to recognize that I will be running on macOS.

Having done both of these preparatory steps, I can now easily set the font for any text in my SwiftUI views. I do this with the following code:

.font(isIpad ? .iPadSpecificBody : .systemSpecificBody)

This is a ternary, basically an inline if-else statement. It means: set the value of .font to .iPadSpecificBody if isIpad is true, otherwise use .systemSpecificBody.

And that’s it. I now correctly increase the fonts from .body to .title for those devices that have more space.

Wasted Time Update – Removing Twitter

New Wasted Time Icon

Well, I've been looking at adding Mastodon support to Wasted Time since removing support for Twitter a few months back. I was talking to my good friend and podcast co-host @andypiper about this yesterday, and he asked why. I thought about it long and hard. Back in the Twitter days I used this feature for two reasons:

  • A fun way to show people how much time is being wasted getting meetings started with the hashtag #wastedtime
  • A way to promote the app with an App Store URL

We discussed that this mode of app promotion is frowned upon in the Fediverse. I also looked back at my app's user numbers and marketing strategy and came to the realization that almost no one had downloaded the app based on seeing the App Store URL in a tweet. Why would I think this would be different in the Fediverse?

While I want to play with the Fediverse API, I think I’ve decided that it would be better not to add support for Mastodon in the app.

One of the big features that I took out of the app a few years back, when I was completely rewriting it in SwiftUI, was the ability to capture to-dos during a meeting. I think my time would be better spent working on adding that feature back in.

What do you think?

Post Script

I was finally able to get my latest update to the watchOS version of Wasted Time submitted to the App Store yesterday. I had been having issues with the app versioning. Everything worked for the tvOS, macOS and iOS versions of the app, but the watchOS version kept ignoring the new version number. This caused the App Store to reject the upload every time.

I even spent two months going back and forth with Apple developer support trying to figure out what was going on. No luck, until yesterday, when I was digging around in the many different configuration files and found one that had somehow been hard-coded to an old version. Strange.

Completed two major classes

I like learning. It's that simple. I like to take classes, learn new things, read about new things, and play with new things. It's one of the reasons I think I like being in technology: there are always new things to learn.

Early last year I decided to start the class 100 Days of Swift from the website HackingWithSwift. Paul Hudson – AKA TwoStraws – does a fantastic job of providing daily lessons for 100 days, along with challenges and exams, to help you get a good understanding of Swift in general.

While the class was fun, I really wanted to improve my skills in SwiftUI, so I took a break and then began the class 100 Days of SwiftUI. Of course, you don't need to take the first class, as this one does a great job of getting you familiar with many of the same Swift concepts.

I finally completed 100 Days of SwiftUI early this year; it took that long mainly because I had too many day-job interruptions and personal issues come up while taking the class.

I do feel that I have a much better understanding of Swift now, but there is still so much to learn. If I were able to spend 1–2 hours a day on iOS and Swift, I would be in a much better place with the language. But since I don't use it in my day-to-day work and only get to dabble with it when I have time, I will have to continue moving along at a slower pace than I'd like.

Updated my Watch and wow!

Screen shot from Apple Watch ultra.
Friday Night screen shot

Last Monday Apple released the first software update for the new version of watchOS (version 9.1). The big thing that was promised was an improved Low Power Mode for the Ultra, giving it at least 60 hours of continuous use.

Of course, I had to test it. Wednesday morning I put it on my wrist and kept my normal usage pattern: tracking sleep, taking a 30-minute walk each day, handling mail, reviewing my calendar, and playing some podcasts.

When I finally took it off on Friday evening, I was down to 12% battery. Not bad for 63 hours of continuous use. I think they may have something here.

100 Days of Swift – Notes App

As I continue to work on the 100 Days of Swift projects from https://www.hackingwithswift.com/100 by Paul Hudson, I am really enjoying how the consolidation days pull together things you've learned up to that point. Today, Day 74, was a really cool project. Basically, I had to recreate a version of the Apple Notes app.

As you can see, it's not a direct copy (for example, I don't handle all the formatting), but it is a passable simulation of the app.

I have posted my code on GitHub https://github.com/TheApApp/ChallangeDay74/ in order to see if I can improve it. So please take a look and provide any feedback you’d like. I am sure I have made it a bit overly complex in some areas.

While I don’t like using Interface Builder, that was part of the assignment.

I do like how easy it is to add the share sheet feature in Swift, so you can take the simple text note and share the contents with others.

Let’s talk Open-Services (OSLC)

ELM Server – Registered Applications

It's been a few months since my last post. Since then I was able to give a talk at the IBM ELM Users Conference, pretty much taking people through the prior series of blog posts. It was exciting to see how well these posts were received.

The prior post showed how “easy” it can be to create a POST based on the OSLC discovery process and the resource shape. While my goal is to make the consumption of the ELM APIs as easy as possible, there are some very powerful and complex concepts that I think I need to address at this time.

What is a Resource Shape?

Stated simply, a resource shape is a set of constraints that are applied to a resource. These constraints consist of assertions in the form of triples. A triple is simply a subject, predicate and object. This is foundational to how RDF is represented. RDF is key to the semantic web, which is all about linked data. The ability to make any data machine-accessible thru this approach allows applications to take an open approach to collecting, analyzing and reporting any data. The ELM applications are based on this concept.

Wow, that’s a meaty paragraph.

Let's look deeper at the triple and what it means when trying to interpret the resource shape. I will do it based on our POST example for generating a Test Plan. I am only going to show the body, which is an rdf/xml representation of the Test Plan we wanted to generate.

<rdf:RDF 
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:dcterms="http://purl.org/dc/terms/" 
    xmlns:oslc_qm="http://open-services.net/ns/qm#"
    xmlns:oslc_rm="http://open-services.net/ns/rm#"
    xmlns:rqm_qm="http://jazz.net/ns/qm/rqm#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:oslc="http://open-services.net/ns/core#"
    xmlns:process="http://jazz.net/ns/process#"
    xmlns:rqm_process="http://jazz.net/xmlns/prod/jazz/rqm/process/1.0/"
    xmlns:calm="http://jazz.net/xmlns/prod/jazz/calm/1.0/"
    >
    <oslc_qm:TestPlan>
        <dcterms:title>Test plan created from API</dcterms:title>
        <dcterms:description>Here'\''s a really long description that was created by typing a bunch of words.</dcterms:description>
        <oslc:formalReview/>
        <oslc:hasChildPlan/>
        <rqm_qm:catagory/>
        <oslc:hasPriority/>
        <foaf:contributor/>
        <oslc:template/>
        <oslc:relatedChangeRequest/>
        <process:iteration/>
        <oslc:testSchedule/>
        <process:teamArea/>
        <oslc:hasWorkflowState/>
        <oslc:runsOnTestEnvironment/>
        <oslc:usesTestCase/>
        <oslc:keyDate/>
        <oslc_qm:testsDevelopmentPlan/>
        <oslc:attachment/>
        <rqm_qm:objectiveStatusGroup/>
        <oslc:risk/>
        <oslc:containsTestSuite/>
        <rqm_qm:executionEffort>42.0</rqm_qm:executionEffort>
        <oslc:category_PML_F4kaEeynq4H4YH03kw/>
        <oslc:category_PMRep4kaEeynq4H4YH03kw/>
        <oslc:category_PL_KwIkaEeynq4H4YH03kw/>
        <oslc_rm:validatesRequirementCollection/>
        <rqm_qm:planningEffort>42.0</rqm_qm:planningEffort>
    </oslc_qm:TestPlan>
</rdf:RDF>

The first thing I’ll do is use the handy “RDF Validator” made available on IBM’s cloud to see what this would look like once validated.

This validator will take a look at all the triples and allow me to convert between rdf/xml and either json or text/turtle. It will also allow us to see how the system will interpret our resource.

Converting the above rdf/xml to text/turtle provides the following output:

@prefix oslc_qm: <http://open-services.net/ns/qm#> .
@prefix calm:  <http://jazz.net/xmlns/prod/jazz/calm/1.0/> .
@prefix process: <http://jazz.net/ns/process#> .
@prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix rqm_process: <http://jazz.net/xmlns/prod/jazz/rqm/process/1.0/> .
@prefix oslc_rm: <http://open-services.net/ns/rm#> .
@prefix rqm_qm: <http://jazz.net/ns/qm/rqm#> .
@prefix foaf:  <http://xmlns.com/foaf/0.1/> .
@prefix oslc:  <http://open-services.net/ns/core#> .

[ a                            oslc_qm:TestPlan ;
  process:iteration            "" ;
  process:teamArea             "" ;
  rqm_qm:catagory              "" ;
  rqm_qm:executionEffort       "42.0" ;
  rqm_qm:objectiveStatusGroup  "" ;
  rqm_qm:planningEffort        "42.0" ;
  oslc:attachment              "" ;
  oslc:category_PL_KwIkaEeynq4H4YH03kw
          "" ;
  oslc:category_PML_F4kaEeynq4H4YH03kw
          "" ;
  oslc:category_PMRep4kaEeynq4H4YH03kw
          "" ;
  oslc:containsTestSuite       "" ;
  oslc:formalReview            "" ;
  oslc:hasChildPlan            "" ;
  oslc:hasPriority             "" ;
  oslc:hasWorkflowState        "" ;
  oslc:keyDate                 "" ;
  oslc:relatedChangeRequest    "" ;
  oslc:risk                    "" ;
  oslc:runsOnTestEnvironment   "" ;
  oslc:template                "" ;
  oslc:testSchedule            "" ;
  oslc:testsDevelopmentPlan    "" ;
  oslc_qm:usesTestCase         "" ;
  oslc_rm:validatesRequirementCollection
          "" ;
  dcterms:description          "Here'\\''s a really long description that was created by typing a bunch of words." ;
  dcterms:title                "Test plan created from API" ;
  foaf:contributor             ""
]

The second thing we can do is check whether our resource creation is valid, based on the constraints in the resource shape. Let's do that:

First off, we see that all of the empty tags passed in the above API call are converted from a null value to empty strings. Taking the process:iteration triple as an example, let's go back and look at the resource shape defined for it from our GET on the Test Plan's resource shape.

As you can see, the valueType should be an Iteration-shaped resource, not an empty string.

[ a                        oslc:Property ;
  oslc:hidden              false ;
  oslc:isMemberProperty    false ;
  oslc:name                "iteration"^^<http://www.w3.org/2001/XMLSchema#string> ;
  oslc:occurs              oslc:Zero-or-one ;
  oslc:propertyDefinition  process:iteration ;
  oslc:range               process:Iteration ;
  oslc:readOnly            false ;
  oslc:representation      oslc:Reference ;
  oslc:valueShape          <http://jazz.net/ns/process/shapes/Iteration> ;
  oslc:valueType           oslc:Resource ;
  dcterms:description      "The development iteration associated with the Test Plan."^^<http://www.w3.org/2001/XMLSchema#string> ;
  dcterms:title            "Iteration"^^<http://www.w3.org/2001/XMLSchema#string>
] .

Each triple is made up of a subject (a URI), a predicate (a URI) and an object (which can be either a simple or complex object). Simple objects are primitive data types (String, Int, etc.), while complex objects are further defined as URIs. So any triple will consist of URI, URI, and either a URI or a primitive data type.
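To make the subject/predicate/object structure concrete, here is a minimal sketch in Swift of how you might model a triple. The type names and the example URIs are purely illustrative and are not part of any OSLC or RDF library.

// A purely illustrative model of a triple; these types are not from any OSLC or RDF library.
struct URIRef {
    let value: String
}

enum RDFObject {
    case resource(URIRef)   // a complex object: another URI
    case literal(String)    // a simple object: a primitive value
}

struct Triple {
    let subject: URIRef
    let predicate: URIRef
    let object: RDFObject
}

// For example, the executionEffort assertion from the Test Plan above
// (the subject URI here is a hypothetical placeholder):
let effort = Triple(
    subject: URIRef(value: "https://example.com/qm/testplan/1"),
    predicate: URIRef(value: "http://jazz.net/ns/qm/rqm#executionEffort"),
    object: .literal("42.0")
)

With this model, a complex value such as process:iteration would be a .resource case pointing at another URI, while a simple value such as executionEffort stays a .literal.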

And if we look at a few of the triples in the Iteration property definition, we see the following:

  • An oslc:Property (subject), oslc:propertyDefinition (predicate), process:iteration (object).
  • An oslc:Property (subject), oslc:occurs (predicate), oslc:Zero-or-one (object).
  • An oslc:Property (subject), oslc:valueType (predicate), oslc:Resource (object).
  • An oslc:Property (subject), oslc:valueShape (predicate), http://jazz.net/ns/process/shapes/Iteration (object).

The last one tells us that the constraints on this specific property (process:iteration) are defined at http://jazz.net/ns/process/shapes/Iteration. The definition is not part of OSLC itself; it is defined by the Jazz process implementation on this server. This is a complex object type, not a simple literal like String, Int, or Date. As such, if we provide a value for this property it will be constrained by that resource shape's assertions.

We also see this because our valueType is an oslc:Resource (which can be found at http://open-services.net/ns/core#Resource) which states it must resolve to a URI.

We can continue to resolve each of the URIs in the triple.

Looking at the second item, we see that the oslc:occurs value indicates the property exists zero or one times. If we look at the URI for oslc:Zero-or-one (http://open-services.net/ns/core#Zero-or-one), it defines the iteration property as optional and single-valued. If we provide a value, it must be a valid value as defined by the valueType (an OSLC Resource, defined as an object), and it will be constrained further by the resource shape defined at http://jazz.net/ns/process/shapes/Iteration.

Given all this, we can easily see now that our original assertion was incorrect. We cannot just use process:iteration with an empty string.

Let’s look at one other property in the TestPlan resource shape to see a simple object. The rqm_qm:executionEffort is constrained in text/turtle as follows:

[ a                        oslc:Property ;
  oslc:hidden              false ;
  oslc:isMemberProperty    false ;
  oslc:name                "executionEffort"^^xsd:string ;
  oslc:occurs              oslc:Zero-or-one ;
  oslc:propertyDefinition  rqm_qm:executionEffort ;
  oslc:readOnly            false ;
  oslc:valueType           xsd:float ;
  dcterms:description      "The execution effort that the Test Plan defined in person hour."^^xsd:string ;
  dcterms:title            "Execution Effort"^^xsd:string , "Execution Effort"@en
] .

As we can see, this is much simpler – an optional field that is constrained by the following triples:

  • An oslc:Property (subject), oslc:valueType (predicate), xsd:float (object).
  • An oslc:Property (subject), oslc:occurs (predicate), oslc:Zero-or-one (object).

So we assert a simple float, that is an optional value.

What did we learn?


Sometimes a rule of thumb is just that: simple guidance to get you started.

But in OSLC you need to be more precise. Look at the details of the assertions and ensure that you are fully compliant. You may or may not generate an error on your POST depending on the assertions.

For example, in the above POST body I am using "oslc:usesTestCase". This is not the correct property; it should be "oslc_qm:usesTestCase". However, since "oslc:usesTestCase" has no assertions defined for it, it will not cause the POST to fail.

WWDC 2022 – Day Five – Smooth Landing

Wow, this week has flown by and I've learned a lot, but I still have tons more to learn. While I have fewer sessions planned today, I am sure I can find other sessions to watch. So let's get started.

Creating accessible Single App Mode Experiences

Single App Mode locks the system to a single app. This is great for a kiosk, but you should make sure that accessibility is available. Other scenarios include medical offices and testing, and you can actually enable it via Guided Access.

Links:

  • Guided access
    • Turn it on in Accessibility settings. Triple-tap the side button and you can set restrictions and enable the ones you want in place.
    • I never actually knew about this setting, and I find it a great way to help set up a device for just email, as an example. I will have to test whether you can launch a browser from an email when in this mode.
    • As a developer you can create your own custom experiences in your app.
    • For cognitive disability:
      • Be forgiving of errors,
      • Warn users before irreversible actions
      • Reduce dependence on timing
      • Always confirm payments
      • These will promote independence for people with cognitive issues
  • Code example 
  • Adopt the UIGuidedAccessRestrictionDelegate protocol in your app to enable this feature.
  • You must provide an array of restrictions which can be addressed by Guided Access
  • You should also provide a simple user-facing title and additional detail text
  • Then implement the guidedAccessRestriction(withIdentifier:didChange:) method to get notified when a restriction is toggled. You can then post a Notification in your app to turn the restricted feature off (see the sketch after this list)
  • This feature was introduced in iOS 12.2, and you can check the status of custom restrictions by calling guidedAccessRestrictionState(forIdentifier:) at any time.
  • Single App Modes
    • You can programmatically enable this mode on the device, and all of these modes benefit from the above features.
    • Single App Mode – this is for when you want the device to always run one app. It will go back to that state on a reboot, but you need to make sure the device is supervised – use Apple Configurator to set up the device in this mode. You can only exit via the Configurator
    • Autonomous Single App Mode – for when the app goes in and out of the mode.
  • Use a single API method to do it. – UIAccessibility.requestGuidedAccessSession(enabled: true)
  • Must be device supervised.
  • You can check for this mode on any iOS 6.0 device or higher.
  • Assessment Mode – this is for testing type applications.
    • This has been unified for iOS and macOS
    • It doesn't need to be supervised, but you do have to get an entitlement
  • Accessibility API designed for Single App Mode
    • You may need to address additional items for people who are using assistive technologies. In the Configurator you can enable a handful to be always available. You can also add items to the Accessibility Shortcut menu.
      • But they need to be configured before single app mode
    • You can enable some of these via APIs
      • Use the UIAccessibility.configureForGuidedAccess(features:enabled:) API for .zoom, .voiceOver, .invertColors, .assistiveTouch, and .grayscaleDisplay
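Here is a rough sketch of what adopting that restriction delegate might look like, pieced together from my notes above. The AppDelegate class, restriction identifier, and notification name are all made-up placeholders, so treat this as a starting point rather than working code.

import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate { }

// Placeholder adoption of the Guided Access restriction protocol.
extension AppDelegate: UIGuidedAccessRestrictionDelegate {

    // The custom restrictions this app offers to Guided Access.
    var guidedAccessRestrictionIdentifiers: [String]? {
        ["com.example.app.restriction.purchases"]
    }

    // Simple user-facing title shown in the Guided Access options UI.
    func textForGuidedAccessRestriction(withIdentifier restrictionIdentifier: String) -> String? {
        "Purchases"
    }

    // Additional detail text for the restriction.
    func detailTextForGuidedAccessRestriction(withIdentifier restrictionIdentifier: String) -> String? {
        "Disable in-app purchases while Guided Access is running."
    }

    // Called when the restriction is toggled; post a notification so the rest of the app can react.
    func guidedAccessRestriction(withIdentifier restrictionIdentifier: String,
                                 didChange newRestrictionState: UIAccessibility.GuidedAccessRestrictionState) {
        NotificationCenter.default.post(name: .purchasesRestrictionDidChange, object: nil)
    }
}

extension Notification.Name {
    // Placeholder notification the app's views would observe.
    static let purchasesRestrictionDidChange = Notification.Name("purchasesRestrictionDidChange")
}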

Explore Apple Business Essentials

I wanted to learn more about how Apple is targeting SMB (small and medium businesses) customers for their services.  I also wonder if they enable APIs to let you add value added services on top of business essentials. 

Links:

  • This is a subscription based service with Cloud Storage, Devices Management, and other services for SMB
  • When signing in with a managed Apple ID, the device is automatically set up and configured, and all of the required software is installed.
  • Apple Business Manager
    • This is required to setup.  
    • You can add individuals or sync them in from MS Active Directory or Google Workspace
    • You can setup Groups
  • Subscription
    • Then you can enroll either an employee plan or a device plan (for loaners, kiosks, etc.). Each employee can have up to three devices.  A fully loaded account appears to be about $25 per employee.
  • Settings and Apps
    • There is a security tab to bring together all the defaults for their devices. You can pre-configure Wi-Fi settings and much more, so that devices are configured automatically for a user and they won't have to set up Wi-Fi.
    • Managed Apps are auto updated and uninstalled if the user signs out of the device.
    • You provide a managed ID here for the users.
  • Employee experience
    • By signing in via their managed ID the device is already setup.
    • On a personal device you go to Settings and VPN & Device Management, which allows you to sign in with a separate work account. Work data is cryptographically separated on the device. The Essentials app will be on the machine, providing access to services like device repair, apps, etc.

There were no APIs discussed in this session, but it certainly makes sense for a small firm.  At one level, I would do this for my family as a means to manage their devices, but on another, I think the family sharing I have setup is good enough.

What’s new in Screen Time API

I use Screen Time myself to help balance how much time I sit working on my machines. Setting this up has been one way that I've improved my balance between computers and physical activity. I have not looked at the APIs yet to see if it makes sense to add any of them to my apps.

Links:

This API was introduced last year, to help apps manage time for users and kids.  There were a set of Apps called out on this screen.

To be honest, I don’t recognize any of them. I may have to look them up to see what they do.

  • Highlights from 15
    • Family controls
      • Gate way to Screen Time API
      • Prevents removal
      • Provides privacy tokens 
    • Managed settings
      • Allows your app to brand a similar feature to screen time
    • Device Activity 
      • Tracked activities and identified if you exceed a threshold.
  • New Items
    • Sample app – Workload
    • In Family Controls in iOS16
      • Can authorize independent users from their own device
      • This allows for non-parental control use cases
      • You do a simple request on app launch (see the sketch after this list)
  • Once successfully authenticated it won’t prompt again.
  • Managed Settings:
    • Revamped to make it easier to use.  Especially in the data store.
    • In iOS 15 you could only have one per process. Now you can create up to 15 stores per process, and they are shared between the app and all of its extensions
  • The most restrictive settings always win.
  • Device activity 
    • Has new reporting services to create custom usage reports via SwiftUI
    • Will address privacy too
  • The session shows a sample report built using the new Swift Charts API
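Based on my notes above, a minimal sketch of the new iOS 16 individual authorization request might look like the following; the function name is mine and the error handling is simplified.

import FamilyControls

// A minimal sketch: request Screen Time authorization for an independent user.
@MainActor
func requestScreenTimeAuthorization() async {
    do {
        // .individual is the new non-parental-control case; .child covers the existing flow.
        try await AuthorizationCenter.shared.requestAuthorization(for: .individual)
        print("Screen Time authorization granted")
    } catch {
        print("Screen Time authorization failed: \(error)")
    }
}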

What’s new in PDFKit

I’ve never looked at PDFKit; however, I am thinking about adding printing features to my card tracking application.  My assumption for this session is that I will be able to create a PDF for the reports I am thinking about.  By creating PDFs, it should become easier to create the reports, as well as to share them.

Links:

  • PDFKit Review
    • Full featured framework for viewing, editing, and writing PDF files,  available on iOS, macOS and Catalyst
    • Four core classes 
      • PDFView (included in your layout)
      • PDFDocument (the file or root of the object graph)
      • PDFPage (one or more are contained in the Document)
      • PDFAnnotation (optional leaves of a Page – these are editable)
      • Check out Introducing PDFKit
  • Live text and Forms
    • In PDFs this is different than in Photos; in a PDF, if you see text it is expected to be text.
    • This allows Text selection and search
    • OCR scanning happens on demand and is done in the document itself
    • Forms are automatically recognized and you can tab thru them as you would expect
  • Create PDFs from images
    • This creates Pages from images. This is a new API
    • public convenience init(image: CGImage, andOptions options: [PDFPageInitWithImageOption : Any] = [:])
    • This is auto compressed by default
    • Options include – 
      • mediaBox (like Letter or A4)
      • rotation (portrait or landscape)
      • upscaleIfSmaller – by default if the image is larger than the media box, it will scale down to fit, 
  • Overlay Views
    • To draw on a page with Pencil
    • You can now use Overlay view on each PDF page
      • Install your overlay view 
        • Since you can have 1000s of pages in a PDF –  PDFKit will intelligently load via a new protocol
        • You must implement overlayView(for page:); the other protocol methods are optional
        • A detailed walk-thru of an implementation of this code is included in the video
  • Save your content in the PDF
    • Use the PDF annotations as the model –
      • The appearance stream can be recorded and will work across other readers
      • PDF annotations are stored in a dictionary – so you can put your own custom data in private objects
    • Override content for saving.
    • Images and PDFs are saved by default with maximum resolution. 
  • Best practices when saving
    • You can override this with .saveAllImagesAsJPEG and/or .optimizationImagesForScreen
    • .createLinearizedPDF – optimized for internet-based reading, loading the first page first. By default PDFs have always loaded from the last page
  • This is another session that I recommend going thru the video multiple times to get all the code examples.  Or you can get the transcript: https://developer.apple.com/videos/play/wwdc2022/10089/

Use Xcode for server-side development

While Swift has already been made available on platforms like IBM System-Z and Linux, most people still think of it as a client-side application language. While viewing this session I looked up whether Swift supported WebSphere and was disappointed to see this article from 2020 – IBM Stops working on Swift – https://www.infoq.com/news/2020/01/ibm-stop-work-swift-server/. Even so, I figured I could still learn how the technology works. This session shows how to do server-side code in Xcode. I have used multiple IDEs over the years and find that, even with its nuances, Xcode is my favorite IDE.

Links:

Extending an iOS application into the cloud is an example of why you may care about server-side development. Server components tend to be created using different technologies… but if you can use Swift, it simplifies this a lot. Server apps are modeled as Swift packages. Making one a web server requires you to add a package dependency on the appropriate technology. The example used Vapor, which is an open source server framework. The sample code was a simple server that echoes back data sent to it.
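For reference, here is a minimal sketch of that kind of echo server, assuming Vapor 4; the route name and fallback text are my own placeholders, not the session's actual sample.

import Vapor

// A tiny echo server (would live in main.swift of an executable Swift package).
var env = try Environment.detect()
let app = Application(env)
defer { app.shutdown() }

// POST /echo returns whatever text the client sent in the request body.
app.post("echo") { req -> String in
    req.body.string ?? "No body received"
}

try app.run()

You could then exercise it from a terminal with something like curl -X POST http://localhost:8080/echo -d "hello world", 8080 being Vapor's default port.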

By using Xcode you can not only test in a terminal window via curl, you can also write a simple iOS app to test the interaction via the simulator. Enabling simple server deploys allows for quick testing in the cloud. Many of these use cases are very much like what my day-job application does with its own infrastructure and languages. The pattern here is traditional server-based development. From a server database perspective, Swift has drivers for FoundationDB, Redis, Cassandra, Postgres, DynamoDB, MongoDB, SQLite and more. For more information you can look at the Swift server documentation page linked above.

So allowing a Swift development shop to extend its reach to server-based apps is a great story. Glad to see this one.

Bring multiple windows to your SwiftUI App

Okay, going to wrap up the day with two more sessions on SwiftUI. The number of links that these sessions are driving shows that they are really bringing together a lot of the ideas presented over the course of the week. SwiftUI has really made it easier to develop for Apple's platforms. As I mentioned elsewhere, I was really hoping that this year Apple would make a SwiftData framework to simplify the usage of things like Core Data and CloudKit. Unfortunately, they did not do that; there's always WWDC23. Well, let's dive into the last two sessions.

Links:

One thing that amazes me about Friday sessions at WWDC is how dense they are. Each of my planned sessions today is only about 15 minutes long, but they pack a ton of great content in them.

  • Scene basics
    • Scenes commonly represent a window on screen. 
      • WindowGroup (all platforms), for data-driven applications
      • DocumentGroup (iOS and macOS), for document-driven applications
      • Settings, which defines an interface for in-app settings on macOS
    • You can compose them together to extend your app's functionality.
    • Two new additions:
      • Window – a single unique window on all platforms (this is great for things like games)
      • MenuBarExtra – macOS only – a control in the menu bar (it is persistent and available as long as the app is running) – has two rendering styles: a default style, and a chrome-less window attached to the menu bar.
  • Auxiliary scenes
    • You can add an additional scene to your scene group – this will take its title and add it to the Window menu on the Mac
  • Scene Navigation
  • There are new callable types via @Environment for example:
    • \.openWindow – can present windows for either a WindowGroup or Window (see the sketch after this list)
    • \.newDocument – can create a new document window for both FileDocument and ReferenceFileDocument
    • \.openDocument – can present a document window with contents from disk (using a url)
  • Prefer passing your model's identifier rather than the model itself; the value should be Hashable and Codable.
  • You need to create a scene for that data type; the session shows the button and the scene definitions.
  • Scene customizations
    • By default you will get a menu item for each group in the file menu.  You can override this with the .commandsRemoved() scene modifier.
    • By default new windows are placed in the center of the screen – you can override this with .defaultPosition(.topTrailing), which will be used if no position has been saved before
    • There is also a .defaultSize(width:, height:) modifier
    • And .keyboardShortcut("0", modifiers: [.option, .command]) at the scene level, which allows you to open that scene with the keyboard shortcut
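Pulling those pieces together, here is a rough, macOS-flavored sketch of an auxiliary Window scene opened via the new openWindow environment value. The app, window title, id, and shortcut are placeholders, and Window, defaultPosition, defaultSize and the scene-level keyboardShortcut are the macOS 13-era additions described above.

import SwiftUI

@main
struct MultiWindowApp: App {
    var body: some Scene {
        // The main, data-driven window group.
        WindowGroup {
            ContentView()
        }

        // A single, unique auxiliary window, opened by identifier.
        Window("Statistics", id: "stats") {
            Text("Statistics go here")
        }
        .defaultPosition(.topTrailing)                          // used only if no saved position exists
        .defaultSize(width: 400, height: 300)
        .keyboardShortcut("0", modifiers: [.option, .command])  // opens this scene via a shortcut
    }
}

struct ContentView: View {
    // New callable environment value for presenting window scenes.
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Button("Show Statistics") {
            openWindow(id: "stats")
        }
    }
}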

Efficiency awaits: Background tasks in SwiftUI 

And finally, a follow-up to last year's async updates. This should improve the UI responsiveness of my card tracker app. While I have a lazy grid loading the cards view, it is still pretty slow. I think I need to put up placeholder images.

  • Stormy: a sample app – that uses background tasks
    • This API is available on watchOS, iOS, tvOS, Mac Catalyst and Widgets. It is also supported for iOS apps running on Apple Silicon Macs
    • This app prompts users to take pictures at noon if it is stormy outside. The system can schedule the refresh for noon, then run multiple tasks in the background, awaiting results via async/await before notifying the user that they should take the picture.
  • Background on Background Tasks
    • These happen during the App Refresh period for background processing.  If the app is running out of time, the system will notify it to gracefully handle processing.
    • By setting a network request as a background network request, it can be put back into a wait state and woken back up for more processing when the network responds.
  • SwiftUI API in practice
  • To create the schedule request, we call a scheduling function and register a new .backgroundTask() scene modifier. In this case we use .appRefresh("Name") (see the sketch after this list)
    • Note that this allows for a periodic background check
  • Swift Concurrency
    • URLSession has adopted concurrency, so now you can use try await on URLSession. If you want to set it up as a background session, you should change from URLSession.shared to URLSessionConfiguration.background(withIdentifier: "App specific identifier") and use that configuration in URLSession(configuration: config). Don't forget to set sessionSendsLaunchEvents = true on your config object.
    • This is really important on watchOS as all network request must be done as background sessions.
    • Since a background task may be expiring, add an onCancel: handler to your await via withTaskCancellationHandler { } onCancel: { }
    • The second closure runs only when the task is cancelled; in the sample it promotes the request to a background download
  • Update the app so it can launch in the background with a .urlSession task type using the same identifier we created earlier. It will only launch when that specific task requests it.
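Here is a minimal sketch of how I understand these pieces fit together. The "StormyNoon" identifier, checkForStorm(), and the scheduling helper are my own placeholders, and the BGTaskScheduler call shown is the iOS path (watchOS schedules its background refreshes through WKApplication instead).

import SwiftUI
import BackgroundTasks

@main
struct StormyApp: App {
    var body: some Scene {
        WindowGroup {
            Text("Stormy")
        }
        // Runs when the scheduled app-refresh task fires in the background.
        .backgroundTask(.appRefresh("StormyNoon")) {
            await checkForStorm()
        }
    }
}

// Schedule the next refresh for around noon (iOS BGTaskScheduler path).
func scheduleNoonRefresh() {
    let request = BGAppRefreshTaskRequest(identifier: "StormyNoon")
    request.earliestBeginDate = Calendar.current.nextDate(
        after: .now,
        matching: DateComponents(hour: 12),
        matchingPolicy: .nextTime
    )
    try? BGTaskScheduler.shared.submit(request)
}

// A background URLSession like the one described above: the system can relaunch
// the app when the response arrives (required for network work on watchOS).
func makeBackgroundSession() -> URLSession {
    let config = URLSessionConfiguration.background(withIdentifier: "com.example.stormy.refresh")
    config.sessionSendsLaunchEvents = true
    return URLSession(configuration: config)
}

func checkForStorm() async {
    // Fetch the forecast and notify the user if a storm is expected; elided here.
}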

WWDC 2022 – Day Four – The nitty gritty

Yesterday was a really productive day. As always, there was too much content to get to it all, but I learned a ton of new things that I want to go back and learn more about over the summer. There were multiple sessions that have led me to rethink some of my existing code in both Wasted Time and my Card Tracker app. Today's set of items goes a bit deeper on specifics that I believe will have a direct impact on my Card Tracker app, starting with how I manage photos.

What’s new in Photos Picker

The system photo picker has been updated to not require special permissions. There were sessions over the last two years that I should review, including Improve access to Photos in your app (WWDC21) and Meet the new Photos picker (WWDC20). Check out the links to those sessions. Documentation – PhotoKit

  • New Features
    • Added new types of image filters, like .screenshots, .screenRecordings, .slomoVideos, etc. These have been back-ported too.
    • You can also use .any, .not, and .or – examples are below. (I will certainly want to use these new filters in my app, which should only include .images and .screenshots.)
      • .filter = .any(of: [.videos, .livePhotos])
      • .filter = .screenshots
      • .filter = .all(of: [.images, .not(.screenshots)])
    • Sheet presentation improvements – you can now create half-height mode.
    • You can also use .deselectAssets(withIdentifiers: [identifier])
    • You can also reorder via the moveAsset
  • Platform Support
    • It is now also available on macOS and watchOS, so it is supported on iOS, iPadOS and those two as well.
    • On the iPad you have the sidebar available:
  • On macOS
  • Both pickers will also show assets in iCloudPhotos
  • On macOS, for simple picks of images or videos, the NSOpenPanel API may be enough for many apps.
  • Media-centric apps should use PHPicker
  • WatchOS Looks like this
  • However only images will show
  • Frameworks
    • Available in AppKit and SwiftUI, since I am focused on SwiftUI for my apps, I will focus on that side only
    • SwiftUI API
    • You can present via a @Binding selection: [PhotosPickerItem]
    • And using the PhotosPicker(selection: matching:) {} Item
    • Will pick best layout based on platform, configuration, and screen space
    • When loading selected photos and videos, note that some will be delayed (i.e. iCloud Photos), so show a per-item loading UI
    • It uses Transferable and can load directly into your objects via this method. Check out yesterday's "Meet Transferable" session.
    • Use FileTransferRepresentation to reduce memory footprint
    • Sample code (a sketch follows this list)
  • You will need to update the image and add a didSet in the model as you see here:
  • Note on watchOS you should consider small short interactions
  • Family Setup
    • You can also use Images stored in iCloud Photos
    • This will show a loading UI before closing
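As a sketch of the SwiftUI flow described above: the view and property names are mine, and the filter matches what I would want for the card app (.images plus .screenshots).

import SwiftUI
import PhotosUI
import UIKit

struct CardImagePicker: View {
    @State private var selection: [PhotosPickerItem] = []
    @State private var images: [Image] = []

    var body: some View {
        VStack {
            PhotosPicker(selection: $selection,
                         matching: .any(of: [.images, .screenshots])) {
                Label("Add Photos", systemImage: "photo.on.rectangle")
            }

            // Show whatever has loaded so far.
            ScrollView(.horizontal) {
                HStack {
                    ForEach(images.indices, id: \.self) { index in
                        images[index]
                            .resizable()
                            .scaledToFit()
                            .frame(height: 120)
                    }
                }
            }
        }
        .onChange(of: selection) { newItems in
            Task {
                images.removeAll()
                for item in newItems {
                    // Items load via Transferable; iCloud originals can arrive late,
                    // so a real app would show a per-item loading UI here.
                    if let data = try? await item.loadTransferable(type: Data.self),
                       let uiImage = UIImage(data: data) {
                        images.append(Image(uiImage: uiImage))
                    }
                }
            }
        }
    }
}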

Discover PhotoKit change history

Accessing the photo change history allows you to get information about edits, deletions and more. PhotoKit allows for a deep understanding of the images in your library, and it will notify you of updates to and deletions of images.

  • New Change History API
    • This uses a persistent change token that can be stored across app launches. It represents library state.
    • It is local to the device and matches the selected library.
    • Supported on all platforms that support PhotoKit
    • For each change you can get details on three types of objects, Asset, Asset Collection, and Collection List
  • At the end you have a new token.
  • Using the persistent change API you get back an identifier for each change. You would use that identifier in your app to store access to specific images, without having to store the image itself in your app.
  • If an asset returns hasAdjustments, you can update the image view in your app to reflect that it has been edited.
  • Considerations
    • Determine what is important to your app and only address them.
    • Make sure your changes run in a background thread since there may be many changes 
  • Handling Errors
    • Expired change token – older than histories
    • Change details unavailable.
    • In both cases refetch data in API
  • Cinematic Video Access
  • New Error Codes
    • File provider sync root 
    • Network error

What’s New in App Store Connect

App Store Connect is used to manage the apps I have on the App Store. It allows me to set up TestFlight builds and check the status of new users and updates.

Key Links: App Store Connect and App Store Connect API

  • Last year we got in app Events, TestFlight for Mac and more.
  • Enhanced submission experience
    • Can group multiple items into a single submission
      • Add multiple Review Items to a submission (typically in 24 hours)
      • Items can be resolved independently – but all items in a submission must be approved (or removed) before the submission can go forward.
      • Review items can be App Versions, in-App events, Custom Product Pages, or Product Page Optimization Tests
    • You can submit without needing a new app version
      • Each submission has an associated platform with its own review items. For example:
  • You can have one "in progress" submission per platform
  • If you don’t have a version in the submission the other items will be reviewed against a previously submitted version of your app.
  • There is a dedicated app review page
    • This is now available as part of the iOS and iPadOS app (previously only on the web portal)
  • App Store Connect API
    • Last year Xcode cloud, app clips and many other features were added
    • With 2.0 there is
      • In app purchases and subscriptions
        • Can create , edit and delete them
        • Manage pricing
        • Submit for review
        • Create special offers and promo codes
      • Customer reviews and developer responses
        • Build your own workflows to manage feedback and review
      • App Hang diagnostics
        • Used to only show # of changes
        • Now will include stack traces, logs, and more
    • Starting to decommission the XML feed in favor of REST APIs for access

Go further with Complications in WidgetKit

A few years back I added complications to my Watch App and Widgets to my iOS and macOS version of Wasted Time.  Apple has now merged this by making complications part of WidgetKit.  This gives me an opportunity to update my Complications and also make them available as widgets on the new iOS Lock Screen.

Links –

  1. Adding widgets to the Lock Screen and watch faces
  2. Creating Lock Screen Widgets and Watch Complications
  3. WidgetKit

Check out the Reloaded talk from earlier this week If you have not seen it already.

  • Unique to WatchOS
    • Watch Specific Family
      • .accessoryCorner
      • Add the larger circular content style, it will be 
      • The .widgetLabel modifier will draw controls for the text, gauge or progress view in the corner.
    • These are available across all platforms:
      • .accessoryRectangular (no widget label)
      • .accessoryInline (already has its own label)
      • .accessoryCircular
        • .widgetLabel can also be used here to provide text (or other information); you may need to look at the environment to decide what you show based on the label.
  • The larger text watch face will auto scale up one complication to fit.
  • Auxiliary content
  • Multiple representation
  • Migration of existing code
    • Adopt WidgetKit
      • All faces now use rich complications from 12 to 4 
  • Views are used instead of templates (see the sketch after this list)
  • Timelines are also used.
  • Upgrade existing installed complications
    • To do this, the app will run automatically on an existing watch.
    • This is handled by a new CLKComplicationDataSource API, CLKComplicationWidgetMigrator, that you should implement in your app. See more in the WidgetKit documentation listed above.
    • My approach will be to completely re-write my code to use the four classes above and remove support for watches not running watchOS 9
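Here is a rough sketch of what the family-switching view might look like for Wasted Time. The view name and values are placeholders, and this is only the SwiftUI view; the timeline provider and widget configuration (with its supportedFamilies) are omitted.

import SwiftUI
import WidgetKit

struct WastedTimeComplicationView: View {
    @Environment(\.widgetFamily) private var family
    let minutesWasted = 42.0   // placeholder value

    var body: some View {
        switch family {
        case .accessoryInline:
            // Inline renders as a single line of text.
            Text("Wasted: \(Int(minutesWasted)) min")
        case .accessoryCircular:
            // A gauge suits the circular slot.
            Gauge(value: minutesWasted, in: 0...60) {
                Text("WT")
            }
            .gaugeStyle(.accessoryCircular)
            #if os(watchOS)
            // Watch faces can draw an extra label around circular/corner complications.
            .widgetLabel("\(Int(minutesWasted)) min wasted")
            #endif
        case .accessoryRectangular:
            VStack(alignment: .leading) {
                Text("Wasted Time").font(.headline)
                Text("\(Int(minutesWasted)) minutes today")
            }
        default:
            Text("Wasted Time")
        }
    }
}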

Discover ARKit 6

I was really hoping for new hardware this WWDC, but not a new laptop… I wanted the dev kit for AR/VR from Apple.  Well it didn’t happen.  However the new ARKit 6 API may hold hints to what may come in the future.  My guess is the new Ear Joint information would definitely need to be available if you had a headset!

Links:

  1. Tracking Geographic Locations in AR
  2. ARKit
  3. Human Interface Guidelines: Augmented Reality
  4. Qualities of great AR experiences
  5. Bring your world into augmented reality
  • 4K Video
  • Note that the wide camera has special value for AR work
  • 3840×2880 is the capture resolution on the 13 Pro. The frame is then simplified by binning down to 1920×1440, which also helps in low-light environments. Roughly every 17ms you get a new image.
  • With the newer hardware you can now get access to the full 4K by skipping the binning step above. A frame arrives every 33ms, or 30 frames per second. RealityKit will scale, crop and render for you.
  • This is available on iPhone 11 and up and any M1 iPad Pro or higher
  • Camera Enhancements
    • High Resolution Background Photos
      • In an AR session, you can also capture a single photo in the background while continuing to stream 
      • Created a sample app that allows you to see where a picture was actually taken.
      • Creating 3D models using Object Capture will benefit from this feature, as you can overlay a 3D UI to provide capture guidance and take pictures at the higher resolution. There is a convenience function that allows your session to capture this via captureHighResolutionFrame (see the sketch after this list)
    • HDR mode
      • Another convenience property, isVideoHDRSupported, lets you know when you can set videoHDRAllowed = true on your session's configuration
    • AVCaptureDevice access for more fine-grained control
      • You can do this dynamically as you need it
    • Exif Tags
      • These are now available for every AR frame.
  • Plane Anchors
    • Fully decoupled plane anchor and geometry anchor
    • Information is contained in ARPlaneExtent, and hold .rotationOnYAxis defined by width, height and center 
  • Motion Capture
    • Both skeleton and Joints are detected
    • Added Ear Joint Tracking (2D)
    • And better occlusion handling (3)
  • Location Anchors
    • New cities and countries are supported for Location Anchors
    • London and many US states
    • Added 3 in Canada , Singapore, 7 in Japan, and 2 in Australia 
    • More coming later this year 
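Based on the notes above, a minimal sketch of opting into the 4K/HDR formats and grabbing a high-resolution frame might look like this; the function names are mine and error handling is simplified.

import ARKit

// Prefer the 4K video format where the hardware supports it (skips the binning step).
func runHighResolutionSession(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()

    if let fourKFormat = ARWorldTrackingConfiguration.recommendedVideoFormatFor4KResolution {
        configuration.videoFormat = fourKFormat
    }

    // Turn on HDR only when the chosen format supports it.
    if configuration.videoFormat.isVideoHDRSupported {
        configuration.videoHDRAllowed = true
    }

    session.run(configuration)
}

// Grab a single photo-resolution frame while the session keeps streaming.
func captureStillFrame(from session: ARSession) {
    session.captureHighResolutionFrame { frame, error in
        guard let frame else {
            print("High-resolution capture failed: \(String(describing: error))")
            return
        }
        // frame.capturedImage is the full-resolution pixel buffer; hand it to your photo pipeline.
        print("Captured a high-resolution frame at \(frame.timestamp)")
    }
}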

Evolve your Core Data schema

One thing that my card tracking app doesn't do is allow you to pick an event and show all the cards for that event. I have the data, but need to think thru how I would enable this feature. This session may help me out… Let's go!

Link – Using Lightweight Migration

  • What is schema migration
    • Changing your data model means you need to materialize that change in the data store.
    • If the model and the store no longer match, you won't be able to open your data store
  • Strategies for migration
    • There are built-in tools to migrate your data model. They are referred to as lightweight migration.
    • It automatically analyzes and infers the necessary migration changes
    • This happens at runtime and maps old data to new data
      • It supports adding, removing and renaming attributes, making a non-optional attribute optional, and making an optional attribute non-optional by providing a default value.
      • It also covers adding and removing relationships, changing cardinality, and renaming relationships
      • Entity changes are also available for lightweight migration: add, remove, rename, create a new parent or child, move an entity up or down in the hierarchy. You CANNOT merge hierarchies.
    • Migration is controlled by two keys
      • NSMigratePersistentStoresAutomaticallyOption
      • NSInferMappingModelAutomaticallyOption
      • If you use NSPersistentContainer or NSPersistentStore it happens for you automatically
    • Let’s see it in code: there’s a sketch at the end of this section
  • You don’t need to make a new model to make changes.  
  • A discussion on how to address non-lightweight is covered in this session.  Basically you decompose the migration steps into steps that are available for lightweight – this way you can step thru multiple migrations to get to your desired end state.
  • CloudKit schema Migration
    • If you use Core Data and CloudKit keep in mind you need to have a shared understanding
    • CloudKit doesn’t support all the features of the Core Data model
    • Unique constraints are not supported
    • Undefined and ObjectID are unavailable
    • All relationships are optional and must have an inverse
    • You cannot modify or delete existing record types or fields
    • You can add new fields or record types
    • It is essentially additive, so consider effects on older versions of the app
    • Approaches to address
      • Incrementally add new fields to existing record types
      • Version your entities
      • Create a new container and associate the new store with it; it may take an extended period of time for users to upload their data to this new store.
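
Since my note above says “let’s see it in code”, here is a minimal sketch of turning on lightweight migration by hand with the two keys.  Keep in mind that if you use NSPersistentContainer these options default to true, so you normally don’t have to do anything.

import CoreData

// Sketch: open a store with lightweight migration enabled explicitly.
// NSPersistentContainer does this for you by default.
func openStore(at storeURL: URL, model: NSManagedObjectModel) throws -> NSPersistentStoreCoordinator {
    let coordinator = NSPersistentStoreCoordinator(managedObjectModel: model)
    let options: [AnyHashable: Any] = [
        NSMigratePersistentStoresAutomaticallyOption: true,  // migrate old stores for me
        NSInferMappingModelAutomaticallyOption: true          // infer the mapping model at runtime
    ]
    _ = try coordinator.addPersistentStore(
        ofType: NSSQLiteStoreType,
        configurationName: nil,
        at: storeURL,
        options: options
    )
    return coordinator
}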

Writing for interfaces

Sometimes a session title looks interesting but I don’t spend a lot of time on the description.  This is one of those titles.  My guess was API interfaces, but it is really about how to write clear and concise information in your app (something I know I need to work on), so this session is a pleasant surprise.

Links:

  1. Apple Design Resources
  2. Human Interface Guidelines
  • The early days focused on being easy and clear, conversational with interfaces: 
  • Purpose
    • Think about what is the most important thing to know at the moment the screen is shown
    • Consider how you order things on the screen.
    • Headers and Buttons should be clear as people may skip other information
    • Know what to leave out.  Don’t overload the screen with data that could be placed elsewhere or not at all
    • When introducing a new feature, tell people why it’s there and why it’s important.
    • Every screen should have a purpose, and for the entire flow.
  • Anticipation
    • Think of your app as a conversation with the user.
    • Develop a voice for your app, and vary tone based on the interaction
    • Think about what comes next in the app flow.  This will help you in the interaction 
  • Context
    • Think outside the app: when will people use your app?  Will they be distracted?
    • Write helpful alerts – these are interruptions so make sure they are helpful and clear.  Give context, make sure the choices are clear.
    • Create useful empty states, i.e. show what the user can do.  Try not to use idioms.
  • Empathy
    • Write for everyone, regardless of who your audience is, so you don’t leave out people who may be casually interested in your app
    • Deal with Localization – when doing translation be aware of the impact to your UI.
    • Design for accessibility – consider size and voice over.   Your language should be well designed to make your app welcoming.
  • Check out the above Human Interface Guidelines to make your app accessible by as many people as possible
  • Read your writing out loud – it really helps

SwiftUI on iPad: Organize your interface

The next few sessions are all about SwiftUI and the iPad. My own apps run on multiple platforms and I am really looking forward to making them even better on the iPad.  

This is part 1 of 2 sessions.  Links:

  1. contextMenu(menuItems:preview:)
  2. EditMode
  3. List
  4. NavigationSplitView
  5. NavigationSplitViewStyle
  6. Tables
  • Lists and Tables
    • Many of the APIs shown also work on the Mac.
    • Multi-column tables should be used for dense lists
      • You now get sections on both Mac and iPadOS – check out the session SwiftUI on the Mac: Build the fundamentals (WWDC22)
      • You use a Column Builder instead of a ViewBuilder.
      • In compact view you only get the first column
      • There’s a convenience modifier to allow just a string without a viewBuilder
      • If you have a Comparable field then the column becomes sortable (but you have to handle the sorting yourself)
      • On iPad they don’t scroll horizontally so limit your columns.  On Mac you can scroll horizontally
  • Selection and menus
    • Each row has a tag, and some state to hold the tag selection 
      • The list will coordinate via a selection binding
      • A tag is a value for a view in a selectable container.  In many cases it can be auto-synthesized for you
      • To manually tag a view use View.tag(_:), but be careful: the tag type is important.
    • Selection State
      • Can be single selection, required selection or multiple selection, along with lightweight multi-select 
      • List selection no longer requires edit mode 
    • The next session will talk about toolbar buttons
    • You can also add a multi-select context menu.  This will work on multiple items, a single item or an empty area (see the sketch after this list)
      • If you use forSelectionType, it should match the selection type
  • Split Views
    • NavigationSplitView allows for two or three column views – for details go to the CookBook session from a few days ago
    • Standard Split View has a Sidebar and a Detailed view – in landscape they both show by default. In portrait the Sidebar is hidden.
    • In three column mode you get a Content View between the sidebar and the detail view. Recommended to use automatic style in three column view.
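
Here is a quick sketch of the list and table ideas above, using a made-up Card type from my card tracking app: a multi-column Table driven by a selection binding, plus a multi-select context menu keyed to the same ID type.  Treat it as a starting point rather than the session’s sample code.

import SwiftUI

// Hypothetical model type for illustration only.
struct Card: Identifiable {
    let id = UUID()
    let title: String
    let event: String
}

struct CardTable: View {
    let cards: [Card]
    @State private var selection = Set<Card.ID>()

    var body: some View {
        Table(cards, selection: $selection) {
            // String key paths use the convenience initializer, no ViewBuilder needed.
            TableColumn("Title", value: \.title)
            TableColumn("Event", value: \.event)
        }
        .contextMenu(forSelectionType: Card.ID.self) { items in
            // items is empty for the background, has one element for a single row,
            // or several for a multi-select menu.
            Button(items.isEmpty ? "New Card" : "Delete \(items.count) Card(s)") { }
        }
    }
}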

SwiftUI on iPad: Add toolbars, titles and more

This is the second part of SwiftUI on iPad.  If you skipped the prior session – go back and watch it.

Links:

  1. Configure Your Apps Navigation Titles
  2. ControlGroup
  3. ShareLink
  4. ToolbarItem
  5. ToolbarRole
  • Toolbars – provide quick actions for common features
    • You can customize toolbars, and provide many features that used to be available only on the Mac.
    • Overflow menus can be handled for you.  Change your items to a ToolbarItemGroup, which will insert individual items into the menu and auto-place them in the overflow indicator if needed.
    • There are three areas: leading, trailing and center.  Primary actions end up in the trailing area, and secondary actions are in the overflow menu by default.  But if you use the ToolbarRole modifier, you can override that behavior.
    • The editor role will move the title to the leading location, and will move secondary items into the center area.
    • User customization (an API that came from macOS) lets people adopt this feature.  Only toolbar items are customizable, and each must have a unique identifier.
    • Customizations will automatically be persisted across launches.
    • You can use ControlGroup so that items that logically belong together are added and moved as one unit.
    • You can also mark a ToolbarItem with placement: .primaryAction to make sure that it is always presented.  It will be in the trailing area and is not customizable.
  • Titles and documents
    • You can now define your own document types with properties, etc.  You can then share those documents with others via Transferable
    • You can create a menu attached to the .navigationTitle, which can then act on the whole document: Rename, Print, etc.  If you provide a document, you will get a special preview view and a Share icon for drag and drop.
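
To tie the toolbar notes together, here is a rough sketch of a customizable toolbar with an editor role and a ControlGroup.  The view and the identifiers ("editingTools", "share", "textStyles") are made up for illustration.

import SwiftUI

struct DocumentEditor: View {
    var body: some View {
        Text("Document body goes here")
            // The editor role moves the title to the leading edge and secondary items to the center.
            .toolbarRole(.editor)
            // Giving the toolbar an id makes it user-customizable, and customizations persist across launches.
            .toolbar(id: "editingTools") {
                ToolbarItem(id: "share", placement: .primaryAction) {
                    // primaryAction items stay in the trailing area and are not customizable.
                    Button("Share") { }
                }
                ToolbarItem(id: "textStyles", placement: .secondaryAction) {
                    // Logically related controls grouped so they move in and out of overflow as one unit.
                    ControlGroup {
                        Button("Bold") { }
                        Button("Italic") { }
                    }
                }
            }
    }
}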

The craft of SwiftUI API Design: progressive disclosure

My final planned session for the day is about the API design for SwiftUI.  During my day job I focus on API discovery and usability.  The application I work on has a long history and tons of APIs, but it assumes a lot of preexisting knowledge by potential users.  Getting a better view of how to understand Swift’s API design will hopefully help me in my day job too.

  • Progressive Disclosure is a base design principle.  
    • This is not unique to the design of APIs
    • The Save dialog is a great example of this principle.  It shows defaults and common values, but you can always expand the dialog to add complexity.
  • Making code feel great to use means the call site progressively exposes complexity only as functionality is needed.
  • Benefit
    • Lowers learning curve
    • Minimizes time to first build
    • Creates a tight feedback loop
  • Consider common use cases
    • Label is a great example of this.  The simple case is just text.
    • You can use an overload to provide a custom View for the Label
    • This same pattern is used across the framework
  • Provide intelligent defaults
    • To streamline common use cases, provide intelligent defaults for all the things that are not specified 
    • A great example is Text(“hello world”): with this code it will localize the string, adapt to dark mode, and scale based on accessibility, but you don’t need to provide any values.
    • Line spacing is automatic too, but it can also be set manually for your use case.
  • Optimize the call site
  • Looking at Table: the session showed a fairly complex example of how to create a table that also adds sorting and grouping of data.
  • For a simple example with just the list of data, the call site can be optimized to be much simpler (see the sketch at the end of this section).
  • Compose, don’t enumerate
    • HStack as an example: it only needs two things, the content and how to arrange it.
    • So most common use cases are simple items next to each other.  Alignment may be needed to address all three cases (leading, trailing, center).
    • What if you want to control spacing?  You could go crazy with enums for every behavior.  If you find yourself enumerating common cases, try breaking them apart.
    • As an example, you can now use Spacer() in a stack
  • D20 for the win!
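
The session’s slide code isn’t reproduced in my notes, so here is my own rough sketch of the Table idea above: the common case reads very simply, and the call site only grows once you opt into sorting.

import SwiftUI

// Hypothetical row type for illustration.
struct Person: Identifiable {
    let id = UUID()
    let name: String
    let city: String
}

// Simple call site: just the data and the columns.
struct SimplePeopleTable: View {
    let people: [Person]

    var body: some View {
        Table(people) {
            TableColumn("Name", value: \.name)
            TableColumn("City", value: \.city)
        }
    }
}

// The call site only gets more complex when you need more: adding a sort
// order binding makes the same columns sortable (you still apply the sort yourself).
struct SortablePeopleTable: View {
    let people: [Person]
    @State private var sortOrder = [KeyPathComparator(\Person.name)]

    var body: some View {
        Table(people.sorted(using: sortOrder), sortOrder: $sortOrder) {
            TableColumn("Name", value: \.name)
            TableColumn("City", value: \.city)
        }
    }
}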