What’s new in SwiftUI

SwiftUI in more places

  • Containers have been updated for visionOS – you can apply the .volumetric window style to a WindowGroup (see the sketch after this list)
  • RealityView
  • The ImmersiveSpace {} Scene type allows for full immersion, or mixed immersion to create an AR view
  • The redesigned UX in watchOS 10 is based on existing SwiftUI views, and you get new transitions
    • The .containerBackground modifier allows for transitions on push and pop
    • ToolbarItem now supports the .topBarTrailing and .bottomBar placements
    • DatePicker and selection in List have been added on Apple Watch
  • Widgets come to the iPad Home Screen and the Mac desktop, and gain interactive controls
  • New framework updates for SwiftUI – major updates to MapKit (I will need to see how I can rework my Map integrations – see Meet MapKit for SwiftUI), and Swift Charts adds scrolling charts plus donut and pie charts.
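
Putting those pieces together, here is a minimal sketch of how I understand the new visionOS scene types to fit, assuming a visionOS target; MyVisionApp, GlobeView, and SolarSystemView are placeholder names of my own, not from the session:

import SwiftUI

struct GlobeView: View {
    var body: some View { Text("Globe").padding() }
}

struct SolarSystemView: View {
    var body: some View { Text("Solar System") }
}

@main
struct MyVisionApp: App {
    // Placeholder state for which immersion style the space should use.
    @State private var immersion: ImmersionStyle = .mixed

    var body: some Scene {
        // A standard window whose content is presented in a bounded 3D volume.
        WindowGroup(id: "globe") {
            GlobeView()
        }
        .windowStyle(.volumetric)

        // An ImmersiveSpace scene for mixed (AR-style) or full immersion.
        ImmersiveSpace(id: "solar-system") {
            SolarSystemView()
        }
        .immersionStyle(selection: $immersion, in: .mixed, .full)
    }
}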

Simplified Data Flow

  • Start with a data model created in SwiftData
    • @Observable models use SwiftUI patterns for data flow – if you pass a model through intermediate views, those views won’t get invalidated as long as they don’t read a property of the model
    • See the session Discover Observation in SwiftUI
  • SwiftData models are declared entirely in code, and get persistence and observation automatically.
    • Add a .modelContainer to your App’s scene
    • Add @Query to fetch models in your views (tells SwiftData to do a fetch from the database). Also works for document-based apps. (See the sketch after this list.)
  • Document Groups – automatically get sharing, renaming and more
  • Inspectors – presented as a sidebar or a sheet depending on the platform.
  • Dialog customization
    • Help Links – 
  • Lists and Tables get new options for fine-tuning.
    • Tables – column order and visibility customization, and new disclosure table rows (for grouping)
    • Sections now have programmatic expansion.
    • Style formatting for smaller lists or tables.
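
Here is a minimal sketch of the SwiftData flow described above, based on my reading of the new macros; the Trip model, its properties, and the view names are made up for illustration:

import SwiftUI
import SwiftData

// @Model gives the class persistence and observation automatically.
@Model
final class Trip {
    var name: String
    var startDate: Date

    init(name: String, startDate: Date) {
        self.name = name
        self.startDate = startDate
    }
}

@main
struct TripsApp: App {
    var body: some Scene {
        WindowGroup {
            TripList()
        }
        // Attach a model container for the Trip model to the scene.
        .modelContainer(for: Trip.self)
    }
}

struct TripList: View {
    // @Query tells SwiftData to fetch Trip objects from the store.
    @Query(sort: \Trip.startDate) private var trips: [Trip]

    var body: some View {
        List(trips) { trip in
            Text(trip.name)
        }
    }
}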

Extraordinary animations

  • Animations can be improved with keyframes – triggered by a state change (runs multiple animation tracks in parallel)
  • Phase animators (simpler than keyframes – step through a sequence of animation phases); see the sketch after this list
  • Haptic feedback uses the new sensory feedback API – add the .sensoryFeedback modifier; check the HIG section on Playing Haptics
  • The .visualEffect modifier – no GeometryReader needed – allows an effect based on a view’s position on the screen.
  • Text supports styled interpolation, and .foregroundStyle can take a Metal shader from ShaderLibrary for cool effects.
  • Symbols now have the .symbolEffect modifier (can be applied to a single image or an entire view). Check out Animate Symbols in your app
  • You can apply .textScale to a text view.
  • The .typesettingLanguage modifier allows room for text in languages that need more space (like Thai)
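
As a small, hedged example of how a phase animator and sensory feedback might combine (the phases, emoji, and trigger value here are entirely my own invention):

import SwiftUI

struct CelebrationView: View {
    @State private var taps = 0

    var body: some View {
        Text("🎉")
            .font(.largeTitle)
            // Step through a simple two-phase animation each time `taps` changes.
            .phaseAnimator([false, true], trigger: taps) { content, phase in
                content
                    .scaleEffect(phase ? 1.5 : 1.0)
                    .opacity(phase ? 1.0 : 0.6)
            } animation: { _ in
                .spring(duration: 0.4, bounce: 0.5)
            }
            // Play haptic feedback whenever the trigger value changes.
            .sensoryFeedback(.success, trigger: taps)
            .onTapGesture { taps += 1 }
    }
}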

Enhanced interactions

  • The .scrollTransition modifier applies effects to items as they move through a scroll view (see the sketch after this list).
  • The .containerRelativeFrame modifier allows sizing things relative to other parts of the screen
  • .scrollTargetLayout()  
  • .scrollPosition lets you do something based on the position within the scroll view.
  • The .allowedDynamicRange modifier is now supported to show images in full fidelity; use sparingly (it must be a performance hog)
  • New accessibility options. You can add .accessibilityZoomAction so VoiceOver users can trigger gesture-based actions like zooming.
  • Color can look up custom colors defined in your asset catalog.
  • Menus now have improved visual styles
  • New BorderShape styles for buttons 
  • .onKeyPress – allows actions based on key presses (with modifiers)
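
A sketch of how a few of these scroll modifiers might combine, assuming the iOS 17 APIs; the color strip is just stand-in content:

import SwiftUI

struct PhotoStrip: View {
    let colors: [Color] = [.red, .orange, .yellow, .green, .blue, .purple]

    var body: some View {
        ScrollView(.horizontal) {
            LazyHStack(spacing: 12) {
                ForEach(colors.indices, id: \.self) { index in
                    RoundedRectangle(cornerRadius: 16)
                        .fill(colors[index])
                        // Size each card relative to the scroll view's container.
                        .containerRelativeFrame(.horizontal, count: 3, spacing: 12)
                        .frame(height: 160)
                        // Fade and shrink items as they enter or leave the visible area.
                        .scrollTransition { content, phase in
                            content
                                .opacity(phase.isIdentity ? 1.0 : 0.4)
                                .scaleEffect(phase.isIdentity ? 1.0 : 0.85)
                        }
                }
            }
            // Mark this layout as the target for view-aligned snapping.
            .scrollTargetLayout()
        }
        .scrollTargetBehavior(.viewAligned)
    }
}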

What’s new in Swift

Swift Project updates

Expressive Code

  • if/else and switch can now be used as expressions – this simplifies initialization and other conditional assignments (see the sketch after this list)
  • Improved error messages that point more directly at where the problem is (this will come in handy with SwiftUI)
  • Swift generics preserve type identity, and this has been extended to handle multiple argument types with parameter packs. You’re no longer limited in the number of arguments that can be passed; use <each Result>
  • Macro system –
    • Macros are APIs, so you import the module that defines them
    • The @Observable macro is key to turning a class into a fully observable type in your code (see the Expand on Swift Macros session)
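
A quick hedged illustration of the if/switch-as-expression change and the @Observable macro, using made-up types:

import Observation

enum Theme { case light, dark, system }

// Swift 5.9: switch (and if/else) can be used directly as an expression.
func backgroundName(for theme: Theme, isNight: Bool) -> String {
    let name = switch theme {
    case .light: "day"
    case .dark: "night"
    case .system: isNight ? "night" : "day"
    }
    return name
}

// The @Observable macro makes every stored property of the class observable,
// replacing the older ObservableObject / @Published pattern.
@Observable
final class Player {
    var name = ""
    var score = 0
}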

Swift everywhere

  • Low ceremony code to make it easier to scale
  • Future of Foundation – starting a rewrite of Foundation for Apple and non-Apple platforms. This moves the implementation of Foundation to Swift.
    • Drives major performance improvements across Calendar (20%), date formatting (150%), and JSON coding (200–500%)
    • New opt-in capabilities focused on code ownership. Check it out on the Swift forums
  • You can now interoperate between C++ and Swift. There is a Swift forum thread driven by a workgroup focused on this.
  • CMake now supports Swift and can be used to mix and match C++ and Swift build files; there is a sample repo to help you get started
  • Actors and Concurrency
    • This section explained synchronization and how actors and tasks behave (a minimal actor sketch follows below)
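
The session goes much deeper, but as a minimal reminder of the model, here is a toy actor sketch (entirely my own example, not from the talk):

// An actor serializes access to its mutable state, so concurrent callers
// never race when incrementing the counter.
actor VisitCounter {
    private var count = 0

    func increment() -> Int {
        count += 1
        return count
    }
}

func recordVisits() async {
    let counter = VisitCounter()
    // Tasks run concurrently, but each call hops onto the actor one at a time.
    await withTaskGroup(of: Void.self) { group in
        for _ in 0..<100 {
            group.addTask { _ = await counter.increment() }
        }
    }
    let next = await counter.increment()
    print(next - 1)   // 100 – all increments were applied without data races
}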

Case Study

What’s new in Xcode 15

Editing

  • Improved code completion
  • Improved asset handling so assets are supported in code completion and renames
  • Strings for internationalization will now show if they are stale (that’s cool)
  • Improved documentation styling and a new assistant which shows documentation previews.
  • Check out Create Rich Documentation with DocC
  • Support for Swift macros – they are used in the standard library, Foundation, and SwiftData frameworks – learn more about @Observable, #Predicate, and @Model
  • Cmd-Shift-A allows you to access any menu option quickly.
  • To expand a macro, use Cmd-Shift-A and choose Expand Macro
  • (See the Expand on Swift Macros and Write Swift Macros sessions.)

Navigating

  • Bookmarks navigator (in the left panel) to get to things you’ve flagged.
  • Great for Find Queries
  • You can do “Conforming Types” queries in the Find panel

Sharing

  • Changes navigator and commit editor – allows you to view modifications in a single scrolling view.
  • Staging bar allows for easy unstaging

Testing

  • Updated test navigator (rewritten in Swift), 45% faster
  • Can filter tests by result types, etc.
  • Insights on patterns of results
  • Playback of UI tests is supported with video captures, etc., via the test details view.

Debugging

  • OSLog integration within Xcode – the Xcode console now allows filtering, shows severity via color, etc., and you can jump to the line of code from a log entry (a small logging sketch follows).
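
For reference, a tiny hedged sketch of the kind of structured logging that feeds this console experience; the subsystem and category strings are placeholders:

import OSLog

// A categorized logger; subsystem and category show up as filters in the Xcode console.
let logger = Logger(subsystem: "com.example.myapp", category: "networking")

func loadProfile() {
    logger.debug("Starting profile request")
    logger.error("Profile request failed with status \(404, privacy: .public)")
}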

Distributing

  • Xcode Cloud handles versioning, signing, and distribution; adds TestFlight test details to provide information to your testers, and will also notarize your Mac apps.
  • XCFrameworks now have signature verification, which shows who produced and signed the framework.
  • Privacy manifests allow for an improved privacy report for your app.
  • Most common distribution settings are bundled with Xcode.

WWDC 2023 – Day 1 – Platform State of the Union

Time to get into the details… Ok, this is just a post of my raw notes, no real analysis. I am looking forward to starting all the sessions tomorrow. But tonight I still have to watch the Design Awards.

  • Darin Adler – VP of Internet Technologies and Privacy
  • Language, Frameworks, Tools, and Services come together to make a Platform
  • Over 300 frameworks on Apple’s products. 
  • Interesting to see Apollo Reddit app on their main screen
  • Swift and SwiftUI
    • Holly Borla – Engineering Manager
    • New kind of API – Swift Macros, i.e. annotations
      • Generates code in Xcode 
      • You can expand Macros to see what they will do.
      • All macros provide custom feedback in Xcode
      • Attached macros allow you to add functionality to your existing code, like adding @AddAsync to a function. You can expand the macro while debugging your code so you can see what it does.
      • Many of the new APIs will use Macros
    • Can now do bidirectional interoperability with C++
    • SwiftUI – updates
      • New support for pie charts
      • Expanded MapKit support
      • Animation updates
        • Auto transfer gesture velocity to animation
        • .spring(duration:, bounce:) 
        • Animation in Symbols are now supported
        • Animation Phase
        • Full support for Key Framing
      • DataFlow – getting simpler, you will only have to deal with @State and @Environment
        • You can use @Observable on a class (via the annotation); it sets up observation for you, so you don’t need the older observable-object property wrappers
      • SwiftData is the Swift-native successor to Core Data.
        • Uses annotations with the @Model macro and @Query.
        • Update your widget with the same @Query so your widget can surface the same data
    • App Experiences — Johnathan
      • WidgetKit
        • Make your app more glanceable and usable.
        • Minor updates make widgets available in the new iPhone StandBy mode
        • Available on iPad and the Mac desktop
        • New widget architecture allows for continuity 
        • Identify your background and padding
        • Adding buttons or toggles allows for interactivity in your widget
        • Can see widget timeline in Xcode previews… (this may help me fix my own app’s widget issue)
      • AppIntents
        • If you wrap your intent in an App Shortcut it will show up next to your app in the Shortcuts app
      • TipKit
        • Feature discovery in your app; allows you to educate your users.
        • You can create simple targeting rules for when a tip is shown.
      • AirDrop
        • Tap devices together from the share sheet to pass data between them.
  • Hardware Features
    • Gaming
      • Easier than ever to port games to Mac
        • It evaluates your existing game via an emulation environment
        • Convert and compile shaders via the Metal shader converter (supported in Xcode and on Windows)
        • MetalFX allows for straightforward porting of code
    • Camera
      • AVCapture
        • Zero Shutter Lag, Overlapping Captures, Deferred Processing (move Deep Fusion into the background)
        • Later this year – Volume Shutter buttons
      • HDR-capable displays improve how photos can be presented. There has been no industry standard for HDR photos, so Apple has been working with the industry to create an ISO HDR standard for photos.
      • Video Conferencing
        • Reactions etc. are added in the camera by default, but your app can detect when they are being used and update accordingly
        • ScreenCaptureKit allows for better screen sharing (just share the two apps you want)
        • Added external camera support with iPad and tvOS
        • Continuity Camera can be used easily in code.
    • WatchOS
      • containerBackground, pagination, ToolbarItem, NavigationSplitView, and the new spring animation. If you are using SwiftUI it will be automatic for you
      • New custom workout API
      • New core motion API – for swing analysis (like golf or tennis)
  • Values
    • Chris Fleizach Sr. Manager Accessibility
      • 1.3 B people with disabilities around the world.
      • Added ability to detect doors in magnifier app.
      • For users sensitive to animation and flashing lights –
        • Pause Animated Images – stops animation in animated GIFs
        • Dim Flashing Lights – will darken the screen automatically when flashing lights occur (automatic if you use AVFoundation) – Apple open-sourced this algorithm
      • VisionOS has tons of Accessibility features built in from the start
    • Katie – Privacy
      • Privacy prompt improvements, like add-only permission for Calendars. Photos has a new picker 
      • App Privacy – helps you show users how you protect their data.
        • For third-party SDKs – privacy manifests, which will be combined into a summary 
        • Supply chain signatures for third-party SDKs to validate they were signed correctly
      • Communication Safety
        • This was added to Messages in iOS 15
        • It is now available across the entire platform via the Sensitive Content Analysis framework, with blurring added too.
    • Chris Markiewicz – App Store
      • Merchandising UI for improved exposure in StoreKit
      • Can add a subscription store view easily and it works across all platforms.
      • Will automatically determine the version to show the user
      • SKAdNetwork – for measuring downloads based on advertising; can now measure re-engagement.
  • Tools
    • Ken Orr – Sr. Mgr Xcode and Swift Playgrounds
      • Xcode –
        • Code completion will use surrounding code for prioritization, will also automatically define symbols for your assets
        • Previews – using Swift macros
        • Works across all UI Frameworks
        • Git Staging is integrated in Xcode
        • Can see unpushed commits
        • Testing –
          • Redesign of test report, includes video recording of test results with navigation timeline.
          • Can show accessibility frames
        • Xcode cloud
          • Have improved workflows
          • Added tester notes for TestFlight
          • Linker speed increased
  • Deep dive into visionOS
    • Michael – VP
      • Fundamentals – apps are composed of:
      • Windows
      • Volumes
      • Spaces
      • Dedicated Full Space
      • SwiftUI and UIKit for UI
      • RealityKit for visuals and spatial audio
      • ARKit for objects
  • With SwiftUI you can add depth or a 3D object for layering. Vision apps can then separate those depth objects
  • Using .offset(z: value)
  • Creating a Volume in SwiftUI
  • And SwiftUI renders through RealityKit
  • Ornaments are for controls and toolbars
  • For full scenes – you want to use RealityKit
    • Automatically adjusts to lighting conditions
    • Can also create portals into 3D Scenes 
    • Dynamic Foveation – uses eye tracking to focus processing power on the area the user is actually looking at
    • IBL – Image Based Lighting object for customizing lighting effects
    • MaterialX is the shader format for the environment
    • Can add attachments to 3D objects
  • ARKit – allows objects to interact with the real world
    • Always running – persistence and world mapping is automatically handled for you
    • Plane Estimation
    • Scene ??
    • Image Anchoring
    • World Tracking 
    • Skeletal Hand Tracking
  • Accessibility
    • People with physical or motor issues can use their eyes or voice
    • Can use finger, arm or head for selection
    • Supports VoiceOver information 
    • Dynamic Type, Reduced Transparency and Accessibility UX features are available
  • These Busca-Aegeria – development tools
    • Xcode has it all in it.
    • Simulator has multiple scenes for lighting
    • Mac Virtual Display 
    • Reality Composer Pro – to import and organize 3D Objects and integrates in Xcode for development workflow
  • Jason – Tour
    • Demo of Reality Composer Pro
    • TestFlight will be available on the device from the start
  • Unity – using their tools on top of RealityKit
    • They can coexist in the shared space 
    • Ralph from Unity
      • Can bring over Unity work and reimagine it for Vision Pro
        • Demo of What the Golf
  • Additional Features
    • Goals – Use Features and Preserve Privacy
    • Virtual sound should behave as if it were real in the room. There is real-time spatial audio rendering.
    • User Input is private by design.
    • Sharing and Collaboration, can use share play to share any window on FaceTime call.
    • Personas are in 3D for other Vision Pro users…
    • Spatial Personas will show them in the same space (outside of the window).
      • Will be available as a developer preview later this year
  • What’s next
    • Start designing, developing, and testing; the Vision Pro dev kit will be available later this month.
    • To see how your app works, there will be Vision Pro labs – in the US in Cupertino, and in Germany in München. 
    • Brand new App Store 

WWDC 2023 – Day 1 – So far living up to my hopes

Love the intro animation, very trippy… 


Good morning! 15th anniversary of the App Store, which changed everything. Nice to start the discussion with developers. Time for some breakthrough experiences.

Over to John to talk about the Mac. (Interior shot)

  • 15-inch MacBook Air (nice to see a female exec with the details) – 18-hour battery life! Definitely a nice machine for the people. Nice to show copy and paste with the phone. Starting at $1299. (This may be the one to replace Susie’s machine – the screen size and even the lowest spec would blow away her current machine.)
  • Pro machine update
    • Mac Studio Specs
    • Mac Studio – used at NBC for SNL. Getting an update (Jennifer Munn – Dir. Product Engineering) – getting the M2 Max and adding the M2 Ultra: 24-core CPU, 76-core GPU, 32-core Neural Engine, and 192GB of unified memory. Can support six Pro Display XDRs.
    • Mac Pro Summer 2023
    • Mac Pro with Apple silicon, adding PCI expansion to the Mac Pro case. All are based on the M2 Ultra, with the equivalent of seven Afterburner cards built in.
  • All machines are orderable today and will ship next week. This completes the transition to Apple silicon.

Software platform time… (Craig)

What's new in iOS
  • iOS
    • Updates to Phone, FaceTime, and Messages – personalized contact posters, so people who get your call see the screen you want to send.
      • It’s like personalizing your Lock Screen.
      • This is using CallKit 
      • Live Voicemail – should you answer or not? Live transcription in real time to help you decide if you want to answer the phone. 
      • When you call someone on FaceTime you can leave a message
      • Messages –
        • Better search filters to add people or terms to find data
        • Jump up to the first message you haven’t seen
        • Swipe to reply inline
        • Transcribed messages
        • Inline location.
        • Check In to see that people made it home okay, with shared data – end-to-end encrypted.
        • New Stickers
          • Add effects
          • Tap and rotate
          • And they are system wide
      • AirDrop updates
        • NameDrop on phone, watch, etc. to hand off contact information 
        • Bring devices close together – can share pictures, SharePlay, etc. 
      • Keyboard and Dictation improvements.
        • On-device machine learning improves autocorrect, using a transformer language model. (Notice they say transformer language model instead of large language model.)
      • Adept Ullal – special moments (new feature)
        • Journal (app) – coming later this year. There is a new Suggestions API to help in the Journal app, also usable by developers. On-device processing, E2E encryption, and the ability to “lock” your journal.
      • StandBy – clock, the power of widgets, pictures, and smart stacks. 
      • Now you can just say “Siri” instead of “Hey Siri”
      • Offline maps
      • Improved People albums, and pets – because they are people too
iPadOS 17
  • iPadOS
    • Final Cut and Logic Pro on iPad (just last month)
      • Widgets on the Lock Screen, and they are now interactive!
      • Love the Astronomy screen for iPad
      • Live activities on Lock Screen too
      • Health on iPad
      • PDF improvements
        • Auto-recognize fields in PDFs
        • Add signatures, and reply back in Mail as a completed form
        • The Notes app will auto-expand PDFs for viewing and annotation with Apple Pencil, with live collaboration!
      • Stage Manager improvements – positioning and sizing, and use the camera in an external monitor
Games new and coming
  • macOS
    • The name is Sonoma – yes, it is about wine
    • New screen savers
    • Widgets – available somewhere new – i.e. on the desktop … widgets are faded or tinted based on what you are doing.
    • Continuity will allow widget handoff from iPhone to Mac
    • Gaming (macOS)
      • Metal 3 – huge performance 
      • Game Mode – prioritizes the CPU and GPU for the game being played.
      • Game Porting Toolkit – to see how to make conversion simpler for shaders and graphics code
      • The Death Stranding game is coming to Mac later this year.
    • Video conferencing enhancements – Presenter Overlay, reactions overlay
    • Safari
      • WebKit improvements 
      • Privacy improvements – private browsing locks windows, blocks all trackers, and removes trackers from URLs
      • Sharing passkeys and passwords with your group (E2E encrypted)
      • Profiles – set up work, personal, etc.; these can be tied to Focus modes
      • Web apps created from a web page – just Add to Dock from the File menu
  • AirPods updates
    • Adaptive Audio – combines noise cancellation and transparency, dynamically blending to match the conditions of the surroundings.
    • Personalized volume to give best media experience in the moment
    • Conversation awareness – start talking and it turns down the music and focuses on the voices in front of you
    • Calls on the go, will leverage adaptive audio
    • Improved automatic switching
  • Airplay updates
    • On device intelligence will learn what device you should share to
    • AirPlay support in hotels – to AirPlay to the TV in your hotel room.
    • Apple Music / CarPlay – SharePlay in the car.
  • tvOS and apple TV
    • New Control Center.
    • Use your iPhone to find the remote
    • Photos to the TV easily
    • FaceTime on the TV (uses continuity camera)
    • FaceTime API and Continuity API for developers
watchOS 10 Features
  • watchOS
    • Comprehensive app redesign across the platform
    • Turn the crown for widgets using the Smart Stack
    • Widgets can hold complications
    • The World Clock app was updated
    • Activity app redesigned
      • Added Cycling
        • Connect to Bluetooth cycling computers
        • In watchOS 10, cycling becomes a full-display Live Activity on your phone.
      • Added Hiking
        • New waypoint tracking (e.g., where you last had cell coverage)
        • Also added a topographic map and elevation details
    • Two new watch faces – Palette and Snoopy / Woodstock 
    • Health improvements on Apple Watch
      • Mental health – reflecting on your state of mind to improve wellness, from the Mindfulness app
        • Can also do this on iPhone 
        • Assess current risk of depression or anxiety
      • Vision health – Myopia
        • 80–120 minutes a day outside in daylight (measured using the ambient light sensor)
        • Screen Distance – uses the TrueDepth camera to measure if the device is too close
    • All info is encrypted on device and only unlockable with your lock code, passcode, etc.
  • Back to Craig
    • Lots of new APIs across all platforms, 175 sessions
  • Back to Tim
    • One more thing!!! Years in the making – AR is a profound technology
  • Apple Vision Pro! – A new kind of computer
    • Eyes, hands, and voice control
    • Spatial computing
    • (Alan Dye)
    • Home view looks like watch
    • Environments transform the space for you
    • This is the holo deck
    • Subtle gestures
    • “EyeSight” – the pass-through of your eyes
    • Always in sync with your devices – via iCloud
    • Change focus by turning your head
    • 3D objects – pull one out and look at it from any angle
    • “You have the room to do it all!”
    • Works with the Magic Keyboard and trackpad. Look at your Mac and bring it into view
    • Remote collaboration and integration with FaceTime and Siri… you can now put people around you (life-sized)
    • At home –
      • The power of spatial computing – capture and experience photos and videos in new ways
      • Panoramas allow you to feel like you are standing where you took them
      • First 3D camera! Allows you to capture 3D video
      • Movie theatre experience with spatial audio.
      • Game controller support, 100 Games available on day 1.
    • Disney on stage – this will allow them to bring fans closer to the characters they love
      • Partnering the greatest storytelling company in the world with the greatest technology company
      • Disney+ will be available on day 1
    • Richard Howarth to talk about the design
      • 3D-formed and laminated glass.
      • Quiet thermal design, textile parts; it flexes and conforms to your face. You don’t need AirPods; the band is changeable, with micro adjustments. Custom optical inserts.
      • All-day use when plugged in, two hours on battery.
    • Mike – Technology
      • Advanced real-time sensor system, spatial audio, and incredible resolution
        • 64 pixels in the space of one iPhone pixel; 23 million pixels total (more than a 4K TV for each eye)
        • 3-element lens… true 4K and UHD range.
      • Audio –
        • Integrated audio pods allow for ambient sound matched to your room (using audio ray tracing)
      • Eye tracking
        • You don’t need hand controllers.
      • The foundation is Apple silicon – an M2 plus a new chip called R1
        • Virtually eliminates lag: 12 ms – 8 times faster than the blink of an eye
        • FaceTime creates an authentic representation of you – you enroll your face to create a digital Persona that matches you
        • visionOS – designed from the ground up for spatial computing.
    • Susan Prescott – Developers
      • Reimagine existing apps and design new ones
      • A few developers played with it
        • The Heartbeat to give a new education view
        • SigSpace – 
        • Stagent from PTC
        • SkyGuide – this looks way cool!
      • Reality Composer Pro
      • Same frameworks from iPad and iOS are available – so available at launch
      • Unity will be brought to visionOS
      • All to be covered in the State of the Union
    • Mike –
      • Privacy and Security
        • Optic ID, encrypted, never leaves the device, etc.
        • Where you look is not communicated to apps, only where you touch with your hand
        • Camera data is handled at the operating-system level only
      • Filed over 5,000 patents during its development
      • $3,499 starting next year.

Looking forward to WWDC 2023

Author wearing Apple Glass

Tomorrow is the start of WWDC 2023, and the rumors are all coalescing around the idea of the Apple Headset. It’s been 9 years since I tried on, and ultimately bought, Google Glass, so the promise of an Apple headset is more than exciting to me.

While people are complaining about a rumored price of $3,000, I don’t consider that to be unreasonable for a developer kit. The ability to experiment with a new piece of Apple tech is pretty exciting to me. I just hope that they make it available quickly, like they did years ago when the M1 Mac mini was provided for developers (even though they indicated that the dev kit was running an A12 processor).

But there is a whole lot more to look forward to at this year’s WWDC: a revamp of watchOS, iPadOS Lock Screen updates, more advancements with SwiftUI, and hopefully improvements between Core Data and CloudKit. Yeah, I know, that last one is my pipe dream. I’ve been hoping for years now that Apple would take a SwiftUI approach to Core Data, making it easier to use across local and remote data synchronization.

As has been the case for the last few years, Apple is not holding WWDC in person (as such). They are continuing to stream the sessions from Tuesday to Friday, releasing content at 8am PDT each day. To that end, I am taking the week off from my day job and time-shifting to California hours, spending 8 or more hours a day trying to get through as much content as I can. There is some benefit to this, as I can back up and replay sessions to learn the content better; but I find that I learned so much more when WWDC was in person and I could talk to others between sessions, cementing the ideas in my long-term memory.

I will be writing a daily blog on the sessions I find most interesting. Of course, on Monday I will be watching the Keynote, State of the Union, and Apple Design Awards, and I hope to provide a summary of the salient information from each of those.

Let me know what you are excited to hear about!

XROS Thought – Eyes

Last night, while I was trying to doze off, I started thinking through the expected announcement of Apple’s VR headset. I had read the rumor that one thing Apple was going to do is give people the ability to see the user’s eyes by having a screen on the outside. This way, when you are viewing through the headset in “pass-through” mode, you can look eye to eye.

I’ve always said that Apple lets you know what is coming if you just look closely enough. This got me thinking about the FaceTime change a few years back that dynamically adjusted the video so that the eyes always looked at the camera.

This feature now makes sense… If I am in the VR space and look up at the popup contact card that tells me who I am talking to, I don’t want them to notice that I am not looking at them.

Just a thought.

GPT vs. Llama

Like many in tech, I’ve been looking at various Large Language Models lately. One of the most fascinating use cases is using ChatGPT to write code, which can be a productivity enhancer for developers.

As part of my day job, I work with OSLC compliant applications. If you are not familiar with OSLC, check out this post in my ELM blog. OSLC is complex and powerful. The open nature of the standard means that there are very few constraints, which allows customers and application vendors to create very powerful integrations and extensions.

Getting started writing OSLC applications requires a lot of work, and this should be a perfect opportunity for using an LLM to write the boilerplate. As such, I decided to try two different tools. Instead of starting with ChatGPT, I thought I would use Meta’s LLaMA model. We had talked about it just yesterday on my weekly podcast. If you haven’t listened to it yet, go check it out at GamesAtWork dot Biz.

After recording that episode, I used the instructions from one of our show links and set up llama.cpp on my Mac. Here’s how the conversation went:

DevPro llama.cpp-master % ./main -m ./models/30B/ggml-model-q4_0.bin -n 256 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-bob.txt

main: seed = 1679443809

<skipping a bunch of loading information>

main: prompt: ‘ Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User’s requests immediately and with precision.

User: Hello, Bob.

Bob: Hello. How may I help you today?

User: Please tell me the largest city in Europe.

Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.

User:’

<More log skipping >

== Running in interactive mode. ==

 – Press Ctrl+C to interject at any time.

 – Press Return to return control to LLaMa.

 – If you want to submit another line, end your input in ‘\’.

 Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User’s requests immediately and with precision.

User: Hello, Bob.

Bob: Hello. How may I help you today?

User: Please tell me the largest city in Europe.

Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.

User:what is OSLC

Bob: OSLC is an open source collaborative project that enables sharing of information between systems and applications.

User:Write code to retrieve a requirement from a OSLC provider

Bob:OSLC is an open source collaborative project that enables sharing of information between systems and applications.

User:can you write a python program to retrieve a requirement from an OSLC Provider?

Bob:OSLC is an open source collaborative project that enables sharing of information between systems and applications.

User:Can you write any code to utilize OSLC?

Bob:OSLC is an open source collaborative project that enables sharing of information between systems and applications.

User:can you provide any more details

Bob:OSLC is an open source collaborative project that enables sharing of information between systems and applications.

conversation with llama

As you can see, Bob was not very helpful. To be honest this is not a big surprise. I would assume that the Meta model is mainly based on information it found on Facebook or Instagram. Not really a hotbed for development tool vendors or users.

Next I tried the same prompts with ChatGPT – let’s see what happened:

ME: what is OSLC

ChatGPT: OSLC stands for “Open Services for Lifecycle Collaboration”. It is a set of open standards that define how different software tools can interoperate and exchange data with each other, particularly in the context of product development and lifecycle management.

OSLC provides a standardized way for different tools to communicate with each other, reducing the need for custom integrations between individual tools. This helps organizations to better manage their product development process, improve collaboration between teams, and ultimately accelerate time-to-market.

OSLC defines a set of REST-based APIs (Application Programming Interfaces) that enable tools to exchange data with each other in a standardized way. This includes APIs for accessing and updating artifacts such as requirements, designs, test cases, and defects, as well as APIs for managing links between these artifacts.

OSLC is widely used in industries such as aerospace, automotive, and software development, where there is a need for complex tools to work together seamlessly.

ME: Write code to retrieve a requirement from a OSLC provider

ChatGPT: To retrieve a requirement from an OSLC provider, you would typically use the OSLC REST APIs to make a GET request to the provider’s endpoint for requirements. Here’s some sample code in Python that demonstrates how you might do this:

import requests

# Define the URL of the OSLC provider's requirements endpoint
requirement_url = 'https://example.com/oslc/requirements'

# Define the ID of the requirement you want to retrieve
requirement_id = '12345'

# Define the OSLC query string to retrieve the requirement with the specified ID
oslc_query = '?oslc.where=dcterms.identifier="{}"'.format(requirement_id)

# Define the headers to send with the request (including authentication if required)
headers = {
    'Accept': 'application/rdf+xml',
    'Authorization': 'Bearer <access_token>'  # Replace <access_token> with your actual access token if required
}

# Send the GET request to the OSLC provider's requirements endpoint with the specified query string and headers
response = requests.get(requirement_url + oslc_query, headers=headers)

# Check the response status code to see if the request was successful
if response.status_code == 200:
    # If the response was successful, extract the requirement data from the response body (which may be in RDF format)
    requirement_data = response.text
    print('Requirement data: {}'.format(requirement_data))
else:
    # If the response was not successful, raise an exception or handle the error in some other way
    raise Exception('Error retrieving requirement: status code {}'.format(response.status_code))

Note that the exact code required to retrieve a requirement from an OSLC provider will depend on the specific APIs and authentication mechanisms provided by the provider. The above code should serve as a starting point and may need to be modified to work with your particular provider.

conversation with public version of ChatGPT

Now that’s a very different response. ChatGPT not only understood the context of OSLC that I was looking for, it also provided a much more meaningful description of OSLC.

While the code won’t provide me an out-of-the-box solution, as a Python developer it provides you with enough of a framework to get started. I know that you will need some level of authentication, and that you need to accept the response back as an “rdf/xml” document. Knowing this, I thought, “hey,” let’s see what I would get back if I asked about the authentication code:

can you provide sample authentication code for this Oslo code

Sure, here’s an example of how you might authenticate with an OSLC provider using OAuth 2.0 and retrieve a requirement:

import requests
from requests_oauthlib import OAuth2Session

# Define the OAuth 2.0 credentials for the OSLC provider
client_id = 'my-client-id'
client_secret = 'my-client-secret'
authorization_url = 'https://example.com/oauth2/authorize'
token_url = 'https://example.com/oauth2/token'

# Define the URL of the OSLC provider's requirements endpoint
requirement_url = 'https://example.com/oslc/requirements'

# Define the ID of the requirement you want to retrieve
requirement_id = '12345'

# Define the OSLC query string to retrieve the requirement with the specified ID
oslc_query = '?oslc.where=dcterms.identifier="{}"'.format(requirement_id)

# Create an OAuth 2.0 session using the OAuth2Session class from the requests_oauthlib library
oauth_session = OAuth2Session(client_id, redirect_uri='http://localhost')
authorization_url, state = oauth_session.authorization_url(authorization_url)

# Send the user to the authorization URL and wait for them to grant authorization
print('Please go to this URL and authorize the application: {}'.format(authorization_url))
authorization_response = input('Enter the full callback URL after authorization: ')

# Fetch the access token using the token URL and the authorization response from the user
token = oauth_session.fetch_token(token_url, authorization_response=authorization_response, client_secret=client_secret)

# Define the headers to send with the request, including the access token
headers = {
    'Accept': 'application/rdf+xml',
    'Authorization': 'Bearer {}'.format(token['access_token'])
}

# Send the GET request to the OSLC provider's requirements endpoint with the specified query string and headers
response = requests.get(requirement_url + oslc_query, headers=headers)

# Check the response status code to see if the request was successful
if response.status_code == 200:
    # If the response was successful, extract the requirement data from the response body (which may be in RDF format)
    requirement_data = response.text
    print('Requirement data: {}'.format(requirement_data))
else:
    # If the response was not successful, raise an exception or handle the error in some other way
    raise Exception('Error retrieving requirement: status code {}'.format(response.status_code))

Note that the exact code required to authenticate with an OSLC provider will depend on the specific authentication mechanism used by the provider. The above code demonstrates how to authenticate using OAuth 2.0, but other mechanisms such as Basic Authentication or client certificates may also be used. Consult the provider’s documentation for details on how to authenticate with their APIs.

ChatGPT Response
Question on authentication (ChatGPT)

As you can see, knowing the context allowed ChatGPT to pull in more details about the OAuth 2.0 approach for accessing the API. Having written my own example on my ELM blog, I can tell you this is a good start to help you understand OSLC.

It’s not 100% there, but it certainly will get you asking the right questions. I believe that is where AIs will start providing real value to development: not replacing developers, but helping new developers learn some basics, and letting experienced developers focus on the hard problems… for now.

Minor Wasted Time Update Submitted

Since completely rewriting Wasted Time in SwiftUI and getting it to run nearly the same code across macOS, watchOS, iOS, iPadOS, and tvOS, I’ve not been happy with the way I’ve handled the fonts.

Today I took a step back and made changes to better handle the needs of larger devices.

Ultimately the code is pretty easy. First I added a new computed property on most screens. This computed property works on all platforms EXCEPT for watchOS. Here it is –

// True when running on an iPad-style device.
// UIDevice isn't available on watchOS, so fall back to false there.
var isIpad: Bool {
    #if !os(watchOS)
        UIDevice.current.userInterfaceIdiom == .pad
    #else
        false
    #endif
}

All this does is define a simple boolean indicating if you are on an iPad-like device. The reason there is a compiler directive looking for watchOS is that UIDevice is not available in watchOS. Luckily, we can just set isIpad to false if we are on the watch.

Second, I define a new extension on SwiftUI’s Font with two computed properties:

extension Font {
    // Body text on the compact platforms; the larger title font for Mac Catalyst builds.
    static var systemSpecificBody: Font {
        #if !targetEnvironment(macCatalyst)
        #if os(iOS) || os(watchOS) || os(tvOS)
            .body
        #else
            .title
        #endif
        #else
            .title
        #endif
    }

    // iPad gets the larger title font as well.
    static var iPadSpecificBody: Font {
        .title
    }
}

These two computed properties return the font I’d like to use for text. The reason I have two is so that I can use my previously defined boolean (isIpad) to pick the font. Again we uncover a difference between Apple’s platforms: there is no compiler directive to say you are compiling for iPadOS. iPadOS is evidently more marketing than a truly separate operating system.

Additionally, since I am using Mac Catalyst for my macOS version of Wasted Time, I have to check the targetEnvironment to recognize that I will be running on macOS.

Having done both of these preparatory steps, I can now easily set the font for any text in my SwiftUI views. I do this with following code:

.font(isIpad ? .iPadSpecificBody : .systemSpecificBody)

This is a ternary – basically an inline if-else statement. It means: set the font to .iPadSpecificBody if isIpad is true, otherwise use .systemSpecificBody.

And that’s it. I now correctly increase the fonts from .body to .title for those devices that have more space.

Wasted Time Update – Removing Twitter

New Wasted Time Icon

Well, I’ve been looking at adding Mastodon support to Wasted Time since removing support for Twitter a few months back. I was talking to my good friend and podcast co-host @andypiper about this yesterday, and he asked why. I thought about it long and hard. Back in the Twitter days I used this feature for two reasons:

  • A fun way to show people how much time is being wasted getting meetings started with the hashtag #wastedtime
  • A way to promote the app with an App Store URL

We discussed that this mode of app promotion is frowned upon in the Fediverse. I also looked back at my app’s number of users and marketing strategy and came to the realization that almost no one downloaded the app based on seeing the App Store URL in a tweet. Why would I think this would be different in the Fediverse?

While I want to play with the Fediverse API, I think I’ve decided that it would be better not to add support for Mastodon in the app.

One of the big features that I took away from the app a few years back, when I was completely rewriting it to support SwiftUI, was the ability to capture to-dos during a meeting. I think my time would be better spent working on adding that feature back in.

What do you think?

Post Script

I was finally able to get my latest update to the watchOS version of Wasted Time submitted to the App Store yesterday. I had been having issues with the app versioning. Everything worked for the tvOS, macOS, and iOS versions of the app, but the watchOS version kept ignoring the new version number. This caused the App Store to reject the upload every time.

I even spent time with Apple developer support for two months, going back and forth to try and figure out what was going on. No luck, until yesterday when I was digging around in the many different configuration files and found one that had somehow been hard-coded to an old version. Strange.