Doing AI Right

I was looking through Mastodon today and saw the following post:

Screenshot of a Mastodon post: https://mastodon.online/@parismarx/113255243970792466

This got me thinking about the hype that’s been going on for the last two years around large language models and AI in general.

What can AIs do?

  1. LLMs can do some really cool things, like generating long-form content.
  2. AI has done a really good job of understanding code and generating documentation.
  3. Some can generate reasonable unit tests.
  4. Early-career programmers can generate good starting templates for writing new code.
  5. Experienced developers can use them to generate code for oft-used algorithms.

Given this, do they really drive significant productivity gains? Can they produce code that is bug-free?

On my podcast, over at GamesAtWork dot Biz, we’ve been talking about AI and LLMs for what seems like 12-15 years. We have seen examples of AIs playing games, and even generating games.

Netting out the article:

The above article, while trying to make a big statement, doesn’t really support its headline. It states that while some developers see some productivity gains, those gains may be offset by the increased introduction of bugs. AI hasn’t helped reduce the number of hours developers spend coding and testing.

What’s real here?

I think this leads to why AI assistants aren’t the panacea that many executives believe them to be. Any code that is written needs to be validated, whether you wrote it yourself, copied it from Stack Overflow, or let an assistant generate it for you. Typing up code is probably the lowest-value activity for a developer.

Understanding the problem, reasoning about different solutions, thinking thru security and scaling issues, designing the right flow to make the program intuitive and easy to use, and ultimately validating what you’ve come up with, all take more time than actually writing code. We are currently optimizing the wrong part of the problem.

In summary

Despite the hype surrounding AI and LLMs, their impact on developer productivity is limited. While they can generate code and documentation, they do not significantly reduce coding hours or eliminate bugs. The true value lies in understanding the problem, reasoning about solutions, and validating the code.

Greet Tracker Temporary Sale

In celebration of upcoming new iOS releases and other goodies, I am doing a temporary sale for Greet Tracker. This is the app that allows you to keep track of all the cards you send to people. A single purchase of $1.99 will give you access to Greet Tracker across iOS, iPadOS, macOS, and VisionOS!

Tell your friends!

You can find it on the iOS App Store at Greet Tracker

Greet Keeper – how it works

I realize that my design is not always intuitive, so I decided to put together a simple (ha!) video of how my latest app works.

The app is both simple and powerful. The basic premise is you have control of what types of cards (or events) you’d like to track. You create your own galleries of cards (taken either via your camera or directly imported from your photo library), and you have a list of recipients who will receive the cards.

You can pick recipients from your Contacts or enter them yourself in the app. No data is ever shared with me or anyone else. It is all stored either on your device, or, if you have iCloud set up, it will sync via your iCloud storage across the app running on your Mac, your iPhone, your iPad, and even your Vision Pro.

The app is the same on all platforms. And as an added bonus, you can generate a PDF of all the cards you have sent for a specific event, all the cards in a specific gallery, or all the cards you have sent to a specific recipient.

I created this app after realizing I had sent the same Thanksgiving card to my parents two years in a row!

I hope you enjoy it!

Greet Keeper for VisionOS Submitted

Bad layout example

This year I plan on making sure that any app I work on is available on at least three Apple platforms. To that end, I have updated my Greet Keeper application, which tracks greeting cards, to run via SwiftUI on VisionOS.

I know it doesn’t yet really take advantage of spatial computing, but I wanted to make sure that I could sync greeting cards between iOS, macOS, and visionOS. The biggest challenge has been addressing various UI elements that needed to be sized differently on visionOS.

While the Vision Pro has amazing resolution, the eye targets for tracking where you are looking tend to be pretty big; each target should be at least 60×60 points. This really impacted my grid layouts. You can see the problem in the above image example. I believe I have fixed it in my update for the App Store. This is my first attempt at sizing things differently per platform. Let’s see if Apple lets me make it through the app review process.
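
For anyone hitting the same thing, here is a minimal sketch of the kind of adjustment I mean (hypothetical names, not my shipping code): let the grid cells stay large enough that every tappable element clears the 60×60 point minimum.

```swift
import SwiftUI

// A sketch only: names are hypothetical, not the app's actual code.
struct CardGridSketch: View {
    let cards = (1...12).map { "Card \($0)" }

    var body: some View {
        ScrollView {
            LazyVGrid(columns: [GridItem(.adaptive(minimum: 140))], spacing: 16) {
                ForEach(cards, id: \.self) { card in
                    Button(card) {
                        // Open the selected card here.
                    }
                    // Keep every tappable target comfortably above the
                    // roughly 60x60 point minimum that eye tracking needs.
                    .frame(minWidth: 60, minHeight: 60)
                }
            }
            .padding()
        }
    }
}
```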

Fingers crossed!

WWDC 2024 Impressions and insights

Apple Logo

As expected, WWDC 24 was all about AI. Of course we are talking about “Apple Intelligence”.

This week is my yearly “education-vacation,” and it has been overwhelming in the complexity of some of the sessions I have gone through. Most of the keynote was about cool new features and better integration of App Intents to enable “Apple Intelligence.” But my focus was on a deeper understanding of what the impact of Swift 6 will be on my own apps, and on picking up better debugging tips to address some performance issues I’ve been having while rewriting my app for SwiftData.

Labs

To achieve these goals, I immediately signed up for two lab sessions on Tuesday. I got to spend time with the SwiftData and SwiftUI teams. Both teams were gracious, knowledgeable, and kind, helping me understand some issues I had been misunderstanding and pointing me to resources I wasn’t aware of. They also helped me raise a very specific feedback and direct it to a specific team member on the SwiftData team. I have high hopes that my crash, which happens in the background on various devices, will be resolved.

AI and Privacy

As always, I am impressed with Apple’s continual and consistent focus on customer privacy and security. This was reflected in all aspects of the Apple Intelligence presentations, as well as in “What’s new in Privacy.” The use of on-device processing and Apple’s own cloud for specific AI activities, along with the exposure of specific chat features to third parties (currently only OpenAI’s ChatGPT) only when approved by the customer on a per-request basis, all keep your data in your control.

Swift 6

The transition to Swift 6 is all about making sure that applications are concurrency safe. I’ve started looking at what it will take to transition my apps to Swift 6. I believe one of my crashes, which occurs rarely, is caused by a race condition. You start the transition by turning on Strict Concurrency Checking and resolving the issues it surfaces. Once they are resolved, you turn on the Swift 6 language mode. From there on out, the compiler will ensure that you don’t introduce data races in your app. Can’t wait to resolve the issues I’ve already identified.
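
To make that concrete, here is a minimal sketch (hypothetical names) of the kind of data race this tooling catches, and the actor-based fix that satisfies the Swift 6 compiler:

```swift
import Foundation

// Before: mutable state with no isolation. Once an object like this is
// shared across tasks, Strict Concurrency Checking (and the Swift 6
// language mode) reports the unsafe access instead of letting it race
// at runtime.
final class CrashCounter {
    var count = 0
}

// After: an actor serializes access to its state, so concurrent callers
// can no longer race on `count`.
actor SafeCrashCounter {
    private var count = 0

    func increment() -> Int {
        count += 1
        return count
    }
}
```

In Xcode, the Strict Concurrency Checking build setting set to Complete surfaces these as warnings; switching to the Swift 6 language mode turns them into errors.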

App Changes

I also spent time on how I could remove some external dependencies in my app. The key one right now is how I access the Contacts on the device for sending cards. Apple has improved security by bringing to Contacts the same limited-access model that was introduced a while ago for Photos. I don’t need full, ongoing access to a user’s contacts… nor do I want it. But I do need to be able to search contacts and pull the name and address into the app to show who a specific card was sent to. After viewing multiple sessions and talking with the SwiftUI team, I think I know what I need to do. This summer should be fun.
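
If I understood the sessions correctly, the piece I need is the new ContactAccessButton from WWDC 2024: it shows matching contacts in a system-rendered list, and the app only ever learns about a contact the user explicitly approves. A rough sketch of my assumption (hypothetical view names; I have not built this yet):

```swift
import SwiftUI
import ContactsUI

struct RecipientPickerSketch: View {
    @State private var searchText = ""

    var body: some View {
        List {
            // Rows for contacts the user has already shared would go here.

            // The system draws the matching contacts itself; the app only
            // receives identifiers once the user taps to approve them.
            ContactAccessButton(queryString: searchText) { identifiers in
                // Fetch just the approved contacts' names and postal
                // addresses with CNContactStore here.
                print("Approved contact identifiers: \(identifiers)")
            }
        }
        .searchable(text: $searchText)
    }
}
```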

Open Sourcing Greeting Keeper

While my post about Greeting Keeper being available somehow got stuck in draft, I am posting a second blog entry to talk about the challenges the app has had and what I am trying to do about them.

First, shortly after releasing the app, and buying the first copy myself, I put out a quick TestFlight fix to address a problem when there are too many cards in the Gallery. My gallery view didn’t scroll! I took the time to correctly add a ScrollView and also add the name of the card to the view. This made the Greeting Card picker so much nicer! The unfortunate thing was that the app suddenly started having a background crash. I have not yet figured that out, as it only happens when you are not doing anything.

Second, I discovered that the GA version of the code introduced a very frustrating bug. One feature I had added in the shift to SwiftData was an edit feature. This would allow you to edit the card for a specific recipient and the descriptive data about any specific card in the gallery. Well, suddenly SwiftUI started having background updates that caused the view to accept and return as soon as you tried to change anything! This effectively disabled editing!

To address the second issue, I got some great feedback in my Swift Slack channels: re-implement local variables in the view and manually process the edit. That did fix the problem, but after that, I started seeing major performance issues.
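
For anyone hitting the same SwiftUI behavior, here is a minimal sketch of that workaround, with a hypothetical model: edits land in local @State, and the SwiftData model is only written on an explicit save, so background updates can no longer kick you out of the view mid-edit.

```swift
import SwiftUI
import SwiftData

// Hypothetical model, for illustration only.
@Model
final class GreetingCard {
    var cardName: String
    init(cardName: String) { self.cardName = cardName }
}

struct EditCardSketch: View {
    let card: GreetingCard

    // Local copy of the editable field; the text field binds here, not
    // to the model, so model churn can't reset an edit in progress.
    @State private var draftName = ""

    var body: some View {
        Form {
            TextField("Card name", text: $draftName)
            Button("Save") {
                // Write back to the model only on an explicit commit.
                card.cardName = draftName
            }
        }
        .onAppear { draftName = card.cardName }
    }
}
```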

I am sure the performance issues are not related to the edit feature, but to the fact that I suddenly started really adding my history of data to the system. Now whenever you try to load a view with more than a few cards, the app really hangs while loading. To that end, I have been trying to adopt SwiftUI’s AsyncImage so that image loads can happen in the background, giving the app a snappier feel. However, that doesn’t seem to be working at this time.

I think the big issue is that SwiftData is loading all the columns of data on the fetch, and I should exclude the image data. I can then do a separate fetch of the image data within the AsyncImage view. There may be better ways to address this issue, and as such I am going to let others take a look at the code, which is currently hosted on GitHub.
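
If my reading of the SwiftData API is right, the fix looks something like the sketch below (hypothetical model and property names): keep the image bytes in external storage and tell the fetch to materialize only the lightweight columns.

```swift
import Foundation
import SwiftData

// Hypothetical model, for illustration only.
@Model
final class StoredCard {
    var name: String
    var sentDate: Date
    // .externalStorage keeps large blobs out of the main store records.
    @Attribute(.externalStorage) var imageData: Data?

    init(name: String, sentDate: Date, imageData: Data? = nil) {
        self.name = name
        self.sentDate = sentDate
        self.imageData = imageData
    }
}

// Fetch only what the list renders; imageData stays unloaded until an
// individual row actually reads it.
func fetchCardList(context: ModelContext) throws -> [StoredCard] {
    var descriptor = FetchDescriptor<StoredCard>(
        sortBy: [SortDescriptor(\.sentDate, order: .reverse)]
    )
    descriptor.propertiesToFetch = [\.name, \.sentDate]
    return try context.fetch(descriptor)
}
```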

If you are interested in taking a look and helping me improve this app, please drop me a line at Michael Rowe

Greeting Keeper Released

Well, after writing, rewriting, refactoring, and replatforming for six years, I have finally released a personal project to the App Store. Greeting Keeper has gone thru UIKit, SwiftUI, CoreData, CloudKit, and SwiftData to become a pretty useful app. It does, however, have two major bugs that crept in during the release, and I am trying to deal with them when I can.

The bugs are as follows:

  1. If you add a new card to your card gallery while viewing the gallery for that type of card, the UI freezes up. I can recreate this bug every time, but I can’t yet figure out what is causing it.
  2. My various edit screens will allow you to change one selection and then automatically return you to the higher menu; or, if it is a text field that you want to type into, then when you select the text field, it automatically returns you to the prior screen.

Both of these bugs make the app unusable for the average user. To prevent that, I made it a paid app, and so far only one person has bought it… ME. Which is fine.

So this is the weirdest launch post ever: basically, I am telling people not to download the app. Not yet. I need to resolve the two issues first.

A VisionPro-ductive Week

It’s been a solid week now with the Apple Vision Pro. When I got it last Saturday, I provided a before-and-after view of the experience as part of my weekly podcast, Games At Work dot Biz (you can find it here: Episode 452 – Before and After). Since then, in order to see if it can be a productive device, I have tried to spend as much time as possible using the headset. In this post, I hope to give you a little look into how that experience has been.

Using Business Applications on the Apple Vision Pro

Let’s begin with how I was able to use the Apple Vision Pro for my day job. I currently work in a large multi-national company. In my capacity as a Technical Strategist, I spend at least half my day in Webex meetings. To that end, I installed the native Webex Vision Pro app and surprised a few teammates by joining in with my persona. When I tried it on Monday, I had a few people tell me that I looked angry; I guess resting persona face is really not flattering.

Persona Version 1.0

After the 1.1 upgrade I captured a new Persona and was told that it didn’t make me look as angry. I think adding the glasses helped.

Persona Version 1.1

The most productive aspect of using the Vision Pro was that I could get my work MacBook Pro mirrored and then use the environments to block out all the distractions when I was working on building presentations, doing email, and working on some development tasks.

As someone who is easily distracted by others, being able to really focus in allowed me to be more productive. I also liked the ability to have music playing in the Vision Pro without having to have my AirPods Pro or AirPods Max on. It blocked just the right amount of noise, while still allowing me to be aware of my environment.

Doing Development on the Apple Vision Pro

While my day job coding does not include any work on Apple platforms, I do some personal coding in Xcode, including having an app on the Vision Pro (launch app!), so I was really excited to see how easy it was to deploy to the headset. Net-net, it is the same as doing wireless deployment to the iPhone or iPad. The only unique part was how you enable developer mode on the Vision Pro to connect to your development Mac. After enabling the developer setting as you would on any iOS device, while wearing the headset, go to Settings > General > Remote Devices and select your computer. You can now see your Vision Pro in the Devices and Simulators settings of Xcode.

I had looked into the Apple Developer Strap (a $299 add-on only available thru the developer program), but don’t think it is worth the cost at this time. If I were more worried about bricking the device, I believe this is the hardware you’d need to run Apple Configurator and restore the headset. Apple had a similar piece of hardware for early Apple Watches, which was used by the Genius Bar to fix a broken watch.

Reading on the Apple Vision Pro

The other productivity aspect I wanted to hit on is reading manuals and other technical documentation. I am a big user of Calibre to make sure that any ebook I purchase is readable on any device of my choosing. I buy books from Amazon for the Kindle, from Apple in the Books app, and from independent ebook sellers like Cory Doctorow, who does not put DRM on his books. Depending on how I am working, I like to have access to the books in the most platform-native manner. While the Books app on the Apple Vision Pro is just the iPad version, I figured having a reference book up next to my Mac screen share would be a great test for development.

So far, the iPad app is not quite up to par. It has the same issue I’ve had with other iPad apps: the eye targets are not consistently visible. I am not sure if this is a VisionOS bug or just bad UI implementation by the developers of iPad apps. I personally had issues with some of the custom buttons of my own iOS app that I ported to VisionOS. As such, I had to remove some of my custom button designs to allow VisionOS to correctly handle targeting.

Headset placement and Comfort

I tried the default Solo Knit Band to start with, and while the headset did pull forward some, it wasn’t too uncomfortable. However, once I switched to the Dual Loop Band, I could wear the headset as long as I wanted without any discomfort. What did happen was that, about midway thru the week, I started getting a hot forehead. I also started getting a deep red mark on my forehead. This came at about the same time that I tried switching from the W to the W+ Light Seal Cushion. I switched back to the W after a day and was still having issues. I then finally realized that I had been slowly tightening up the bands to keep the headset more and more stationary. This wasn’t because it felt loose or anything, but I guess subconsciously I felt it needed to be tight to the face. After loosening the straps of the Dual Loop Band, things have gotten back to feeling good.

What was strange was that after installing the visionOS 1.1 beta, it seems the eye tracking has gotten worse. I am having issues with some of the targets being recognized. I am hoping that this is a beta issue and will be fixed soon.

The other comfort issue I wanted to touch on is the Zeiss Optical Inserts. My regular glasses are progressives, and I also have astigmatism. I sent in my prescription, and so far the lenses that I got are working great.

Overall Verdict

Right now I am loving the Apple Vision Pro. As I mentioned, I am easily distracted, and the ability to focus on my work is a huge benefit. I have started playing a few of the games, and if you have a big enough space for Fruit Ninja, it is fun. I really enjoyed playing Battleship (reminded me of being a kid again). 3D movies and videos are amazing. And finally, I really need to take more panoramic pictures. I found about 75 in my photo library: some going back to 2009, some hand-stitched together in Photoshop while on vacation in Prague, some taken at a rocket launch in Florida, and every one of them transported me back to the time and place they were taken. I had even taken one the last time I met my brother, his wife, and my sister for a great dinner in Georgia. It felt like we were sitting in the restaurant, enjoying the wine, and getting ready for another great conversation.

Always learning new things

Yesterday I went to #Unwrap Live 2024. This online, all-day SwiftUI programming class by Paul Hudson, of Hacking With Swift fame, was all about the Apple Vision Pro this year.

I love the way Paul explains code, techniques, and APIs. The other valuable thing is that he is willing to rathole on a question that someone brings up during the sessions. His knowledge of Apple’s APIs is incredible. During the session yesterday, someone asked about rendering a video on a 3D object, and he was able to add the feature live, explaining how and why you might make certain choices. Amazing.

Well, during the session yesterday, Paul explained a few things about Apple’s materials choices for VisionOS. I realized that the version of Wasted Time that I made available for the day-one release would not only violate much of the material design guidance, but also cause eye strain. Today, I spent time redesigning the UI of Wasted Time Pro, and also all the related versions of the app. I have submitted it to Apple for review, and hopefully the newer version will be available on launch day too. Fingers crossed!
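
The gist of the redesign, as I understood Paul’s guidance: on VisionOS, panels should sit on system glass instead of opaque custom colors, so contrast stays legible against any passthrough background. A minimal sketch (hypothetical view, not Wasted Time’s actual code):

```swift
import SwiftUI

struct MeetingCostPanelSketch: View {
    var body: some View {
        VStack(spacing: 8) {
            Text("Meeting cost so far")
            Text("$1,234")
                .font(.largeTitle)
        }
        .padding(40)
        #if os(visionOS)
        // System glass adapts its contrast to whatever sits behind the
        // window, avoiding the eye strain of fixed, opaque colors.
        .glassBackgroundEffect()
        #else
        .background(.regularMaterial, in: RoundedRectangle(cornerRadius: 20))
        #endif
    }
}
```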