While I have been developing apps for some time, I’ve never been very good at promotion or pricing. I am hoping to get more users so that I can get better feedback on my various apps. To that end, I have just scheduled a blowout sale of the following apps:
I have created these apps over time, and have tried to charge very little for them in the past. Wasted Time is $0.99 in all its variations. Greeting Tracker is my newest app, one I spent years fiddling with; I originally charged $3.99 for it, but dropped the price to $1.99 earlier this year.
For just a few days, from Friday, Nov. 29th through Sunday, Dec. 1st, all of the above apps will be free.
I’d love to hear what you think of them, and if you have any suggestions for app improvements, or UX improvements, I would greatly appreciate it!
The funny thing is that if you search on the phrase “Vision Pro is dead,” you will find articles and Reddit posts making this claim since April 2024. Which is ludicrous, since the Vision Pro only started shipping to customers in February 2024.
Regardless of the doom and gloom stories, I am going to discuss the two primary use cases for my Vision Pro, and where I see the future of the Vision Pro going.
How I use the Vision Pro
Let me begin by saying I use the Vision Pro every day. I know that is not what you will hear from most people who bought the device on launch day. Many of the early buyers bought it because they wanted the latest and newest Apple device. I suspect they are also the people who bought the first HomePod on launch day.
Working on coursework from Udemy
The above screenshot represents a pretty normal day for me lately. I recently retired from IBM after 29 years, and am doing some re-skilling by taking classes on Amazon AWS through the Udemy platform. I have the class work running in a video window on the Vision Pro, while mirroring my Mac to actually perform the various exercises. This is a simple example of a use case. Yes, I could just use two monitors on my Mac (either via Sidecar with my iPad and my MacBook Pro, or via my Mac Studio and a secondary display), but that only solves one issue.
What the Vision Pro allows me to do is turn the digital crown and slowly block out all the distractions around me. I also have background sounds turned on for the environment, which helps mask the ringing that I have in my ears at all times. Some people achieve the same result by working in an open space or a coffee shop. For me, that type of sensory input causes me to focus on what the people around me are saying. It makes it much harder for me to concentrate.
Gaming
I love games. All kinds of games, ranging from solo games all the way to MMORPGs. While I’ve not found a good MMORPG that runs on the Vision Pro, I do find that running World of Warcraft on a giant monitor helps me get immersed. There are also social games like Demeo, which give you an old-school Dungeons & Dragons-like experience.
You get to sit in the basement, around a table, and pick an adventure to run.
You run the adventure in a turn-based mode, with magic spells, cool weapons, and wonderful treasure. I’ve played solo, with a friend thousands of miles away, and with a group of three random strangers. Each experience was fun and fulfilling. The other nice thing about this game is that it is not only available for the Vision Pro; you can also play it on the Meta Quest, on the Mac, and on iPadOS. So if you have a group of friends who’d like to get together but can’t, this is a great way to play together.
Content consumption
When paired with a set of AirPods Pro 2, you can really get immersed in all kinds of media. Music surrounds you, TV shows and movies let you watch 2D and 3D content as if you were in your own private theater, and, while I’ve not found a willing partner to try it, you should be able to use Apple’s SharePlay feature to sit in the same virtual space and watch a movie together.
My wife and I still have not gone back to the theater since the pandemic, and I would love to watch a movie with friends in the Vision Pro.
Watching the various media that Apple has slowly dripped out for the Vision Pro shows that we still have a ways to go before all the directors and editors are comfortable making true Spatial content, but I for one am happy to watch well-made 3D movies.
The promise of a future
There is so much more that I wish the Vision Pro would do, and given the hardware Apple put together, I am sure it could do it all. The question is whether Apple will continue to iterate on the software and provide more content, so that more developers can see the possibilities.
We need developers to build for the Vision Pro, and we need consumers to buy the promise. Perhaps a cheaper headset will make it more likely that some people buy it just for media consumption, but I wouldn’t count on it. Not when a cheap Apple device will probably be 3-5x more expensive than the Meta Quest. And most people still don’t want a heavy device on their face.
Ultimately I do believe that glasses or contacts will be the platform for AR, but for VR or mixed reality devices, glasses will not do it for me. I think a device like the Vision Pro will be required, and I am glad I don’t have to wait for it.
This got me thinking about the hype that’s been going on for the last two years around large language models and AI in general.
What can AIs do?
LLMs can do some really cool things:
They can generate long-form content.
They do a really good job of understanding code and generating documentation.
Some can generate reasonable unit tests.
Early-career programmers can use them to generate good starting templates for new code.
Experienced developers can use them to generate code for oft-used algorithms.
Given this, do they really drive significant productivity? Can they produce code that is bug-free?
On my podcast over at GamesAtWork dot Biz, we’ve been talking about AI and LLMs for what seems like 12-15 years. We have seen examples of AIs playing games, and even generating games.
Netting out the article:
The above article, while trying to make a big statement, doesn’t really support its own headline. It states that while some developers see some productivity gains, those gains may be offset by an increased introduction of bugs. AI hasn’t helped reduce the number of hours developers spend coding and testing.
What’s real here?
I think this explains why AI assistants aren’t the panacea that many executives believe them to be. Any code that is written needs to be validated, whether you wrote it yourself, copied it from Stack Overflow, or let an assistant generate it for you. Typing out code is probably the lowest-value activity for a developer.
Understanding the problem, reasoning about different solutions, thinking through security and scaling issues, designing the right flow to make the program intuitive and easy to use, and ultimately validating what you’ve come up with all take more time than actually writing code. We are currently optimizing the wrong part of the problem.
In summary
Despite the hype surrounding AI and LLMs, their impact on developer productivity is limited. While they can generate code and documentation, they do not significantly reduce coding hours or eliminate bugs. The true value lies in understanding the problem, reasoning about solutions, and validating the code.
In celebration of upcoming new iOS releases and other goodies, I am doing a temporary sale on Greet Tracker. This is the app that allows you to keep track of all the cards you send to people. A single purchase of $1.99 will give you access to Greet Tracker across iOS, iPadOS, macOS, and visionOS!
Tell your friends!
You can find it at the iOS App Store at Greet Tracker
I’ve had to increase the price of Wasted Time across all platforms to $0.99. Currently, if you buy it for one platform you have access across all of them (except for visionOS).
I am hoping to add a new feature for a fall launch!
I realize that my design is not always intuitive, so I decided to put together a simple (ha!) video of how my latest app works.
The app is both simple and powerful. The basic premise is that you have control over what types of cards (or events) you’d like to track. You create your own galleries of cards (captured via your camera or imported directly from your photo library), and you have a list of recipients who will receive the cards.
You can pick recipients from your Contacts or enter them yourself in the app. No data is ever shared with me or anyone else. It is all stored on your device, or, if you have iCloud set up, it will sync via your iCloud storage to the app running on your Mac, your iPhone, your iPad, and even your Vision Pro.
The app is the same on all platforms. And as an added bonus, you can generate a PDF of all the cards you have sent for a specific event, all the cards in a specific gallery, or all the cards you have sent to a specific recipient.
I created this app, after realizing I had sent the same Thanksgiving card to my parents two years in a row!
This year I plan on making sure that any app I work on is available on at least three Apple platforms. To that end, I have updated my Greet Keeper application, which tracks greeting cards, to run via SwiftUI on visionOS.
I know it doesn’t yet really take advantage of Spatial computing, but I wanted to make sure that I could sync greeting cards between iOS, macOS, and visionOS. The biggest challenge has been addressing various UI elements that were sized very differently on visionOS.
While the Vision Pro has amazing resolution, the eye targets for tracking where you are looking tend to be pretty big. Each target should be at least 60×60 points. This really impacted my grid layouts. You can see the problem in the above image example. I believe I have fixed it in my update for the App Store. This is my first attempt at trying to size things differently. Let’s see if Apple lets me make it through the app review process.
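To see how that minimum squeezes a grid, here is a rough sketch of the layout math (the helper function is my own illustration, not code from the app; Apple’s visionOS guidance measures these targets in points):

```swift
/// Minimum comfortable eye-target size on visionOS, in points,
/// per Apple's Human Interface Guidelines.
let minimumEyeTarget: Double = 60

/// How many grid columns fit in a container while keeping every
/// cell at least `minimumItemWidth` points wide.
func columnCount(containerWidth: Double,
                 spacing: Double = 8,
                 minimumItemWidth: Double = minimumEyeTarget) -> Int {
    // Each column consumes its own width plus one spacing gap.
    max(1, Int((containerWidth + spacing) / (minimumItemWidth + spacing)))
}
```

In SwiftUI itself, the same constraint can be expressed declaratively with an adaptive grid item, e.g. `LazyVGrid(columns: [GridItem(.adaptive(minimum: 60))])`, which lets the framework do this math for you.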
As expected, WWDC 24 was all about AI. Of course we are talking about “Apple Intelligence”.
This week is my yearly “education-vacation,” and the complexity of some of the sessions I have gone through has been overwhelming. While most of the keynote was about cool new features and better integration of App Intents to enable “Apple Intelligence,” my focus was on a deeper understanding of what the impact of Swift 6 will be on my own apps, and on getting better debugging tips to address some performance issues I’ve been having while rewriting my app with SwiftData.
Labs
To achieve these goals, I immediately signed up for two lab sessions on Tuesday. I got to spend time with the SwiftData and SwiftUI teams. Both teams were gracious, knowledgeable, and kind, helping me understand some issues I had been misunderstanding and pointing me to resources I wasn’t aware of. They also helped me file a very specific piece of feedback and direct it to a specific member of the SwiftData team. I have high hopes that my crash will be resolved, as it happens in the background on various devices.
AI and Privacy
As always, I am impressed with Apple’s continual and consistent focus on customer privacy and security. This was reflected in all aspects of the Apple Intelligence presentations, as well as in “What’s new in Privacy.” The use of on-device processing and Apple’s own Private Cloud Compute for specific AI activities, along with exposing specific chat features to third parties (currently only OpenAI’s ChatGPT) only when approved by the customer on a per-request basis, all keep your data in your control.
Swift 6
The transition to Swift 6 is all about making sure that applications are concurrency safe. I’ve started looking at what it will take to transition my apps to Swift 6. I believe one of my crashes, which occurs only rarely, is caused by a race condition. You start the transition by turning on Strict Concurrency Checking and resolving the issues it surfaces. Once they are resolved, you turn on the Swift 6 compiler setting. From then on, the compiler will ensure that you don’t introduce data races into your app. I can’t wait to resolve the issues I’ve already identified.
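For a SwiftPM target, those two steps can be expressed as build settings. This is a sketch: the target name is hypothetical, and the exact setting names vary a bit between Swift 5.10 and Swift 6 toolchains (in Xcode, the equivalent knobs are the Strict Concurrency Checking and Swift Language Version build settings).

```swift
// Package.swift (excerpt): staged migration toward Swift 6.
.target(
    name: "GreetKeeper",  // hypothetical target name
    swiftSettings: [
        // Step 1: surface data-race diagnostics while staying in the
        // Swift 5 language mode (reported as warnings, not errors).
        .enableExperimentalFeature("StrictConcurrency"),
        // Step 2: once the warnings are fixed, switch to the Swift 6
        // language mode, where remaining data races become errors.
        .swiftLanguageMode(.v6)
    ]
)
```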
App Changes
I also spent time on how I could remove some external dependencies from my app. The key one right now is how I access the Contacts on the device for sending cards. Apple has improved security by bringing to Contacts the same limited-access model that was introduced a while ago for Photos. I don’t need full, ongoing access to a user’s contacts, nor do I want it. But I do need to be able to search contacts and pull a name and address into the app to show who a specific card was sent to. After viewing multiple sessions and talking with the SwiftUI team, I think I know what I need to do. This summer should be fun.
While my post about Greeting Keeper being available somehow got stuck in draft, I am posting a second blog entry to talk about the challenges the app has had and what I am trying to do about them.
First, shortly after releasing the app (and buying the first copy myself), I put out a quick TestFlight fix to address a problem when there are too many cards in the gallery: my gallery view didn’t scroll! I took the time to correctly add a ScrollView and also to add the name of the card to the view. This made the greeting card picker so much nicer! The unfortunate thing was that the app suddenly started having a background crash. I have not yet figured that out, as it only happens when you are not doing anything.
Second, I discovered that the GA version of the code introduced a very frustrating bug. One feature I had added in the shift to SwiftData was editing: you could edit the card for a specific recipient, as well as the descriptive data about any card in the gallery. Suddenly, SwiftUI updates happening in the background caused the view to accept and return as soon as you tried to change anything, which effectively disabled editing!
To address the second issue, I got some great feedback in my Swift Slack channels: re-implement local variables in the view and process the edit manually. That did fix the problem, but afterward I started seeing major performance issues.
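That workaround, as I understand it, looks roughly like this. It is a sketch with hypothetical type and property names, not the app’s actual code: the view binds its text fields to local @State copies, so background model updates can’t reset the fields mid-edit, and the model is only written on an explicit save.

```swift
import Observation
import SwiftUI

// Stand-in for the app's SwiftData model class.
@Observable final class Card {
    var name = ""
    var notes = ""
}

struct EditCardView: View {
    let card: Card
    @Environment(\.dismiss) private var dismiss

    // Local copies: SwiftUI binds to these, not to the live model.
    @State private var name = ""
    @State private var notes = ""

    var body: some View {
        Form {
            TextField("Name", text: $name)
            TextField("Notes", text: $notes)
            Button("Save") {
                // Commit the edit manually, in one place.
                card.name = name
                card.notes = notes
                dismiss()
            }
        }
        .onAppear {
            // Seed local state from the model once.
            name = card.name
            notes = card.notes
        }
    }
}
```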
I am sure the performance issues are not related to the edit feature, but to the fact that I suddenly started loading my real history of data into the system. Now, whenever you try to load a view with more than a few cards, the app hangs while loading. To that end, I have been trying to adopt SwiftUI’s AsyncImage so that image loads can happen in the background, giving the app a snappier feel. However, that doesn’t seem to be working at this time.
I think the big issue is that SwiftData is loading all the columns of data on the fetch, and I should exclude the image data. I can then do a separate fetch of the image data within the AsyncImage view. There may be better ways to address this issue, so I am going to let others take a look at the code, which is currently hosted on GitHub.
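That theory translates into something like the following sketch. The model and field names here are hypothetical stand-ins; `propertiesToFetch` on FetchDescriptor and the `.externalStorage` attribute are the SwiftData mechanisms for keeping big blobs out of the initial fetch.

```swift
import Foundation
import SwiftData

@Model
final class GreetingCard {
    var eventName: String
    // External storage lets SwiftData keep the image bytes in a side
    // file and load them lazily, instead of inline with every row.
    @Attribute(.externalStorage) var cardImage: Data?

    init(eventName: String, cardImage: Data? = nil) {
        self.eventName = eventName
        self.cardImage = cardImage
    }
}

// Fetch only the lightweight columns for the list view; the image
// data is loaded later, when an individual card is displayed.
func lightweightDescriptor() -> FetchDescriptor<GreetingCard> {
    var descriptor = FetchDescriptor<GreetingCard>(
        sortBy: [SortDescriptor(\.eventName)]
    )
    descriptor.propertiesToFetch = [\.eventName]
    return descriptor
}
```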
If you are interested in taking a look and helping me improve this app, please drop me a line at Michael Rowe
Well, after writing, rewriting, refactoring, and replatforming for six years, I have finally released a personal project to the App Store. Greeting Keeper has gone through UIKit, SwiftUI, CoreData, CloudKit, and SwiftData to become a pretty useful app. It does, however, have two major bugs that crept in during the release, and I am trying to deal with them when I can.
The bugs are as follows:
If you add a new card to your card gallery, while viewing the gallery for that type of card, the UI freezes up. I can recreate this bug every time, but I can’t yet figure out what is causing it.
My various edit screens will let you change one selection and then automatically return you to the higher-level menu; or, if there is a text field you want to type into, selecting it automatically returns you to the prior screen.
Both of these bugs make the app unusable for the average user. To discourage downloads, I made it a paid app, and so far only one person has bought it: me. Which is fine.
So this is the weirdest launch post ever: basically, I am telling people not to download the app. Not yet. I need to resolve the two issues first.