WWDC – Day 5

I had high hopes for day five, but I ended up pulling a muscle in my shoulder and just couldn’t get my head on right to get through the online content. I did make some major progress, but then petered out… My backup plan was to power through more today, but I ended up starting to work on my own apps instead; fixing accessibility issues, designing new icons, and working on some paper cuts. Overall, a very rewarding day. The sessions I did get through are as follows:

  • Discover machine learning & AI frameworks on Apple Platforms – this foundational session went through a few key concepts: how Apple is using ML in their own apps, how to leverage the new foundation model as a developer, and finally how to bring your own models to the device.
  • Dive deeper into Writing Tools – since Apple Intelligence was first introduced in 2024, the improvements made to Writing Tools have been extensive. Not only that, but you can now customize native views to limit which features you wish to expose to your users. You can also customize the writingToolsResultOptions to let Writing Tools know the type of text to expect (see the first sketch after this list).
  • Elevate the design of your iPad app – This session takes you through changes you may wish to make to your iPad app to take advantage of the new “liquid glass” design language. While many of the features will be applied automatically if you just recompile, understanding how window resizing, navigation bars, pointers, and the menu bar work will allow your app to really shine on iPad.
  • Embracing Swift concurrency – The biggest change in Swift 6.2, in my humble opinion, is the new Approachable Concurrency model. Since most developers trying to adopt the Swift 6 concurrency model were quickly overwhelmed with warnings and issues, the new model allows you to declare that your app is, by default, single threaded. You then add concurrency deliberately, greatly simplifying getting your app set up. I switched one of my apps to this model and it removed 50% of the warnings I had been trying to resolve (there’s a rough sketch of this after the list).
  • Enhance child safety with PermissionKit – there has been a lot of legislative activity around child safety online and in app stores across the US. One consequence is that some states are mandating online ID verification for content, which creates yet another place for personal information to leak. One of the interesting aspects of this new API is the ability to define “age ranges” that are queryable without giving websites or apps direct access to PII (personally identifiable information). I hope this approach gets adopted more broadly; however, many companies would rather have your PII so that they can sell that data. Anyway, the session goes through how to use the PermissionKit API, including how to create “ask” experiences, which have a child’s device notify the parent that they want access and require positive confirmation from the parent.
  • Enhance your app’s multilingual experience – TextKit 2 was introduced last year, and this year it really steps up with the ability to correctly handle two languages at the same time in the same input field. Think about writing words in both left-to-right and right-to-left languages in the same sentence. Blows my mind!
  • Evaluate your app for Accessibility Nutrition Labels – this is the session that really sidetracked me. There is so much I need to do after running an accessibility audit on my simplest app that understanding it all becomes even more important. This session does an awesome job of going through specific examples for each of the sections of the Nutrition Label. Highly recommended!
  • Share visionOS experiences with nearby people – While I know I will never convince my wife to try the Vision Pro, let alone have two of them in our house, this session goes through the architecture for both nearby people and remote users, explaining how placement, recentering, and FaceTime integration all work. Fascinating discussion of shared anchors too!
  • Set the scene with SwiftUI in visionOS – A look at how you can now integrate SwiftUI with Volumes and Immersive Spaces in a much more fluid manner. The new scene bridging capabilities allow UIKit to also support volumes and immersive spaces. Hopefully this means we will see a lot more visionOS apps coming soon!
  • Say hello to the new look of app icons – the final session I watched before my shoulder and neck pain took me out of being able to focus. This session did a great job of getting me psyched to update my own apps. A member of Apple’s design team takes you through how Apple updated their own icons across the board. Well done!
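A quick follow-up on the Writing Tools session, since I want to try this in one of my own editor views. Below is a minimal sketch of the kind of customization I have in mind for a UITextView; the property names (writingToolsBehavior and allowedWritingToolsResultOptions) are my best recollection of the API from the session, so double-check them against the documentation.

```swift
import UIKit

// Sketch only – limiting Writing Tools in a UITextView-based editor.
// Property names are from my session notes and may need verification.
let notesView = UITextView()

// Expose only the limited (inline) Writing Tools experience to the user.
notesView.writingToolsBehavior = .limited

// Tell Writing Tools what kind of result to expect back: plain text and lists.
notesView.allowedWritingToolsResultOptions = [.plainText, .list]
```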
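And here is the concurrency change in practice. This is a rough sketch of what “single-threaded by default” looks like once the Approachable Concurrency and Default Actor Isolation (MainActor) build settings are turned on. The type is made up for illustration, and the @concurrent attribute reflects my reading of the Swift 6.2 proposal, so treat the details as approximate.

```swift
import Foundation

// Illustrative type – with MainActor as the default isolation it needs no
// annotations, and its mutable state produces no data-race warnings.
final class MeetingTimer {
    private(set) var elapsed: TimeInterval = 0
    func tick() { elapsed += 1 }

    // Concurrency is added deliberately, one function at a time: this helper
    // opts out of the main actor so the formatting work can run elsewhere.
    @concurrent
    nonisolated func formatReport(_ entries: [TimeInterval]) async -> String {
        entries.map { String(format: "%.0f seconds", $0) }.joined(separator: "\n")
    }
}
```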

WWDC Day 4

Finally getting into the groove of going through all the content I want to watch this week. The main issue is how much great stuff was announced, and now figuring out what it all means to me as a developer.

I attended three more group labs, but asked no questions; it was more of a listen-in to all the great questions others asked. My favorite was the Accessibility techniques group lab. I’ve been so naive when it comes to accessibility in my apps. I’ve always believed that Apple’s attention to this meant my apps just worked; however, after learning how to run an accessibility audit on my apps, I was shocked to see how badly I fared.

My oldest app is Wasted Time, which I have made available on iOS, iPadOS, watchOS, macOS, tvOS, and visionOS. Every year I have tweaked it after WWDC to add a new feature or redesign how it handles the UI and data. This year, I ran Apple’s Accessibility Audit against it and was very disappointed with how badly I fared. So I think this summer will be my “Summer of Accessibility”. My goal is to get all my apps as accessible as possible.

Anyway, a quick rundown of the sessions I watched yesterday:

  • SwiftUI Group Lab – This session had one of my favorite Apple presenters – Curt. Over the years, his explanations of SwiftUI content have been simple yet detailed. The biggest part of this lab was getting me to look at the new Instrument for SwiftUI. Can’t wait to check out how to improve my scrolling card view!
  • Wake up to AlarmKit – I was expecting more details on nightstand mode, but this session was all about Apple exposing their alarms API for your own apps to use. A nice new API, but I have no use for it. Expect more apps to interrupt what you are doing 🙂
  • Automate your deployment process with the App Store Connect API – This was a good session going through the APIs you can use to improve your build and deploy process. While I am really enjoying Xcode Cloud, most large companies already have their own CI/CD pipelines, and Apple is finally opening up many of their APIs to make it easier to upload your builds, kick off TestFlight, and retrieve feedback. Highly recommended for any enterprise CI/CD engineer.
  • Better together: SwiftUI and RealityKit – The expansion of the Spatial Web and improvements in UI consistency across platforms make this session a must watch. Apple has made it easier to bridge between 3D content in SwiftUI and RealityKit, with Observable Entities, an improved unified coordinate system, and Object Manipulation. I expect to see more blurring between what you can do on visionOS and on your Mac. This is a good thing.
  • Bring advanced speech-to-text to your app with SpeechAnalyzer – With the opening up of Apple’s foundation model across platforms, this is a big quality-of-life improvement for anyone wanting to do speech to text. The sample app shows you how to do realtime voice-to-text handling. The new model runs entirely on device, addressing the data privacy issue, and supports many major languages and dialects (Cantonese, Chinese (3 regions), English (13 regions), French (4 regions), German (3 regions), Italian (2 regions), Japanese, Korean, Portuguese (2 regions) and Spanish (6 regions)).
  • Build a SwiftUI app with the new design – As with every year, Apple has a major theme in their sessions to help developers address the biggest API announcement, and for “liquid glass” this is that session. The explanation of how the app handles placement of various elements and insets is important for understanding the UI impact on your app. I have already done some testing with Wasted Time and saw multiple things that will require fixing. Another key aspect of this session was how to address your custom controls and make them work with “liquid glass”. I have a custom button type for my apps that I use to minimize the amount of code I have to write in each view. I will be taking a look at how it needs to change.
  • Create icons with Icon Composer – On Wednesday I raised a feedback (FB17954788) to get the Icon Composer team to enable visionOS and tvOS in this new tool. Right now it only supports macOS, iOS, and watchOS. Given that visionOS and tvOS already have 3D icons, I would expect this will be a low priority for Apple to address.
  • Customize your app for Assistive Access – I’ve been interested in this mode for some time. I help elderly family members use their various Apple devices and, as such, I am constantly dealing with “gremlins” in the system. For the most part these “gremlins” are mis-clicks on the Mac, where emails mysteriously disappear. I’ve not had the ability to dig deep into this mode and see what it might do for the Mac, but I am interested. I am also wondering how my remote software will behave if the Mac is running in Assistive Access.
  • Deep dive into the Foundation Models framework – I didn’t take many notes on this session, as it required a lot of concentration, and I do plan on watching it a few more times; however, the explanation of the @Generable and @Guide macros was fantastic (see the sketch after this list). I also felt that Apple explained how they handle private on-device data in a manner that makes me even more comfortable with what it will do. Having said that, I am worried that app makers will take advantage of that expectation and still exfiltrate data to their own cloud-based services. Think of how Meta and others are treating your data, and you can understand the challenges that Apple must face to enforce this data isolation and privacy.
  • Design foundation from idea to interface – A really good session on how a designer thinks. I’ve always struggled between building apps that are functional versus those that make the function obvious. Much of what the presenter goes through may be obvious to good designers, but for the rest of us this is a must watch.
  • Design hover interactions for visionOS – I think I watched a similar session last year (or in 2023), but without the context that I have now. The presenter does a really good job of explaining how and when you should customize a hover effect. There are some custom controls I have developed in Wasted Time that certainly need a second look.
  • Design Interactive Snippets – As we assumed last year with the hard push on App Intents, the magic of Apple Intelligence was going to be what your app contributes. When it comes to actions, snippets are where you can surface them in an easy-to-read and obvious manner. This session addresses this compact, displayable view of App Intents.
  • Apple Intelligence Lab – The most exciting part of this lab was the clarity in how to test your AI features. I had always wondered what the best way was to handle this. They also explained the way the system handles limits for the on device model. It seems this is much more liberal than I expected. Nice.
  • Design widgets for visionOS – I am so excited by widgets for visionOS. I spent a lot of time this last year fixing the widgets for Wasted Time and trying to think of new widgets for my other apps. While the session makes it seem so easy to add them, there is still a lot of clarity missing in my knowledge of widgets for visionOS. I’ve tried to enable them, but I had to back out the code. Perhaps I will make more progress over the summer.
  • Develop for Shortcuts and Spotlight with App Intents – This is a surprisingly cool session. My initial reaction was that this would be a minor update to last year’s App Intents sessions, BUT this session goes deep into using Apple Intelligence and other LLMs within your Shortcuts. I highly recommend that people dig through this session in detail.
  • Discover machine learning & AI frameworks on Apple Platforms – I didn’t take notes on this one, as once again this is a session I plan to view multiple times.
  • Accessibility techniques group Lab – As I mentioned in the paragraph above, this lab got me very excited to address the accessibility issues in my app. I wish they would make the labs available after the fact.
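Since I keep coming back to the Foundation Models session, here is the shape of the @Generable / @Guide approach as I understood it. This is a hedged sketch from my notes – the LanguageModelSession type and the respond call are my best recollection of the framework, so verify the exact names before relying on them.

```swift
import FoundationModels

// Sketch from my session notes; names may not match the shipping API exactly.
@Generable
struct PackingList {
    @Guide(description: "Three to five items to pack for the trip")
    var items: [String]
}

func makePackingList(for destination: String) async throws -> PackingList {
    // An on-device model session, as described in the session.
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Suggest a packing list for a weekend in \(destination).",
        generating: PackingList.self
    )
    return response.content
}
```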

WWDC 25 – Day 3

The hits keep coming on Day 3 of the WWDC material. Today was a little lighter on Group Labs, but I was able to get a question in on the visionOS group lab. I finally made it through all the “What’s new” sessions I had wanted to go through.

A quick rundown of the sessions with information I have captured that may be of interest:

  • What’s new in Wallet – two major enhancements in my mind. The updates to boarding passes seem great (evolutionary, but great); having more information in one place when flying into a new city will be helpful now that I’ve taken a new job. The ability to store US passport information will be helpful for people who have not upgraded to Real ID; however, since it won’t be usable overseas and you will still need to carry your passport, it may be much less impactful.
  • What’s new in Passkeys – The industry is really pushing hard to get everyone onto passkeys, and Apple has provided a vision for how this will happen. While I really like using passkeys, my wife does not. She doesn’t have a smartphone (by choice) and doesn’t like or use Apple products. So this migration to passkeys will be much, much slower in our house.
  • What’s new in RealityKit – I had not thought about using RealityKit in Mac apps. The new features are now supported not only on macOS but also on tvOS. To me this is the most telling sign that Apple is getting more serious about gaming. The anchoring updates to improve attachments, the natural object manipulation APIs, and the improvements in scene handling should all make gaming and XR easier for developers.
  • What’s new in SF Symbols 7 – I pretty much only use SF Symbols to find iconography for my apps, but the new version enables some pretty powerful features including building your own draw paths and using symbols as progress indicators. May have to reassess my usage.
  • What’s new in Spatial Web – Apple has made major progress in opening up their approach to spatial computing and making it part of the W3C standards process. A proposal for adding a <model/> tag to the HTML spec is in the works, and this session shows how easy it is to extend your website to support 3D content.
  • What’s new in StoreKit and In-App Purchase – I’ve been working on my first app with an in-app subscription model, so this session was very interesting to me. I don’t fully understand it all yet, but I can see how Apple is enabling a ton of new business models to improve stickiness and ease of purchase for developers and consumers. The one item that worried me a bit was the ability to make consumables easier (think about games). The way this was presented, along with the ability to provide up to a million offer codes per quarter, had me thinking this is really about sports betting and online gambling. I guess Apple wants a piece of the action. (sigh)
  • What’s new in Swift – this was a very meaty session that I will have to watch multiple times. I didn’t take notes during this first watch, as I knew it was going to be complex; however, the exciting parts were Embedded Swift, server-side Swift apps, and the major simplification of the concurrency approach. The ability to declare an app single-threaded by default, and then only push concurrency where specific tasks require it, will make things much easier for most developers.
  • visionOS Group Lab – I am really enjoying the group labs, and even got a question voted up in this one. Each lab consists of a moderator (usually a platform evangelist for the area being discussed) and four developers, designers, or engineering managers. They begin by telling us their favorite new features, which, in aggregate, surprisingly cover all the new features. Then a set of predefined questions calls out new features or common challenges. The team then goes through as many questions as they can from the Slido Q&A in Webex. I got a question voted up on the best approach to take a standard iOS app and port it to visionOS. Besides the “it just works” answer, there was a list of three key sessions to review. I plan on going through them – Principles of Spatial Design, Design for Spatial Interfaces, and Better Together: SwiftUI and RealityKit.
  • What’s new in SwiftUI – a key session for anyone developing for the Apple ecosystem. Each year they expand their vision to make it easier for you to build cross-platform applications. Key items this year were how to leverage the new “liquid glass” design, improvements to the underlying frameworks driving major gains in performance, and SwiftUI’s extension to handle 3D objects, web pages, and rich text in editors. Another session to watch multiple times.
  • watchOS Group Lab – I didn’t get my question voted up high enough in this one, but the session was amazing. The watchOS team has been crushing it over the last few years. One of the things I learned was that the watchOS 10 updates pretty much prepared them for “liquid glass” in advance of this week’s announcements.

I am hoping to make more progress today, but I have three labs I am attending. I have raised multiple feedback reports (aka bugs) in the last 24 hours. The biggest right now is that my iPad running iPadOS 26 has become very unstable with its networking. Given how much I use my iPad and how much I am enjoying the updates, I hope this gets resolved quickly.

Have a great #WWDC!

WWDC 25, Day 2

Well, this year’s virtual WWDC is packed with even more activities, all of which make it really hard to get into my normal review of a ton of sessions. Yesterday I attended three different group labs. These are new for 2025, and end up being a Webex call with a team of Apple engineers from various areas. The format is Q&A from the attendees, with questions upvoted. At only one hour long, they can get pretty detailed, but they also interrupt my flow for the day. Today I am only attending two of them, so I am hoping to get caught up on the various virtual sessions.

I attended the following group labs:

  • Developer Tools group lab – the discussion primarily focused on the new Swift Assist (AI) features. This is going to have a huge impact on how developers work on their code.
  • Camera & Photos frameworks group lab – the discussion here was surprisingly focused on the LIDAR features of the camera. I am thinking there were a lot of people who worked in the auto industry on the Webex. There were also some discussions about how to improve performance on photo views.
  • Swift group lab – most of this discussion was about Swift concurrency, which is not a big surprise, as the introduction of Swift 6 has forced a lot of developers to make their apps more concurrency safe.

In between the various labs, I tried to get a few videos in:

  • What’s new in App Store Connect – The major changes here were exposing much of App Store Connect via APIs for enterprises to integrate into their existing pipelines, along with new features in the iPad app. Review summaries generated by Apple’s LLM have also been enabled.
  • What’s new in Apple Device Management and Identity – This was extremely informative. I recently took a new job and my company uses Apple Managed IDs. It has really impacted my ability to use my Vision Pro on a daily basis (I won’t put my work account on my personal devices). Apple’s new improvements to business and school accounts mean that enterprises can now control even more granular capabilities on the device. They also made major improvements to pre-provisioning and device reuse, and the Vision Pro is now supported in a major way, with (for the most part) all the same features as Mac and iOS.
  • What’s new in visionOS – I am going to have to go back and review this one multiple times. A few key items were access to the on-device foundation models, new accessories (including the Sony VR controllers), shared experiences, and a whole lot of content focused on enterprise usage models. These include improvements for managing multiple users sharing a single device and for multiple people sharing the same virtual space when in the same room.
  • What’s new in watchOS 26 – The changes here were pretty minor, mostly UX refinements; however, the one feature that seemed meaningful was the introduction of RelevanceKit. This will allow widgets, stacks, and notifications to be much more aware of location, time of day, and other patterns. The one part that worries me is that it invites a new vector for stores to advertise.
  • What’s new in Widgets – To me the biggest change in widgets is that they are now available on visionOS and can be pinned to a location. This pinning is also available for other windows and will make the Vision Pro even more relevant. You can also now use APNs (Apple Push Notifications) to update widgets.
  • What’s new in Xcode – The integration of Swift Assist and local models is the part I am most excited about. Learning about the new Icon Composer and the updates to string catalogs also looks great.

WWDC 2025 – Liquid Glass

Liquid Glass

Today was day one of WWDC 2025. The most exciting part will be the new UI. Apple calls this design aesthetic “Liquid Glass” and I couldn’t be more excited.

Image from Apple demonstrating their new Liquid Glass design.

The best way most people will understand this change is that it takes the learnings from the Vision Pro and applies them across the rest of the Apple ecosystem. However, that is an oversimplification. The way Liquid Glass behaves is much more like water droplets on a surface. Think of how water refracts the items below it. This will make your device seem more “alive”.

Seeing how Apple is taking this across all devices – watch, TV, Vision Pro, iPhone, iPad, and Mac – this will hopefully make it easier for people to adapt to the new UX.

Xcode enhancements

The second exciting thing from today’s announcements is that Apple is allowing third-party models to be used within Xcode to help with coding. By default you can attach your OpenAI / ChatGPT account; however, they have made it extensible to other models, either on your device or via API keys. This is a huge upgrade, and will allow developers to quickly get up to speed using an AI to simplify some of their coding tasks.

I have been using Visual Studio Code’s integration with ChatGPT, Alex, and Perplexity for some of these tasks, but a first-party integration into Xcode is a game changer. In the past I have quickly run through Alex’s free tier; ChatGPT has been OK but has issues accessing my projects when they are under Git; and Perplexity right now would require me to cut and paste my projects onto their servers.

iOS 26

Screenshot from Vision Pro of an Apple executive standing in front of large iOS icons
iOS Icons in space!

This section of the presentation was more about the UI changes, with major improvements all designed to expand content to the edges of the screen. Time was spent on the new Phone app, where they have redesigned the UI to pull together most of the content on the main screen and expose some content you may not remember is there. Overall, there was the usual large list of changes to apps like Camera, FaceTime, Messages, Music, Maps, Wallet, Gaming, Visual Intelligence, and more.

There were two types of changes: UX changes to take advantage of the new design language, and functionality changes that take advantage of more Apple Intelligence features.

The key message continued to be that developers can use App Intents to be included in all the cool upcoming Apple Intelligence features.

watchOS 26

A few key updates to watchOS:

  1. The new Workout Buddy feature. This will use on-device AI to build motivational commentary while you work out. The coolest part to me was that the voice used in the demo appeared to be based on “Sam”, my favorite Apple Fitness instructor.
  2. The Notes app is now available on the watch. I use Apple Notes a lot, so this is a nice addition for me.
  3. Updates to the Smart Stack. This has been slowly becoming more useful for me.

tvOS 26

tvOS features map floating in Vision Pro view

Not much here other than a new Karaoke mode in the Music app that allows everyone to use their iPhone to sing along. Everything else seems to be UX changes.

macOS 26 Tahoe

macOS Features page floating in Vision Pro screen

The biggest things here, besides UX matching with Liquid Glass:

  1. Ability to color folders and add emoji – yes, this will kill a few utilities I use
  2. The Phone app is now available on the Mac
  3. Continuity – Live Activities are now available on the Mac and will auto-launch the app on the iPhone if you are using mirroring.
  4. Shortcuts – you can now tell a shortcut to use Apple Private Cloud Compute.
  5. Spotlight – I think this version has finally taken me to the point where I don’t need Alfred anymore. I took Alfred off my machine to see how I do this summer.
  6. Gaming – Yes, the new gaming app is here too. I tested it to see if it would find non-App Store apps on my machine. Nope… not yet.

visionOS 26

visionOS Features page floating in Vision Pro view

Other than the UX changes shared across all the other devices, this was the most exciting one for me. It appears to me that visionOS is now at the level of a version 1.0 operating system, and I believe the average person can now find things that might make visionOS a daily driver.

  1. Widgets are finally available. I can’t believe I said “finally” – the OS has not been around that long. I need to check my widgets in Wasted Time this summer.
  2. Support for the Sony VR2 controllers and an upcoming Logitech device for writing in 3D Space.
  3. Personas are so much better. The left one is the new persona, the right one is the original.

iPadOS 26

iPadOS features page floating in visionOS space.

This is probably the most impactful set of updates. The iPad has been a great productivity device for me; I use it to edit my weekly podcast (Games At Work dot Biz), take notes during meetings, follow my social media feeds, read RSS, read books, watch media, and play games. I would love to do more on it. My M4 iPad Pro is just as powerful as my M2 Max MacBook Pro.

The biggest change is that the multitasking subsystem appears to no longer be throttled. As a developer, you can now update your app to process in the background in a manner that makes sense. This means long-running processes can be kicked off, and they will run!

The windowing system is finally basically the same as on a Mac, i.e. fully resizable. The mouse pointer is now a pointer, not a blob, and tiling is much more mature. Oh, and you now have a menu bar at the top of the screen.

You also have improvements in Files. The Preview app is now available on the iPad!

And finally, as a podcaster, you can now do local recording of your audio input, allowing for the double-ender editing that we use for our podcast.

So much more

There was so much more, and I will spend the week going through over 50 of the 100 sessions that Apple released yesterday. Looks like it will be another exciting summer.

Two Weeks In

On May 12th, I started a new day job at Atlassian. Atlassian is THE premier provider of tools for teams. One of the most exciting parts about being a Newlassian is that, from everything I have experienced so far, they are truly practicing what they sell to customers.

The ability for anyone to work anywhere, while still being part of the team, is a key attribute of #TeamAnywhere. They also focus on their employees in a way I have not seen since the very earliest part of my career.

As part of the onboarding I am going through right now (30 days of basic onboarding and another five weeks of onboarding for my specific role), they talk about the company values. While I’ve been at multiple companies in the past that talk about company values, most tend to feel like they were designed either by committee or by a third-party consulting firm, ensuring that they sound pithy but have no teeth. Atlassian’s values don’t seem like this at all.

  1. Open company, no bullshit – This means that the language within the company is direct and open. None of the corporate passive-aggressive crap that has become all too common at other companies.
  2. Build with heart and balance – This to me is the exact opposite of the “move fast and break things” approach. While Atlassian is now 20 years old, they have a thoughtful startup mentality.
  3. Don’t #@!% the customer – While the values page uses #@!%, the company doesn’t fuck around with the term. Going back to #1 above, the language is direct. It must be the Australian mindset. But it is definitely true, in all the meetings I have been in the customer is front and center, and the teams focus on doing the right thing.
  4. Play, as a team – Software is a team sport. Business is a team sport. While people call out the “leaders” or “founders” as the people who are successful, they can’t be successful without a team. And as such, the people I am meeting and working with are really focused on everyone succeeding.
  5. Be the change you seek – And finally, all of the prior values lead to asking how you can make the change you need to be successful.

I am only two weeks in, but I already feel like I made the right choice to pursue working here. Will check in again in the future, as I continue to move forward in this stage of my career.

Have You Considered, LLC

I’ve been working for some time to properly set up a business around my app work, along with other activities I’ve done. To that end, the business has been set up as an LLC.

An LLC is a Limited Liability Company, and this morning I updated the copyright on all of my apps to show that they are part of Have You Considered, LLC.

On-Device Design Failure

Privacy

Privacy is one of the reasons I really like Apple products. Apple tries to keep things happening on device for the sake of privacy, and I try to keep to the same design principle in my apps. I feel it is important that people can trust that you are not doing things with their data. To that end, I’ve been working on a new app that will contain some very personal data. That data should only be available on the device of the user.

Features

One major feature the app MUST have is to trigger an event if the user is no longer able to respond. To achieve this, I planned to send the user a local notification on a periodic schedule of the user’s own choosing. I would then monitor for a response to the notification. The app will trigger sending data to someone if a response is not received within a certain amount of time.

I have built out the scheduling mechanism and added logic for acknowledging that the user has tapped on the notification (a simplified sketch of those pieces is below). Those were the easy parts. Since then I’ve investigated triggering an event from the app when the notifications are sent. This does not seem possible, as the app will most likely be in a background state. At this point, I have to use a remote notification, breaking my rule of on-device-only notifications.
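For reference, this is roughly how the scheduling and acknowledgment pieces work today. It is a simplified sketch of my approach using the UserNotifications framework – the identifier, interval, and storage key are placeholders, and the real app has more bookkeeping around them.

```swift
import UserNotifications

// Simplified sketch of the periodic check-in notification and the response
// handling; identifiers and keys here are placeholders.
final class CheckInScheduler: NSObject, UNUserNotificationCenterDelegate {
    static let shared = CheckInScheduler()

    func scheduleCheckIn(every interval: TimeInterval) async throws {
        let center = UNUserNotificationCenter.current()
        center.delegate = self
        _ = try await center.requestAuthorization(options: [.alert, .sound])

        let content = UNMutableNotificationContent()
        content.title = "Check in"
        content.body = "Tap to confirm you are okay."

        // Repeating time-interval trigger (the interval must be at least 60 seconds).
        let trigger = UNTimeIntervalNotificationTrigger(timeInterval: interval, repeats: true)
        let request = UNNotificationRequest(identifier: "checkin", content: content, trigger: trigger)
        try await center.add(request)
    }

    // Runs only when the user actually taps the notification – the proof of
    // response. If this never fires before the deadline, the escalation would kick in.
    func userNotificationCenter(_ center: UNUserNotificationCenter,
                                didReceive response: UNNotificationResponse) async {
        guard response.notification.request.identifier == "checkin" else { return }
        UserDefaults.standard.set(Date.now, forKey: "lastCheckIn")
    }
}
```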

Challenge

Given that the notification is only there as proof that the user can respond, this is probably not a big security / privacy issue, but I had not wanted to track any data about my users. Moving to remote notifications will require that I register the device and record whether the user responds. It would then be much easier to add additional features that take advantage of having this server-based backend.

Should I do this? Or should I call the app off?

Latest Activities

I’ve been working on a few projects lately, and it has impacted my desire to blog more often. Just a few things I have been working on include:

Building a business – yeah, this is a big deal. I retired from my “day job” back in October, and since then I’ve been working with a startup called TVPCT, Inc. As Vice President – Technical Strategy and Operations, I am helping the team focus on two things:

  1. Improving their technical solution.
  2. Improving their operational posture.

Like many startups, they have been working hard on creating a technical solution that not only meets the needs of the founder but also addresses the many requests from visionary early users. They have an incredible solution that allows SMBs (small and medium businesses) to take advantage of the recent explosion of AI capabilities. I’ve been helping them understand issues like scalability, maintainability, and customer onboarding. It’s an exciting time to be working on these challenges.

Improving my technical credentials – I received my AWS Architect certification and am currently working on the AWS Security certification. These certifications are both technical and challenging, and many of the practices and learnings are applicable to other platforms too.

Releasing new applications – I released a new application back in January (Quick Localizer), which allows for easy localization of applications developed within Xcode. Xcode uses an xcstrings catalog for all the strings in your application, so we can take advantage of Apple’s language support on macOS to translate strings from one language to another. I am currently working on two new applications:

  1. Vinyl tracker – an application that allows you to keep track of the vinyl records in your collection.
  2. Letter tracker – an application that allows you to track physical letters that you send to people.

I am really excited to work on these new applications, as they are giving me an opportunity to really stretch my knowledge of SwiftData and SwiftUI.

Greet Keeper 1.5 Feature Test

One feature I’ve been wanting to add to Greet Keeper is allowing a user to grab the image of the cards they’ve sent from the manufacturer’s site, or, if they sent an e-card, to include an image of the e-card.

To that end, I’ve been toying with adding a new picker for the Add Card gallery image. Here’s a prototype of it in a simple one-screen app. What do you think?