Wasted Time for Apple TV

One of the really cool things about keeping up with Apple’s summer release schedule is that you can add new features to your apps easily. This summer I spent time restructuring Wasted Time to add support for Widgets on iOS, macOS, and iPadOS. After I got that working, I thought, what the hell, why not make an Apple TV version of the app.

So, here you go, my first build of Wasted Time for Apple TV.

I am having a few issues with getting a valid provisioning profile. I have a new Apple TV 4K, and while I can select it in the Devices & Simulators section of Xcode, it won’t show as a valid deployment target. Without that, I cannot get the UDID to finish setting up a Device, which is required to complete the provisioning profile.

More to come, as I figure it out.

Explaining the Statistics in Wasted Time

I’ve been working on adding a set of widgets to Wasted Time for iOS 14. After playing with this program for over a decade, I realized people may not really understand the statistics on the totals page. This may become more confusing once the widget is live.

Let’s take a look at the two sets of statistics on the new widget on iPadOS 14.

The two major items are “All Meetings” and “Lifetime”. “All Meetings” is about the meetings, while “Lifetime” is about you. You can reset “All Meetings”, but you can’t reset “Lifetime”.

So, People in “All Meetings” is the total number of people in your meetings since you last hit “Reset” on the Totals tab. People in “Lifetime” is the number of people in all your meetings, for all time.

Waste is the total waste for all the people. Again, this can be reset in “All Meetings”, but it can’t be reset in “Lifetime”. Cost, however, is the cost of the meetings in “All Meetings”, BUT in “Lifetime”, it is the cost to you for your time.
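To make the behavior concrete, here’s a tiny sketch (illustrative only, not Wasted Time’s actual code) of how the two counters differ: both accumulate together, but only “All Meetings” responds to Reset.

```swift
// Illustrative model of the two counter families:
// "All Meetings" can be reset; "Lifetime" never is.
struct Stats {
    var allMeetingsPeople = 0
    var lifetimePeople = 0

    mutating func record(people: Int) {
        // Every meeting feeds both counters.
        allMeetingsPeople += people
        lifetimePeople += people
    }

    mutating func reset() {
        // Only the resettable counter is cleared.
        allMeetingsPeople = 0
    }
}
```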

For a long time I’ve been thinking thru whether I need to relabel things. What do you think?

WWDC – Day 5

I wanted to start this day early, since I was declined for my second Lab request. I have been working on Shortcuts support in Wasted Time, and while I had a great session on Wednesday, I am still having issues with launching Wasted Time from the Shortcuts app. I am getting a “Sorry there was a problem with the app” error. So I started the day early and went back to a WWDC19 session on Adding Parameters for Shortcuts. While it was helpful, it didn’t solve my problem and I will have to go back to a WWDC18 session on Shortcuts.

My second session was one that should have directly helped me – it was called Decipher and Deal with Common Siri Errors. This two-minute session gave a few tips for debugging Siri errors, and by inference Shortcut errors. The first tip is that in Xcode you can pass a Siri command to the run command. That doesn’t really help me, but there was still hope. The second is that you can look at the Console log and monitor the simulator for error messages. This should be helpful, and I will do it to see if I can figure out which of the common problems is mine. The third suggestion was about using os_log statements to capture the program flow. While I do this with print statements, it seems that os_log may be better. I already had plans to change my prints to os_log statements based on a session earlier in the week.

The next session (number three for those keeping count) was called “Empower Your Intents”. This was a deeper dive into the technology behind intents. The one piece that was helpful to my problem was about adding in-app intent handling. This could help with my issue, as I could at least allow Siri control for adding and removing people from a meeting without major problems. Much of the processing I want to add is more appropriate for running in the app anyway. But to handle in-app intent handling I would need to change the architecture of my app to be a multi-window app. Not sure that is really appropriate for its function. Will have to think this thru.

The next session was about handling background mode for your app. This seemed like it might help me, but turned into a session about optimizing your app. Not a bad session at all, and it explained how iOS manages background execution and prioritization. What was most interesting was how your users’ behavior will change your background processing over time. Additionally, did you know that all apps are set to run in the background by default? I will have to do a better job of going thru the settings on my machine and turning that off for more apps. There are 7 factors that iOS uses to manage background execution priorities, and many of them are under the user’s control.

Next (Session 5) was on designing great widgets. While this was interesting, I didn’t focus on it much, as they spent tons of time explaining the design aesthetics of their own apps. I really should spend more time in the design documentation that Apple provides. Perhaps that would make my app gain more users, perhaps not.

Session 6 – Structure Your App for SwiftUI Previews! Ok, now this session was amazing. I have been enjoying SwiftUI over the last year, but as it was the first release, Xcode has been a bit buggy. This session really showed what it could be, once you restructure how you code in Xcode and with SwiftUI. I’ve underused pinning of views: this is when you lock a specific preview so that, as you move to other source modules, you can see how your changes impact it in real time. Adding multiple preview groups is something I have used, especially in building my new Widget for Wasted Time. The other key thing I need to look at for my app, from this session, was the use of @StateObject – this allows key models in your app to be created only when you first launch the app. The other key insight that came thru is to break up your screens into multiple smaller views. You won’t get penalized for this approach, and it allows you to do interesting things with the newfound modularity of your app.
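The @StateObject and small-views advice can be sketched like this (MeetingModel and the view names are hypothetical, not Wasted Time’s actual code):

```swift
import SwiftUI

// @StateObject creates the model once for the view's lifetime,
// instead of on every re-render like a plain property would.
final class MeetingModel: ObservableObject {
    @Published var attendees = 0
}

struct ContentView: View {
    @StateObject private var model = MeetingModel()

    var body: some View {
        // Breaking the screen into small subviews carries no
        // performance penalty and keeps previews fast.
        AttendeeCounter(model: model)
    }
}

struct AttendeeCounter: View {
    @ObservedObject var model: MeetingModel

    var body: some View {
        Stepper("People: \(model.attendees)", value: $model.attendees)
    }
}
```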

Session 7 was about Building SwiftUI views for Widgets. The speaker did a great job of showing many tricks and tips to build out a caffeine tracking app and widget. I will need to download the code for future reference. To better adopt Apple’s design principles they have added the new ContainerRelativeShape, which will make your views conform to the higher-level view they sit in; you can see it in the Caffeine widget. The other key new feature is relative date/time processing. If you use relative dates in a widget, it will dynamically update to show relative time – in this case, the last time you had coffee. If I were to add tracking of when your last meeting was in Wasted Time, I could show a time tracker. Should I do this?
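A rough sketch of those two features, assuming a hypothetical lastMeeting date for Wasted Time:

```swift
import SwiftUI

// ContainerRelativeShape (iOS 14) matches the widget's rounded
// corners; the .relative text style keeps a timestamp live
// without needing a new timeline entry.
struct LastMeetingView: View {
    let lastMeeting: Date  // hypothetical property

    var body: some View {
        VStack {
            Text("Last meeting")
            // Automatically ticks, e.g. "42 min ago" style counting.
            Text(lastMeeting, style: .relative)
        }
        .padding()
        .background(ContainerRelativeShape().fill(Color.blue.opacity(0.2)))
    }
}
```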

Session 8 – we are getting down to the wire (my plan was 9 sessions a day for a total of 36 sessions). SF Symbols 2 – symbols were introduced last year for iOS and watchOS, and they are now available for macOS Big Sur. These symbols are powerful in how they behave across platforms and sizes. There are now over 750 symbols to help change the iconography of your app. They have also added multi-color capabilities. I updated Wasted Time last year to use this iconography, but will need to double-check how I am doing it, as there are some things that would make it much better looking this year. I’ve wondered if any of my users have even noticed yet. Oh, and by the way, they renamed some of the symbols, so developers do need to deal with deprecated names in their code if they support back levels of the OS. I also never thought about localization of iconography, but good thing Apple has. Icons can be flipped for right-to-left languages, and in some cases they have different glyphs inside to reflect differences in how the script looks.

And what was my last session? Well, Core Data of course! (Seriously, I am still not 100% comfortable with Core Data, and was really looking forward to a total reimagining of it with something like SwiftData (maybe one day).) This session really went deep into how to handle batch processing. As more and more data is being stored online, the Core Data team has been modernizing these processes and making them much more performant for cloud access. I applaud the team for doing this. Perhaps one day I can take advantage of it.

After a few more days I will give a summary of what I thought overall about an online only WWDC.

WWDC – Day 4

Another fun filled day, and now I am starting to get back into my code.

I began the day with a session on the os_log extensions added to Swift. I am thinking I should change my code to start using logging instead of print statements. I do a lot of inline print statements so that when I run the code locally I can see a few key messages. But of course, once the app is deployed, I won’t get access to these. If I change to logging, I will be able to find out more about my app while it is running, and when I get crash logs I should be able to get to more data.
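For reference, a minimal sketch of what that switch might look like, using the new Logger API from iOS 14 / macOS 11 (the subsystem string and the function are hypothetical):

```swift
import os

// A per-feature logger; subsystem/category are made-up examples.
let logger = Logger(subsystem: "com.example.WastedTime", category: "meetings")

func addAttendee(count: Int) {
    // Unlike print(), these messages persist in the unified log
    // and can be retrieved from the device after the fact.
    logger.info("Added attendee, meeting size is now \(count)")
    // Interpolated values are private by default; opt in explicitly.
    logger.debug("Debug detail: \(count, privacy: .public)")
}
```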

Session 2 was actually one I saw on Tuesday, but had not marked as watched. So it was a nice refresher on Scribble.

I was looking forward to Modern Cell Configuration, and while it was a very informative session, having switched over to SwiftUI last year, it held little value for me. It does seem that they are bringing many of the declarative methods from SwiftUI back to UIKit. I have not converted my greeting card app to SwiftUI yet, so perhaps I can take advantage of these new features.

Session 4 was on integrating hardware keyboards in iPad apps. This one is really interesting to me, as I need to add keyboard shortcuts to Wasted Time. It is really starting to come together how I can make the app truly cross-platform with the same code base. The keyboard features will allow me to take my Catalyst app and make it much more like a full-fledged Mac app. If you look at what Apple has done with Messages in the latest Big Sur beta, you will see that they are serious about making iPad and iPhone apps at home on the Mac. So my question here is, should I use the + and – keys for adding and removing people from the meeting, or the up and down arrows? Either way, I could make it so holding down the key keeps adding or removing people.
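Here’s a rough sketch of what those shortcuts could look like with UIKeyCommand (the view controller and the addPerson/removePerson actions are hypothetical stand-ins):

```swift
import UIKit

// Hardware-keyboard shortcuts for adding/removing attendees.
// Key commands declared this way repeat while the key is held.
class MeetingViewController: UIViewController {
    override var keyCommands: [UIKeyCommand]? {
        [
            UIKeyCommand(title: "Add Person", action: #selector(addPerson), input: "+"),
            UIKeyCommand(title: "Remove Person", action: #selector(removePerson), input: "-")
        ]
    }

    @objc func addPerson() { /* increment attendee count */ }
    @objc func removePerson() { /* decrement attendee count */ }
}
```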

Session 5 was about the new Grid and Outline views. I am very interested in the Grid format for my greeting card app. Being able to quickly see a history of all the cards I’ve sent someone over the years will make it much less likely that I’ll accidentally send them the same card. The new LazyHGrid and LazyVGrid seem to bring real performance improvements. Overall, it’s not hard to implement, so I will have to do it when I get back to that app.
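A minimal sketch of what that card-history screen could look like with LazyVGrid (the view and the cards array are hypothetical):

```swift
import SwiftUI

// LazyVGrid (iOS 14) only builds cells as they scroll into view,
// which is where the memory/performance win comes from.
struct CardHistoryView: View {
    let cards: [String]  // hypothetical asset names
    private let columns = [GridItem(.adaptive(minimum: 100))]

    var body: some View {
        ScrollView {
            LazyVGrid(columns: columns, spacing: 12) {
                ForEach(cards, id: \.self) { card in
                    Image(card)
                        .resizable()
                        .aspectRatio(contentMode: .fit)
                }
            }
            .padding()
        }
    }
}
```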

Session 6 is all about the new way you can write a Swift app by just using the @main entry point and the power of the new scene types. What is not quite clear to me yet is how these apps can then be extended with the various features that need to register in the AppDelegate, or enable additional entry points like Shortcuts and Intent handling. I am sure the answer will show itself in some demo code somewhere soon.
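For what it’s worth, iOS 14 does provide @UIApplicationDelegateAdaptor for keeping a traditional AppDelegate alongside the new @main structure; a minimal sketch (app and view names are placeholders):

```swift
import SwiftUI

@main
struct WastedTimeApp: App {
    // Bridges the old delegate world into the new App structure,
    // e.g. for push registration or Intents handling.
    @UIApplicationDelegateAdaptor(AppDelegate.self) var appDelegate

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

class AppDelegate: NSObject, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]? = nil) -> Bool {
        // Register for notifications, set up Intents, etc.
        true
    }
}

struct ContentView: View {
    var body: some View { Text("Hello") }
}
```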

Session 7 was about push notifications. I have no need for push notifications, so this was more informational. I enjoyed learning how this all gets set up, but in all honesty there was nothing that didn’t just make sense.

Session 8 – Sync a Core Data Store with the CloudKit Public Database. I misread this session, but in the end it was really cool. I had figured it would take me thru how to sync my Core Data store via CloudKit to other devices. I guess I really should pay attention to the full title. The Public Database is just that: Apple’s CloudKit data store for public access to data. Think of a leaderboard in an app. It gives multiple people the ability to replicate data locally, with appropriate access to the data that is common across teams of people. A key question is how you handle a delete when multiple people may have the data locally replicated. The discussion on that part was well described and informative.

Session 9 – the final session of the day for me – was on building document-based apps in SwiftUI. This really is a great example of bringing together many of the things about the new Swift app structure, and SwiftUI in general, to show how easy it is to create a document-based app. Right out of the box you can create a simple text editor with the template. Updating it to handle different file types, etc., is what this session shows you. All in about 15 minutes. Pretty cool!
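The template boils down to a FileDocument plus a DocumentGroup scene; a condensed sketch along the lines of what the session builds:

```swift
import SwiftUI
import UniformTypeIdentifiers

// A minimal plain-text document type.
struct TextFile: FileDocument {
    static var readableContentTypes: [UTType] { [.plainText] }
    var text = ""

    init(text: String = "") { self.text = text }

    init(configuration: ReadConfiguration) throws {
        let data = configuration.file.regularFileContents ?? Data()
        text = String(decoding: data, as: UTF8.self)
    }

    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        FileWrapper(regularFileWithContents: Data(text.utf8))
    }
}

@main
struct EditorApp: App {
    var body: some Scene {
        // DocumentGroup provides open/save/browse UI for free.
        DocumentGroup(newDocument: TextFile()) { file in
            TextEditor(text: file.$document.text)
        }
    }
}
```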

WWDC Day 3

Day three was loads of fun, the highlight of which was the Code-Along for WidgetKit. I also had my lab appointment on the problems I was having with SiriIntents and Shortcuts. While I was able to get my Intents to work, we ran out of time for the Shortcut problem. But I do believe I have a better understanding of what I was doing wrong, AND I raised my third bug report for this cycle. So overall a win.

The sessions I did were all about getting better at the new stuff. First I watched the Design for iPad session. The key aspect was learning about sidebars versus the tab bar. For now, I don’t think Wasted Time needs to worry about this.

Then on to building complications in SwiftUI. They spent a lot of time explaining how the new watch face tinting works. I assume this is because you want your new complications to feel as if they belong to the watch face they are on (a user can combine multiple complications from the same app into a customized watch face and share it with others). A really nice feature of Xcode is that by using SwiftUI you can use the live preview function and see how your complication looks on multiple watch faces, with multiple tints, all at the same time.

Session 3 was Visually Editing SwiftUI Views. This very quick tutorial was jam-packed with ways to improve your productivity in building new views. I learned a lot!

Session 4, on Creating Complications for Apple Watch, got into the guts of the various providers you can use in complications. It also explained how the CLKDefaultComplicationHandler is necessary if you want users to be able to share your complications with others – the whole idea being that your users can be a great marketing arm for distributing your app to more people.

Session 5 – What’s new in WatchOS design. This session was more about explaining the new design principles of WatchOS. Apple really wants you to use their design principles so that you can be more forward compatible. They made the point multiple times to stop using gesture-based menus; it sounds like these will be deprecated soon.

Session 6 – Secure Your App: Threat Modeling and Anti-Patterns. I have to confess, I am a security buff. I am fascinated by all things related to computer security, and getting a perspective on how Apple thinks you should test your app was cool. The basic premise was simple: understand your app’s assets and those who wish to get inappropriate access to those assets. This model is then used in a series of examples of how the coding practices many of us have developed may be inadvertently exposing those assets to bad actors. If you want to learn more about securing your apps, I highly recommend watching this one.

Session 7 – Explore the New System Architecture of Apple Silicon Macs. Another must-watch session. Apple is making major architectural changes with the new Macs and you should understand them. At one level you can just assume all the good and bad parts of the iPhone and iPad are coming to the Mac, but you’d be wrong. They understand the importance of developers having much deeper access to the hardware, and they will allow it… to a point. They do quickly flash a sudo command on the screen to turn off SIP. Got to do a screen grab of that.

Session 8-10 were the Widgets Code-Along. I went thru these a few times to better understand Widgets. I think I will create a Wasted Time widget this summer to show the status of your meeting history.

Well, today promises to be another jam-packed day. Wish Apple would drop the videos earlier in the day, but what can you do?

WWDC 2020 – Day 2

Wow! Made it thru 9 sessions today, and have 2 more that I wanted to do, but I am bushed. I will try and post a quick summary of the sessions in the morning on this same blog post… More to come…

As promised, here’s what I’ve learned on Day 2 (or at least the sessions that caught my fancy).

Session 1 – Meet Scribble for the iPad.  

I did this one first, because it was the first one that downloaded for me. My goal was to start downloading everything I was going to watch for the day and then start going thru them. Even though the videos “dropped” at 1pm ET, there was a delay before they showed up in the Developer app – not a surprise, as Apple had to coordinate the distribution of all these videos to various content distribution centers. It was certainly worth the wait.

The simple aspect of Scribble is that any “standard” field that supports text will handle Scribble by default. This is as expected. I immediately tested with Wasted Time on the iPad and “it just worked”. There were only a few APIs that the session covered, mainly to allow custom controls to recognize Scribble.

If you want to allow the user to add a new element in a list (think the Reminders app), then you need to use UIIndirectScribbleInteraction. Think of this as recognizing that the user is in your list, but in an area that doesn’t “yet” have content.  Overall, Scribble seems pretty clean, and simple.

Session 2 – What’s new in SwiftUI 

A few new areas of change.

As you can see from the list above, there are a lot of new things in SwiftUI. This is great to hear! I rewrote my app last year to use SwiftUI, and while it certainly improved a lot of my code, there are things that still need to be worked on.

The most exciting thing to me was the change to allow @main as the entry point for your app. This means that a lot of the upfront code that was originally developed in your AppDelegate, SceneDelegate, and View can now be simplified.

Lazy loading for Stacks means that you can greatly improve the performance and memory usage of your app.

Session 3 – Port your Mac App to Apple Silicon

Only three simple steps.

This session was one of the longest sessions I watched. It was, however, well done, pointing out those areas that your app may have problems during the transition. I won’t go thru the details, but I have signed up to work on the transition with Wasted Time. I am expecting it to be painless, as I have kept my code up to date with Apple’s recommended changes year after year.

Session 4 – What’s New in Swift

Another jam-packed session.

This session really focused on how Swift 5.3 is now a first-class language, being available on multiple platforms, building out a thriving ecosystem of open source projects, etc. If you want all the details, I suggest you go to Swift.org, where they include a great blog on their enablement of Swift on AWS Lambda, along with updates on how you can contribute to the evolution of the Swift language.

Session 5 – What’s New in Mac Catalyst

Another first class citizen

As you can see from the list above, Mac Catalyst now supports more and more of the iOS frameworks. This aligns really well with the transition to Apple Silicon. A key item: for those services that don’t work on the Mac – think ARKit – instead of requiring #if directives and compiling differently based on target, you can now write your code like you would on iOS and check for feature availability on the device. Nice!

Session 6 – Meet WidgetKit

You can tell the sessions that keep me way too focused – I only get a screen capture of the title page.

Widgets really take advantage of the Intents that I’ve been working on in Wasted Time. If you have used the Siri face on your Watch, and seen how it exposes information that should be right, and right on time, that is the logic behind how WidgetKit will expose information in Widgets.

Keeping in the overall theme so far, these Widgets are written in SwiftUI and are exposed across all platforms! Can’t wait to see if I can get Wasted Time to automatically be exposed when meetings start. We shall see.

Session 7 – What’s New in SiriKit and Shortcuts

This picture really explains what’s going on. Siri and Shortcuts are becoming much more integrated into the system and will be much less intrusive. Siri will now use the shortcut intent UI (as you see on the right) to expose information. And Shortcuts can now run on top of other apps, instead of taking over the whole device.

Session 8 – Lists in UICollectionsView

It was getting late in the day when I viewed this one. I am going to have to go back and re-view this session, as the speaker was doing a very detailed analysis of what is new. I am hoping that my greeting card app can take advantage of this session.

Session 9 – iPhone and iPad apps on Apple Silicon

Another late in the day session. The good news is – “it should just work”. But the key answer is … compile and test! Good advice.

WWDC 20 – Day 1 – Platform State of the Unions

The one thing that you don’t see when you are at home following WWDC is the “Platform State of the Unions”. This year it is streaming via the Developer app and everyone can watch it. It will be interesting to see how non-developers react to the more detailed technology discussions.

One of the cool things that happens at WWDC is the streaming video playing on the large screens when you come into each session. We now have that with the streams… I’m digging it.

Lots of demos coming up: iOS, iPadOS, WatchOS, tvOS, and macOS. From a developer perspective I am really excited to learn more about Xcode 12, SwiftUI, and App Extensions. So now people can change their email and web browser defaults. Allowing others to use the Find My network means that the challenges to Apple Tags must have been real. Go to developer.apple.com/find-my.

Demo of Big Sur on a new Apple Silicon Mac, showing the details from the keynote video. The key thing about this demo is that it shows things working, but as anyone who does demos knows, there’s always the “Happy Path”. So hopefully my code will run on the test hardware. Speaking of which, I am thinking about applying for this.

Key idea behind building the chips is to customize the chip to the task at hand. The architecture of the design was done to give them the ability to do this, and this same architecture will be used to build Apple Silicon for the Mac. This will be a family of SoCs unique for the Mac. This should drive improved performance.

Balancing energy efficiency and performance is a key challenge. Years ago one of my mentors showed me a heat sink for a leading-edge performance chip from the late 90s – it was almost the size of a cinderblock. So getting that balance right is always a challenge. Personally, I think Apple has done a great job with this over the years.

The new Mac will have a dedicated inference chip – meaning that AI will get much better. Additionally, dedicated caching for cloud content, improved image processing, and other improvements to GPU performance will change workflows on the Mac. If Apple lives up to the potential, things will certainly accelerate based on the architecture.

The Quick Start Program is used to get started on the transition. The DTK will have the OS and Xcode. Some software features will not be supported; register under the Universal App Quick Start Program. The new universal apps will have two versions of the code in the same bundle. This will allow the end user to not notice any differences, regardless of which platform they are on.

Testing of open source projects has already begun after Apple ported many of them to Apple Silicon. This includes Python 3 and many others. Apple has also been working with Unity, and demonstrated a version that can target both Intel and Apple Silicon at the same time. Next month Unity will release a preview version of this system, and later this year they will release it for production.

Rosetta 2 will be released with the new macOS Big Sur and includes the ability to run Intel apps. Most apps will be converted during installation. During the example of how this works, they went to pains to show that things would be “the same” for developers.

They showed Affinity Photo as the image-editor app to demonstrate that the Rosetta 2 version didn’t add any overhead. The overall approach here reminds me a lot of Wine, which swaps out Windows calls for Mac calls. I wonder if the Apple development teams looked at this model.

A key message was that if developers switch to DriverKit and ExtensionKit, their device drivers will be supported on Apple Silicon. Sounds like these are going to be critical for the future.

Another demo showed Debian running under a ported version of Parallels on Apple Silicon. This allowed them to run an Apache server to demo cross-platform development environments. Apple is working with Docker to make sure it runs on Apple Silicon environments.

Catalyst is the underlying process that allows iPhone and iPad apps to run on the new Apple Silicon devices. As a developer, you can choose whether you want your apps to run on the Mac. iOS apps will get some default menu options, etc. There are a couple of tricks that Apple plays: 1) they add a new wrapper for the app package, so that the user can rename the app, etc.; 2) the app runs in another wrapper that sanitizes the path information. iOS app extensions work “where appropriate” – for example, an iOS Photos extension will show up in the Mac Photos app.

macOS Big Sur – this is a huge software release.

The new design language will be provided as templates for developers. If you provide an image and text, you get the new document icon template; Apple will generate it for you automatically. I think I am liking the new layouts and design language, but I am sure that my mother-in-law and parents may have trouble with the auto-hiding information. There are some new enhancements for SF Symbols that I am planning on watching later this week. Apple is putting color back into many of the menus, etc.

Mac Catalyst – Apple is updating it for their own apps, so we should see new features in catalyst.

The new Mac Idiom will allow Catalyst apps to address screen views and sizes that are more like Mac Apps. A great example of this is the new version of Messages.

It is such a change that Apple is giving Big Sur a new number.

Mail showing off the new Sidebar API

Now that iPadOS has keyboard and trackpad support, there are new APIs for sidebars, so you can slide over additional content. This allows for simplified navigation in your app. New pickers have been exposed: calendar, color, emoji, and more.

Action sheets have been moved over to the new menus; this will allow for more commonality between the iPad and the Mac.

Hardware extensions on the iPad allow for more experiences for users. We then learned more about LiDAR. Scene geometry allows for building a 3D mesh of the room. The depth map uses the LiDAR to get much more detailed room measurements, and the positioning of virtual objects will better handle people walking in front of or behind things. Apple Pencil is really expanding capabilities with Scribble. Your apps should get these features by default: text fields automatically convert scribble to text. If you want to add additional features in your app, you will need to enable PencilKit. Drawing with Pencil uses a new canvas view. You can now get stroke data as the pencil moves across the screen, and you can tell when the pencil is on the screen versus when a finger is.
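A minimal sketch of setting up that canvas with PencilKit (the controller name is hypothetical); the drawing policy is what distinguishes pencil input from finger input:

```swift
import PencilKit
import UIKit

// PKCanvasView captures stroke data live as the pencil moves.
class DrawingViewController: UIViewController {
    let canvas = PKCanvasView()

    override func viewDidLoad() {
        super.viewDidLoad()
        canvas.frame = view.bounds
        // .pencilOnly ignores finger touches for drawing,
        // so fingers can still scroll/pan instead.
        canvas.drawingPolicy = .pencilOnly
        view.addSubview(canvas)
    }
}
```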

iOS 14 – Widgets and App Clips are the key areas of focus.

There are only three sizes of the new Widgets. They are available in the “Today” view, but you can also add them to the home screen. They are written in SwiftUI, which makes it easier to share them across iOS, iPadOS, WatchOS, and macOS. You can archive the view of the widget, which allows for better efficiency in drawing. Users can stack widgets so that you don’t lose screen space; similar to the Siri face on the Watch, stacks will put the “most important” widget on top of each stack at the right time. This is based on a timeline that you define for your widget. Sounds like Emoji Rangers may be the sample code for understanding widgets. WidgetKit is using the Intents framework. (I’ve been working on adding Siri Intents in my app, so this certainly seems like something I should get working.)
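The timeline idea can be sketched with WidgetKit’s TimelineProvider (the entry type and numbers here are made up for illustration):

```swift
import WidgetKit

// Each entry is dated; the system shows the right one at the right time.
struct MeetingEntry: TimelineEntry {
    let date: Date
    let wastedMinutes: Int
}

struct MeetingProvider: TimelineProvider {
    func placeholder(in context: Context) -> MeetingEntry {
        MeetingEntry(date: Date(), wastedMinutes: 0)
    }

    func getSnapshot(in context: Context, completion: @escaping (MeetingEntry) -> Void) {
        completion(MeetingEntry(date: Date(), wastedMinutes: 42))
    }

    func getTimeline(in context: Context, completion: @escaping (Timeline<MeetingEntry>) -> Void) {
        // One entry per hour; .atEnd asks for a refresh afterwards.
        let entries = (0..<4).map { offset in
            MeetingEntry(date: Date().addingTimeInterval(Double(offset) * 3600),
                         wastedMinutes: offset * 15)
        }
        completion(Timeline(entries: entries, policy: .atEnd))
    }
}
```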

App Clips are all about getting access to things quickly. The pop-up is autogenerated by Apple based on your metadata. If the user sees the popup, Apple will download the app in the background. You can also set up “offers” to encourage users to upgrade to the full version of your app. App Clips get an eight-hour notification permission; users don’t get the permission prompt, and after 8 hours of the App Clip being on the device, it automatically loses notification permission.

In App Store Connect you update your privacy information. Very simple.

Apple Watch now allows SwiftUI-based complications. By setting up different identifiers for your complications, you can provide a user with multiple complications from your app. Xcode 12 now supports complication previews in SwiftUI previews. Very cool! When you share a watch face, it comes across as a text message, and if you don’t have all the apps on your watch, it will automatically download them and set the face as the watch face on your watch.

And finally – the Xcode 12 review. Document tabs are the new way to organize your workspace: double-click on a document and it will show as a new tab. You can also show things like your Git commit log. New SwiftUI templates allow you to set things up easily. Test coverage can also include UI responsiveness, so that you can work on performance improvements. Also added is StoreKit as a testing environment that does not require deployment of the app.

Swift Packages now include the ability to have assets, controls, and previews. This should increase the ability to share code and features with others.

SwiftUI updates prioritize source stability, so all the new features are additive and existing code does not break (YAY!!). The key is things like adding LazyVStack and LazyVGrid so that you can reduce memory usage. There is now switch and if-let logic in SwiftUI. Apple added a bunch of new SwiftUI versions of existing kits, like Maps, etc.
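The new switch support in view builders looks roughly like this (the LoadState enum is a made-up example):

```swift
import SwiftUI

// A hypothetical loading state for some screen.
enum LoadState {
    case loading
    case loaded(String)
    case failed
}

struct StatusView: View {
    let state: LoadState

    var body: some View {
        // switch inside a ViewBuilder is new in iOS 14 / Swift 5.3.
        switch state {
        case .loading:
            ProgressView()
        case .loaded(let message):
            Text(message)
        case .failed:
            Text("Something went wrong")
        }
    }
}
```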

You can now define your app’s structure via SwiftUI – for example, the new TextEditor app. By defining the app structure in your @main section you get appropriate platform-specific behavior. This should allow for native experiences across platforms.

So much to learn!

WWDC 2020 – Day 1

7:24am WWDC-time



After being able to make it to WWDC for two years in a row, it really feels strange getting ready for WWDC while sitting at home. At least I am on vacation from the day job for the week, so I am letting the excitement build, while social distancing. I am wearing my favorite shirt from last year’s WWDC, the mind blown dragon!



I am really looking forward to getting my mind blown again, with hopefully enhancements to SwiftUI, Catalyst, and Combine. These three big changes from last year’s WWDC have all been a bit rough around the edges, but exciting nonetheless. I will check in later on this post as we all get closer to the time!

=== 30 minutes later ===

So last year my Mac went belly up when I tried to install the latest macOS, so this year I created a new partition, installed the latest GA version of Catalina, and put up a giant background to warn me that this is the beta environment. The second thing I will do is make sure I don’t turn on iCloud Drive. For the last three years, there have been synchronization issues with iCloud Drive with each version of macOS. So hopefully, this year, I will be safe.



The keynote is about to start.

Really enjoying the ambient music that is playing along with the keynote start video. If you look really closely you will see that all of the dots are actually Memoji from around the world. I wonder if they are doing a real-time query, or are they showing the last time people checked in. If they would zoom out, it would be easier to make out each country.

And my first streaming hiccup happened right at the beginning. But glad to see Tim Cook start the keynote with #BlackLivesMatter and their commitment to racial justice. And a great shout-out to Covid-19 heroes. They are also running all the stream support out of Apple Park.

Over to Craig!

iOS 14. Rethinking the iconic elements – starting with the home screen. Yup… a customized home screen right off the bat.

New home screen – the App Library

It starts by auto-sorting all your content into default folders – and notice it seems like folders within folders. Jiggle mode lets you tap and hide pages you no longer need. You can search apps, listed alphabetically. Most-used apps are at the top level of the App Library screen.

Widgets on the home screen.

Next – Widgets. Reimagined widgets with different sizes, and you can take them out of the widget view onto the home screen. The widget gallery lets you see the different sizes available.

Picture in picture on the iPhone! This is really starting to be cool… I am sure my Android friends will talk about how some of these features have been there before.

Siri updates now – the focus is on improving the UI: a compact design that lets you keep seeing what you’re working on while you talk to Siri.

Siri handles over 25 billion requests per month. Siri is being expanded with Translate for private conversations, which will work completely offline – keeping your data private. It supports 11 languages.

Messages has seen a 40% increase in use in the last year, with 2x the usage in groups. So they are updating group conversations and letting you pin your most important conversations. Memoji is being updated to add even more styles, including face coverings. Inline replies with threading will make it easier to keep track of the conversation… and you are only notified when you are mentioned. Nice to see some of these updates reflecting things I’ve been using in Slack for years.

Maps are being updated – the new maps finished rolling out earlier this year, and the changes have earned praise from Fast Company on UI and privacy. They are adding new countries this year, along with a new feature called Guides, and features for greener options, beginning with cycling. Nice touch: it shows when you have to carry your bike or climb a steep hill. If you have an electric car, they are adding EV Routing to help remove range anxiety. There are additional green features for China, to track when you can drive your car within certain congested areas.

CarPlay is now available in 97% of all new cars in the US. They are adding a bunch of new app types for the car. And the rumors are true – digital car keys in the iPhone. The first car to support it is a 2020 BMW. Place the phone on the Qi charger to start your car. You can share your key (digitally) with specific driving profiles, and this feature is also being enabled early, in iOS 13. To expand usage, Apple is working with the CarKeyConsortium. New cars will support it in 2021.

App Store – extending support to enable “App Clips”: get the app right when you need it. Looks like it uses NFC to get the data, and then the app has to support it. You can use Apple Pay for payments so you don’t need to give them your credit card information, and if you use Sign in with Apple, you don’t need to provide other information either. Very cool privacy focus!

Next up – iPadOS!

Now that iPadOS is separate, they can focus on its features separately. New design elements for iPadOS. There are now over 1M apps on the App Store designed just for iPad. And of course, we have redesigned widgets.

Photos is being updated with a new sidebar, which makes it behave a bit more like Photos on the Mac. And that is being added across many of the iPad apps – Photos, Notes, Files, Music, etc.

Yay!! Finally, sort-order changes in Files. And a fullscreen player in Music.

Siri – all the compact views make it better here too, putting a small notification on top of other apps. This feature will come to iOS too. Of course Search is also updated, and now matches the macOS search feature. It will be universal across all of iPadOS.

Apple Pencil gets its own section. Scribble automatically converts handwriting into text, and recognizes multiple languages in the same line. You can copy handwriting and paste it as text.

And another new section on AirPods – automatic switching between devices: Mac, iPad, iPhone, etc. Also adding spatial audio to the AirPods Pro, tracking your head so the audio sound field doesn’t shift when you move – matching motion data between head, screen, and AirPods Pro. That’s a hell of a lot of calculations.

WatchOS.

Now enabling multiple complications from the same app. And, as usual, updated watch faces, including an improved method for searching for complications. You can also share watch faces – Face Sharing! (What a name – didn’t we see this in Agents of Shield this season?) Developers can also provide pre-configured watch faces, or even share them across social media.

Adding cycling directions to maps on the watch.

The Workout app is one of the most used apps on the watch. They are adding Dance as a new workout, using sensor fusion to figure out the differences between arms-only, arms-and-legs, and full-body dancing. The app on the iPhone has been renamed “Fitness”.

A few new watch features around Sleep: Wind Down helps you get to bed on time, by adding small shortcuts and reminders to get you ready for bed. The screen is off while you sleep; tap for a very simple watch face. Sleep is also available on the iPhone.

Handwashing detection: the watch automatically detects when you are washing your hands, and adds features to help you wash long enough.

A whole new section in the keynote on Privacy!

  1. Data minimization
  2. On-device intelligence
  3. Security protections
  4. Transparency and control

These are the principles Apple uses in developing its devices, services, and software. There are over 200 million users of Sign in with Apple. Ever wish you could convert your existing accounts to Sign in with Apple? Developers can now offer this.

You can change your location to approximate, rather than precise. They are also adding indicators to let you know if an app is using the microphone or camera. Apps must ask if they will be tracking you. App privacy practices will be exposed like a nutrition label, and the information will be included in all of the app stores. Really happy to see this.

In the home, all of the new features have the following requirements:

  1. Ease of use
  2. Privacy
  3. Better together.

HomeKit – partnering with Amazon, Google, and others to expand the ecosystem. Adding automations to quickly set up new HomeKit equipment. The Home app is expanded to give you a quick view of what is going on. Lighting gains adaptive lighting. HomeKit Secure Video will now enable activity zones, so you don’t get a notification when people walk by, but do when they come to your door. You can also add face recognition of who is at your door.

Apple TV – expanding multi-user support for games, so you can swap out players. tvOS 14 will add picture in picture across all of your services, so you can watch the news while doing your workout. On Apple TV+ they are now working on Isaac Asimov’s Foundation. I have to confess I have never read those books, but the trailer looks pretty cool.

MacOS – Big Sur

An entirely new design. This always worries me… but let’s see the video. First, design elements have been cleaned up and made comparable to what you’ve seen on the iPad. Things appear and disappear so you only see them when you “need” them. Of course the icons are different, but yet the same. Nice to see Craig is using a Mac Pro. Looking at the design, it really does have the iPadOS look.

iWork has been updated. Funny, I haven’t heard that name in a long time. Many of the controls have been reworked to expose features. And Control Center is now on the Mac. Notifications are now exposed by clicking on the time, and they are grouped like on the iPhone. Widgets are exposed in the control center, and developers can bring their own to the Mac too.

Messages on the Mac – search has been added to help you find things. You can create Memoji on the Mac, use effects, and pin conversations. Looks like they have finally brought Messages up to parity with the iPhone.

Apple Maps – another example of making it comparable to the iPhone, like: ETA, indoor mapping, and favorites.

Mac Catalyst updates – apps can fully utilize the native resolution of the full screen. Controls have been updated. Maps and Messages were created with Catalyst.

Safari is being updated to be 50% faster than Chrome on average. But the big thing is the privacy updates: a privacy report toolbar, plus password monitoring to make sure your passwords have not been compromised. Extension updates make it easier for developers to support Safari, and the privacy controls let you restrict extensions to specific pages, allow them to run only one time, etc.

You can now build your own customized start page for Safari. For privacy on the web, you can see the intelligent tracking prevention and improved information. This is something you can do today with add-ons like the EFF’s Privacy Badger.

Tab hovering lets you see previews. And you have built-in translation of websites on the fly. And that’s macOS!

And now we have the ARM transition, using their own Apple Silicon. The video trip to the “undisclosed location” was too funny.

Nice to see racks and racks of Mac Pros in the “undisclosed” lab. Each of their devices – iPhone, iPad, and Apple Watch – is considered “best in class” because of Apple silicon. A key thing to consider is power: if Apple can get to the upper-left corner of the performance-per-watt chart, the Mac would truly be innovative.

By building a custom system on a chip (SoC), they can optimize it for their major features.

And now they can enable developers to quickly develop across all of these platforms. Many of the foundations have already been enabled in Big Sur. All of Apple’s apps are enabled as “native” apps. When you recompile, you can flag the build as a “Universal 2” app. Microsoft and Adobe have already started this work.

All the earlier Big Sur features were demoed on the Apple SoC demo system. Craig then showed MS Office, Lightroom, and Photoshop running on this machine. The Photoshop demo was a 5GB image with many layers.

And then 4K video in Final Cut Pro with color correction, animated titles, and lens flare, all in real time. To show off the A12Z processor, they had three full-resolution Apple ProRes videos playing back at the same time.

For the transition, they are doing a Rosetta-like feature called Rosetta 2 that automatically translates non-native apps. Translation happens at install time for most apps, but also just in time for things like websites.

Virtualization options will be enabled so you can run Linux and other environments. They demoed Maya and a Tomb Raider game to show how well a translated app can run, and also Parallels running a server. And the one more thing: iPhone and iPad apps will run natively on the new Macs.

Quick Start Program – allows developers to apply for DTKs (Developer Transition Kits) to get started with the new hardware. You must apply, and units will be shipped this week!

The timeline for the transition: the first customer systems by the end of this year, with the full transition complete in two years. BUT they will still support Intel Macs for the foreseeable future. Developer betas are available today, and public betas will be available in July.

Time to code!!!!

iPad only Podcast Editing

This week, I finally did an iPad-only edit of my podcast, GamesAtWork.Biz. I’ve been wanting to see if I could pull it off, and after last week’s edit (which was only partially done on the iPad), I took the plunge. Thank you to the team over at Ferrite for an amazing product, and to Jason Snell over at SixColors for showing his process.

Before going through the process, let me describe my prior workflow. I use Logic Pro X from Apple, Loopback from Rogue Amoeba, and Audacity to put the edit together, and I ask my co-host and guests to use a local recording solution like Audio Hijack from Rogue Amoeba. I record myself and the remote participants in Logic Pro X, using Loopback so that my audio is one track and all others are a second track (without picking up any system sounds), while the remote participants place their audio in Dropbox. Their audio has two channels: the left is only their voice, and the right is everyone else. Once I get their files, I use Audacity to split each file into two mono tracks, level the sound, and keep only their independent track.

After putting all the audio in Logic Pro, I then trim the tracks, cut out any major screw ups we may have had, do a final leveling and compression before exporting the final edit to Forecast – in order to add the cover art, and compress the file for final uploading to my website. On a good day without any major problems, I can pull together the final edit in about 2x the length of the podcast.

Overall it’s not too bad a workflow, but getting rid of background noise can be problematic, and not being an expert at Logic Pro X means I am still having issues with filters, etc. The biggest one: when I try to use remove silence, I get too many clipped words or other audio artifacts. So I decided to try Ferrite from beginning to end.

The first thing is to make sure everyone uploads their captured audio to the same Dropbox folder. I changed my local process so that I am also capturing my sound directly to that folder.

Captured files for Episode 275

I have created a template in Ferrite that has the show intro and outro music and titles. This template has appropriate ducking for the various voice overs, but does not include any tracks for the speakers. We’ll come to that later.

Games At Work dot Biz Template

Now I have to import the audio and split the tracks so I get only the individual speaker’s track. Ferrite makes this pretty easy: I tap import audio, go to my Dropbox folder in the Files app, and select the individual track.

Import files

This will create four new files in my main view. I select each file individually, open the share sheet, and choose “Convert to Mono” with the “Split to Separate Tracks” option.

Convert to Mono

This creates two new files in Ferrite with the original name of the file plus either “- Left” or “- Right”. I can now just delete the ones named “- Right”, as they are not needed.

Left and Right Files
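Ferrite does all of this with a couple of taps, but the underlying operation is simple. For the curious, here is a rough Python sketch of what “keep only the left channel” amounts to for a 16-bit stereo WAV — the function name and paths are my own illustration, not anything Ferrite exposes:

```python
import wave

def extract_left_channel(src_path, dst_path):
    """Copy only the left channel of a 16-bit stereo WAV into a mono WAV."""
    with wave.open(src_path, "rb") as src:
        assert src.getnchannels() == 2, "expected a stereo file"
        assert src.getsampwidth() == 2, "expected 16-bit samples"
        framerate = src.getframerate()
        frames = src.readframes(src.getnframes())

    # Stereo frames are interleaved [L0, R0, L1, R1, ...], 2 bytes per
    # sample, so each 4-byte frame starts with the left-channel sample.
    left = bytearray()
    for i in range(0, len(frames), 4):
        left += frames[i:i + 2]

    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(1)
        dst.setsampwidth(2)
        dst.setframerate(framerate)
        dst.writeframes(bytes(left))
```

Deleting the “- Right” files in Ferrite is the equivalent of simply never writing the right-channel bytes here.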

I create a new project using the above template, choose the episode information from the pick list (year and air date), and then update the various metadata, including the artwork, show title, and tags. At this point I have a base into which I can add the individual tracks to finalize the podcast.

After adding in all the files, aligning the beginnings, and removing any major problem sections as agreed with my co-hosts, I can select each speaker’s track and choose Strip Silence.
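This is also, I suspect, where my clipped-word problem in Logic came from: strip-silence is basically an amplitude threshold plus some padding around each loud region, and too little padding chops the soft edges of words. A minimal sketch of the idea — the threshold and padding values are invented, and I have no idea what Ferrite or Logic actually use internally:

```python
def strip_silence(samples, threshold=500, pad=2205):
    """Return (start, end) sample spans that contain audio above `threshold`,
    widened by `pad` samples on each side so word edges are not clipped."""
    spans = []
    start = None
    for i, s in enumerate(samples):
        if abs(s) >= threshold and start is None:
            start = i                      # a loud region begins
        elif abs(s) < threshold and start is not None:
            spans.append((max(0, start - pad), min(len(samples), i + pad)))
            start = None                   # the loud region ended
    if start is not None:
        spans.append((max(0, start - pad), len(samples)))

    # Padding can make neighboring spans overlap; merge them.
    merged = []
    for s, e in spans:
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged
```

With `pad` set too low, the spans hug only the loud samples and quiet word onsets get cut — which matches the artifacts I was hearing in Logic.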

When I am done with all the rest of the edits, I go back to the main menu, open the share sheet again, and select “Save to Dropbox”. This creates the final edit, with the show art, base tags, and the right file name. The only step left is to upload it to the website so my co-host Michael Martine can do the show notes.

Export to Dropbox

Simples!