WWDC Day 1 – Awesome Woes

WWDC is designed for developers, and this year’s keynote didn’t miss. I arrived shortly after 6 am and ended up pretty far back in the queue, but as always, Apple does a great job of moving everyone through. The four-hour wait didn’t seem that long, and the discussions in the queue were fun and enlightening.

I ended up in line with people from China, Romania, the UK, and other places. Like last year, most of them were developers from big companies. The jackets from Sunday can be turned inside out, with the inner colors assigned at random. I have seen red, blue, orange, and black. I hope there are at least two other colors – shout out to Six Colors!

 

Mine is blue, and even though it is pretty cool temperature-wise in San Jose, the material is dense enough that I didn’t wear it all day. I hope I get a few days at home cool enough to wear it.

We got into the building around 7:30 and had a large breakfast. Here I was talking to developers from Japan, the UK, Jamaica, and the US. One guy was a Facebook developer, and we discussed why I removed Facebook support from one of my apps: the level of data I was getting back in the analytics was way, way too much.

After making it in and sitting through an amazing keynote – I was far enough in the back to be able to grab a bio break – this next picture shows you a view you don’t see in the keynote stream! TONS of developers enjoying themselves.

I was able to get a seat on an aisle, so that made it easier to get up and around if needed.

So let’s get to what I was most excited about. First and foremost, WOW, the Mac Pro is amazing, the monitor is crazy beautiful, and in true Apple form, I won’t be buying this one. It is designed for true pros, and I don’t need all of this, nor can I afford it.

Catalyst (aka Marzipan) looks really incredible. The biggest thing behind it is the same thing that is making tons of other things possible – SwiftUI. This is going to make it possible to build really dynamic UIs in a way that is largely device independent. Now let’s see if my vision comes true. I plan on rebuilding my Wasted Time app with SwiftUI and Catalyst, and the new app I’ve been working on will be built with SwiftUI too. So excited.
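To give a sense of why SwiftUI feels so device independent, here is a minimal sketch of the kind of declarative view a Wasted Time rebuild might start from. The view name and data are my own hypothetical illustration, not the actual app – but the same definition would run on iPhone, iPad, and (via Catalyst) the Mac.

```swift
import SwiftUI

// A hypothetical summary view – the names and data are illustrative only.
struct MeetingSummaryView: View {
    let meetingCount: Int
    let wastedHours: Double

    var body: some View {
        // One declarative layout, reused across iOS, iPadOS, and Catalyst on the Mac.
        VStack(alignment: .leading, spacing: 12) {
            Text("Wasted Time")
                .font(.largeTitle)
                .bold()
            Text("Meetings tracked: \(meetingCount)")
            Text("Hours lost: \(wastedHours, specifier: "%.1f")")
                .foregroundColor(.red)
        }
        .padding()
    }
}
```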

Other things that were cool – multi-user support on tvOS and HomePod, the hearing app on the Watch, standalone Watch apps and an App Store on the Watch, Maps updates, iPadOS, HomeKit Secure Video, HomeKit routers, CarPlay updates, USB support on the iPad! And major updates to ARKit, including Reality Composer and RealityKit.

Now on to the woe.

During the lunch break, I successfully downloaded all the betas, etc., and tried to update the iPad first. This failed over and over, saying it needed to install software. I assumed this was going to need macOS Catalina, so I kicked off that install. Everything seemed to go well until the final reboot. It hung.

After 20-30 minutes, I rebooted. The boot-up looked good until I went to log in, and it hung again (at roughly the same place). I tried to install again. No go. I tried to roll back to Mojave. This appeared to go OK, but then it asked me for a disk drive password. Nothing worked. Not my login (which I had to use to unlock the disk to do the downgrade), not iCloud, nothing. After a few tries, I went into Disk Utility and saw that the upgrade had created two volumes: one for the OS, and one for data.

A key announcement for Catalina was that the OS is being isolated on a read-only volume, so this made sense. I decided that to fix the upgrade, I would delete the OS-only volume. So now I was really hosed. I went back to the hotel, booted into recovery mode, opened a terminal window, and tried to copy as much of my data off the drive as I could onto an external 500GB drive I had brought with me. It, of course, ran out of room.

So now I have just erased the entire 2TB drive and am in the process of installing macOS Sierra (clean). When that is done, I will do an upgrade to Catalina. We will see how it goes from there. Wish me luck!

Getting Ready for WWDC

The trip yesterday was long but uneventful, unless you count the third of the continental United States that had enough turbulence to stop all cabin service and keep EVERYONE in their seats for about 90 minutes. However, the view was amazing:

Got to San Jose this morning

After a late-night arrival, grabbing a car, getting to the hotel, and getting some sleep, I woke up early (not intentionally), rolled over, woke back up fully (still early), and decided to head in and grab breakfast before getting in line for my badge.

Well, by the time I drove around and around to find a place to park for less than $25, I ended up just heading to the line, which was shorter than I expected; however, that didn’t last long.

The right lane is for scholarship students

After a few hours of waiting, and talking to a developer from Kroger and another indie developer from the UK, the line started to move. I, of course, had to get my bag “randomly” searched. I did make it into the first group through the door, where I got a nice surprise.

After getting my stuff, I got back to my hotel and got ready to head over to watch a recording of the TWiT podcast.

The podcast was a load of fun, and I even got to plug my podcast, Games at Work dot Biz (https://gamesatwork.biz), with Leo.

What am I looking forward to?

Well, when I picked up my new iPad Pro last year, it became clear to me that the horsepower and form factor were making it good enough to be a development machine. My dream would be that Tim announces Xcode for the iPad, along with major functionality for Marzipan. I can’t wait for the keynote tomorrow!

Moogfest Day 4 – It’s a Wrap

The final day of Moogfest is always interesting: seeing which artists will do talks on Sunday morning, and what media will be presented in the afternoon. This year there was no movie, but they did have LightBeam playing a really laid-back ambient set with visualizations driven by plant biometrics.

The highlight for me was a talk that Patrick Gleeson did at the Full Frame Theater. His talk was about the history of synth music and the various projects he has worked on over the last fifty (yes!!! 50) years. He also played a ton of music off of YouTube, including a 1950s piece from a researcher at RCA Labs named Vladimir Ussachevsky. Three people influenced by this revolution in music were Don Buchla, Alan R. Pearlman, and Robert Moog – and from there we have the thread that brought modern synthesizers to market.

In 2016, I had the opportunity to hear Silver Apples play at Moogfest, and he was a big user of the Buchla Box synth devices. In 2017 and 2018, I heard Suzanne Ciani at MoogFest, another Buchla user. These are very different from the Moog synths in that they don’t use a keyboard – the feeling was, why should this new instrument be tied to the input method of an old idea (the keyboard)?

The other cool part of the discussion was how we get from the experimental sounds of the ’50s and ’60s to the popular music of today. Patrick believed it was driven by two key artists – Tangerine Dream and Kraftwerk. I can’t find fault in that logic, as I love both of them. The other album that was instrumental in this shift was Fat Albert Rotunda from Herbie Hancock. This one surprised me. While I remember Herbie Hancock for his music in the 80s, and I knew he was a long-time jazz musician, I hadn’t realized his early experimental synth work was done with Patrick Gleeson.

The other session I went to on Sunday was pretty much an ad for a former Kickstarter product called Sensory Percussion. They had Madame Gandhi (from MoogFest 2018) there to show how a drummer can use their tech to do an entire electronic performance. Very cool, and I really like Madame Gandhi’s music. Go check it out over at https://www.madamegandhi.com.

Moogfest Day 3 – Music, fun and disappointment

One of the highlights for 2019 was going to be getting to see and hear Thomas Dolby; unfortunately he was ill and his doctors would not let him travel on Saturday to be at Moogfest. While this was a big disappointment for me, it was still a pretty fun day with some amazing music.

I started the day in a hands-on session by the fantastic team from NCSU Libraries. They had a session that let you play with 3D printing, 2D autosketching, AR, VR, holograms, electronic synths, and more. They even had a kit from Moog called the Werkstatt-01:

The kit was from MoogFest 2014 and lets you really play with sounds.

I then got to sit in on a session on using the iPad for music production. I had really high hopes for this session, but was a bit disappointed. The presenter was going to build his presentation once he had a day to relax after flying in from Belgium, but he had a one-day flight delay – one of those typical airport delays where you are slowly moved along hour by hour by the airline and can’t leave the airport or do anything of value. The session turned into an hour-long discussion of various tips and tricks, which was good, but not helpful for me.

I then quickly made my way over to see Questlove replace Thomas Dolby for a talk on “hacking the truth” with a reporter from Buzzfeed – Jason Leopold. I had missed his talk on FOIA requests, so this was enjoyable to hear. Again not what I expected, but really a good talk.

I then caught up with a friend, talked non-profit businesses and listened to QuestLove’s set of music at the American Tobacco Campus.

I spent the evening at the Carolina Theater with two acts, the first being Kelly Moran – who did a wild set of piano and synth music.

The second set was the replacement for Thomas Dolby – Detroit techno DJ Juan Atkins. His music was loud, fun, and exciting.

MoogFest Day 2 – Workshops and Martin Gore

Today I got the chance to do a few workshops that I had signed up for. The first session was done by NC State and took everyone through some WebVR development using A-Frame and a site called Glitch. Glitch lets you set up a WebVR environment quickly and easily. It was a fun class that reminded me of the early days of ActiveWorlds and Second Life. I put a quick sample together in the class. Check it out: https://mr-webvr-moogworkshop.glitch.com

The second workshop was immediately after lunch and was about building a synth in your web browser. Well, that’s not really true, as it was really about setting up a development environment using the Processing environment and p5.js plugins. Unfortunately, this session overlapped with getting to see Martin Gore from Depeche Mode, so I was not able to stay for the whole thing. The beginning, which I was there for, was very slow and a bit dry, so I am hoping to spend some time going through the material afterwards. All the material is located at the following site.

It was great to see Martin Gore from Depeche Mode receive his Moog Innovation Award. He has written Depeche Mode’s music for decades, and it is some of the best of the last 40 years. He did a conversation with the head of Mute Records. I really enjoyed it, even if he was not much of a talker. The picture above is from when the CEO of Moog presented the award.

Every so often a session is absolutely nothing like its description, and the session called “Federal Funding for Artists, Technology, and Emerging Forms” was one of those. While it was an interesting talk by an engaging speaker and artist, there was no discussion of funding or getting grants. I ended up heading out for an evening of music afterwards.

Two artists were very good. One, Spencer Zahn, used the stand-up bass to fuse jazz and electronica:

The next one, Debit, was pure electronics:

Really enjoyed the evening at the Pinhook!

A slow start – Moogfest 2019 Day 1

Today is day 1 of Moogfest 2019. I got here in time to pick up my pass at 10 am (when it opened), and they were still setting up the place. To say they were setting up the place is a bit of an understatement: they didn’t even have the tent fully constructed, the VIP lounge wasn’t built yet, and there were trucks everywhere.

To be honest, this is not really surprising, given that they changed ticket companies within the last month… and the workshops were scheduled very, very late this year. The content this year seems to be even lighter than last year’s, but the good news is that both Martin Gore (Depeche Mode) and Thomas Dolby are here.

I spent this morning looking at the pop-up store hosted by Guitar Center. The coolest thing I have seen this year so far is the following synth:

The company’s name is Folktek; they use a mix of electronics, gold, and wood to create beautiful instruments. I didn’t get to hear this one played yet, but I did sample a few other synths. It is amazing to me that you can get so many sounds out of these old-school synths.

Looking forward to a ton of workshops tomorrow… and getting to see Martin Gore get his award from the team at MoogFest!!

Tesla Autonomy Day (Day 1)

So today Tesla held their first Autonomy Day for analysts. They described their own custom neural network chip for self-driving. The design began in 2016; Dec. 2017 – first chip back from manufacturing; July 2018 – full chip production; Dec. 2018 – started retrofitting employee cars; March 2019 – added to new Model S and X cars; and from April forward, in the new Model 3s. So, three years from design to scale across all their major vehicles.

The goals of the hardware team are:

The final chip looks like this:

It is a straightforward board: graphics, power, and compute. The board has redundant chips for safety and security, and the whole thing fits behind the glove compartment. The point of the redundancy is that the chance of the computer fully failing is less than the chance of someone losing consciousness while driving.

The chips use a consensus algorithm to see if they agree before actually sending a command to the car.
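As a rough illustration of the idea – my own sketch using made-up types, not Tesla’s actual algorithm – a consensus check between two redundant compute units might only act when both independently computed plans agree within a tolerance:

```swift
import Foundation

// Hypothetical output from one of the redundant compute units.
struct PlannedCommand {
    let steeringAngle: Double   // degrees
    let acceleration: Double    // m/s^2
}

// Only act when both independently computed plans agree within tolerance;
// otherwise return nil so the caller can fall back to a safe state.
func consensusCommand(_ a: PlannedCommand,
                      _ b: PlannedCommand,
                      tolerance: Double = 0.5) -> PlannedCommand? {
    guard abs(a.steeringAngle - b.steeringAngle) <= tolerance,
          abs(a.acceleration - b.acceleration) <= tolerance else {
        return nil  // disagreement: enter a safe fallback state
    }
    // Use the average of the two agreeing plans.
    return PlannedCommand(steeringAngle: (a.steeringAngle + b.steeringAngle) / 2,
                          acceleration: (a.acceleration + b.acceleration) / 2)
}
```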

14 nm FinFET CMOS:

It also has a safety system on the chip so it will ONLY run software signed by Tesla. The chip can handle 2,100 frames per second. They developed their own neural network compiler for the chip – not a big surprise here.

Results:

  • Runs at 72 W (the goal was to stay under 100 W)
  • Chip cost is about 80% of the prior chip’s
  • Can handle 2,300 frames per second (21x faster than the last chip); NVIDIA is at 21 TOPS(1), and Tesla is at 144 TOPS

According to Elon – “all Tesla cars being produced now have all the hardware required for full self driving.”

1) TOPS – Trillion Operations Per Second

Software discussion:

Andrej (Karpathy) created the computer vision class at Stanford. He has been training neural networks at Google, Stanford, and other places.

So the basic problem the network solves is visual recognition. He walked through training and back propagation to explain how a network actually works. To help the training, they will sometimes have a human annotate an image for things like lane lines.

This is key when there are unusual conditions – night time, wet roads, shadows, etc. The key message is that more data makes the networks better, and while simulation can help, the real world is needed to address the “long tail” of reality.

Using bounding boxes, Andrej took us through a concrete example of how the training works. A great example of fixing problems in learning was a bike on the back of a car: they had to use training sets showing bikes on the back of cars to help the network realize they are one item from a driving perspective, not two. Finding things on the road is also critical, so the network needs to be trained to understand those cases as well.

The software identifies when people have to intervene to fix things, captures that data, and sends it to Tesla, which fixes the issue and adds it to the unit test cases. Tesla can also request the “fleet” to send them certain types of data, e.g. cars merging in from the right or left lane into the center. They then run the new model in “shadow mode” to test it in production. They check the results and can then deploy the new fix by flipping a bit between “shadow mode” and production mode. This is basically A/B testing, and it happens regardless of whether the car is using Autopilot or not. People’s driving is actually annotating the data for Tesla.
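To make the shadow-mode idea concrete, here is a rough, hypothetical sketch based only on the description above (the types and names are mine): the candidate model runs alongside the driver, its decisions are compared to what the human actually did, and disagreements are logged for upload rather than acted on.

```swift
import Foundation

// Hypothetical types, for illustration only.
enum DrivingAction: Equatable {
    case keepLane
    case changeLane(direction: String)
    case brake
}

struct ShadowModeEvaluator {
    // When true, the model's output drives the car; when false, it only observes.
    var isProduction = false

    // Compare the candidate model's decision against what the driver did.
    // In shadow mode, disagreements are captured and queued for upload.
    func evaluate(modelAction: DrivingAction,
                  driverAction: DrivingAction,
                  uploadQueue: inout [String]) -> DrivingAction {
        if modelAction != driverAction {
            uploadQueue.append("Disagreement: model=\(modelAction), driver=\(driverAction)")
        }
        // Flipping this one flag is effectively the A/B switch described above.
        return isProduction ? modelAction : driverAction
    }
}
```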

Augmented Vision in the vehicles will show you what the car is actually “seeing”. Path prediction is used to handle cloverleafs on highways.

I loved the comment that most of us don’t shoot lasers out of our eyes to see around us; this is why visual recognition is critical for Level 4 and Level 5 self-driving cars. Tesla is going all in on vision.

Elon believes they can be feature complete on self-driving this year, that people will trust it by the second quarter of next year, and that it will be ready for regulators by year-end 2020. Of course, it will take longer for regulators to actually approve it.

The final presentation, by Stuart, is about driving at scale with no intervention:

How do you package the hardware and computer vision together to drive this into production at scale? They actually run multiple simulations at the same time and use them to challenge each other to get to a single set of real-world truth.

A deeper discussion of how shadow mode works.

Then Controlled Deployment:

They get clips, play them back against new simulations, and tune and fix the algorithms.

Then they confirm vehicle behavior and let the system validate that it is working correctly. Based on this data collection, they decide when code is ready to be turned on.

So based on their lifecycle they are able to increase the speed of moving fixes into production.

Elon ended up talking about how things have been added for redundancy since 2016, and how the cars have been designed to be robo-taxis since Oct. 2016. Cars prior to this will not be upgraded, as it is more expensive to upgrade a car than to just make a new one.

Reviewing the master plan, Elon says Tesla is on track. They expect to have the first robo-taxis out next year, with no one in them. The challenge is for people to realize that the exponential improvement curve is getting us there faster than they expect. The biggest challenge is getting regulatory approval. It will become an Uber or Airbnb model, and where not enough people share their vehicles, Tesla will deploy vehicles themselves.

The goal of the robo-taxis will be to smooth out the demand curve of the fleet.

If you lease a Model 3 you won’t have the option to buy it (based on this new model).

The current battery pack is designed for 300,000 miles; the new one will be good for 1M miles with minimal maintenance.

They compared the cost of owning/driving a car versus a robo-taxi, and then looked at the gross profit of a robo-taxi.

It looks like Tesla will take this seriously – they actually have a clause in their contracts that people can’t use the car with other “networks,” so you have to use their robo-taxi network.

A very interesting presentation: technology (hardware, software, and scale) plus business models.

Conversation is not enough

In the last post I talked about how conversations are a powerful way of interacting with a computer. A conversation must have context and memory in order to enable this power. Simple commands and individual queries are good, but not sufficient to enable the UX of the future.

Now let’s look at augmented reality (AR). Since 2014, I’ve had a device that allows for a graphical overlay on the real world, but it was not AR. In many ways it was no different from current audio assistants: it had a limited set of use cases that were OK, but it had no real context. Google Glass was an experiment that was put out in public long before it had the capabilities to be worthwhile. Over the years, Google has taken it out of the consumer space and released an updated version for the enterprise.

While the ability to overlay information on the real world is helpful, what Glass missed was the context of that world. Both Google and Apple have now improved this functionality in their various AR developer APIs. Anchoring a 3D model to the real world lets you address use cases like visualizing new furniture in your house. This is a start, but you also need to understand lighting (which has been introduced in ARKit 2.0) and physics (available in most gaming platforms, e.g. Unity).
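As a rough sketch of what that anchoring looks like in code – my own illustration using ARKit with SceneKit, with a plain box standing in for a piece of furniture – you might do something like this:

```swift
import UIKit
import ARKit
import SceneKit

// A minimal ARKit + SceneKit sketch (illustrative, not from any specific app):
// detect a horizontal plane and anchor a box – standing in for furniture – onto it,
// with light estimation turned on.
class FurniturePreviewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        sceneView.autoenablesDefaultLighting = true
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal]   // find floors and tabletops
        config.isLightEstimationEnabled = true  // adapt virtual lighting to the room
        sceneView.session.run(config)
    }

    // Called when ARKit finds a plane; attach a placeholder "furniture" box to it.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        let box = SCNBox(width: 0.5, height: 0.5, length: 0.5, chamferRadius: 0.01)
        box.firstMaterial?.diffuse.contents = UIColor.brown
        let boxNode = SCNNode(geometry: box)
        boxNode.position = SCNVector3(0, 0.25, 0)  // sit on top of the detected plane
        node.addChildNode(boxNode)
    }
}
```

Turning on light estimation is what lets the virtual object roughly match the room’s lighting – the piece of context Glass never had.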

Understanding the context of gravity and lighting allows for interactions in the world that feel natural. Natural interaction requires simulations that understand all of this, and more. One thing I’ve not seen to date is the ability to move a virtual 3D object around a room and see it pass behind other objects in the room; every example I’ve seen so far has the object jump forward and backward as you drag it around.

The other thing that is needed is a device that not only shows you the world, but also lets you interact with it. Many device makers require some kind of controller attached to your hand (think of the Oculus). Microsoft’s HoloLens 2 seems to have solved this one, but I’ve not had a chance to play with it in person yet.

We are getting there, but putting the conversation in the AR world is the magic that will bring this all together.