Integrate with motorized iPhone stands using DockKit

Introduction to DockKit

  • Allows the iPhone to act as the compute engine for motorized camera stands – it controls pitch and yaw
  • Stands have an LED indicator to let you know when tracking is active, plus simple buttons for power and for disabling tracking
  • You pair a phone with the stand – all the control is in the iPhone at the system level, so any app that uses the camera APIs can use this feature (see the sketch after this list)
  • Demonstrated a prototype stand that allows the camera to track the speaker.
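
Because pairing and control happen at the system level, an app needs no DockKit code at all to benefit – an ordinary capture session is enough. A minimal sketch (the function and error names are illustrative, not from the session):

    import AVFoundation

    enum CameraError: Error { case setupFailed }

    // A plain AVFoundation capture session with no DockKit code. While this
    // session runs on a docked iPhone, the stand tracks subjects automatically.
    func makeTrackingCaptureSession() throws -> AVCaptureSession {
        let session = AVCaptureSession()
        session.beginConfiguration()

        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video,
                                                   position: .front),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else {
            throw CameraError.setupFailed
        }
        session.addInput(input)

        session.commitConfiguration()
        session.startRunning()   // tracking begins once frames are flowing
        return session
    }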

How it works

  • It works with the camera processing pipeline in iOS: it estimates the subject's trajectory and then adjusts the stand to keep the user in frame.
  • The process runs as a daemon and controls the actuators on the device at 30 fps.
  • It uses bounding boxes and can track multiple users via face and body detection.
  • DockKit starts with a primary tracked person and will try to frame others, but will ensure that the primary stays in frame.

Custom Control

  • You can change framing, change what is being tracked, or directly control the motorized accessory.
  • You must register for accessory state changes in order to do any of this, via DockAccessoryManager.shared.accessoryStateChanges – there are only two states you must handle, docked and undocked (first sketch below).
  • You can change cropping via a framing mode (left, center, or right), or you can set a specific region of interest (second sketch below).
  • You can also do custom motor control – call setSystemTrackingEnabled(false), then adjust X and Y, where X maps to rotation/pitch and Y to tilt/yaw (third sketch below).
  • You can also add your own inference using Vision, custom ML models, or other heuristics to decide what you want to track. Just set a bounding box on the item via an Observation you define, with a type of .humanFace or .object (fourth sketch below).
  • The current Vision framework can already detect items such as faces, human bodies, hands, and animals, all of which can be turned into trackable observations.
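
First, a sketch of registering for state changes. The stream and the docked/undocked states are from the session; the event's `state` and `accessory` property names follow the WWDC sample from memory and may differ slightly:

    import DockKit

    var currentAccessory: DockAccessory?

    // Listen for dock/undock events; only .docked and .undocked must be handled.
    func monitorDockState() async throws {
        for await stateEvent in try DockAccessoryManager.shared.accessoryStateChanges {
            switch stateEvent.state {
            case .docked:
                // Keep a handle for the framing / motor / tracking calls below.
                currentAccessory = stateEvent.accessory
            case .undocked:
                currentAccessory = nil
            default:
                break
            }
        }
    }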
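
Second, changing the cropping. setFramingMode and setRegionOfInterest are the calls named in the session; the region is assumed here to be a normalized (0…1) rect:

    import CoreGraphics
    import DockKit

    // Adjust how the tracked subject is cropped in the frame.
    func adjustFraming(on dock: DockAccessory) async throws {
        // Bias the subject toward the left third of the frame...
        try await dock.setFramingMode(.left)

        // ...or hand DockKit an explicit region of interest instead.
        let region = CGRect(x: 0.25, y: 0.25, width: 0.5, height: 0.5)
        try await dock.setRegionOfInterest(region)
    }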
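
Third, direct motor control. setSystemTrackingEnabled(false) is from the session; the axis-control call is left as a commented, hypothetical placeholder since the exact motor API on DockAccessory isn't captured in these notes:

    import DockKit

    // Take manual control of the stand's two axes.
    func takeManualControl(of dock: DockAccessory) async throws {
        // First, stop DockKit from driving the motors itself.
        try await DockAccessoryManager.shared.setSystemTrackingEnabled(false)

        // Then adjust the two axes directly: per the notes, X maps to
        // rotation/pitch and Y to tilt/yaw. (HYPOTHETICAL call, commented
        // out – consult DockAccessory for the real motor-control signature.)
        // try await dock.setAngularVelocity(x: 0.0, y: 0.5)
    }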
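
Fourth, custom tracking. The Vision call is standard API; the Observation and CameraInformation initializers follow the session's sample from memory, so treat their exact fields as assumptions:

    import AVFoundation
    import DockKit
    import Vision

    // Run your own inference, then hand the winning bounding box to DockKit.
    func trackLargestFace(in pixelBuffer: CVPixelBuffer,
                          dock: DockAccessory,
                          camera: AVCaptureDevice) async throws {
        // 1. Custom inference: here, plain Vision face detection (could be
        //    a Core ML model or any heuristic instead).
        let request = VNDetectFaceRectanglesRequest()
        try VNImageRequestHandler(cvPixelBuffer: pixelBuffer).perform([request])
        guard let face = request.results?.max(by: {
            $0.boundingBox.width * $0.boundingBox.height <
            $1.boundingBox.width * $1.boundingBox.height
        }) else { return }

        // 2. Wrap the bounding box in a DockKit observation.
        let observation = DockAccessory.Observation(
            identifier: .globalIdentifier(1),   // stable ID for this subject
            type: .humanFace,                   // or .object for anything else
            rect: face.boundingBox)             // normalized rect from Vision

        // 3. Describe the camera so DockKit can map the rect to motor space.
        let cameraInfo = DockAccessory.CameraInformation(
            captureDevice: camera.deviceType,
            cameraPosition: camera.position,
            orientation: .corrected,
            cameraIntrinsics: nil,
            referenceDimensions: CGSize(width: 1920, height: 1080))

        // 4. Ask the stand to keep this observation in frame.
        try await dock.track([observation], cameraInformation: cameraInfo)
    }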

Device animations

  • There are four built-in animations – yes, no, wakeup, and kapow!
  • You can set up a motion detector to trigger the animations (see the sketch below)
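
A sketch of wiring a motion trigger to an animation. The animate(motion:) call and the four cases are from the session; the Core Motion shake detector and its 1g threshold are an arbitrary illustration:

    import CoreMotion
    import DockKit

    let motionManager = CMMotionManager()

    // Play the built-in "yes" nod whenever the phone is shaken.
    func nodOnShake(dock: DockAccessory) {
        motionManager.deviceMotionUpdateInterval = 1.0 / 30.0
        motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
            guard let accel = motion?.userAcceleration else { return }
            // Crude shake detector: any axis spiking past 1g.
            if max(abs(accel.x), abs(accel.y), abs(accel.z)) > 1.0 {
                Task {
                    // Other built-in options: .no, .wakeup, .kapow
                    _ = try? await dock.animate(motion: .yes)
                }
            }
        }
    }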