Over more than two years at argodesign, I was part of a small external design group leading deep user experience development of Magic Leap's native platform in the lead-up to the release of the ML2.
As the team's Motion Director, I led the exploration and refinement of unique spatial interactions, defined a set of global motion principles for brand expression (touching everything from the launch menu to hardware LEDs), and helped create a true spatial-first operating system. The ML2 implements much of our work, and thinking disseminated from our designs can be found in products from Meta and Apple.
Role: Motion Director
A quick-access guide for developers and motion designers
Over the course of dozens of projects, explorations, and user flows, I devised a set of motion principles that expressed well across the broad spectrum of spatial computing, but also held up when translated into two dimensions and applied to system protocols.
⇢ Theory
↳ Timing Guides
↳ Bezier References
↳ Millisecond Values
⇢ "When to Apply"
I refined the thinking and theory behind the practice and wrote a concise set of principles, timing guides, bezier curves, and to-the-millisecond values to serve as a baseline that could be applied and expanded across the platform and hardware.
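As a minimal sketch of what that baseline might look like expressed as tokens (every name, curve, and duration below is an illustrative assumption, not a shipped Magic Leap value):

```typescript
// Hypothetical motion tokens sketching the kind of baseline described above.
// All names, bezier curves, and durations are assumptions for illustration only.
export const motionTokens = {
  easing: {
    standard: [0.4, 0.0, 0.2, 1.0], // default cubic-bezier for most UI moves
    enter:    [0.0, 0.0, 0.2, 1.0], // decelerating ease for elements arriving
    exit:     [0.4, 0.0, 1.0, 1.0], // accelerating ease for elements leaving
  },
  durationMs: {
    micro:    120, // cursor feedback and small state changes
    standard: 240, // menus, panels, and most UI transitions
    spatial:  400, // large movements through depth
  },
} as const;
```

Centralizing curves and millisecond values in one place is what lets a single motion language scale from a launch-menu transition down to a hardware LED pulse.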
I led motion development and visual exploration for a redesign of “Prisms” (essentially a spatial ‘window’). Through this process, I created the 'Anima' system, a new set of forms and functions that dramatically reduced friction and created a more immersive interaction model.
In a radical departure from the maximalist prism designs of the ML1, in which all manipulation required invoking a contextual menu, I re-imagined the prism as a directly manipulated, minimal container, invisible until called on.
With a somewhat eccentric After Effects rig devised for quick iteration, I explored dozens if not hundreds of models and user flows, surfacing a subtler, more sophisticated approach to the most ubiquitous tools on the device while maintaining, and often enhancing, prism utility and usability.
Controls explainer loop for quick in-headset reference
Anima prism rotation
Anima prism direct-manipulate movement
Anima prism resize
Anima bi-manual scaling
Removing the old design's box outlines and thick, purple corners, I imagined a translucent volume and contextual outlines invoked only under a user's focus. The lines themselves would then subtly guide the user through their interaction options and disappear once the user went back to regular use.
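A rough sketch of that focus-driven behavior expressed as state logic (the state names and opacity values here are hypothetical, not the production implementation):

```typescript
// Hypothetical sketch of the focus-driven bounds behavior described above;
// the state names and opacity values are invented for illustration.
type PrismFocus = "idle" | "focused" | "manipulating";

const boundsOpacity: Record<PrismFocus, number> = {
  idle: 0.0,         // invisible container during regular use
  focused: 0.5,      // contextual outlines fade in under the user's focus
  manipulating: 1.0, // full guides while moving, rotating, or scaling
};
```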
Anima scale select
Anima cursor + bound indicator
Anima cursor + bounds interaction
Lumin Assistant was a placeful, persistent, system-level MR assistant meant to intuitively follow and help users. Easily invoked and situationally aware, it served as a traditional voice assistant as well as a guide through the new paradigms of spatial computing.
Spatially representing the entity was a balance of brand look & feel, technical limitations, and de-anthropomorphism, moving away from Magic Leap's early ideation around a humanoid look.
The spatial style I presented eventually evolved into a more traditional voice-only OS feature with a visual overlay, which I then helped to stylize.
Look development (After Effects + Element 3D)
Proof of concept animation (After Effects + Element 3D)
Voice-only stylization reference (After Effects)
Destinations: Office (After Effects + Element 3D)
A global, persistently placeful layer over the real world can be a confusing concept. Early in my work for Magic Leap, there was an immediate need to illustrate to investors and onboarding personnel how a spatial metaverse might work.
With my work on 'Destinations', the persistent-world arm of the Lumin platform, I created a series of videos clearly showing how a location-based experience might work and be utilized by businesses in office, retail, hospitality, and warehouse settings.
Destinations: Retail (After Effects + Element 3D)
Destinations: Brewery (After Effects + Element 3D)
Exploring the possibilities of a controller-free mixed reality experience, I helped flesh out, test, and ship a gesture-based interaction model within the restrictions of the ML2 hardware, one that has carried into newer systems and helped define the current industry standard.
Additionally, I shot, composited, and animated assets placed throughout the operating system to teach users how to utilize the gesture functions.
A design story I shot & produced about our gesture system
System assets illustrating use (After Effects)
Hand-tracking explorations (After Effects)
Applying the Unified Motion Language System
Not just a collection of blinking dots, the LED system represents months of research and the application of my Unified Motion Language System to the ML2 hardware's feedback and status communication. Limited to five LEDs on the compute pack and one each on the headset and controller, the system created a consistent language tied to the brand expression, imbuing the device with personality and utility at an absolute minimum of fidelity.
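A hedged sketch of how such an LED vocabulary might be encoded (every state name, pattern, and timing below is invented for illustration; nothing here reflects the shipped firmware):

```typescript
// Hypothetical encoding of an LED status vocabulary in the spirit of the
// Unified Motion Language System; all values are assumptions.
interface LedPattern {
  periodMs: number;      // duration of one full cycle
  keyframes: number[][]; // per-LED brightness (0 to 1) at each step of the cycle
}

// Five LEDs on the compute pack, per the constraint described above.
const computePackPatterns: Record<string, LedPattern> = {
  booting:  { periodMs: 1500, keyframes: [[1,0,0,0,0],[0,1,0,0,0],[0,0,1,0,0],[0,0,0,1,0],[0,0,0,0,1]] }, // chase across the pack
  charging: { periodMs: 2400, keyframes: [[0.2,0.2,0.2,0.2,0.2],[1,1,1,0.2,0.2]] },                       // breathe, lit count shows level
  error:    { periodMs: 400,  keyframes: [[1,0,1,0,1],[0,1,0,1,0]] },                                     // fast alternation
};
```

Even at this minimal fidelity, varying only period and brightness sequence is enough to make states legible at a glance, which is the core idea the system relied on.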