Below is the loose script and slides from a 20 minute talk I gave in Bristol for World IA Day 2016 – “Information Everywhere, Architects Everywhere.” I presented personal design projects in which I prototype physical manifestations of invisible interactions from the mundane moments of my daily life, and the resulting insights that inform how I make sense of complex sociotechnical systems and dynamic information exchanges to design meaningful enterprise solutions. More information on the event and other speakers in Bristol can be viewed on the Lanyrd event page.
I’m an Interaction Designer & Technologist based in London, and I’m really excited to be here because of course, like everyone else I’m sure, making sense of information is a large part of what I do. And not only because of my role as an Interaction Designer, but also because making sense of information is the core business of the company I work for, Zebra Technologies, which specialises in data capture products – from barcode printers and scanners to rugged mobile computers – and, from these, generates actionable information solutions for enterprises. I know that was a mouthful – because for me, all that Zebra does is also a headful – and it’s precisely what I would like to talk about today: How do we make sense of complex data-driven systems when designing enterprise solutions?
I do this through the physical prototyping of invisible interactions from my daily life to gain insights into data-driven systems.
Or, by Making the Invisible Physical.
During this talk there are three things I’d like to speak about: First, a bit more about what makes enterprise UX so challenging at Zebra. Second, what physical prototyping is and how I started using it as part of my Information Architecture toolbox. And lastly, two quick examples of recent personal projects in which I’m using physical prototyping to understand data-driven systems, and the insights I’ve learned. Ultimately, I hope to inspire you to try physical prototyping as a tool to understand complex and dynamic systems of information.
So why is designing meaningful user experiences for enterprises, and specifically for Zebra, so challenging?
First, our users are what we call situationally disabled, meaning they might be under extra cognitive stress or physically limited in how they receive and communicate vital information while simultaneously performing other tasks. The photo above is a great example. Not only is the nurse wearing latex gloves that physically change her interaction with the device, but she must also interact with a potentially emotional, stressed, or critical patient – and perhaps family members too. On the other extreme, imagine a forklift driver wearing industrial gloves, safety equipment, and headphones while operating a loud, vibrating vehicle – he too is very much situationally disabled.
Second, our users work in complex, data-rich environments where they are expected to act quickly and accurately in both product identification and verification. While this photo of a warehouse looks organised – I would argue it’s organised chaos – I challenge you to swiftly, and with 100% certainty, find the correct brown box. And then, of course, repeat endlessly. Navigating these cities of information isn’t easy, especially when tasked with isolating an item among a sea of noise.
Lastly, this information is dynamic and often abstract. For example, this image is the computer-generated output of 3D scanning technology, which needs to be translated appropriately into real-time, qualitative insight.
Therefore the challenge of my role as an enterprise Interaction Designer is to design how situationally disabled people interact with dynamic information in complex environments. But, by definition, interactions are reciprocal, so you could also say that I design how dynamic information interacts with situationally disabled people in complex environments. But what does it mean for information to interact?
How do we architect something abstract?
How do we architect something dynamic?
How do we architect something contextual?
How do we architect something intangible?
How do we architect something invisible?
How do we architect something synthetic?
Here is where I want to focus on the word architect, because my career actually began in architecture, in the realm of physical spaces and structures. That’s where I learned about physical prototyping, whether with or without technology, as a tool to understand, represent, experience, and test abstract concepts and ultimately design meaningful experiences. My initial prototypes, or models as we called them then, started out as form studies from an analytical perspective, in which physical systems were very meticulously examined and then crafted based on core components. But eventually they became kinetic – the focus on form was replaced with a focus on the interactions between components. This transitioned into responsiveness, in which user interaction and contextual awareness informed design decisions. Ultimately this resulted in an understanding of embodied interaction – as Paul Dourish defines it, ‘The creation, manipulation, and sharing of meaning through engaged interaction with artifacts.’ Therefore:
Physical prototyping is a tool to understand core components
Whether the system is physical or digital, it’s a great starting point both for isolating pieces and parts and for exploring varied configurations and the resulting patterns.
Physical prototyping is a tool to situate in context
And understand how pieces and patterns relate to a defined environment
Physical prototyping is a tool to explore interaction modalities
And translate an idea into a meaningful user experience
And I believe it’s an equally valuable tool in physically connected environments. Which brings me to share two very recent, and in-progress, personal projects with a physical computing focus, in which I’m attempting to prototype mundane moments of my daily life to better understand what it means for dynamic information to interact in complex connected environments.
The first project I want to share is Phygital Party Mode. Phygital Party Mode is an Internet of Things exploration into the real-time analytics of how people interact with my website, and the translation of their experience into one of my own. The professional impetus for this project was ongoing work with real-time and historical dashboards, but specifically an interest in the qualitative and temporal nature of the interactions that generate the resulting analytics. I decided to physically prototype the interactions with my website because a couple of years ago I created the non-phygital Party Mode – a link at the bottom of my website that turns the project links into a musical instrument. Unfortunately I can’t show sound in this GIF, but if you try it from a desktop computer you can enjoy disco, patriot, and cat modes…
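To give a flavour of how the original Party Mode works under the hood, here’s a simplified sketch – not my production code; the `.project-link` selector and the note frequencies are invented for illustration – that turns hovered links into notes with the Web Audio API:

```ts
// Sketch: turn project links into a musical instrument.
// Assumes each project link carries a (hypothetical) .project-link class.
const audioCtx = new AudioContext(); // browsers may require a click before audio starts

function playNote(frequency: number, durationSec = 0.2): void {
  const osc = audioCtx.createOscillator();
  const gain = audioCtx.createGain();
  osc.frequency.value = frequency;
  osc.connect(gain);
  gain.connect(audioCtx.destination);
  // Quick fade-out so notes don't click when they stop.
  gain.gain.setValueAtTime(0.3, audioCtx.currentTime);
  gain.gain.exponentialRampToValueAtTime(0.001, audioCtx.currentTime + durationSec);
  osc.start();
  osc.stop(audioCtx.currentTime + durationSec);
}

document.querySelectorAll<HTMLElement>('.project-link').forEach((link, i) => {
  link.addEventListener('mouseenter', () => {
    link.style.backgroundColor = `hsl(${(i * 40) % 360}, 80%, 60%)`; // colour shift
    playNote(220 * 2 ** (i / 12)); // each link a semitone higher, like piano keys
  });
});
```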
Fast forward to now: I’ve refactored the code to create a prototype that connects to a Philips HUE light via Arduino to emulate the change in project colour in real time. The act of controlling a light from a display isn’t novel, nor was it for me particularly challenging – but not too long ago I sent the prototype link to friends, and as soon as I scaled up the interaction it sparked a flood of questions and challenges: The most obvious being, what if more than one person is on my site at once? How will I manage users? And with sound introduced, instead of an instrument I could create a band – but then who controls the light? All these questions and many more have not only created a lot more work for myself, but also sparked some key insights into the layered interactions and information flows.
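For the curious, the plumbing is conceptually simple. Here’s a hedged sketch of the general idea – my actual prototype goes through an Arduino, but this version talks to the Hue bridge’s REST API directly, and the bridge address, username, and light id are placeholders:

```ts
// Sketch: push a website interaction to a Philips Hue bulb.
// BRIDGE, USERNAME, and LIGHT_ID are placeholders, not my real setup.
const BRIDGE = 'http://192.168.1.2';
const USERNAME = 'hue-developer-key';
const LIGHT_ID = 1;

async function setLightColour(hue: number): Promise<void> {
  // The Hue API expects a 16-bit hue value (0–65535) on the light's state endpoint.
  await fetch(`${BRIDGE}/api/${USERNAME}/lights/${LIGHT_ID}/state`, {
    method: 'PUT',
    body: JSON.stringify({ on: true, hue: Math.round(hue) % 65536, sat: 254, bri: 200 }),
  });
}

// Each hover on the site could then be forwarded to the light, e.g.:
// link.addEventListener('mouseenter', () => setLightColour(i * 8000));
```

It’s exactly when this one-visitor-one-light pipeline meets several simultaneous visitors that the questions above start piling up.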
Information is emergent
The behaviour of components within a complex system, especially one with multiple access points, is not always predictable – nor do components in combination exhibit the same patterns they do as individual entities.
Information is part of an ecosystem
Not only is it organised, whether intentionally or not, but it’s also generated, adapted, and consumed within the constraints of a context.
Information is not a fixed modality
Whether computer to human, or human to computer, there are multiple ways to communicate and experience what might be considered the same information.
The second personal project I’m currently working on is called Burrito – and I should preface this by saying there is no significance to the name. Burrito is a bot, currently transitioning into a robot, with the intent to explore how people have conversations with, through, and without technology. This was obviously inspired in part by the hype around artificial intelligence and conversational UIs, but also by a professional interest in our relationship with devices, platforms, and algorithms. The goal of Burrito, therefore, is to make the concept of a relationship physical. And naturally, I’m starting with my marriage.
Every time my husband and I text each other, the emojis we use are automatically stored in a database. And based on a simple scoring system – weights we’ve assigned to individual emojis, their frequency, and their proximity – at the end of the week, Burrito tells us who is the better spouse.
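To make the scoring concrete, here’s a hedged sketch of the weekly tally. The weights, emoji set, and record shape are invented stand-ins, and the real system also factors in the proximity between emojis, which this sketch skips:

```ts
// Sketch: Burrito's weekly emoji tally (weights are hypothetical).
interface EmojiRecord {
  sender: 'me' | 'spouse';
  emoji: string;
  sentAt: Date;
}

const WEIGHTS: Record<string, number> = {
  '❤️': 5, '😘': 4, '😊': 2, '🙄': -2, '😠': -4,
};

function weeklyWinner(records: EmojiRecord[], weekStart: Date): string {
  const weekEnd = new Date(weekStart.getTime() + 7 * 24 * 60 * 60 * 1000);
  const totals = { me: 0, spouse: 0 };
  for (const r of records) {
    if (r.sentAt >= weekStart && r.sentAt < weekEnd) {
      totals[r.sender] += WEIGHTS[r.emoji] ?? 0; // unscored emojis count for nothing
    }
  }
  if (totals.me === totals.spouse) return 'draw';
  return totals.me > totals.spouse ? 'me' : 'spouse';
}
```

Even a toy formula like this makes the design decisions visible: whoever picks the weights picks the winner.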
At this moment, Burrito is still a bot, but I’ve started working on how Burrito’s “judgement”, so to speak, translates into behaviour towards us individually. This is a remote-control car I’m repurposing from a previous school project.
I’m sure this sounds a bit bonkers, and it definitely isn’t solving any problems, but it has been really insightful:
Information has a role in the design of algorithms
And as designers, we can design the algorithm. Much like cultivating a relationship, information and its metadata can transform into a personality based on the formula we put in place.
Information has broader implications
But at the same time, these new patterns, especially when situated in context, have broader implications. So I think it’s important to not only look for the intended outcome of a system, but also explore the unintended consequences.
Information has varying levels of fidelity
And think about the level of information fidelity necessary for the user and context to accomplish the desired experience.
To summarise, I’ve learned a lot about information and seemingly invisible interactions through physical prototyping in my personal projects. These insights aren’t intended to be a formula, but rather a growing framework for understanding the complex systems in connected environments, so I can design meaningful interactions for enterprise users.
And of course hopefully inspire you to embrace ambiguity and make the invisible physical.