Interact 2016 – Insights in Self-centred Design

This week I spoke at Interact 2016 – slides and script below – as part of a rather intimidating lineup alongside digital and physical architects, all of whom I greatly admire. I’m very grateful to Nomensa for the opportunity – not only is every talk an immense learning opportunity in public speaking, but I value even more the work beforehand, for the rigorous synthesis and curation of empirical insights is a process all designers should engage in as a practice of communication and reflection. Enjoy!

IxDA London – Implicit Interactions

Last night I guest hosted and organized an IxDA London event on Implicit Interactions at IDEO. Below is the event description.

Talk on Implicit Interactions at IxDA London by Karey Helms, photo by Jill Lin

As the Internet of Things infiltrates the mundane moments of our daily lives, ubiquitously embedding intelligence into objects and environments, our relationships with technology become increasingly dynamic, contextual, and intangible. As interaction designers, then, how do we design what could and should be the resulting invisible dialogues between people, places, and things?

This month we will shift our attention away from classic, explicit interaction paradigms – those that demand our attention for direct engagement and manipulation – to implicit interactions that seamlessly behave in the background. Join us for product, prototyping, and research perspectives as we hear from Hongbin Zhuang, CEO and Co-founder of Olly robot; Karey Helms, a Senior Interaction Designer at Zebra Technologies; and Alex Taylor from Microsoft Research.

MeetUp – Artificial Intelligence, Machine Learning & Bots

Back in June I attended Practical Introduction into Artificial Intelligence by ASI Data Science as part of London Technology Week. The event was very well structured and, more importantly, perfectly distilled complex theories and processes into a digestible format for a novice like myself. I left feeling like an expert and was able to confidently re-articulate the evening to others, which I think is very much a sign of a well-run event and, of course, great instructors. Moreover, as I navigate the world of artificial intelligence and machine learning in relation to my role and interests as an Interaction Designer, I’m being intentionally thoughtful regarding the Pareto Principle – I don’t actually need to be an expert, but I do want a solid 20% foundational knowledge base.

Anyhow, the evening began with a history of artificial intelligence and the corresponding theories of influential scientists on the topic, before launching into a hands-on session in which participants built their own handwriting recognition engines. Key takeaways included a clearer understanding of the relationship between artificial intelligence and machine learning, a new comprehension of artificial neural networks (see slide below), and insights into what is and is not currently possible in real-world applications.

Practical Introduction into Artificial Intelligence with ASI Data Science & London Technology Week
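For anyone curious what that hands-on exercise can look like, here is a minimal sketch of a handwriting digit classifier – my own reconstruction using scikit-learn’s bundled digits dataset and a small neural network, not ASI’s actual notebook:

```python
# Minimal handwriting-recognition sketch using scikit-learn.
# My own reconstruction of the kind of exercise we did,
# not ASI Data Science's actual notebook.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# 8x8 grayscale images of handwritten digits, flattened to 64 features
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A small multilayer perceptron: one hidden layer of 64 neurons
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

A single hidden layer already performs well on this small dataset, which was very much the session’s point: the core mechanics of a neural network are approachable, even if production systems are far more involved.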

Fast forward to this past Tuesday, when I attended a MeetUp by Udacity at Google Campus London on Machine Learning and Bots by Lilian Kasem. Very different content and structure, but equally insightful. Lilian’s talk centered on bots specifically – from the definition of a bot, to a live-coding demo of creating a bot with the Microsoft Bot Framework, to best bot practices – all while weaving in the integration of machine learning if it adds value (I also appreciated her stress on the if). Her resources are in the image below:

Machine Learning and Bots by Lilian Kasem

In addition to the obvious relevance of these two events to current interaction design trends, they are helping me formulate next steps for two of my current projects, one personal and one professional.

The personal project – Burrito, a marriage bot that analyzes messages between my husband and me to determine who is the better spouse – is currently being refactored into a formal bot for Telegram. Lilian’s talk in particular was very helpful, as my project focus has transitioned from theoretical to technical: I am now seeking to create a higher-fidelity product and implement better, if not best, programming practices. A rough sketch of where the message-handling loop is headed is below.
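The getUpdates and sendMessage endpoints in the sketch are real Telegram Bot API methods; the token placeholder and the score_spouse function are hypothetical stand-ins for Burrito’s actual analysis:

```python
# Minimal Telegram bot loop via the Bot API's long polling.
# getUpdates and sendMessage are real Bot API methods; the scoring
# function is a hypothetical stand-in for Burrito's actual analysis.
import requests

TOKEN = "YOUR_BOT_TOKEN"  # issued by @BotFather
API = "https://api.telegram.org/bot{}/".format(TOKEN)

def score_spouse(text):
    # Hypothetical placeholder: count affectionate words in a message.
    nice_words = ("love", "thanks", "sorry")
    return sum(word in text.lower() for word in nice_words)

offset = None
while True:
    # Long-poll Telegram for new updates
    params = {"timeout": 30}
    if offset is not None:
        params["offset"] = offset
    updates = requests.get(API + "getUpdates", params=params).json()["result"]
    for update in updates:
        offset = update["update_id"] + 1  # acknowledge this update
        message = update.get("message")
        if message and "text" in message:
            reply = "Spouse score: {}".format(score_spouse(message["text"]))
            requests.get(API + "sendMessage",
                         params={"chat_id": message["chat"]["id"],
                                 "text": reply})
```

In practice I would reach for a library such as python-telegram-bot rather than polling by hand, but the raw loop makes the bot’s anatomy – receive, analyze, reply – easy to see.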

The professional project, about which I will intentionally be quite vague, investigates the impact of implicit data on enterprise organizational and technical systems, and in particular the transition from frictionless to exception-based workflows. As an Interaction Designer, I firmly believe it is important to empathize not only with humans, but also with technology and its corresponding data, because the concept of a user – who, what, or where – is increasingly blurred. Therefore, ASI’s introduction to AI was particularly helpful in how I understand, design for, and implement data in my professional practice.

Long story short – two great events! In the coming weeks, I aim to write a follow-up post about my technical developments on Burrito bot, as well as an online course I began this week to formalize my programming skills relative to the Internet of Things.

IxDA London – The Uncanny Valley & Subconscious Biases of Conversational UI

The theme of IxDA London’s June event was Algorithms, Machine Learning, AI and us designers – an evening of great discussions that prompted me to dig up reading material on the Uncanny Valley and subconscious biases. Both topics were strongly present, the former directly and the latter indirectly, in Ed and John’s presentation on designing for IBM Watson. They discussed the ‘Uncanny Valley of Emotion’ as a third curve in addition to the ‘still’ and ‘moving’ curves of the traditional uncanny valley model. While I understand their intent in creating a third category – accounting for systems, agents, and interactions that are not visible or physically accessible – in retrospect I disagree with the characterization. Emotion, or the lack of it, can be explicitly betrayed by movement. From my understanding, subtle asynchronous or unnatural movements directly related to the emotional responses humans expect are a key ingredient of the Uncanny Valley. Therefore, I would rename the ’emotion’ curve suggested by the Watson team to ‘implicit,’ thereby retaining emotion as a criterion for both explicit (still and moving) and implicit interactions.

Uncanny valley of emotion at IxDA London, photo by Karey Helms

The second subtopic, subconscious biases, greatly concerns me. A recent article in the New York Times – Artificial Intelligence’s White Guy Problem – sums it up perfectly. As designers, how do we build into our processes accountability for subconscious (and conscious) biases in algorithms, machine learning, and conversational interfaces? I don’t have an answer, but I would like to find one!

Relevant links and resources:
The Uncanny Valley
Uncanny valley: why we find human-like robots and dolls so creepy
Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley
The Uncanny Wall
Artificial Intelligence’s White Guy Problem

Volunteering with InMoov Robots for Good

This past weekend I spent most of Saturday volunteering at Somerset House for the InMoov Robots for Good project, an open-source 3D-printed robot connecting children in hospitals with the London Zoo via augmented reality. I personally find the project fascinating on so many levels – from the open-source robotics facilitated by Wevolver to the meaningful avatar application of the technology – that I really wanted to take part. I’m not sure how much I helped, attempting to troubleshoot the Oculus Rift and tighten some knuckle joints, but I definitely enjoyed contributing and getting to know the Wevolver founders. Needless to say, I highly recommend stopping by or chipping in before the build is over!

Robots for Good robotic hand at Somerset House, photo by Karey Helms

And on a technical note, also check out MyRobotLab, an excellent open-source, service-based Java framework for robotics (with plenty of community support).
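To give a flavour of how it works: services in MyRobotLab are created and wired together at runtime, typically from its built-in Python (Jython) scripting service. Below is a rough sketch from memory of the common Arduino-plus-servo pattern – the Runtime, Arduino, and Servo services are real, but exact method signatures and the serial port vary by MyRobotLab version and setup:

```python
# Runs inside MyRobotLab's built-in Python (Jython) service, which
# exposes Runtime for creating and starting services on the fly.
# Exact signatures vary by MyRobotLab version; treat this as a sketch.
arduino = Runtime.createAndStart("arduino", "Arduino")
arduino.connect("/dev/ttyUSB0")  # serial port to the microcontroller

# Create a servo service and bind it to a pin on the Arduino,
# e.g. one of the finger servos on an InMoov hand
servo = Runtime.createAndStart("finger", "Servo")
servo.attach(arduino, 9)

servo.moveTo(90)   # mid position
servo.moveTo(180)  # curl the finger
```

It is exactly this service composition that lets an InMoov build mix hardware control, vision, and speech from short scripts rather than a monolithic program.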