Category: MeetUp

MeetUp – Artificial Intelligence, Machine Learning & Bots

Back in June I attended Practical Introduction into Artificial Intelligence by ASI Data Science as part of London Technology Week. The event was very well structured and, more importantly, perfectly distilled complex theories and processes into a digestible format for a novice like myself. I left feeling like an expert, and was able to confidently re-articulate the evening to others, which I think is very much a sign of a well-run event and, of course, great instructors. Moreover, as I’m navigating the world of artificial intelligence and machine learning in relation to my role and interests as an Interaction Designer, I’m being intentionally thoughtful regarding the Pareto Principle – I don’t actually need to be an expert, but I do want a solid 20% foundational knowledge base.

Anyhow, the evening began with a history of artificial intelligence and the corresponding theories of influential scientists on the topic before launching into a hands-on session in which we participants built our own handwriting recognition engine. Key takeaways included a clearer understanding of the relationship between artificial intelligence and machine learning, a new comprehension of artificial neural networks (see slide below), and insights into what is and is not currently possible in real-world applications.

Practical Introduction into Artificial Intelligence with ASI Data Science & London Technology Week
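Since the hands-on portion is what really made the concepts click, here is a minimal sketch of what such a handwriting recognition exercise can look like – my own toy example, not the workshop’s actual code – training a small neural network on scikit-learn’s bundled digits dataset:

    # Toy handwriting recognition sketch (my own illustration, not the workshop's code):
    # a small feed-forward neural network trained on scikit-learn's 8x8 digits dataset.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    digits = load_digits()                      # 1,797 labelled 8x8 images of handwritten digits
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data / 16.0,                     # scale pixel values to the 0-1 range
        digits.target,
        test_size=0.25,
        random_state=0,
    )

    # One hidden layer of 64 units is plenty for a toy example.
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    model.fit(X_train, y_train)

    print("Test accuracy:", model.score(X_test, y_test))

Even a network this small typically scores well above 90% on the held-out digits, which is a nice, tangible way to see how far a few lines of machine learning can go.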

Fast forward to this past Tuesday, when I attended a MeetUp by Udacity at Google Campus London on Machine Learning and Bots by Lilian Kasem. Very different content and structure but equally insightful. Lilian’s talk was centered on bots specifically – from the definition of a bot, to a live coding demo of building a bot in the Microsoft Bot Framework, to best bot practices – all while weaving in the integration of machine learning if it adds value (I also appreciated her stress on the if). Her resources are in the image below:

Machine Learning and Bots by Lilian Kasem
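For anyone curious about the nuts and bolts, the heart of a Bot Framework bot is surprisingly small. Below is my own rough sketch, not Lilian’s demo code, of a minimal message handler – assuming Microsoft’s botbuilder-core Python SDK and leaving out the adapter and web-service wiring a deployed bot needs:

    # Rough sketch of a Bot Framework message handler (my illustration, not Lilian's demo).
    # Assumes the botbuilder-core Python SDK; the adapter and web-service wiring
    # needed to actually host the bot is omitted.
    from botbuilder.core import ActivityHandler, TurnContext

    class EchoBot(ActivityHandler):
        # The framework calls this for every incoming message activity.
        async def on_message_activity(self, turn_context: TurnContext):
            text = turn_context.activity.text or ""
            # This is the point where machine learning could be woven in,
            # *if* it adds value - e.g. classifying intent before replying.
            await turn_context.send_activity(f"You said: {text}")

Everything else – routing, channels, state – sits around that handler, which is what makes the “only add machine learning if it adds value” advice so easy to follow.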

In addition to the obvious relevance of these two events to current interaction design trends, they are helping me formulate next steps for two of my current projects, one personal and one professional.

The personal project – Burrito, a marriage bot that analyzes messages between my husband and me to determine who is the better spouse – is currently being refactored into a formal bot for Telegram. Lilian’s talk in particular was very helpful, as my project focus has transitioned from theoretical to technical: I am now seeking to create a higher-fidelity product and implement better, if not best, programming practices.
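To give a sense of where Burrito is headed (purely my own illustrative sketch, not the actual project code), the Telegram plumbing plus a deliberately silly scoring heuristic might look something like this with the python-telegram-bot library’s Updater/MessageHandler API:

    # Illustrative sketch only - not Burrito's real code or scoring logic.
    # Assumes the python-telegram-bot library (v13-style Updater API).
    from collections import Counter
    from telegram.ext import Updater, MessageHandler, Filters

    scores = Counter()                                    # running "better spouse" tally per sender
    NICE_WORDS = {"thanks", "love", "sorry", "please"}    # made-up scoring heuristic

    def on_message(update, context):
        sender = update.effective_user.first_name
        text = (update.message.text or "").lower()
        scores[sender] += sum(word in text for word in NICE_WORDS)
        update.message.reply_text(f"Current standings: {dict(scores)}")

    updater = Updater("TELEGRAM_BOT_TOKEN")               # placeholder token
    updater.dispatcher.add_handler(MessageHandler(Filters.text & ~Filters.command, on_message))
    updater.start_polling()
    updater.idle()

The real analysis will obviously need to be smarter than a keyword tally, but the handler structure stays the same while the scoring improves.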

The professional project, for which I will intentionally be quite vague, investigates the impact of implicit data on enterprise organizational and technical systems, and in particular the transition from frictionless to exception-based workflows. As an Interaction Designer, I firmly believe it is important to empathize not only with humans, but also with technology and its corresponding data, because the concept of a user – who, what, or where – is increasingly being blurred. Therefore, ASI’s introduction to AI was particularly helpful in how I understand, design for, and implement data in my professional practice.

Long story short – two great events! In the coming weeks, I aim to write a follow-up post on my technical progress with Burrito bot, as well as on an online course I began this week to formalize my programming skills relative to the Internet of Things.

IxDA London – The Uncanny Valley & Subconscious Biases of Conversational UI

The theme of IxDA London’s June event was Algorithms, Machine Learning, AI and us designers – an evening of great discussions that prompted me to dig up reading material on the Uncanny Valley and subconscious biases. Both topics were strongly present, the former directly and the latter indirectly, in Ed and John’s presentation on designing for IBM Watson. They discussed the ‘Uncanny Valley of Emotion’ as a third line on the curve, in addition to ‘still’ and ‘moving’ in the traditional model of the uncanny valley. While I understand their intent in creating a third category – accounting for systems, agents, and interactions that are not visible or physically accessible – in retrospect I disagree with the characterization. Emotion, or a lack of it, can be explicitly betrayed by movement. From my understanding, subtle asynchronous or unnatural movements directly related to the emotional responses humans expect are a key ingredient of the Uncanny Valley. Therefore, I would rename the ‘emotion’ curve suggested by the Watson team to ‘implicit,’ thereby retaining emotion as a criterion for both explicit (still and moving) and implicit interactions.

Uncanny valley of emotion at IxDA London, photo by Karey Helms

The second subtopic, subconscious biases, greatly concerns me. A recent article in the New York Times – Artificial Intelligence’s White Guy Problem – sums it up perfectly. As designers, how do we build into our processes accountability for subconscious (and conscious) biases relative to algorithms, machine learning, and conversational interfaces? I don’t have an answer but I would like to find one!

Relevant links and resources:
  • The Uncanny Valley
  • Uncanny valley: why we find human-like robots and dolls so creepy
  • Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley
  • The Uncanny Wall
  • Artificial Intelligence’s White Guy Problem

Spring Design & Tech MeetUps

Raia Hadsell from Google DeepMind at International Women's Day Summit

Though work has been keeping me busy, I’ve still been able to take part in a lot of exciting design events around London this past spring. Some of my favorites were:

  • Google partnered with Women Techmakers to host International Women’s Day Summit 2015. Raia Hadsell from Google DeepMind gave an inspiring talk on Reinforcement Learning and presented her equally inspiring non-linear career path.
  • Gravity Sketch presented their process and prototypes at IxDA London’s April event, Augmenting space and place. Though I find their product potentially more exciting as a tool for non-creatives, I strongly appreciate their rethinking of the creative process and of the mediums of communication and collaboration.
  • Chryssa Varna had me feeling all sorts of nostalgic for architecture as she presented her beautiful thesis Industrial Improvisation – a poetic combination of robotics and graceful human interaction.

IxDA London – Durrell Bishop

Another great MeetUp by IxDA London! The evening opened with an introductory talk by Dr. Dan Lockton, creator of the Design with Intent toolkit, whose workshop I volunteered for at the DRS 2014 conference this past June, and ended with a presentation and workshop led by Durrell Bishop.

Durrell Bishop at IxDA London MeetUp

IxDA London – Wearable Interaction Design

Since moving to London this summer, on the recommendation of friends, I’ve made a strong effort to be proactive in the London MeetUp scene, as both an ongoing learning experience and an opportunity to get to know other designers and technologists. My experiences so far can pretty much be summed up by the regret of not exploring MeetUp sooner! While I’m a member of quite a handful, my favorites so far have easily been Women Who Code London and IxDA London, both of which are led by obviously passionate and motivated individuals, which I believe is what makes their events so coveted.

The most recent IxDA MeetUp was on Wearable Interaction Design, and as one friend and fellow attendee summarized it – a mini conference within a single evening. Guest speakers included Melissa Coleman, Kevin McCullagh, Becky Stewart and Duncan Fitzsimons, who offered a diverse range of views on wearable technology.

Wearable Interaction Design IxDA MeetUp photo by Karey Helms

While all the speakers had interesting and varied perspectives, I really appreciated Duncan’s broad and inclusive definition of wearable technology (as seen above). The subject is too often discussed as if the future were simply conventional jewelry modified with an LED, a screen, or an accelerometer; I believe zooming out and taking a more diverse perspective is what will allow for true innovation relative to user-centered needs.

Points of discussion and other thoughts that sprang to mind or stuck with me included:

  • Michio Kaku’s Cave Man Principle in relation to media excitement vs long-term commitment
  • A great point by Melissa (if I remember correctly) that wearables will not become permanently ingrained in our bodies akin to cyborgs: with the constant release of new technology, versions become obsolete, and we will fear our bodies becoming a technology wasteland
  • Becky proposed a great list of suggested conversations to have between a designer and an engineer when prototyping, including the technical specifications of the data, who needs to see what and when, one-way or two-way communication, and power requirements, among others