10. Robots

In this session for Digital Geographies we explore the forms of spatial imagination produced and performed with robots. The focus of the discussion is less on the actually existing uses of robots, although these are fairly plentiful in factories. Rather, we are exploring the work that stories about robots and automation do to shape our world.

Lecture slides

The ‘Good Robot’

A fascinating and evocative example of the spatial imagination of robots and artificial intelligence in action comes in the form of an advertisement for the (now discontinued) “Vector” robot from a company called Anki.

How to narrate or analyse such a robot? Well, the advert runs through several almost-archetypal figures of the ‘robot’ and of automation. The first is the cutesy, non-threatening pseudo-pet that Vector invites us to assume it is. This owes a lot to Wall-E (also the robots in Batteries Not Included, and countless other examples) and the doe-eyed characterisation of the faithful assistant/companion/servant. The second is the all-seeing surveillant machine uploading all your data to “the cloud”. The third is the quasi-military monster, with shades of “The Terminator” and a little helpless-baby jeopardy for good measure. Finally, a brief nod to HAL 9000, and the flip of the master/slave relation it represents, completes a whistle-stop tour of pop-culture understandings of ‘robots’, stitched together in order to sell you something.

I assume that the Vector actually still does the kinds of surveillance it is sending up in the advert, but I have no evidence – there is no publicly accessible copy of the terms & conditions for the operation of the robot in your home. However, in an advertorial on ‘Robotics Business Review‘, there is a quote that invites the suspicion that Vector is yet another device that is, on the face of it, an ‘assistant’ but is also likely to be hoovering up everything it can about you and your family’s habits in order to sell that data on:

“We don’t want a person to ever turn this robot off,” Palatucci said. “So if the lights go off and it’s on your nightstand and he starts snoring, it’s not going to work. He really needs to use his sensors, his vision system, and his microphone to understand the context of what’s going on, so he knows when you want to interact, and more importantly, when you don’t.”

If we were to be cynical we might ask – why else would it need to be able to do all of this?

Anki Vector “Alive and aware”

Regardless, the advert is a useful example of how fictional representations of ‘robots’ bleed into contemporary commercial products we can take home – and perhaps even of what we might think of as camouflage for the increasingly prevalent extractive business model of in-home surveillance.

Tay and You, Artificial intelligence or stupidity?

“TayandYou”, or “TayTweets”, was a short-lived semi-interactive programme, colloquially termed a chatbot but also labelled by some an “AI”. The software was a Microsoft Research project, and its Twitter presence was launched on the 23rd of March 2016, only to be suspended sixteen hours later. Within that time the Twitter account for the programme shared messages, both as general tweets and in replies, that were widely condemned and derided as grossly offensive.

At base, I would like to suggest that the “TayTweets” situation is both: an example of a problematic attempt to translate a particular conceptualisation of ‘mind’, or particular aspects of what we call intelligence, into a mathematical problem; and an example of a computational exploit that was not anticipated. The model of linguistic interaction developed by the researchers at Microsoft did not accommodate the normative contexts in which it might operate. The programme was designed to learn from the text it ‘read’. Yet the programme was apparently designed with no ‘filter’, or mechanism for evaluating the ethical and political contexts of those inputs, even at a rudimentary level. Based on the kinds of text it received, the programme produced grammatically and dialogically correct tweets that were also easily judged to be grossly offensive. For instance, the programme issued tweets that proclaimed support for genocide.
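The dynamic described above can be caricatured in a few lines of Python. This is a hypothetical sketch, not Microsoft's actual architecture: a toy bot that 'learns' by storing whatever it reads will faithfully reproduce coordinated hostile input, while even a crude (and obviously inadequate) blocklist changes what the system can absorb. The class names and the blocklist term are illustrative assumptions.

```python
import random

class NaiveChatbot:
    """Toy bot that 'learns' by storing input verbatim, then echoes it back.
    There is no evaluation of what the input means -- the exploit surface."""

    def __init__(self):
        self.learned = []  # phrases absorbed directly from users

    def read(self, message):
        self.learned.append(message)

    def reply(self):
        # The bot can only reproduce its inputs; normative judgement is absent.
        return random.choice(self.learned)


class FilteredChatbot(NaiveChatbot):
    """The same bot with a rudimentary blocklist -- a stand-in for the kind of
    mechanism that was apparently missing. Trivially incomplete by design."""

    BLOCKLIST = {"genocide"}  # hypothetical, illustrative only

    def read(self, message):
        if not any(term in message.lower() for term in self.BLOCKLIST):
            super().read(message)
```

A coordinated group feeding `NaiveChatbot.read()` hostile text guarantees hostile replies; `FilteredChatbot` simply refuses to learn from matching input. The point of the sketch is not that a blocklist solves the problem, but that the absence of *any* such evaluation makes the exploit inevitable.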

We might see this as a form of stupidity in the failure to recognise and negotiate normative contextual issues, and in placing greater belief in the ontic capacities of the code than in the epistemological vagaries of those who would interact with it. The Corporate Vice President of Microsoft Research was quick to reduce the issue to “a coordinated attack by a subset of people” exploiting an unforeseen vulnerability. Here, I feel prompted to recall Avital Ronell’s articulation of stupidity not as the ‘other’ of knowledge but as the absence of a relation to knowing. There is an implied exhortation for us to sympathise with Microsoft Research, which takes the moral high ground against those who seek to spoil apparently honourable research. We are thereby invited to affirm a blithe enthusiasm for general technological progress, aligning “AI” and “the algorithm” with a prescribed story of always-positive technological advancement. Such an enthusiasm, in Ronell’s reading of Nietzsche, is an enthusiastic deferral of knowledge, for we are invited to ignore the details, and this is a form of stupidity. There was also arguably an element of hubris in the very public staging of the Tay experiment.

We might see this as a form of stupidity in the negotiation of how such an incident should be judged and reported or discussed. For how are we supposed to judge? In what context and against what criteria? Many grant Tay some kind of minimal subjectivity, referring to “her” agency. Yet this, of course, elides too much. The system denoted by “TayTweets” includes complex interactions amongst a host of different kinds of entities. It exists as a sociotechnical assemblage with nuanced and ill-defined agency. Even so, the discussion is always drawn to a transcendent horizon of apparent machine supremacy — the ethos being: the system is stupid now but just you wait. 

ReCap recording

The recording will be accessible from the module ELE page.

Reading

  1. Lynch, C. (2021). Critical geographies of social robotics. Digital Geography and Society, 2, 100010. https://doi.org/10.1016/j.diggeo.2021.100010
  2. Walker, M. et al. (2021). Locating artificial intelligence: a research agenda. Space and Polity, 25(2), 202–219. https://doi.org/10.1080/13562576.2021.1985868

Please read with the following questions in mind:

  • How do robots relate to geographies of work?
  • What sorts of spaces do robots represent, create or alter?
  • What sorts of claims are made for AI concerning our spatial experience?
  • (How) Does the use of AI shape public and private space?
