Fakers and phonies

I recently had a conversation in response to a reading of chapters 13–14 of Suchman, Lucy. Human-Machine Reconfigurations: Plans and Situated Actions. Cambridge University Press, 2007. These two chapters focus on a critique of contemporary developments in Artificial Intelligence that aimed to create sentient, humanlike entities. Our ideation of AI and its speculative future seems to have shifted since Suchman’s piece was written – we no longer envision the AI of the future as human-passing, as early-2000s sci-fi did [1]. Instead we have watched machine intelligence evolve to the point where we don’t even think of it as AI, or indeed notice it at all. For example, Gmail uses machine learning to filter spam, sort your email and auto-suggest email responses. Many of us have seen retail chatbots pop up in the corner of webstores whether or not we’ve used them, and may even have answered a call from a machine salesperson [2]. Although AI is not being physically manifested as a humanoid robot butler, much of Suchman’s skepticism, and many of her concerns, remain pertinent.

Suchman cites a number of instances where the emergent behaviours of AI reveal its limitations in human/AI social interactions – the AI works only within a certain set of environmental and social parameters, and can’t cope outside of its situated context and/or with the addition of too many (or the wrong) external stimuli. For face-to-face interactions with humanoid AI this may still be the case. But since our interactions are so often mediated by screens – do they need to be convincing in person? As demonstrated by Which Face Is Real, a tool developed by the Calling Bullshit project, telling a real face from a synthesized face is harder than you think. The combination of this, ‘deepfake’ videos, increasingly sophisticated chatbots, and the masking effects of lo-fi video calls may mean we are close to a convincing AI–human interaction.

Much of the criticism of AI as mimic has been levelled at deepfake videos in particular. This criticism has mostly focused on their usefulness as a tool for nefarious ends: blackmail via faked pornography, or the further legitimisation of ‘fake news’ stories. These are legitimate concerns, but in some ways they represent individualistic fears. I don’t want my image to be appropriated; I don’t want to be scammed, of course. But beyond this, I share Suchman’s concern that such technologies are unsituated, and universalised from a US/Eurocentric perspective. As I touched upon in my last post, setting aside its products entirely (and not dwelling too much on the politics of creating sentient service ‘beings’), technoscience itself is not neutral, and is imbued with the biases of its authors and their situated context [3].

[1] Without having done much research beyond my own impression of the period, there seems to have been a proliferation of Western film/TV in the late 1990s/early 2000s in which replicant-esque robots featured heavily. Examples include S1m0ne (2002), A.I. Artificial Intelligence (2001), Bicentennial Man (1999), and the three Matrix movies (1999–2003).

[2] Let’s not get into personhood right now.

[3] Following the voicing of concerns around AI biases, large vendors like IBM and Google have announced further tools to uncover the biases of their existing tools.
