Fakers and phonies

I recently had a conversation in response to a reading of chapters 13–14 of Suchman, Lucy. Human-Machine Reconfigurations: Plans and Situated Actions. Cambridge University Press, 2007. These two chapters focus on critiques of contemporary developments in Artificial Intelligence that aim to create a sentient, humanlike entity. Our ideation of AI and its speculative future seems to have shifted since Suchman’s piece was written – we no longer envision the AI of the future as human-passing, as in early 2000s sci-fi [1]. Instead we have observed machine intelligence evolve to the point where we don’t even think of it as AI, or indeed notice it at all. For example, Gmail uses machine learning to filter spam, sort your email and auto-suggest email responses. Many of us have seen retail chatbots pop up in the corner of webstores whether or not we’ve used them, and may even have answered a call from a machine salesperson [2]. Although AI is not being physically manifested as a humanoid robot butler, much of Suchman’s skepticism, and many of her concerns, remain pertinent.

Suchman mentions a number of instances where the emergent behaviours of AI reveal its limitations in human/AI social interactions – the AI works only within a certain set of environmental and social parameters, and can’t cope outside of its situated context and/or with the addition of too many, or the wrong, external stimuli. In terms of face-to-face interactions with humanoid AI this may still be the case. But as our interactions are so often mediated by screens – do they need to be convincing in person? As demonstrated by Which Face Is Real, a tool developed by the Calling Bullshit project, telling a real face from a synthesized face is harder than you think. Combine this with ‘deepfake’ videos, increasingly sophisticated chatbots, and the masking effects of lo-fi video calls, and we may be close to a convincing AI/human interaction.

Much of the criticism of AI-as-mimic has been levelled at deepfake videos in particular, mostly focused on their usefulness as a tool for nefarious ends: blackmail via faked pornography, or the further legitimisation of ‘fake news’ stories. These are legitimate concerns, but in some ways they represent individualistic fears. I don’t want my image to be appropriated, I don’t want to be scammed, of course. But beyond this, I share Suchman’s concern that such technologies are unsituated, and universalised from a US/Eurocentric perspective. As I touched upon in my last post, setting aside its products entirely (and not dwelling too much on the politics of creating sentient service ‘beings’), technoscience itself is not neutral, and is imbued with the biases of its authors and their situated context [3].


[1] Without having done too much research beyond my own impression of the period, there seemed to be a proliferation of western film/TV in the late 90s/early 2000s wherein replicant-esque robots featured heavily. Examples include S1MONE (2002), A.I. Artificial Intelligence (2001), Bicentennial Man (1999), and the three Matrix movies (1999–2003).

[2] Let’s not get into personhood right now.

[3] Following the voicing of concerns around AI biases, large vendors like IBM and Google have announced further tools to uncover the biases of their existing tools.

the right hand doesn’t know what the left hand is doing

Jack’s Car skimmed down a slip road at 60 miles an hour, comfortably decelerating to an even 50 miles an hour as the road evened out. This deceleration to a slower speed was due to the decreased visibility of the road. Jack’s Car did not decelerate to 40 miles per hour, because it was a dry night rather than a wet night; but it was a cloudy night. The route Jack’s Car took diverged from the motorway. Jack’s Car took this route because the motorway had many cars on it. This means that a car may get snarled up in a traffic jam. It also increases the chance of an accident happening, due to many cars being on the road. Rather than risk the increased likelihood of an accident, the alternative was to leave the motorway via a slip road which leads to a lesser-used, less well tarmacked road, cutting directly from one side of a plot of land to the other in a straight line. The land is used for tree farming and is not well lit. This is why Jack’s Car decelerated to 50 miles per hour, and in addition, turned its headlights up to full beam. An accident is much less likely on this road than on the motorway, for although it is not as well tarmacked, the journey becomes shorter and there are fewer cars sharing the road.

Jack was at home, waiting for his car, which was returning from a drive-through. Jack’s Car left Jack’s house for the drive-through, which is attached to a supermarket, at around 19:00, because other car owners like to have their shopping at home by the time they return from work. Jack does not have this preference enabled, so Jack’s Car waits until there are fewer cars on the road before leaving. There is always traffic on the motorway, however, so the best route to and from the supermarket often involves cutting across the tree farm. In addition to fewer instances of car accidents, this road has the benefit of being shorter, and therefore more fuel efficient, and therefore better for the environment, and also cheaper.

Jack’s Car drove along the road in near silence, bar a low hum which Jack’s Car (and all other cars) emitted to warn pedestrians that a car was driving towards them. There were never pedestrians on this particular road, or any road, but all cars hummed all the time because it was legislated. However, Jack’s Car was always ready to decelerate in response to a pedestrian stepping into the road at any time, and was always ready to obey the directions of markings on the road. Jack’s Car was familiar with road markings which issued instructions to merge, give way etc., even when Jack’s Car had not encountered the markings on that particular road before. Road markings are sometimes changed, and Jack’s Car needed to read the markings anew each time it encountered them, in case they had changed. This is why, as Jack’s Car was equidistant from the entrance and exit of the tree farm and detected a new dashed line followed by a solid line, it recognised it as ‘right of way’ and passed over it. This is also why, having passed over the line and finding itself confronted with a solid line followed by a dashed line, meaning ‘no entry’, Jack’s Car stopped.

This trap, which trapped Jack’s Car, is laid simply by drawing a pair of concentric circles on the road: the outer dashed, the inner solid. From outside, right of way; from inside, no entry.
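The marking logic the trap exploits can be sketched as a toy rule. This is a hypothetical reading of how such a car might interpret line pairs, not any vendor’s actual implementation:

```python
# Hypothetical sketch of the directional road-marking rule described above.
# The car reads a pair of painted lines in the order it crosses them:
# dashed-then-solid means "right of way", solid-then-dashed means "no entry".

def read_marking(lines_in_crossing_order):
    """Interpret a pair of painted lines in the order the car crosses them."""
    if lines_in_crossing_order == ("dashed", "solid"):
        return "proceed"  # right of way
    if lines_in_crossing_order == ("solid", "dashed"):
        return "stop"     # no entry
    return "unknown"

# A circle whose outer line is dashed and inner line is solid becomes a trap:
entering = read_marking(("dashed", "solid"))  # crossing inward: allowed
leaving = read_marking(("solid", "dashed"))   # crossing outward: forbidden
assert entering == "proceed" and leaving == "stop"
```

The asymmetry is the whole trick: each crossing is locally valid, but the two rules compose into a cell the car will enter and never leave.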

A group of pedestrians stepped out from the gloom between the trees and made their way to Jack’s Car. Jack’s Car saw the pedestrians as they stepped into its high beams. They hummed as they made their way closer.


James Bridle’s Autonomous Trap series (in which the artist ‘traps’ autonomous cars in the manner described above) describes the dichotomy of the algorithm as both slavishly procedural and logic-driven, and mystified and inscrutable. While it may appear to support the idea of the algorithm as procedural to a fault, the act of trapping the car in a magic-inspired ring of salt demystifies the algorithm three-fold: it allows the pedestrian to arrest the procedure by subverting its rules; it allows them to do so using analogue tools; and it thereby undermines the idea of the algorithm as ceaseless or incomprehensible. As I read it, the work is a call to arms, a demystification of the algorithm and an invitation to think inventively about its limitations.

After reading Tarleton Gillespie’s Algorithm [draft] [#digitalkeywords], the idea which seems to me most potent is the idea of the algorithm as a ‘talisman’. The talisman has the power to ward off culpability, absorb blame or anoint the actions of its author.

The idea of the algorithm as autonomous from the author can be comforting. It suggests impartiality and fairness, utilitarianism and efficiency. And in many cases this is true, but fair and efficient for whom? Pay no attention to the man behind the curtain. 

Although many driverless cars use a range of methods to detect objects – radar, for example – computer vision systems are cheaper and potentially a more market-friendly option. However, researchers from Georgia Tech found that machine vision systems are consistently poorer at detecting people with darker skin tones than people with fairer skin tones. This was true even when the researchers removed small and occluded pedestrians from the test set and used only images of people in full view:

[…](small pedestrians and occluded pedestrians) are known difficult cases for object detectors, so even on the relatively “easy” subset of pedestrian examples, we observe this predictive inequity. We have shown that simple changes during learning (namely, reweighting the terms in the loss function) can partially mitigate this disparity. We hope this study provides compelling evidence of the real problem that may arise if this source of capture bias is not considered before deploying these sort of recognition models. 

This is not a flaw in the algorithm, but a flaw in its training data.  

An algorithm has an author, or set of authors. While the logic-driven procedure of the algorithm may function impartially and fairly, it bases its decisions on the data it was trained on during its development. If this data has not been sufficiently scrutinised, as in the case of the object detection software examined in the Georgia Tech study, it may enact the biases of its author – conscious or otherwise.
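The partial mitigation the researchers describe, reweighting terms in the loss function, can be sketched in toy form. The per-group weights and the simple log-loss here are illustrative assumptions, not the study’s actual code:

```python
import math

def weighted_loss(examples, weights):
    """Mean negative log-likelihood, with per-group weights that up-weight
    groups the detector currently serves poorly."""
    total = 0.0
    for predicted_prob, group in examples:
        # Errors on up-weighted groups contribute more to the loss,
        # pushing training to reduce them.
        total += weights[group] * -math.log(predicted_prob)
    return total / len(examples)

# Toy illustration: the same predictions, but misdetections of the
# under-served group ("group_b") count double during training.
examples = [(0.9, "group_a"), (0.6, "group_b")]
uniform = weighted_loss(examples, {"group_a": 1.0, "group_b": 1.0})
reweighted = weighted_loss(examples, {"group_a": 1.0, "group_b": 2.0})
assert reweighted > uniform
```

The point of the sketch is that nothing in the procedure itself changes; only the relative cost of failing each group does, which is why the authors call it a partial mitigation rather than a fix for the underlying capture bias.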

I think the thing to remember is that the algorithm is doing its job – we might just not know who hired it.

Wk. 1: A Good Life & Flexing a muscle

I’m less and less sure my decisions are my own. I take the route I’m recommended; I listen to what’s up next. My impulse, when cognisant of being manipulated (guided?) in this way, has been to vow to secede from modern life, delete a load of apps and clear some cookies.

It can be a kick in the ego to discover your behaviour, or even your taste, is not your own. However, I think we can agree it’s futile to try to disavow our symbiotic relationship with technology, or to live ‘the good life’ as a route to autonomy. Would you want to if you could?

When reading this week’s set text, ‘A fish can’t judge the water’ by Femke Snelting, I was reminded of a work by the technology and design studio IF. The work, Data Licences (2015), was displayed as part of ‘Big Bang Data’, a 2014–15 exhibition at Somerset House, London. Visitors to the exhibition were invited to scan objects representing different types of personal data. When an object had been scanned (for example a bank card, representing financial transactions), visitors were given a rundown of the risks and benefits of sharing that data – your location may be traceable by using a card on public transport, but that data could be used to improve/optimise a public service. Do you opt in? After making a series of these choices, the visitor was given a printout of their ‘licence’ for their records.

As Snelting alludes to in her writing, and as is implied in Data Licences, a fuller understanding of the uses of ubiquitous technology grants us the ability to make informed decisions or radical interventions. Seceding from the network, as I have been inclined to do in the past, does nothing to stop it functioning. As Laboria Cuboniks state in the Xenofeminist Manifesto, ‘slowing down and scaling back’ to a fictional, simpler time is a privilege few can practically afford.


My first class with Dr Helen Pritchard was reassuring in several ways. My colleagues are clearly very talented and experienced in a variety of fields. I hope to both contribute to and draw from what is clearly a wealth of collective expertise. The text I recommended as representative of my interests and practice was Art Without Death: Conversations on Russian Cosmism, an edited collection and part of the e-flux book series. Artists and philosophers including Hito Steyerl, Franco ‘Bifo’ Berardi and Anton Vidokle discuss an obscure Russian philosophical movement. A blend of eastern philosophy, the Russian Orthodox church and Marxist ideology, the movement is centred on the idea that it is our moral and ethical obligation to work towards permanently curing death, and then reviving all those who have died. Although these goals may at first appear incredible, they have persisted within the transhumanist movement. Personally, I am interested in how art in particular could literally, physically heal in a world where access to adequate healthcare is uneven.

