World’s smallest violin

Next Friday (as of 23/02/2020) I will be having a class where I am part of a group discussion in which we all thrash out what our research projects might be for the next term. This is the second research project I’ve undertaken in this particular field, the first being a group project investigating sound pollution, but it will be my first project based on my specific research interests. And if you can’t tell by my rambling introduction, I’m bricking it a little bit. The thing that undermines my work during projects like this is my lack of specific focus. I want to make sure my work is embedded and contextual. I want to approach any project with the understanding that we are inexorably enmeshed in a multitude of balls of wool, tangled in another being’s hair, and that we are complicit and explicitly involved all the time, everywhen. But this makes my work baggy and superficial (and badly researched).

It’s pretty overwhelming, I have to say. I had an idea recently for an imagined speculative app which somehow calculates what your key cause would be for that day, for the maximum amount of self-masturbatory, aggrandising smugshittyness. It would be a systems-based solution for the ultimate first world problem – what to care about more. Say it’s a sunny spring Tuesday – that’s a day for caring about the rights of sex workers – a rainy spring Tuesday might be a day to reflect on the problem of overfishing, a moody Thursday in autumn is an FGM day for sure. This feels macabre and funny – at least to me – not in that I feel that any of these specific causes are funny or trivial, the complete opposite. It’s more my own stress and guilt at not being a more active participant in alleviating the suffering that I am complicit in by being alive, now. For instance, I just looked up how much Donna Haraway: Story Telling for Earthly Survival (DVD) is on Amazon – I mean, what the fuck, me. It’s available on there for 20 euros, but at what cost?
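The imagined app would really be nothing more than a lookup table. As a toy sketch – every mapping invented, using only the examples above:

```python
# Toy sketch of the imagined "cause of the day" app.
# All rules are invented for illustration, taken from the examples in the text.

from dataclasses import dataclass


@dataclass
class Conditions:
    season: str   # "spring", "summer", "autumn", "winter"
    weekday: str  # "monday" ... "sunday"
    weather: str  # "sunny", "rainy", "moody", ...


# Arbitrary condition -> cause mappings, exactly as the post imagines them.
RULES = {
    ("spring", "tuesday", "sunny"): "rights of sex workers",
    ("spring", "tuesday", "rainy"): "overfishing",
    ("autumn", "thursday", "moody"): "FGM",
}


def cause_of_the_day(c: Conditions) -> str:
    """Return today's key cause, or a default nudge if no rule matches."""
    return RULES.get((c.season, c.weekday, c.weather),
                     "pick your own cause today")


print(cause_of_the_day(Conditions("spring", "tuesday", "rainy")))  # overfishing
```

Which is, of course, exactly the problem: a systems-based solution this thin can only ever be as thoughtful as its table.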

Would this be the ultimate ‘woe is me’ act – to remove the burden of choice from someone who faces no existential threat to their existence (bar the big ones we are all ignoring (nukes etc))? Or – would it be really subversive and funny? ¯\_(ツ)_/¯ I’ll talk to some people and see what they say. I should probably talk to some critters too.

Like bunnies; the multiplicity of the digitally witnessed moment

Image credit: Jennifer Chen / Unsplash

What is it that a screen does as intermediary between the witness and the moment? What happens as the act is captured, and given a new beginning and end, new dramatic beats and punctuation?

When the act has been captured, what happens to the veracity of the compressed act, its components split and sent as separate packages?

Knowing that these few transfigurations happen, without even knowing what they do, what implications does this have for the documentarian? Even with the understanding that they have the privilege (or burden) of choosing when a moment is born, how do they reckon with the ability to watch the moment on mute, or as a gif? The moment is reformed in the screen, more or less, with captions maybe, and an ident at the beginning or a watermark. The witness receives it a couple of inches across, or stretched across the side of a building, and then a second documentarian might capture that moment – and then what’s become of the first one?

I say capturing, but more accurately I mean subdue – as it will take off bounding down the street and start reproducing soon as look at you. I tell you, no one will be agile enough to catch up to it, let alone compare how it looks now to how it looked before it was squished and stretched and reformed and everything. Not the documentarian, the witness, the other documentarian or the moment’s offspring, nor even the bystanders of the thing that happened in the first place.

And don’t even get me started on the archivist. First you have to round up all the baby moments and split them into packages again to save them in some form – which is really just breeding them. Then you have to give them names, or try to get them into some semblance of order, which just creates more babies. Someone will come and want to see the moments, which really just enables the moments to mingle and network and shag and make more versions of themselves. Someone will want to restore one that doesn’t look as expected and there pops out another six. You can’t stop them – they’re like rabbits.

I was watching a film recently with a friend who speaks English as a second language. We watched a film in English with English subtitles. We noticed after a little while that the words spoken by the actors were not the same as the subtitles. Someone said something like, ‘Come in, take a seat why don’t you’, and the subtitles read ‘Sit down’. The two versions of speech had very distinct tonal differences, and it sent us reeling. We couldn’t finish the thing. If it were dubbed too, I don’t think we would have recovered. But I mean, if we can’t keep a scripted moment consistent, what hope in hell do we have of accurately subduing, packaging and sharing a moment found in the wild? I think we may have to resign ourselves to letting them roam free, to reach the next witness with whatever battle scars they’ve picked up. Probably the most we can do is tag them.

Sad Emoji: VR as empathy machine

I find it interesting that, although it seems a given that there will be multiple individual experiences of one particular event, the aim of artists/practitioners can be understood to be to elicit a specific emotional reaction with a work. Although I understand the compulsion to try to share an experience wholesale via a work of art – I’ve tried to do it many times, to varying degrees of success – my experience/gut tells me it can’t be done. This is not to say that a genuine reaction can’t be evoked by an artwork, but rather that it may be impossible to ensure that the lasting experience was the one you intended. You may be able to convey ‘sad’, for instance, but not necessarily ‘regret’.

I’ve had one or two conversations about the experience of VR recently. When I refer to VR, I mean stereoscopic virtual reality, which uses goggles and handheld controls, e.g. Oculus or PlayStation VR. During these conversations, my colleagues suggested that VR has the potential to create a more visceral experience, and that this in turn would enable the architect of that experience to elicit a more potent and lasting empathetic connection with their argument. I argued that this wouldn’t be the case for a number of reasons (more elegantly summarised by Deborah Levitt here), including the weighting of visual perception over all other forms of sensory perception, the assumption that being in a space, albeit virtual, creates a more potent experience than is possible through other, less representational mediums like poetry or music, and the side-lining of VR’s potential for invention and speculation. My main objection, however, was the assumption that everyone would take their goggles off having had the same experience.

This is in no way to suggest there is no value in VR as an artform – rather that, in the same way we are accustomed to discussing more familiar forms of propaganda critically, we should be critical when discussing VR as an ‘empathy machine’. For instance, a film may have a specific, if not explicit, intention – but as a rule we do not assume that every individual will interpret it as having the same message. Similarly, the song that conjures up painful memories for you may be on my getting-ready playlist. There is a sweeping-ness, or a reduction of the significance of lived experience, in the assumption that because you are looking at the same picture, you see the same thing. For instance, in November 2018 I led several workshops in the Tate Modern, with the aim of helping young people access their existing visual lexicon and critical voices when discussing art. The Turbine Hall was hosting a series of works by artist/activist Tania Bruguera. A small room off the main hall was filled with an organic compound which caused temporary eye irritation, similar to cutting onions, and induced tears. The artist has described the work as provoking ‘forced empathy’ and reflection on the global migration crisis. The young people I was with, in contrast to this intent, approached the space as a site of play and humour. This is not a failing of the work, rather an indication that an audience is not homogenous and won’t behave as such.

It would be revolutionary if VR could allow us, in this era of non-facts, to create real empathy and knowing. Personally, I would find it alarming – who would have the right hands to hold that kind of influence?

‘Reply hazy, try again’; how Oracle Practice is better than a Magic 8-Ball.

Last week I asked the Oracle, “What is the price of good health?”

This question was on my mind for a number of reasons, but mostly because more than one of my close female family members is experiencing health difficulties at the moment. I read my question aloud and then put it in the Basket of Yes, which in this moment happened to take the form of a laptop case, as did a number of others who were also present. When I read my question to the Oracle, I also gave a page number. The Oracle, otherwise known as M Archive, the second book in an experimental triptych containing a series of black feminist vignettes written by Alexis Pauline Gumbs, answered my question via the writing on the page I suggested. The Oracle gave me an answer which I was invited to interpret alone, or in collaboration with others present.

The collective Basket of Yes exercise was devised by Gumbs, inspired by a vignette of a basket-wearer in M Archive. Both the book, when it is read as a book, and the exercise, when the book is an Oracle, suggest that the archive is not just a dusty collection of records, static and useful for reference. Rather, they suggest that the archive is a collective and active body of knowledge which speaks to us in the present, and to the possible versions of us in the future. Indeed, the book as Oracle answered me with a surprisingly apt and relevant passage (see pg. 63 of M Archive).

The answer was not predetermined à la Magic 8-Ball. The book was not explicitly written to act as Oracle*; indeed, during Gumbs’ own staging of the exercise, the Oracle answers through a variety of black feminist writings. It may seem fantastical to imagine that the book could literally answer my question in this way. However, my experience in that moment was that it did answer me, although in true Oracle fashion, the answer required interpretation.

I found the experience quite affecting. This may be because of the sensitivity of the questions asked on the day. It may be because the Barnum effect gave me the impression the text was speaking directly to me. More than either, though, I think it was affecting because, as a mirror to the multiple possible futures the archive can address, shared Oracle practice allows the practitioner to appreciate a multiplicity of (mis)interpretations. This is a practice of thought which acknowledges the complexity and entanglement of individual perception. It undermines linearity and inevitability. It pays homage to our ancestors in the archive, and to our possible future selves.

In the summer, Alexis Pauline Gumbs will be coming to Goldsmiths to speak at a seminar and will facilitate a Basket of Yes exercise. All the multitude of versions of me really want to go.

 *You could argue all books are destined to act as Oracle.

Gilt Cages

Image: Zach Blas’ Fag Face Mask, 2014

On Friday 22nd, I attended a talk by Lea Laura Michelsen of Aarhus University, during which they outlined their current research, focussed on the aesthetic practice of Zach Blas. In this context, aesthetic practice can be understood as an artistic practice informed by research, and in turn, research methods informed by creative practice. Michelsen posits that by discussing Blas’ work as a strictly artistic practice, rather than acknowledging its epistemological and pedagogical elements, the political potential of Blas’ research and artistry is diminished. Michelsen argues that these practices are inextricably entangled.

Blas’ methods include epistemological research, fine art practice, collaborative workshops and discursive events. Arguably, his most well-known projects are embodied through a series of masks, which are exhibited in a traditional gallery context as art objects, or through documentation of the masks being worn during performances and encounters. The masks serve dual purposes. The first is the obscuration of the face, specifically to avoid the face being ‘seen’ and recorded as biometric data. The second is to encourage reflection on the politics of biometrics/metricisation, which are implicitly identified as problematic.

The series Face Cages uses the points created by biometric facial recognition software to create a grid-like mask. This is intended to make tangible the implicit racist, sexist and transphobic biases of these technologies. The masks are reminiscent of gilt cages, muzzles or the scold’s bridle, a 16th-century punitive device which binds the head and depresses the tongue to restrict speech. The series Facial Weaponization Suite creates homogenous masks for marginalised groups – for instance, the ‘Fag Face Mask’ was made from the facial data of many queer faces, in response to studies which claim to identify sexual orientation using facial recognition techniques. Another mask explores the racism of biometric technologies which cannot recognise dark skin tones. The masks subvert a reductionist classification of individuals to create a unity of resistance when worn during protest.

Critics of Blas characterise these series as aestheticising resistance and protest, and argue they reduce nuanced social issues without the alternative position being fully explored. Michelsen argues that this dismisses or ignores elements of his work which – with humour and play – open up conversations around ubiquitous technologies that otherwise may not take place. This combination of artistic, playful, collaborative practice and epistemological research in turn creates an opportunity for alternative futures and/or acts of resistance to be formulated.

This position raises a number of questions:

  • To what extent should an artist be expected to present a political, yet unbiased, perspective?
  • To what extent can a researcher use creative methods, with the understanding that they cannot take curatorial or artistic licence?
  • Can the work of artist-researchers be more accurately described as aesthetic practice?
  • Are the political potentials increased/expanded when approached in such a way?
  • Is it possible to have such a practice without it being reduced to its constituent elements?

Fakers and phonies

I recently had a conversation in response to a reading of chapters 13–14 of Suchman, Lucy. Human-Machine Reconfigurations: Plans and Situated Actions. Cambridge University Press, 2007. These two chapters focus on a critique of contemporary developments in Artificial Intelligence which aim to create a sentient and humanlike entity. Our ideation of AI and its speculative future seems to have shifted since Suchman’s piece was written – we are no longer envisioning the AI of the future as human-passing, as in early 2000s sci-fi [1]. Instead, we have observed the evolution of machine intelligence to the point where we don’t even think of it as AI, or indeed notice it at all. For example, Gmail uses machine learning to filter spam, sort your email and auto-suggest email responses. Many of us have seen retail chatbots pop up in the corner of webstores, whether or not we’ve used them, and may even have answered a call from a machine salesperson [2]. Although AI is not being physically manifested as a humanoid robot butler, many of Suchman’s criticisms and concerns remain pertinent.

Suchman mentions a number of cases where the emergent behaviours of AI reveal its limitations in human/AI social interactions – the AI works only within a certain set of environmental and social parameters, and can’t cope outside of its situated context and/or with the addition of too many, or the wrong, external stimuli. In terms of face-to-face interactions with humanoid AI, this may still be the case. But as our interactions are often mediated by screens – do they need to be convincing in person? As demonstrated by Which Face Is Real, a tool developed by the Calling Bullshit project, telling a real face from a synthesised face is harder than you think. A combination of this, ‘deepfake’ videos, increasingly sophisticated chatbots, and the masking effects of lo-fi video calls may mean we are close to a convincing AI–human interaction.

Much of the criticism of AI as mimic has been levelled at deepfake videos in particular. This has mostly focused on their potential usefulness as a tool for nefarious ends: blackmail via faked pornography, the further legitimisation of ‘fake news’ stories. These are legitimate concerns, but in some ways they represent individualistic fears. I don’t want my image to be appropriated, I don’t want to be scammed, of course. But beyond this, I share Suchman’s concern that such technologies are unsituated, and universalised from a US/Eurocentric perspective. As I touched upon in my last post, setting aside its products entirely (and not dwelling too much on the politics of creating sentient service ‘beings’), technoscience itself is not neutral, and is imbued with the biases of its authors and situated context [3].

[1] Without having done too much research beyond my own impression of the period, there seemed to be a proliferation of western film/TV in the late 90s/early 2000s wherein replicant-esque robots featured heavily. Examples include S1m0ne (2002), A.I. Artificial Intelligence (2001), Bicentennial Man (1999) and the three Matrix movies (1999–2003).

[2] Let’s not get into personhood right now.

[3] Following the voicing of concerns around AI biases, large vendors like IBM and Google have announced further tools to uncover the biases of their existing tools.

The right hand doesn’t know what the left hand is doing

Jack’s Car skimmed down a slip road at 60 miles an hour, comfortably decelerating to an even 50 miles an hour as the road evened out. This deceleration to a slower speed was due to the decreased visibility of the road. Jack’s Car did not decelerate to 40 miles an hour, because it was a dry night rather than a wet night, but it was a cloudy night. The route Jack’s Car took diverged from the motorway. Jack’s Car took this route because the motorway had many cars on it. This means that a car may get snarled up in a traffic jam. It also increases the chance of an accident happening, due to many cars being on the road. Rather than risk the increased likelihood of an accident, the alternative was to leave the motorway via a slip road which leads to a lesser-used, less well tarmacked road, cutting directly from one side of a plot of land to the other in a straight line. The land is used for tree farming and is not well lit. This is why Jack’s Car decelerated to 50 miles an hour and, in addition, turned up its headlights to full beam. An accident is much less likely on this road than on the motorway, for although it is not as well tarmacked, the journey becomes shorter and there are fewer cars sharing the road.

Jack was at home, waiting for his car, which was returning from a drive-through. Jack’s Car left Jack’s house for the drive-through, which is attached to a supermarket, at around 19:00, because other car owners like to have their shopping at home by the time they return from work. Jack does not have this preference enabled, so Jack’s Car waits until there are fewer cars on the road before leaving. There is always traffic on the motorway, however, so the best route to and from the supermarket often involves cutting across the tree farm. In addition to fewer instances of car accidents, this road has the benefit of being shorter, and therefore more fuel efficient, and therefore better for the environment, and it is also cheaper.

Jack’s Car drove along the road in near silence, bar a low hum which Jack’s Car (and all other cars) emitted to warn pedestrians that a car was driving towards them. There were never pedestrians on this particular road, or any road, but all cars hummed all the time because it was legislated. However, Jack’s Car was always ready to decelerate in response to a pedestrian stepping into the road at any time, and was always ready to obey the directions of markings on the road. Jack’s Car was familiar with road markings which issued instructions to merge, give way etc., even when Jack’s Car had not encountered a particular set of markings before, such as the markings on that particular road. Road markings sometimes change, and Jack’s Car needed to read the markings anew each time it encountered them, in case they had changed. This is why, as Jack’s Car was equidistant from the entrance and exit of the tree farm, Jack’s Car detected a new dashed line followed by a solid line, recognised it as ‘right of way’ and passed over it. This is also why, when having passed over the line and finding itself confronted with a solid line followed by a dashed line, meaning ‘no entry’, Jack’s Car stopped.

This trap, which trapped Jack’s Car, is simply laid by drawing a pair of concentric circles on the road. The outer dashed, the inner solid. From outside – right of way – from inside – no entry. 
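The logic can be sketched in a few lines – my own assumption of the rule-following behaviour, not code from Bridle or any real vehicle. The trap works because the car reads only the crossing order of line types:

```python
# Minimal sketch (an assumption, not any real vehicle's code) of the
# rule-following logic that makes the trap work: the order in which
# line types are crossed determines the instruction.

def read_marking(lines_crossed):
    """Interpret a pair of line types in the order they are crossed."""
    if lines_crossed == ("dashed", "solid"):
        return "right of way"   # permitted to cross
    if lines_crossed == ("solid", "dashed"):
        return "no entry"       # must stop
    return "unknown"


# Entering the circle: the car meets the outer dashed line first,
# then the inner solid line.
print(read_marking(("dashed", "solid")))  # right of way -> the car drives in

# Trying to leave: the inner solid line comes first, then the outer dashed.
print(read_marking(("solid", "dashed")))  # no entry -> the car stops, trapped
```

The same pair of painted lines yields opposite instructions depending on direction of travel, which is the whole trick.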

A group of pedestrians stepped out from the gloom between the trees and made their way to Jack’s Car. Jack’s Car saw the pedestrians as they stepped into its high beams. They hummed as they made their way closer.

James Bridle’s Autonomous Trap series (in which the artist ‘traps’ autonomous cars in the manner described above) describes the dichotomy of the algorithm as both slavishly procedural and logic-driven, and mystified and inscrutable. While it may appear to support the idea of the algorithm as procedural to a fault, the act of trapping the car in a magic-inspired ring of salt demystifies the algorithm threefold: it allows the pedestrian to arrest the procedure by subverting its rules; it allows them to do so using analogue tools; and it thereby undermines the idea of the algorithm as ceaseless or incomprehensible. As I read it, the work is a call to arms, a demystification of the algorithm and an invitation to think inventively about its limitations.

After reading Tarleton Gillespie’s Algorithm [draft] [#digitalkeywords], the idea which seems to me most potent is that of the algorithm as a ‘talisman’. The talisman has the power to ward off culpability, absorb blame or anoint the actions of its author.

The idea of the algorithm as autonomous from the author can be comforting. It suggests impartiality and fairness, utilitarianism and efficiency. And in many cases this is true, but fair and efficient for whom? Pay no attention to the man behind the curtain. 

Although many driverless cars use a range of methods to detect objects – radar, for example – computer vision systems are cheaper and potentially a more market-friendly option. However, researchers from Georgia Tech found that machine vision systems are consistently poorer at detecting people with darker skin tones than people with fairer skin tones. This was true even when they removed occluded pedestrians and tested the object detection systems using only images of people in full view:

[…](small pedestrians and occluded pedestrians) are known difficult cases for object detectors, so even on the relatively “easy” subset of pedestrian examples, we observe this predictive inequity. We have shown that simple changes during learning (namely, reweighting the terms in the loss function) can partially mitigate this disparity. We hope this study provides compelling evidence of the real problem that may arise if this source of capture bias is not considered before deploying these sort of recognition models. 

This is not a flaw in the algorithm, but a flaw in its training data.  

An algorithm has an author, or set of authors. While the logic-driven procedure of the algorithm may function impartially and fairly, it bases its decisions on data it was trained on during its development. If this data has not been sufficiently scrutinised, as in the case of the object detection software examined in the Georgia Tech study, it may enact the biases of its authors – conscious or otherwise.
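The ‘reweighting the terms in the loss function’ the researchers mention can be sketched in miniature. This is a hedged illustration, with invented group labels and weights, of how scaling loss terms by group makes errors on an under-detected group count for more during training:

```python
# Hedged sketch of loss-term reweighting: each example's loss is scaled by a
# weight for its group, so the under-detected group's errors dominate training.
# Group labels, weights and data below are invented for illustration.

import math


def weighted_loss(predictions, labels, groups, group_weights):
    """Binary cross-entropy where each term is scaled by its group's weight."""
    total = 0.0
    for p, y, g in zip(predictions, labels, groups):
        p = min(max(p, 1e-7), 1 - 1e-7)  # clamp for numerical stability
        bce = -(y * math.log(p) + (1 - y) * math.log(1 - p))
        total += group_weights[g] * bce
    return total / len(predictions)


preds  = [0.9, 0.2, 0.6, 0.4]
labels = [1,   0,   1,   1]
groups = ["lighter", "lighter", "darker", "darker"]

# Doubling the weight on the group the detector performs worse on makes those
# errors count for more, nudging the model to correct the disparity.
uniform    = weighted_loss(preds, labels, groups, {"lighter": 1.0, "darker": 1.0})
reweighted = weighted_loss(preds, labels, groups, {"lighter": 1.0, "darker": 2.0})
print(reweighted > uniform)  # True
```

As the study itself notes, this only partially mitigates the disparity: the deeper fix is scrutinising the training data before these models are deployed.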

I think the thing to remember is that the algorithm is doing its job – we just might not know who hired it.