‘New Dark Age’: Technology and how Bridle gets it right.

By Francesca Root

Technology allows us to access more information than ever before. But an abundance of information is so often confused with knowledge, when in reality they are very different things. Knowledge implies a full understanding, networks of information, rather than functional “bits”. Knowledge is also linked to language, to our ability to speak about a thing and its affect in the world, rather than just of the thing itself. As far as our understanding of the impact of information technology goes, we are lacking knowledge, as well as the language with which to express it.

This is the crux of James Bridle’s argument in his recent book New Dark Age [1]. I came to Bridle’s book having just completed my MPhil thesis, in which I focused on how people mourn in the digital age, and how national grief and tech intersect with power. What is brilliant about Bridle’s book is that, refreshingly, it sees its job as framing the right questions about tech rather than rehashing the tired stories we’ve already heard. It pays attention to the way that processes of knowledge-accumulation about “things” are never neutral: it recognises that the way we arrange and share information – about a new technology, for example – is always useful to someone. This is exactly the kind of thinking behind ‘Re-’. What is so good about Bridle’s book is that he acknowledges these dominant cultural narratives about tech, building them into his overall view of it. However, refusing to stop at explaining what a form of tech is, he focuses instead on what it does, to and for people: in other words, how tech is powerful, and who the current narratives about its significance might serve.

Bridle maintains that a ‘simply functional’, or information-based understanding of technology and its systems is insufficient. A functional understanding is a bit like learning how to code without understanding what a piece of code can do, and how coding can be powerful. A functional knowledge of coding is solid gold as far as skills go these days. It allows you to mould and build digital space, from designing a website to hacking government systems. But without being accompanied by knowledge, and language, without qualitative questions which ask how that thing is affective in the world and the power relations within which it functions, we end up with a lot of information but no idea how or why it matters. Understanding the power of something or what something “does”, qualitatively, is a crucial first step in developing safeguarding processes and protective legislation. Developing language about the affect of a thing is a key first step in making something responsible, accountable and visible to its publics.

So, while you might have a great functional knowledge of Twitter, Facebook, or Amazon, you’re relatively powerless in the face of those who know how the systems of these spaces work, who own the information, can organise it and make use of it to their advantage – who know what that information does. A vast amount of digital space is unaccountable in this way. It’s a kind of funny place to be in: we’ve got more information than ever, but the power to actually see, process, use and understand these enormous quantities of information is rarely in our hands.

What happens when we have lots of information and not enough understanding? Myth. Myths are just another form of knowledge – stories that get repeated enough to act as placeholders for truths that are denied to us, or that we can’t yet access. Myths are useful for organising people and what they produce, which is why they get manufactured socially, politically, and culturally. Roland Barthes acknowledged this when he wrote that “myth is always motivated” [2]. What Barthes is referring to in this observation is that a narrative about something usually benefits someone, somewhere. Think about the dominant narratives we currently hear about things like AI or VR gaming: who is creating and financing the popular stories about these emerging technologies? Who will benefit from a particular version of a story which, in reality, they know has yet to be fully told?

In place of real, accessible, networked knowledge about technology, then, you often find two competing myths about what tech “does”. On the one hand there is techno-pessimism: the idea that technology is somehow corrosive, or inherently dangerous for “us” as a society. Popular with the media, this myth fits neatly into long-standing narratives of paranoia about science and change. “Dating apps are ruining our lives”, “Facebook is destroying society”, “Social media is anti-politics” [3] and so on. On the flip side, you have techno-utopianism: stories of “the Twitter revolution”, and instances of technology saving people (rather than the innovative behaviour of the people using that technology). The utopian narrative depends on a functional understanding as well, pushed of course by big tech firms. It says technology will save us, that it’s “bigger” than us, that we should just trust it and let the companies get on with improving our lives. The central problem with both these myths is that they are technologically and socially deterministic: in both cases tech, “the tools”, is given agency and seen to possess inherent qualities, which are then either good or bad for humanity.

Myths are meant to simplify the complexities of relationships – but their danger lies in the basis of their success: their repeatability. They’re easier to understand than the truth. What Bridle repeatedly warns of in his book is that we don’t know what we don’t know, particularly when it comes to tech. There is an urgent need to develop a knowledge of tech and its systems that is more than merely functional or technologically deterministic. This is our greatest challenge because the problem is so immanent and so networked. In order to create deep, sufficiently complex, repeatable new stories about tech we need more shared language, more interdisciplinary collaboration, and more cross-border conversations, because tech transcends all of these borders. It is not enough to debunk the myths we currently use for talking about tech; we must replace them with alternatives. This is, for me, what ‘Re-’ is: a space where we can have these conversations, ask the right questions, find common experience across diversity and collectively, iteratively, re-write the myths.

References:

[1] Bridle, James. (2018). New Dark Age: Technology and the End of the Future. London: Verso.

[2] Barthes, R., Lavers, A., Reynolds, S. and Badmington, N. (2009). Mythologies. London: Vintage Books.

[3] Pappas, S. (2015). “French Flags on Facebook: Does Social Media Support Really Matter?” [online] Live Science. [Accessed 8 Jan. 2018] and Gladwell, M. (2010). Small Change. [online] The New Yorker. [Accessed 16 Dec. 2017].

This piece was written by Francesca Root

M. Phil Sociology, University of Cambridge ’18.
B.A. Liberal Arts and Sciences, University of Birmingham ’17.
