

Are we letting big tech outsource our humanity?
February 3, 2024
The biggest problem with Artificial Intelligence will be the way we use it, writes Dr Richard Hil.
We've long been in a mirror world of hyper-reality, in which those old stalwarts of truth and reason have been mired in an algorithmic quagmire.
This began well before the onset of generative AI. The internet, once quaintly viewed as an information highway, became a hothouse repository of data-harvesting, mass surveillance and targeted consumerism. It has also encouraged violent, anonymised nativism and racialised tribalism, through which junk theories and counter-factualism have proliferated.
The lines between the public and private have been eviscerated. Routinely, as we gaze at our screens, Big Tech stares right back, all the time absorbing, assimilating, targeting. There's really no escape, other than total abstinence on a remote island.
Over time, we've become entangled in a spider's web. Everyone has been granted a digital voice, with the shrillest gaining traction among the lonely and isolated, the angry and disaffected. Appeals to reason and rationality are swept aside, dumped in the dustbin of privileged, white-masculinist discourse. Truth has been relativised to the point of oblivion. Fake news, alternative facts, bogus ideas and straw men have taken care of the rest.
With AI, the waters are even cloudier. We're at the point where its generative capabilities can create ghost-like replicas and digitised avatars whose bearing and speech resemble the real thing. We can hardly tell the difference. The medium might be the message, but AI's endless reproductive troves should worry us all. It is parasitic, feeding off what it ingests while offering us a coke-line to hyperactivity and infinite, profit-generating possibilities.
The latter have been celebrated, but the full consequences of AI are yet to reveal themselves.
Like the internet, AI promises much but delivers wild, unaccountable spaces, mixing personas and messaging to suit particular political and commercial agendas. Sure, it has many positive uses across a range of fields, but again, it comes with many dark sides.
The mirror world of which Naomi Klein speaks in _Doppelganger_ dedicates itself, among other things, to reworking seemingly progressive ideas while repackaging them for political advantage. This appropriation began in earnest with the Tea Party back in 2009, but has its origins further back in the darkest reaches of the totalitarian state. Thus, it was possible (as it is now) to speak of the will of the people while repressing them, or to laud peace while prosecuting war.
The Tea Party spoke of a people's movement, freedom and the excesses of big government, while feeding far-right libertarian interests, just as Trump and the Republican insurgents now speak of freedom and democracy while seeking to quell oppositional voices.
In classic Orwellian doublespeak, words like 'the people', 'freedom' and 'democracy' are deployed to signify unifying intentions but are in fact, as Naomi Klein notes, the uncanny twin of what we once knew.
The doppelganger appropriation of progressive discourse works because it resonates with a deep desire for justice. It sounds good. Democratic. Aspirational. Inclusive, to a point. Yet it is fused with a sense of victimhood and the identification of enemies: elites, the deep state, the mainstream media, illegal migrants, opportunistic refugees. These dark forces, so the story goes, can only be conquered through one ideology and one anointed leader.
Causative complexity in this schema is displaced by simple Hollywood-standard binaries of good and evil, destructive enemies and 'people just like us'. Bristling with evangelical zeal, such imagined polarities morph easily, as Steve Bannon's historical fictions attest, into grand civilisational struggles over which there can, and must, be only one righteous outcome.
For Klein, the algorithmic world is about replacing the authentic with the synthetic. It is a forgery of life which ends up destabilising our shared worlds. We should seek to understand these forces to get to firmer ground, Klein argues. This invites us to understand, as best we can, how modern technologies work, whose interests they serve, and the role they play in shaping the hegemonic order.
It should also compel us, as Noam Chomsky urges in his online critical thinking masterclass, to question how and why we engage with these technologies.
At times, I've been shocked at how unthinkingly many of my seemingly progressive friends use generative AI, most often to fashion text. I've seen it used to dream up titles for newspaper columns and conferences. One friend told me how he'd used AI to write an article for a local newspaper. When I opined that, well, you didn't write it, he appeared more bemused than outraged.
'Why wouldn't I use it?' he inquired.
What ensued was a lengthy discussion about the ethics of using AI. It's important we have these sorts of discussions.
While the AI genie is well and truly out of the bottle, regulation has yet to catch up. It may never. The more pressing concern is how each of us engages with this tantalising technology, and when and where to draw lines.
There's nothing benign about AI. Nothing. Social media has taught us the many problems of unleashing technologies over which we have little control. For all its claimed advances, social media has contributed to more loneliness and isolation (despite claims of hyper-connectivity), diminished social skills (including empathy), and more anxiety and depression, mainly among young people.
The full gamut of AI's social consequences is yet to reveal itself, but I am intrigued, for example, by how and why it is being used to counsel young people, just as I am concerned about the net social effects of bots caring for the aged or becoming programmed, supine partners.
What are the real motivations behind such things? What do they say about our society more generally? Outsourcing caring functions to machines and relegating intellectual capital to generative AI may appear quick, easy and cost effective (it's why AI is the leading investment theme among Big Tech companies), but the real cost may be the loss of key aspects of our humanity.
Surely, in a society riven with alienation and loneliness, that's too high a price to pay? And what will happen to critical, independent thinking, creativity and the wild, wonderful world of the human imaginary?
Big Tech will tell you that this is Luddite, doomster chatter, but with an eye on spectacular profits they would say that, wouldn't they? What should perhaps worry us most is how AI is being used not simply for commercial purposes (the profits of Apple, Microsoft, Alphabet, Nvidia and other companies have soared of late) but to consolidate power in the hands of certain elites.
The latter do not want us to think too much about such things. That's why simple acquiescence to this technology is so dangerous. It enables the powerful to remain so.
First published by newmatilda.com January 30, 2024