The Dark Forest – Psychometric Weapons in Cyberspace
Posted on: February 9, 2021
Written by John Rust, co-author with Michal Kosinski and David Stillwell of Modern Psychometrics: The Science of Psychological Assessment, 4th Edition.
I am a psychometrician, and my field first became embroiled in the privacy debate in 2014, when its methods were plundered by Cambridge Analytica in their scandalous efforts to sway elections through psychographic microtargeting. The US Presidential election of 2016 was one example, but only one among many. Although it was not until 2018 that these shenanigans were fully exposed to a horrified public, many of these developments had been predictable since 2009. And though increasingly aware of the dangers, the democratic world was slow to react - too slow.
Protecting private digital data
As the digital disruption that had first been welcomed as the Arab Spring slowly came home to roost among the world’s democracies, incomprehension and scepticism gave way to horror and confusion. Throughout 2018 and 2019, parliamentary committees sat and deliberated, as did electoral commissions and regulators. The only realistic solution seemed to lie in the more rigorous application of privacy laws, although these were far from adequate, having originally been designed for an era in which personal information was kept on paper in secure filing cabinets. Once the existence of illegal databases containing the personality profiles of millions of social media users was recognized, steps such as the European Union’s General Data Protection Regulation (GDPR) were taken to regulate them, with large fines dealt out to perpetrators. But if electoral malpractice was to be eliminated, this did not go far enough. Even after these databases had been deleted, or made as far as possible non-identifiable, they had already been mined to derive algorithmic models that could be applied on the fly to new data, in real time and at scale. These models were not data in the true sense of the word, any more than the calculation that translates a temperature from Fahrenheit to Centigrade is itself a temperature. The question was no longer who held the data, but how to regulate the use of the knowledge gleaned from it.
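To see the point concretely, consider a toy sketch (the ‘likes’ features, trait labels, and training set below are invented for illustration; this is not any real profiling pipeline). Once a model has been fitted, the personal data it was trained on can be deleted, yet the model persists, ready to profile anyone new:

```python
# A toy illustration of the data/model distinction. The "likes" features
# and trait labels are invented assumptions, not a real profiling system.
from sklearn.linear_model import LogisticRegression

def fahrenheit_to_centigrade(f):
    # The analogy from the text: this rule converts temperatures,
    # but is not itself a temperature.
    return (f - 32) * 5 / 9

# Hypothetical personal data: which of three pages each user "liked",
# labelled with a binary trait (say, high vs. low extraversion).
X = [[1, 0, 1], [0, 1, 0], [1, 1, 1], [0, 0, 0], [1, 0, 0], [0, 1, 1]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)
del X, y  # the personal data can now be deleted or anonymized...

# ...but the derived model persists, and can profile a brand-new user
# on the fly, in real time and at scale.
print(model.predict([[1, 0, 1]]))
```

Nothing in the surviving coefficients is ‘personal data’ in the legal sense, yet together they encode everything needed to profile the next user who comes along.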
Data protection in the cyber realm
Photo by Tobias Tullius on Unsplash
Can the use of such derived algorithms be regulated by laws designed to protect data privacy? Perhaps. Laws could prohibit the use of surveillance cameras to determine a person’s identity, gender, race and so on, and they could be extended to cover the recognition of emotional states, beliefs, intentions, attitudes, values, intelligence, and psychiatric diagnoses. But these cameras, like ourselves, exist in the physical world – the ‘real world’, if you like. That is not where the abuses are occurring. Rather, they take place within the very fabric of our online interconnectedness – the realm that cybersecurity experts have designated ‘cyberspace’. Here anyone, whether an individual, a company or a political party, can project their values or market their products according to a person’s online manifestation, exquisitely microtargeting their message to that person alone. Cyberspace exists for interaction; invisibility there is not an option. Anyone, or any machine, can be on the receiving end of such broadcasts. And these are increasingly produced and directed by AI without any human mediation, often to hundreds of thousands of recipients simultaneously.
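What this looks like mechanically can be sketched in a few lines (the trait model, the features, and the message variants below are all hypothetical): a derived model predicts a trait, and the trait decides which version of a message a given person - and only that person - receives.

```python
# A hypothetical microtargeting selector. The trait model, features, and
# message variants are invented for illustration only.

MESSAGES = {
    "high_anxiety": "They are coming for what's yours. Act before it's too late.",
    "low_anxiety":  "Here is a considered case worth reading in your own time.",
}

def predict_trait(user_features):
    # Stand-in for a derived model (like the one sketched earlier);
    # here, a trivial threshold rule.
    return "high_anxiety" if sum(user_features) >= 2 else "low_anxiety"

def microtarget(user_features):
    # Each recipient sees only the variant tuned to their predicted
    # profile - a message broadcast to them alone.
    return MESSAGES[predict_trait(user_features)]

print(microtarget([1, 1, 0]))  # the fear-framed variant
print(microtarget([0, 1, 0]))  # the reassuring variant
```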
Through a glass, darkly.
In 1964, the philosopher Marshall McLuhan coined the expression “The medium is the message” to describe how new forms of communication can dramatically change the ways in which we perceive the world. At that time, it was the advertising moguls of Madison Avenue who took note, as was so clearly illustrated in the widely acclaimed TV series ‘Mad Men’ that premiered in 2007. Today, the new medium of cyberspace resembles Dante’s dark forest.
“In the midst of life, I found myself in a dark forest, for the straight path was lost” (Dante, Inferno, c. 1320)
In this forest, hell is not devils and demons, but manifestations of the darkest desires of our archetypes and collective unconscious heritage, providing fodder for AI algorithms designed to maximize impact. Anger and outrage trump good news every time - it is an environment of ‘kill or be killed’. Here, many of our traditional ways of regulating privacy, which typically deal with how data is held, no longer apply. AI systems operating in cyberspace are dynamic: the data needed to feed them are destroyed as soon as they have served their purpose. Hence even though personal data may have been deleted, the derived algorithms have not, and continue to evolve within the medium. They are built into the very fabric of cyberspace, underpinning the global business model of the tech companies, whose expansion has been almost entirely dependent on advertising revenue brought in by algorithmic microtargeting.
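That dynamism is worth pausing on. The sketch below (its synthetic event stream and linear click model are assumptions for illustration) shows an online learner of the kind described: each datum is consumed, used to nudge the model’s weights, and then discarded. Delete all the data you like; the learned behaviour remains.

```python
# An online learner: each event updates the model, then is destroyed.
# The synthetic event stream and linear click model are assumptions.
import random

LEARNING_RATE = 0.1
weights = [0.0, 0.0, 0.0]

def sgd_step(weights, features, clicked):
    # One stochastic-gradient update of a linear click predictor.
    prediction = sum(w * x for w, x in zip(weights, features))
    error = clicked - prediction
    return [w + LEARNING_RATE * error * x for w, x in zip(weights, features)]

for _ in range(10_000):
    features = [random.random() for _ in range(3)]
    clicked = random.random() < 0.3  # simulated user behaviour
    weights = sgd_step(weights, features, clicked)
    del features, clicked  # the raw datum is gone...

print(weights)  # ...but the learned behaviour remains, and keeps evolving
```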
Predatory machines
Has this business model already grown too big to fail? Originally, the identification of suitable marketing keywords for search engines such as Google required skilled human intervention. Today, it is done by machines - and not just ordinary machines, but intelligent ones. These AIs can find patterns of activity that maximize not just the ‘click-through rate’ (CTR) but also the return on investment (ROI) consequent on the use of any keyword, image, phrase, or meme. Viral genetic algorithms, designed to mimic the ‘survival of the fittest’ evolutionary processes of Darwinian theory, can even introduce random variations into the memes themselves, allowing only the more successful to survive. Bots (autonomous online programmes) proliferate, designed to make phone calls, send online messages, generate tweets, create fake news, provide comments, and carry out cyber-attacks. Despite a desire for the rationale and rules of these algorithmic operators to be explainable to human users, their complexity can quickly grow to a point where any such explanation would be well beyond the reasoning powers of a human being. In cyberspace, we are no longer alone. There are now other denizens in the forest. For regulation, we may need to look elsewhere.
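For the curious, here is a deliberately simplified sketch of such a viral genetic algorithm (the word pools and the simulated click-through rate, CTR = clicks / impressions, are invented; a live system would measure CTR and ROI from real traffic). Variants are scored, the fittest survive, and random mutation supplies the next generation:

```python
# A deliberately simplified genetic algorithm over message variants.
# The word pools and the CTR simulator are invented assumptions; a real
# system would score variants against live ad-serving data.
import random

HOOKS    = ["SHOCKING", "New", "They lied:", "Breaking"]
SUBJECTS = ["truth", "report", "scandal", "study"]

def random_variant():
    return (random.choice(HOOKS), random.choice(SUBJECTS))

def simulated_ctr(variant):
    # Stand-in for measured clicks / impressions; outrage scores higher,
    # mirroring the 'anger and outrage' dynamic described above.
    hook, subject = variant
    score = 0.02
    score += 0.05 if hook in ("SHOCKING", "They lied:") else 0.0
    score += 0.03 if subject == "scandal" else 0.0
    return score + random.uniform(0, 0.01)  # noisy, like real traffic

population = [random_variant() for _ in range(20)]
for generation in range(50):
    # Survival of the fittest: keep the top half by measured CTR...
    population.sort(key=simulated_ctr, reverse=True)
    survivors = population[:10]
    # ...then refill with mutated copies - random variation in the memes.
    mutants = [(random.choice(HOOKS), s[1]) if random.random() < 0.5
               else (s[0], random.choice(SUBJECTS)) for s in survivors]
    population = survivors + mutants

print(population[0])  # typically an outrage-framed variant survives
```

Left to run for enough generations, an outrage-framed variant almost always wins - a miniature of the ‘kill or be killed’ dynamic described above.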
The way forward
It will take us time to adjust to this new environment. But there is still time - a perfect storm in cyberspace, that moment when multiple conditions collide to create an unmitigated disaster, has yet to occur. So let us not just wait. There are many steps we can take. Already the colliding academic fields are expanding rapidly - philosophers studying data ethics, computer scientists developing machine learning methodologies, psychometricians and social psychologists researching internet use, and linguists exploring natural language processing. But to avert the storm, these disciplines working independently will not be enough. Cybersecurity is a very practical field, advancing so rapidly that writing academic papers, sitting on committees, and presenting at conferences is no longer sufficient. We need a new workforce of cyber scientists, practically skilled and knowledgeable in all these fields. Now, while the current COVID-19 induced crisis in education and youth employment coincides with a widespread post-Trump reckoning with the dangers of doing nothing, governments, universities, and the tech companies must join forces to make sure these pioneers receive the support they need. They and their robots are our future.