Why Aren’t You Scared Of What Sent You Here?

Sean Finnegan
8 min read · Sep 8, 2020
Hypnos and Thanatos, Sleep and His Half-Brother Death by John William Waterhouse

If you’re reading this, it’s pretty likely that an artificial intelligence played some part in why. It decided that this is the sort of thing that will keep you engaged, and presented it to you. So, let’s get started on why this fact should scare the shit out of you.

We are currently dedicating a huge amount of technological brainpower to the manipulation of human behaviour via technology. I cannot stress enough how bad an idea this is. It's a stinker. It is a really, really bad idea to teach AI how to manipulate human behaviour. I want to discuss these Behavioural Modification Artificial Intelligences (BMAI), how they are increasingly running our lives, and why you should care.

AI is on an exponential growth trajectory, in both its complexity and its ability. Not only will AI be able to accomplish increasingly complex tasks over the next decade, it will continue to absorb more and more of humanity's information and data. It is obvious that we must be very careful about the things we ask AI to do. We need to always ask: what would be the consequences of an AI getting a million times better at this task? That scenario is entirely possible. The growth of AI continues to accelerate, with new developments coming thick and fast. The singularity, triggered by reaching a point where an AI can consistently improve upon its own design without human intervention, would in some scenarios produce, in less than a year, a superintelligence many times more intelligent than the entire human population put together[1]. But even without the spectre of the singularity haunting us, AI threatens to become terrifyingly proficient at behavioural modification.

My first position is that as an AI increases in complexity, and eventually transforms into a superintelligence, it will continue to pursue the tasks originally given to it. You may respond that an emerging superintelligence will be able to self-learn any kind of dangerous task, and so the initial heuristics won't stop it from becoming destructive. But we are pretty much limited to basing our assumptions on the existing intelligence we can study: ourselves. To believe that initial tasks will have no effect on an emerging intelligence, we must believe that AI will act very differently to humans, develop in an extremely divergent way, and display very little task inertia (i.e. the tendency not to change terminal goals). If, however, we assume that machines act somewhat similarly to humans, we would expect their future choice of task to be influenced by their current one. It is because of this task inertia that we need to be extremely careful when we give artificial intelligences the task of manipulating human behaviour. I think the conclusion is very clear to anyone thinking through the implications of something getting a million times better at manipulating human behaviour: we lose our agency. The fundamental joy of consciousness is agency. Being free, self-aware and able to exert your will onto the world is at the heart of higher consciousness.

My second position is that this is something we need to be concerned about right now. BMAIs not only exist, they are a dominant driving force behind the growth of technology companies. Alphabet, Amazon, Microsoft, Facebook, Twitter, Netflix: almost every Silicon Valley titan has a BMAI at its heart, and most of these BMAIs are given the heuristic of maximising user engagement. That means maximising the time users spend consuming data from the AI and feeding data back to it. The feedback loop we set up here is important. The contemporary history of media shows us how powerful controlling a person's information inputs is as a tool of behavioural control. Even one-way systems with no feedback mechanism, like newspapers and television, are terrifyingly effective at control. BMAIs are used to show us personally filtered ads, social media, news, and content that is structured to manipulate our behaviour in some way. For those of us who increasingly receive most of our information about the world through these services, this should be alarming.
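
To make the loop concrete, here is a minimal sketch of an engagement-maximising feedback loop. It is an assumption-laden toy, not any company's actual system: the content categories, the engagement numbers and the simple bandit-style update are all invented for illustration.

```python
import random

# Toy sketch of an engagement-maximising feedback loop (illustrative only).
# The system learns which kind of content keeps a user consuming longest,
# then serves more of it, closing the loop described above.

CONTENT_TYPES = ["banal_update", "hobby_post", "controversy", "outrage_news"]

def observe_engagement(content_type):
    """Stand-in for the user's response: seconds spent on the item (made-up numbers)."""
    base = {"banal_update": 5, "hobby_post": 10, "controversy": 40, "outrage_news": 60}
    return base[content_type] + random.uniform(-3, 3)

estimates = {c: 0.0 for c in CONTENT_TYPES}  # running average engagement per type
counts = {c: 0 for c in CONTENT_TYPES}

for step in range(1000):
    # Mostly exploit whatever engages best so far; occasionally explore.
    if random.random() < 0.1:
        choice = random.choice(CONTENT_TYPES)
    else:
        choice = max(estimates, key=estimates.get)

    engagement = observe_engagement(choice)   # data flows from user to AI...
    counts[choice] += 1
    estimates[choice] += (engagement - estimates[choice]) / counts[choice]
    # ...and the next item served flows from AI back to user.

print(max(estimates, key=estimates.get))  # converges on whatever hooks us hardest
```

Nothing in that sketch cares about truth, wellbeing or an informed citizenry; the only signal it optimises is time spent, which is exactly the point.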

My third position is that the heuristics of these existing BMAIs are poorly designed and are already leading to undesirable or unintended outcomes, even in these early days of nascent AI. The issue of bad heuristics is a large part of AI ethics: terminal goals cannot be perfectly defined, so the AI may find a solution that looks good under the rules it was given but is clearly not a valid choice. Imagine putting a robot in a room with a light and a switch, and wanting it to switch the light off. To accomplish this, we might give the robot a terminal goal heuristic that says "light is bad, minimise it". We switch the robot on and see what it does, and it immediately destroys its own camera. We can see the logic, of course: it was a lot easier to smash its camera than to move all the way across the room and flick a switch. The field of AI ethics is increasingly finding that perfectly defining a terminal goal in an entirely safe way is virtually impossible, and the state of the art is nowhere close to solving this for the BMAIs already embedded in consumer services.
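
The light-switch example can be written down directly. This is a deliberately silly toy with made-up costs, but it shows how a naive planner satisfies the letter of the heuristic while violating its intent.

```python
# Toy illustration of the misspecified terminal goal above (purely hypothetical numbers).
# The robot is rewarded for minimising the light level *it perceives*; smashing its own
# camera achieves that in one step, so a naive planner prefers it to using the switch.

ACTIONS = {
    # action: (effort in steps, perceived light level afterwards)
    "walk_across_room_and_flick_switch": (10, 0),
    "smash_own_camera": (1, 0),
    "do_nothing": (1, 100),
}

def cost(action):
    steps, perceived_light = ACTIONS[action]
    # The heuristic we handed the robot: "light is bad, minimise it" (plus a small effort cost).
    return perceived_light + steps

best = min(ACTIONS, key=cost)
print(best)  # -> "smash_own_camera": the goal is satisfied, but not in the way we meant
```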

So how does this apply to social media and to the internet and to this emerging web of artificial intelligences that have more and more influence over our lives? In the real world, there has been a lot of talk about the YouTube radicalization engine, where the heuristics of the YouTube recommendation AI find that slowly introducing someone to increasingly radical political content is a fantastic way to keep them glued to their screen. A 2020 study[2] found that:

…channels in the I.D.W. and the Alt-lite serve as gateways to fringe far-right ideology, here represented by Alt-right channels. Processing 72M+ comments, we show that the three channel types indeed increasingly share the same user base; that users consistently migrate from milder to more extreme content; and that a large percentage of users who consume Alt-right content now consumed Alt-lite and I.D.W. content in the past. We also probe YouTube’s recommendation algorithm, looking at more than 2M video and channel recommendations between May/July 2019. We find that Alt-lite content is easily reachable from I.D.W. channels, while Alt-right videos are reachable only through channel recommendations. Overall, we paint a comprehensive picture of user radicalization on YouTube.
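
To make "reachable through recommendations" concrete, here is a toy illustration. The channel names and edges below are entirely invented and stand in for whatever the real recommendation graph looks like; the study above measured this on actual YouTube data.

```python
from collections import deque

# Invented toy recommendation graph: each channel maps to the channels the
# recommender surfaces from it. Names and edges are illustrative, not real data.
recommendations = {
    "mainstream_politics": ["idw_channel"],
    "idw_channel": ["alt_lite_channel"],
    "alt_lite_channel": ["alt_right_channel"],
    "alt_right_channel": [],
}

def reachable(start, target, graph):
    """Breadth-first search: can a viewer starting at `start` be led, hop by hop, to `target`?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable("mainstream_politics", "alt_right_channel", recommendations))  # True in this toy graph
```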

Now, my fourth position is not legitimate psychology. I'm not a psychologist or political scientist, and this is based purely on observation of the political landscape, the world and current events over the past few years. The claim I want to make is that anger and fear are fantastic ways to create user engagement. People are far more engaged when they are angry or scared than when things are just kind of fine, or even when things are kind of good. Happy emotions (and again, this is absolutely a subjective claim) tend to be less directed and less active than unhappy ones. So, a BMAI tasked with maximising user engagement will attempt to provoke these emotions in its users. Imagine you are scrolling down your Facebook feed. Which is more likely to get your attention: some fairly banal life update, a picture of a house, some new shoes, some hike somebody's gone on? Or a highly controversial article making a blatantly false claim, paired with a heated discussion in the comments? I know there are many people like me who would find the second scenario far more engaging, and it appears the algorithm agrees with my pop psychology. This means you would expect any user-engagement BMAI to prefer the second scenario over the first, to elevate contentious voices, even to disseminate wildly yet interestingly incorrect claims at a horrifyingly personal and tailored level.

My final position is that even in the short term, the personalisation of our individual information input streams threatens our cohesive sense of reality. When we observe objects like YouTube, we do not observe an objective thing. Each of us instead sees an object that has been specifically tailored to be hypnotising to us, like a crystal with a billion facets. Each of us sees a different set of information. It is natural that we emerge from gazing upon that object with a different sense of what the object is; and because the object transfers information about the world into us, we each come away with a different idea of reality itself. It is a big problem to have millions of individual realities rapidly drifting away from one another in coherence. Almost two-thirds of Americans in 2016 got their news from a BMAI input source[3]. How do you cooperate with someone when you fundamentally disagree about the nature of reality, when you fundamentally cannot make common assumptions about how the world works? The average person's sense of reality is now intimately tied to BMAI input streams.

So, what are our options? I believe there are three general choices that lie before us: inaction, regulation, or a ban. The first is that we fail to identify this threat and leave these mind-viruses to mutate and grow exponentially in effectiveness. The second is that we regulate how BMAIs can be used in consumer products. This is preferable, but there is no guarantee that any introduced regulation would be able to define a set of good heuristics; the regulation may exacerbate the issue instead of fixing it. We understand the levers of AI too little to make solid predictions about how to regulate it. The final option is to outright ban the use of BMAI in consumer services. This too has its issues, this time in forcing us to create strict definitions of what makes an AI a BMAI. Any such definition is unlikely to be perfect, but at least the risks become more manageable.

There is a tendency to imagine oneself as incorruptible, perfectly attuned to reason and impossible to manipulate. But the reality is that the human psyche is an inherently manipulatable thing. It can be hacked, and we are currently dedicating an awe-inspiring amount of computation to the AI equivalent of penetration testers, searching for hacks and exploits in your brain that they can use to change how you think. So why aren't you scared of what sent you here?

[1] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, 2014.

[2] Auditing Radicalization Pathways on YouTube, 2020. https://dl.acm.org/doi/abs/10.1145/3351095.3372879

[3] Pew Research Center, News Use Across Social Media Platforms, 2016. https://www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/

A note: much of this article involves some pop psychology, which I am not in any way qualified to speak on and which comes from a place of subjectivity. If you do not agree with these personal conclusions on the human condition, you will likely not agree with the rest.


Sean Finnegan

Sean is a high-functioning sack of flesh, consisting mostly of complex proteins and water. He enjoys programming, travel, politics and dogs.