The manipulation machine.
Interview: Angelika Jacobs
Advances in artificial intelligence pervade social networks. Data scientist Geoffrey Fucile on the curated self, bots and the battle over the definition of truth.
UNI NOVA: Geoffrey Fucile, how much do you use social media?
GEOFFREY FUCILE: Very sparingly. I certainly use them, but more as an observer. I use LinkedIn for recruiting, I read the social media platform Reddit … Social media affect everybody and you can’t avoid them unless you live completely off the grid. But I’ve never had a Facebook, Instagram or Twitter account. Facebook in particular seemed creepy to me from the beginning.
UNI NOVA: Why creepy?
FUCILE: I suppose it was the notion of having virtual friends, but also this desire for approval from others that the platform exploits. Since the beginning, when Facebook was a campus thing, people have sized each other up based on the contrived profiles they've set up, profiles that clearly don't reflect who they actually are as individuals.
UNI NOVA: Facebook and other social media companies are investing heavily in developing machine-learning algorithms and neural networks. How does that make you feel about social media?
FUCILE: In some cases, it’s probably the other way around: The companies are investing heavily in social media to further their goals in artificial intelligence, or AI. However you define AI – it’s a problematic term. How do I feel about it? I still have the same reservations I’ve had from the start. As a father, I’m particularly concerned about virtual spaces, as there are clearly nefarious actors there with a lot of influence. And the tools for manipulation at their disposal are becoming more powerful. But it’s not all bad. Social media also offer an opportunity for people to be more connected and a space for empathy.
UNI NOVA: When you say the social media companies are trying to further their goals in AI, what are those goals?
FUCILE: On the one hand, it’s clearly about profit, but it’s also about exerting control.
UNI NOVA: Control of what exactly?
FUCILE: Whatever is expedient. There is an ongoing war for control over what truth and reality are. The more you can shape opinions, the more control you have over dictating what reality is – which is problematic. You can imagine certain entities would find that useful to their ends.
UNI NOVA: What can we do about this?
FUCILE: We need to make these technologies, such as learning algorithms, publicly available, but also accessible in the sense that people understand what these technologies actually do. That’s one way we could maintain this decentralized notion of truth and reality that’s at the core of civil society.
UNI NOVA: Is it really realistic to hope that people could understand these technologies? Talking about machine learning soon gets very complicated.
FUCILE: It’s easy to be pessimistic and say it’s hopeless. But it’s not impossible. There are two big gaps: on the one hand, the development of legislation is lagging behind the pace of technological development; on the other hand, our education systems are not adapting fast enough. It’s unfair to expect people to understand the ramifications of these advanced technologies if they’ve never been educated in computer science.
UNI NOVA: So how should we go about bridging these gaps?
FUCILE: The scientific community really needs to strengthen its public outreach. If we scientists want people to use technology responsibly, then they have to understand it. And as developers of these technologies, it’s our responsibility to help resolve these gaps. There’s clearly a role for government at all levels here, too.
UNI NOVA: You mention the pace of technological development. Thinking about the last ten years, which developments in AI had the most noticeable impact on social media users?
FUCILE: The increased spread of misinformation is quite obvious, and societal divisions are largely due to how AI-based recommendation systems work. The goal is to keep users engaged, and a great way to do that is playing to primitive emotions like fear or anger, keeping us agitated. The recommender systems predict what’s going to interest or agitate you, and they want you hooked in that state of agitation. Sowing discord and division is unfortunately the central incentive at the moment.
UNI NOVA: What about advances in language recognition?
FUCILE: Underlying the recommendation systems are language models: They are used to make sense of people’s sentiments, how they react to what they see and read. These models can also be used to generate text. Some of them are really indistinguishable from text written by people now. We have language models that write their own code.
UNI NOVA: So can bots already create other bots?
FUCILE: There are programs that write other programs that are responsive. So it’s already possible. People are probably not aware of the scale at which these bots are interacting with us all the time. Now there are deep fakes for voice and video, so you can recreate real or fictitious persons doing almost anything. It’s the result of many mathematical models working in concert. I don’t think people realize how quickly it’s changing and how much it’s influencing us. How much we are being manipulated in certain directions…
UNI NOVA: Seeing the rise of deep fakes, was Donald Trump right when he told his followers “What you're seeing and what you're reading is not what's happening”?
FUCILE: In a way. My hope is that it should be possible for us to agree on what’s authentic and what’s been constructed and artificial. And there is always a sort of arms race or rather interplay: as deep fakes become more sophisticated, so do methods of verifying whether something is authentic.
UNI NOVA: What developments can social media users expect over the next couple of years?
FUCILE: There are important unanswered questions about content ownership, fair use and privacy. Otherwise, I guess for now it’s an acceleration of the current trajectory. The bots are getting really convincing and the manipulation machine more effective. Whether specific tech companies are going to persist or not is hard to tell. I don’t know if the virtual reality world that Facebook/Meta is trying to construct will actually be used by a majority of people. But there is something to the general idea. Huge sums are spent on video games and VR equipment, and some people spend much of their lives in this digital realm.
UNI NOVA: Bots manipulating us, virtual realities designed to get people hooked: Is anything positive coming out of these AI innovations?
FUCILE: It’s not all doom and gloom. There are certainly immensely useful developments in AI for basic research, for instance in the life sciences, and general tools such as translation services and aspects of the recommender systems. These models can also help us understand the directions in which we are being manipulated. Hopefully, the accessibility of these things will get better. We shouldn’t get too distracted by the negative aspects because innovations also open up new opportunities.