
Divisive, demoralizing bots are winning, so big tech needs to think bigger


Bots frequently amplify misinformation and conspiracy theories shared by real people, giving a megaphone to what might otherwise be a lone misguided voice. They hijack conversations on controversial issues to derail or inflame the discussion. For example, bots have posed as Black Lives Matter activists and shared divisive posts designed to stoke racial tensions. When real people try to make their voices heard online, they do so within a landscape that’s increasingly poisoned and polarized by bots.

I have spent much of my career developing artificial intelligence to identify online bots. My colleagues and I are in a computational arms race: As the tools we build to track down fake accounts improve, so do the bots. As important as our work is, using software tools to find individual bots won’t eliminate the problem. Social media platforms must act to root out bots on a systemic level.

What makes bots increasingly dangerous is their sophistication and scale. Artificial intelligence has become so good at mimicking human speech that it’s hard for the average user to tell what’s real and what’s fake. Last fall, a bot powered by GPT-3, an advanced language-generation model, began posting on Reddit. Its conversations were so humanlike that it took more than a week before users realized they were interacting with a bot. You can see for yourself just how sophisticated this AI is on sites like Talk to Transformer.

Bots also have tremendous reach. While the average person can share misinformation with dozens or perhaps hundreds of friends on social media, an army of bots can spread the same content to millions in a matter of hours through a steady drumbeat of posts. A 2018 study found that just 6 percent of Twitter accounts, all of them suspected bots, were responsible for spreading 31 percent of misinformation around the 2016 election. In many cases, the false information began trending in less than 10 seconds.

Simply removing bot accounts from popular platforms isn’t enough. Facebook deleted nearly nine billion bogus accounts in 2018 and 2019, but the company still estimates that at least 5 percent of its users are fake. Organized misinformation campaigns have also been known to hack real accounts and convert them to bots, taking advantage of these accounts’ existing networks and credibility.

Instead of playing whack-a-mole with individual accounts, social media platforms need to zoom out and attack the bots en masse. As AI becomes more sophisticated at mimicking humans, the best way to spot bot activity is by looking at the context of a post. Has a hashtag risen out of nowhere, driven by an interlinked network of suspicious accounts? Does a group of users post about a single topic ad nauseam, echo similar talking points, or repeatedly divert unrelated conversations to a particular topic?
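Those contextual questions can be turned into simple, measurable signals. The sketch below is a minimal Python illustration, assuming a hypothetical list of posts, each with account, hashtag, and timestamp fields; it computes two such signals, how sharply a hashtag spikes in time and how concentrated its volume is among a handful of accounts, and is meant only to show the idea, not any platform’s actual detection system.

```python
from collections import Counter
from datetime import timedelta

# Minimal illustrative sketch, not any platform's real detector.
# Assumes "posts" is a list of dicts with "account", "hashtag", and
# "timestamp" (datetime) fields; this is a hypothetical, simplified schema.

def hashtag_context_signals(posts, hashtag, window=timedelta(hours=1)):
    """Return two crude signals for one hashtag:
    - burstiness: share of its posts packed into the busiest one-hour window
    - concentration: share of its posts coming from the 10 most active accounts
    High values on both suggest a coordinated, possibly bot-driven push."""
    tagged = sorted(
        (p for p in posts if p["hashtag"] == hashtag),
        key=lambda p: p["timestamp"],
    )
    if not tagged:
        return 0.0, 0.0

    # Burstiness: slide a one-hour window along the timeline and find the
    # densest stretch of posts.
    times = [p["timestamp"] for p in tagged]
    best, j = 0, 0
    for i, start in enumerate(times):
        while j < len(times) and times[j] - start <= window:
            j += 1
        best = max(best, j - i)
    burstiness = best / len(tagged)

    # Concentration: how much of the volume comes from a few accounts?
    counts = Counter(p["account"] for p in tagged)
    top10 = sum(count for _, count in counts.most_common(10))
    concentration = top10 / len(tagged)

    return burstiness, concentration
```

A real system would weigh many more signals, such as account age and follower networks, but even these two crude ratios separate an organic trend, which builds gradually across many unrelated users, from a push driven by a tight cluster of accounts posting in lockstep.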

When the algorithms that decide what you and I see are a black box, it’s difficult to stop misinformation from spreading and to gauge the authenticity of what we’re exposed to online. Only the companies themselves have the necessary back-end data, such as accounts’ IP addresses and posting patterns, to provide context about why a specific hashtag is trending or where and how a piece of viral misinformation started.

Once bot campaigns are identified, social media companies can take several steps to hinder them while respecting the free speech of human users. They could require a simple CAPTCHA test before publishing any post containing a hashtag that is largely being spread by bot accounts. They could give users more context about the information they encounter, such as the country where a viral hashtag originated or patterns in the posting history of the accounts spreading it. They could even experiment with computational techniques that generate a summary of each user’s activity, pulling back the curtain on accounts that post relentlessly about a single topic or tend toward inflammatory content. There are also changes companies can make behind the scenes, such as tweaking their algorithms to de-prioritize posts from bot-driven campaigns in users’ news feeds. Facebook did this temporarily in the aftermath of the 2020 election, and traffic to more authoritative news sources increased as a result.
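The activity summary mentioned above could be as simple as a few descriptive statistics per account. Here is a minimal Python sketch, assuming a hypothetical list of one account’s posts with "topic" and "is_inflammatory" labels supplied by separate classifiers that are not shown; it illustrates the idea rather than any platform’s actual tooling.

```python
from collections import Counter

# Minimal sketch of a per-account activity summary. Assumes "posts" is a
# list of dicts for one account, each with a "topic" string and a boolean
# "is_inflammatory" flag; both labels are hypothetical inputs from
# upstream classifiers.

def summarize_account(posts):
    """Summarize one account's public behavior in a few numbers:
    total posts, its single most frequent topic and the share of posts
    devoted to it, and the share of posts flagged as inflammatory."""
    total = len(posts)
    if total == 0:
        return {"posts": 0, "top_topic": None,
                "top_topic_share": 0.0, "inflammatory_share": 0.0}

    topic_counts = Counter(p["topic"] for p in posts)
    top_topic, top_count = topic_counts.most_common(1)[0]

    return {
        "posts": total,
        "top_topic": top_topic,
        "top_topic_share": top_count / total,
        "inflammatory_share": sum(p["is_inflammatory"] for p in posts) / total,
    }
```

Surfacing a summary like this next to an account’s profile would let ordinary users see at a glance when, for example, 95 percent of an account’s posts hammer a single divisive topic.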

Thus far, social media companies have been reluctant to fight bots as aggressively as possible. Twitter recently began labeling state-sponsored media accounts, such as Russia Today, which are often used to post content that is then amplified by bots. However, this small step came only after prolonged pressure from users and the US government. The reality is that platforms have ample incentives to continue promoting divisive content and misinformation as long as it engages their audience. All activity, whether authentic or not, is good for their bottom lines.

Ultimately, government regulations may be necessary to make platforms safeguard the integrity of online discourse. However, regulation does not mean…


