Web Of Lies

Asian Scientist Magazine (31 October 2022) — As social networking platforms entered the media ecosystem, sharp headlines and striking images became key elements of virality. But in today’s saturated landscape where information moves quickly, speed has emerged as the primary tactic to beat the competition and build engagement. At the same time, media institutions must combat alternative narratives, half-truths and outright fabricated content.

Social media is plagued by an expanding information disorder, perpetuated by the platforms’ own algorithms and rules for success. In response, these platforms have implemented mechanisms to counter misinformation – from recruiting content moderators to deploying artificial intelligence (AI) tools. Facebook, for example, has partnered with third-party fact-checkers and previously removed hundreds of ‘malicious’ fake accounts linked to a Philippine political party.

Even as big tech has begun to intervene, researchers studying this messy misinformation landscape can’t help but ask: Are tech giants doing enough, and can they be held accountable?

Processing manipulation

Networks of misinformation and disinformation have changed the online media landscape and have the power to shape public perception. The masterminds behind these networks construct targeted and consistent messages that appeal to specific audiences. The messages are then spread and amplified by legions of bot accounts, paid trolls and rising influencers.

In the Philippines, for example, such tactics have shaped public health issues such as vaccine hesitancy. They have also been used to advance political agendas in national elections and to distort narratives around human rights abuses, including fabricated criminal charges.

But their success is partly made possible by the social media infrastructure itself. Platforms reward engagement: more likes and shares increase the likelihood that a post will appear in users’ feeds. Meanwhile, a large burst of tweets containing key phrases can catapult a topic to the trending list.

Since people within the same network are likely to have similar perspectives, recommendation algorithms arrange content to match perceived preferences. This traps users in a bubble – an echo chamber – shielded from potentially opposing views.

Even users who want to verify the information they come across can find it difficult to search for the answers they need amid the deluge of online information, said Dr. Charibeth Cheng, Associate Dean of the College of Computer Studies at De La Salle University in the Philippines, in an interview with Asian Scientist Magazine. For example, Google’s results are based on search engine optimization techniques. As such, sites that contain the relevant key phrases and receive the most clicks end up at the top of search rankings, potentially obscuring more reliable and robust sources.

“Constructing online discourse is not a matter of accessibility, but of visibility,” explained Fatima Gaw, assistant professor at the University of the Philippines’ Department of Communication Research, in an interview with Asian Scientist Magazine. “Robust sources of information cannot win the game of visibility if they do not master the platform.” For example, she explained, creators of biased or misleading content can still categorize their posts as ‘news’ to appear alongside other legitimate media sources, essentially guaranteeing their exposure to audiences.

Likewise, ‘cyber troops’ in Indonesia used misleading messages to sway the public in favor of government legislation and drown out critics, according to a report released by the ISEAS-Yusof Ishak Institute, a Singapore-based research organization focused on socio-political and economic trends in Southeast Asia. These controversial policies included easing pandemic restrictions to encourage a return to normal activities just a few months into the COVID-19 outbreak, as well as legislative revisions that turned an independent anti-corruption agency into a government agency. Political actors employ cyber troopers to control the information space and manipulate public opinion online – supporting them with funds and numerous bot accounts to game the algorithms and spread misleading content.

“Cyber troop operations not only feed public opinion with disinformation, but also prevent citizens from scrutinizing and evaluating the behavior and policy-making processes of the ruling elite,” the authors wrote.

Disinformation machinery therefore relies on a deep understanding of the types of content and engagement these platforms reward. And because social media thrives on engagement, there’s little incentive to stop content that has the power to start the next big trend.

“Platforms are complicit,” Gaw emphasized. “They allow disinformation actors to manipulate the infrastructure in massive and entrenched ways. This allows these actors to stay on the platforms, deepen their operations and ultimately profit from the disinformation and propaganda.”

Reshaping reality

Another troubling disinformation ecosystem exists on YouTube, where manipulation tends to be tolerated thanks to the platform’s content moderation algorithms and policies – as well as their lax enforcement. For one, the long video format makes it possible to embed false and misleading content into a narrative in a more intricate, less obvious way.

“YouTube also has a narrow definition of disinformation, and it’s often contextualized to Western democracies,” Gaw said.

Flagging disinformation goes beyond checking individual facts. Misleading content can contain true information, such as an event that really happened or a statement that was made, but the interpretation can be twisted to fit a particular agenda, especially when presented out of context.

Gaw added that YouTube’s recommendation system exacerbates the problem by helping to construct a “metapartisan ecosystem where one lie becomes the basis for another to build a distorted view of political reality that is biased against a particular partisan group.”

TikTok also fueled viral misinformation and historical distortion during the Philippine election earlier this year, as reported in the international press. The TikTok videos typically highlight the wealth and infrastructure built under a former president, while glossing over the country’s subsequent debt as well as the corruption and human rights cases brought against the political family.

Social media platforms have further sanctioned the emergence of content creators as alternative voices, leading them to be perceived as being as credible as, if not more credible than, traditional news media, history books and academic institutions.

Even without credentials of expertise, online influencers “can create proxy signals of credibility by presenting their ‘own research’ while projecting authenticity as someone outside the establishment,” Gaw explained. “Their rise also comes against a backdrop of declining trust in institutions, particularly the media, as an authority on news and information.”

The digital media environment is one where every issue is left to personal perception and, perhaps most importantly, where established facts are fallible. However, Cheng believes that technology platforms cannot remain neutral.

“Technology companies should play a greater role in being more socially responsible and be willing to regulate the content they broadcast, even if taking it down could lead to negative business effects.”

Treatment of the information disorder

To counter the spread of false information and misleading narratives, AI-powered language technologies could potentially analyze text or audio and detect problematic content. Researchers are developing natural language processing models to better recognize patterns in texts and knowledge bases.

For example, content-based approaches can check for coherence and alignment within the text itself. If an article is supposed to be about COVID-19, the technology can look for unusual instances of unrelated words or paragraphs that might suggest misleading content.
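
As a rough illustration of this idea, the sketch below embeds each sentence of an article and flags those that drift far from the article’s stated topic. The encoder model, the similarity threshold and the helper name are illustrative assumptions, not the specific tools used by the researchers quoted in this article.

```python
# Minimal sketch of a content-based coherence check (illustrative only):
# flag sentences whose embedding sits far from the article's declared topic.
# The model name "all-MiniLM-L6-v2" and the 0.2 threshold are assumptions.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def flag_off_topic(topic, sentences, threshold=0.2):
    """Return sentences whose cosine similarity to the topic falls below the threshold."""
    topic_emb = encoder.encode(topic, convert_to_tensor=True)
    sent_embs = encoder.encode(sentences, convert_to_tensor=True)
    sims = util.cos_sim(topic_emb, sent_embs)[0]
    return [s for s, sim in zip(sentences, sims) if float(sim) < threshold]

article = [
    "COVID-19 vaccines reduce the risk of severe illness.",
    "Booster doses restore protection that wanes over time.",
    "This miracle supplement melts fat overnight, order now!",
]
print(flag_off_topic("COVID-19 and public health", article))
# The unrelated third sentence would be surfaced for human review.
```

In practice, a filter like this only surfaces candidates for human review; a low similarity score is not, by itself, evidence that a sentence is misleading.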

Another approach, called textual entailment, checks whether the meaning of a fragment, such as a phrase or a sentence, can be inferred from another fragment. However, Cheng noted that if both fragments are false, yet align with each other, the problematic content can likely still fly under the radar—much like Gaw’s earlier observation of a lie supporting another lie.
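
A textual entailment check of this kind can be prototyped with an off-the-shelf natural language inference model, as in the hedged sketch below. The model name and the example sentences are assumptions for illustration, not the systems Cheng’s group uses.

```python
# Minimal sketch of a textual entailment (natural language inference) check.
# "roberta-large-mnli" is an assumed, publicly available NLI model, not
# necessarily the one used by the researchers quoted in this article.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

def entailment_scores(premise, hypothesis):
    """Return label probabilities for whether `hypothesis` follows from `premise`."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1).squeeze()
    # Read label names from the model config rather than hardcoding indices.
    return {model.config.id2label[i]: float(p) for i, p in enumerate(probs)}

reference = "Large randomized trials found no clinical benefit of ivermectin against COVID-19."
claim = "Ivermectin is a proven cure for COVID-19."
print(entailment_scores(reference, claim))
# A high contradiction score flags the claim as inconsistent with the reference;
# if the reference itself is false, the check offers no protection.
```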

“If we have many known truths, match and alignment techniques can work well. But because the numerous truths in the world are constantly changing and need to be constantly curated, the model also needs to be updated and retrained—and that requires a lot of computational resources,” Cheng said.

Obviously, development of technologies to detect false or misleading content will first depend on building comprehensive references for comparing information and flagging inconsistencies. Another challenge Cheng highlighted is the lack of contextually rich Asian language resources, which hampers the development of linguistic models for analyzing texts in local languages.

However, the problem is much more complex. Decision making is never a purely rational affair, but rather a highly emotional and social process. Disputing false information and presenting contradictory evidence may not be enough to change perspectives and beliefs, especially deeply held ones.

When ivermectin was touted as an effective drug against COVID-19, stories of cured patients appeared online and quickly spread through social messaging apps. Many advocated the drug’s clinical benefits, placing a premium on personal experiences that could have been explained away by pure chance and other variables. One success story in a non-experimental setup should not have negated the evidence from large scientific trials.

“It’s no longer about facts and lies; we need a more comprehensive way to capture the spectrum of fake and manipulative content out there,” Gaw said.

Furthermore, current moderation responses, such as removing posts and providing links to reliable information centers, may not undo the damage. These interventions do not reach users who had already been exposed to such problematic content before it was removed. Despite these potential ways forward, technological interventions are far from a silver bullet for disrupting disinformation.

The emergence of alternative voices and distorted realities forces researchers to delve deeply into why such counter-narratives appeal to different communities and demographics.

“Influencers are able to embody the ‘ordinary’ citizen who has historically been marginalized in mainstream political discourse, while having the authority in their communities to advance their political agenda,” continued Gaw. “We need to strengthen our institutions to regain people’s trust through relationships and community building. News and content must engage with people’s real issues, including their anger, difficulties and aspirations.”

This article was first published in the print version of Asian Scientist Magazine, July 2022.

Copyright: Asian Scientist Magazine. Illustration: Shelly Liew/Asian Scientist Magazine
