Spotting Actors Involved in Influence Operations

Introduction

Influence operations (IO) are defined as “coordinated efforts to manipulate or corrupt public debate for a strategic goal.” IO are carried out by actors who may be domestic or foreign, working either for a state/government or for a non-government entity, and targeting domestic, foreign, or mixed audiences. Some may be driven by political or ideological beliefs, but most are rent seekers, i.e., motivated by financial rewards. This definition was arrived at following discussions with a shared network of Philippine media organizations dedicated to identifying, analyzing, and investigating influence operations in the country.

How do threat actors operate?

On Facebook, actors normally use fake identities in carrying out covert operations. They resort to deception, or what has been termed coordinated inauthentic behavior (CIB), i.e., “any coordinated network of accounts, pages and groups that centrally relies on fake accounts to mislead Facebook and people using our services about who is behind it and what they are doing.”

An example of CIB is a series of newly created FB accounts whose postings are all shared links or all the exact same content.

It must be emphasized, though, that Facebook deals with CIB based on behavior, not content, noting that “deceptive campaigns reuse popular, authentic content to build an audience, as well as real people unwittingly post memes originally created by IO actors.”

Other actors make use of state media, employing authentic identities to spread narratives that promote their political agenda and other goals.

They may also use witting and unwitting proxies – including influential media personalities and political figures – within and outside of their territories. The use of proxies enables the actors to hide their identities more effectively.


What tactics do they commonly use?

Actors use the following tactics, often referred to as the 5Ds:

Dismiss allegations and denigrate the source, e.g., “It is the actions by the Philippine government that have heightened the tensions with China in the West Philippine Sea.”

Distort the narrative and twist the framing. An example is the statement by the Bangsamoro Governors Caucus claiming that the decommissioning of MILF forces lacks “a detailed plan, milestones, and timelines” when, in fact, its implementation is guided by the roadmap of the Comprehensive Agreement on the Bangsamoro, specifically in the Annex on Normalization.

Distract to shift attention and blame to a different actor or narrative. An example is the fake quote card regarding the December 3, 2023 blast inside MSU Marawi attributed to then BARMM Local Governments Minister Naguib Sinarimbo, which quoted him as saying that the attack happened because Christians have no place in Marawi City.

Dismay to threaten and frighten opponents, e.g., “Opposition candidates have allied themselves with the leftist underground movement”; and

Divide to generate conflict and broaden divisions within or between communities and groups, e.g., “A Catholic-run school has banned the wearing of hijab/kumbong by female Muslim students.”

Do they also employ techniques?

Yes, threat actors make use of techniques to achieve the operational goals laid out in their tactics, the 5Ds mentioned above. Techniques can be grouped into three stages: planning, preparation, and execution (Plan, Prepare, Execute).

At the planning stage, they undertake the following: degrading the adversary, discrediting credible sources, facilitating state propaganda, and segmenting the target audience geographically.

The preparatory stage includes, but is not limited to, the following: development of image-, text- and video-based content; reframing of the context and development of new narratives; responding to “breaking news” or active crises; amplification of existing conspiracy theories; development of manipulated/edited videos (deepfakes) and images (cheap fakes); creation of inauthentic documents; and creation of inauthentic websites.

Execution involves the following: posting across platforms and across groups, cross-posting, flooding the information space, calls to action to attend (events), and amplification of news and narratives using inauthentic websites.


Have actors been using other tactics to evade detection?

As noted by Facebook in its “Threat Report: The State of Influence Operations 2017-2020,” actors have evolved their techniques, and the following trends and tactics have been observed:

1. A shift from “wholesale” to “retail” IO: Threat actors pivot from widespread, noisy deceptive campaigns to smaller, more targeted operations. (Targeted operations are those directed at particular audiences or geographic areas.)

2. Blurring of the lines between authentic public debate and manipulation: Both foreign and domestic campaigns attempt to mimic authentic voices and co-opt real people into amplifying their operations.

3. Perception Hacking: Threat actors seek to capitalize on the public’s fear of IO to create the false perception of widespread manipulation of electoral systems, even if there is no evidence. (Note: Influence operations are not limited to electoral processes.)

4. IO as a service: Commercial actors offer their services to run influence operations both domestically and internationally, providing deniability to their customers and making IO available to a wider range of threat actors.

5. Increased operational security: Sophisticated IO actors have significantly improved their ability to hide their identities, using technical obfuscation and witting and unwitting proxies. (Obfuscation means making something difficult to understand in order to protect sensitive information or personal data.)

6. Platform diversification: To evade detection and diversify risks, operations target multiple platforms (including smaller services) and the media, and rely on their own websites to carry on the campaign even when other parts of that campaign are shut down by any one company.

How do we establish the identities of actors?

If these actors are using authentic identities, information about their family, friends, jobs, personal politics, and associations that is available online would offer clues.

For those who are using fake identities, start with the username. Feed the username to the various platforms, as some people cling to the same username across platforms and email providers, although sometimes with minimal variations.

In addition, usernames can be plugged into Google. Some users leave a trail in places like comment sections, reviews, etc. A caveat: the presence of the same username on different platforms does not necessarily mean it belongs to a single person, although it is a good starting point for your investigation.
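For investigators comfortable with a bit of scripting, this cross-platform username check can be partly automated. The Python sketch below is illustrative only: the profile URL patterns, the example handle, and the treatment of an HTTP 200 response as a “possible profile” are assumptions, and many platforms block or mislead automated requests, so every hit still needs manual verification.

import requests

# Hypothetical profile URL patterns; platforms change these over time,
# so treat any result as a lead to verify manually, not as proof.
PROFILE_URLS = {
    "X/Twitter": "https://x.com/{u}",
    "Instagram": "https://www.instagram.com/{u}/",
    "Reddit": "https://www.reddit.com/user/{u}/",
    "TikTok": "https://www.tiktok.com/@{u}",
    "GitHub": "https://github.com/{u}",
}

def check_username(username):
    """Print which platforms return a page for the given username."""
    headers = {"User-Agent": "Mozilla/5.0 (research script)"}
    for platform, pattern in PROFILE_URLS.items():
        url = pattern.format(u=username)
        try:
            resp = requests.get(url, headers=headers, timeout=10)
        except requests.RequestException as err:
            print(f"{platform}: request failed ({err})")
            continue
        # A 200 response is only a weak signal: some sites return 200
        # for missing profiles, while others block bots with 403/429.
        hint = "possible profile" if resp.status_code == 200 else "no obvious profile"
        print(f"{platform}: HTTP {resp.status_code} -> {hint} at {url}")

if __name__ == "__main__":
    check_username("example_handle")  # hypothetical handle for illustration

As with the manual check, a hit on several platforms is only a starting point for investigation, not confirmation that the accounts belong to one person.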

Profile photos can also help in identifying a person across platforms. Aside from Google’s reverse image search, other search engines, e.g., Yandex, and facial recognition tools like Face++ are available and can sometimes deliver better results.

For actors who are conscious about their privacy and so are not using real profile photos or are obscuring real ones, photos of the things they love or are proud of – homes, pets, cars – can help in finding connections or a network around these persons. The photo of one’s home posted on Facebook, for example, may also be found on Twitter or Instagram. (H. Marcos C. Mordeno / MindaNews)

(This explainer was produced with support from an Internews initiative aiming to build the capacity of news organizations to understand and monitor disinformation and influence operations in the Philippines.)
