Dubbed the "kill chain", this approach involves identifying commonalities between the techniques of these different malicious actors and encouraging other platforms, researchers and authorities to share their information.

"It's not a magic wand, it's a collaboration tool that will give us more chances of success," Eric Hutchins, one of the two Meta engineers behind the method, told AFP.

"People say all the time that attackers always have the advantage. I want to show that defense can win, thanks to an organized approach that can detect weaknesses."

Regularly accused of not doing enough to fight disinformation and other digital scourges, Meta has set up teams specializing in cybersecurity and the preventive detection of these threats, with the help of artificial intelligence (AI).

The American group regularly dismantles operations aimed at influencing opinion, carried out by foreign or domestic actors around the world.

Regardless of their motivations or affiliation (government agency, troll farm, etc.), malicious actors use similar tactics, such as creating fake profiles with algorithm-generated photos.

"If we can teach someone how to identify a fake photo, they can then recognize such photos easily," and detect inauthentic behavior early on, says Ben Nimmo, the co-author of this "kill chain" and one of the cybersecurity managers at Meta.

The success of this approach will depend on the degree of collaboration with the various partners.

Malicious actors, who are increasingly third-party companies paid by sponsors, "don't work in silos, they're on multiple platforms at once. So we need to reflect this reality, break down silos and share as much information as possible," Nimmo said.

© 2023 AFP