2016 was a year of intense political upheaval
in Europe and the USA. First of all, we saw the United Kingdom vote to leave
the European Union, and then we saw the rise of populism in the USA with the
election of Donald Trump.
But for those interested in cyber security,
one of the key revelations to emerge from these unpredictable and divisive
campaigns was how social media had become a minefield of misinformation
and untruths, exploited by nation states such as Russia.
One of the driving factors of this was their
use of bots and bad actors to circulate misleading news stories. Research has
shown that bots are being used to manipulate public opinion and interfere with political debates.
The upcoming EU elections are expected to be
targeted by the same campaigns of disinformation. Politico reports that “the EU faces hackers, trolls and foreign agents,” with European officials concerned about bots and misinformation being
used to weaponize debates around immigration and freedom of movement.
The EU is fighting back, however, with the help
of SafeGuard Cyber, a cybersecurity vendor with a unique algorithm to detect
bots and bad actors on social media.
Who are SafeGuard Cyber?
SafeGuard Cyber was founded in 2014 and has
created a platform to protect both businesses and countries from social media threats.
“We’re the only firm that protects
corporations from threats on social and digital channels,” SafeGuard Cyber
cofounder Otavio Freire tells me. “We offer end-to-end protection across 50-plus
channels and help prevent issues such as data loss, insider threats and
damage to brand reputation.”
Freire says that SafeGuard thinks of online
channels as different hubs. “So there’s a social media hub and in there you
have LinkedIn, Facebook, Twitter and Instagram. Then you have a mobile social
hub, with WhatsApp, Facebook Messenger and so on. For businesses we also have our
Enterprise collaboration hub, which is where we protect Slack and other workplace collaboration tools.”
The Threats to Businesses from Online Hubs
Any social media manager will tell you that
the loss of reputation that a business can suffer from negative publicity on
Facebook or Twitter is huge. Every day we see companies and individuals facing
tweetstorms from angry customers.
But SafeGuard Cyber sees a potential for much
more serious threats to businesses coming from online hubs.
“A lot of businesses don’t touch social and
digital channels in their data loss programs. But the nature of these platforms
can hurt brands and the reputation of the firm through fraud,” Freire says.
“We also see more and more insider threats.
So, an employee goes home, creates a fake account on a social media channel and
then leaks confidential information.
That’s something we’ve investigated in the past.”
Phishing Attacks on Online Hubs
Alongside reputational threats, bad actors can
also target businesses with phishing attacks. This involves using fake accounts
to set up meetings via Slack or WhatsApp, with some even going as far as
creating a whole virtual business of fake accounts with the aim of stealing
sensitive information from large multinational organisations.
“If you send a phishing email, the open rate
is sub one-percent,” Freire says. “But if you send a WhatsApp message to
someone the open rate is 40%. People have learned not to trust what they see in
email, but with new technologies they haven’t experienced that reason to distrust them yet.”
“When we first started talking about these
threats with customers, they said ‘social media is like a toy,’” Freire tells
me. “But then when Brexit and the revelation of 2016 election meddling
happened, people started to look at social media as something a little bit more
important to think about.”
The Threats to Nations from Online Hubs
Beyond businesses, there is a clear threat at
a national level from fraud and misinformation on social media.
“Social media is a means for nation states to cause disruption in adversary organisations,” Freire says. “We’ve put out reports about how Russian bots are creating disinformation online, and we are working with the EU and other nations on how to address those disinformation campaigns.”
What is the purpose of a nation state using bots and bad actors to influence political debates?
Beyond just election campaigns, Freire tells
me that all different kinds of social and political debates are being
influenced by bots and bad actors from nations like Russia and China. He uses the
example of the debate around anti-vaccination, and tells me they’ve seen
examples of fake accounts creating arguments on both sides of that particular debate.
“The purpose of this is really to disrupt
society,” Freire says. “It’s not even always about influencing elections, it’s
just to create division. They do this in the EU, they do this in the USA and
they want this polarization of society.”
Changing the Course of History
If the goal of these bot and misinformation campaigns is to create division in
society, then it is fair to say they have succeeded. Arguably, Western
democracies have not been so divided on key political debates in decades.
“These campaigns have changed the course of history. They have weaponized our
open information democracy against us,” says George Kamide, Marketing Director
at SafeGuard Cyber. “An ex-director at NATO recently summarized it well for me
when she said: ‘Attackers don’t need to poison the water plant anymore. They
just need to convince everyone that it’s poisoned.’”
Stopping the spread of misinformation from bots and bad actors
One of the key challenges facing businesses,
nation states, individuals and the social media platforms themselves is how to
deal with the rise of bots online.
Freire tells me: “What’s different about our technology is that we aren’t just
another alarm bell. When we find a malicious piece of content, we will inform
the social networks so they can take it down.”
But how can an algorithm tell the difference
between a bot spreading misinformation, and a voter who genuinely believes a
certain idea or topic to be the truth?
Freire first of all dispels any notion of
political bias or ideology in the platform.
“We’re completely apolitical,” he says. “We
use data science. We look for things from the users, like when there’s evidence
images have been altered. Or, when we see text that we know has come from
verified bot groups. We determine whether information is unreliable not based
on our own judgment, but on a bipartisan crowdsource. Our technology lets us see patterns.”
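As a purely illustrative sketch (not SafeGuard Cyber's actual implementation), one of the signals Freire describes, text known to have come from verified bot groups, could be approximated by fingerprinting normalized post text against a corpus of known bot-authored messages. All names and sample strings below are hypothetical assumptions:

```python
import hashlib

# Hypothetical corpus of messages previously attributed to known bot groups.
KNOWN_BOT_TEXT = {
    "the water plant has been poisoned share before they delete this",
}

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial edits don't evade a match."""
    return " ".join(text.lower().split())

def fingerprint(text: str) -> str:
    """Stable hash of the normalized text."""
    return hashlib.sha256(normalize(text).encode()).hexdigest()

# Precompute fingerprints for the known bot corpus.
BOT_FINGERPRINTS = {fingerprint(t) for t in KNOWN_BOT_TEXT}

def matches_known_bot_content(post: str) -> bool:
    """Flag a post whose normalized text matches known bot-authored content."""
    return fingerprint(post) in BOT_FINGERPRINTS

print(matches_known_bot_content(
    "The water plant HAS been   poisoned share before they delete this"))  # True
```

A production system would of course use fuzzier matching (shingling, embeddings) rather than exact hashes, but the principle of checking content against a verified corpus is the same.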
Stopping threats in real time
On social media, ideas can take hold very
quickly. Campaigns go viral in hours, and a company or nation can have a hard
time removing fake news from the public consciousness once it has already been
seen by an audience of millions.
“Our platform absolutely works in real time to
identify bots as soon as they post,” Freire tells me. “We’re always ingesting
massive amounts of data. But it truly is a needle in a haystack problem. We’ve
looked at these accounts and everything looks fine, you can see what they’re
posting and where they’re from and it all looks normal. So, it can be really
hard for a human to spot a fake account.”
“But what our platform can do is look at all
the places they’re posting. We can look at their behaviours and what time
they’re posting. So, if there’s a guy who says he lives in London, but he’s
always posting on China time, we can flag that. There’s a lot of things our
algorithms take into consideration, the content, the behaviour, the links and
their following lists.”
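The timezone mismatch Freire describes (an account claiming to live in London but always posting on China time) can be sketched as a simple behavioural check. This is a hypothetical illustration of the idea, not SafeGuard Cyber's algorithm; the function names, night window and threshold are all assumptions:

```python
def local_hours(post_hours_utc, utc_offset):
    """Convert UTC posting hours to the account's claimed local time."""
    return [(h + utc_offset) % 24 for h in post_hours_utc]

def suspicious_timing(post_hours_utc, claimed_utc_offset,
                      night=(1, 6), threshold=0.6):
    """Flag an account if most posts fall in the claimed timezone's night hours,
    suggesting the operator is actually awake somewhere else."""
    hours = local_hours(post_hours_utc, claimed_utc_offset)
    night_posts = sum(1 for h in hours if night[0] <= h <= night[1])
    return night_posts / len(hours) >= threshold

# An account claiming to be in London (UTC+0) but posting on a Beijing
# daytime schedule (09:00-14:00 Beijing = 01:00-06:00 UTC) posts almost
# entirely in London's early-morning hours.
beijing_daytime_utc = [1, 2, 3, 4, 5, 6] * 5
print(suspicious_timing(beijing_daytime_utc, claimed_utc_offset=0))  # True
```

A single signal like this would only ever be one input among many; as Freire notes, the content, the behaviour, the links and the following lists are all weighed together.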
An important debate at the moment is between
governments and social networks about the responsibility of the social
networking platforms themselves to remove ‘fake news’ and dangerous content
from their platforms.
It is one thing to uncover a bot or a bad
actor intentionally spreading misinformation, but it is another difficult
process entirely to get that profile pulled and those damaging posts deleted.
“We work with Facebook, Twitter and all those
platforms on the hubs,” Freire tells me. “We work with them to report those
profiles. We help them in that respect. It’s always their decision whether to
take it down.”
“They are very good at investigating with us
and doing something about it, but it’s ultimately their decision on whether to take
action. We submit to Facebook all the necessary information, and they are very
good about responding to it if we have a strong case.”
Looking Towards the Future
No one could blame you for a feeling of
hopelessness when looking at the state of social media and the effectiveness of
bots and bad actors.
The founder of the World Wide Web, Tim Berners-Lee, himself recently expressed
his concern that the internet was heading towards a dysfunctional future.
“The rise of misinformation does threaten the
future of the internet,” Freire agrees. “But we are in the business of
protecting against these threats.”
“We believe there is no stopping these
threats; to try and do that would be to try and stop the tide coming in. But
there will be a slow transformation.”
So how can everyone get better at spotting these fake profiles today?
“Check the sources, check the reliability and remember that any profile could be anyone online,” Freire says. “If everyone did that, we would see huge progress.”
Visit Expert Insights: https://www.expertinsights.com/
Find out more about SafeGuard Cyber: https://www.safeguardcyber.com/