When AI is used for evil

Don’t assume only the good guys will be using these tools, reports Forrester.

'AI can generate text for social media, blog posts and fake news sites'

Forrester is calling on security and risk professionals to plan now for future attacks aided by AI technologies.

The breaches of the past five years mean malicious AI tools and systems have a wealth of valuable data to start with. As machine learning continues to evolve, entire industries will change, and AI will play a major role on both the defending and attacking sides of the cyber war, according to a recent report by the analyst firm.

Forrester says its research shows 64 per cent of global enterprise security leaders are concerned about AI technologies. Those leaders, it argues, must familiarise themselves with the building blocks of AI technologies and the security research around them.

The report, entitled Using AI for Evil, notes how criminals can use machine learning tools and techniques to search and mine large amounts of data in the public domain and on social networks to extract their targets’ personally identifiable information (PII).

That PII can then be used to compromise user or business accounts for identity theft and other types of fraud.
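To make the scale of that risk concrete, the minimal Python sketch below shows how little effort such mining takes: a couple of regular expressions can pull email addresses and phone numbers out of any scraped public text. It is a hypothetical illustration, not code from the Forrester report, and the patterns and sample text are invented; security teams could run the same kind of scan defensively to audit how much PII their organisation exposes publicly.

import re

# Hypothetical illustration only: the patterns and sample text are
# invented for this sketch, not taken from the Forrester report.
# Two simple regexes for common PII types found in public text.
PII_PATTERNS = {
    "email": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def extract_pii(text):
    """Return every match for each PII pattern found in the text."""
    return {label: pattern.findall(text) for label, pattern in PII_PATTERNS.items()}

if __name__ == "__main__":
    # Stand-in for text scraped from a public profile or forum post.
    sample = (
        "Reach our sales lead at jane.doe@example.com or on +64 21 555 0147. "
        "Out-of-hours support: support@example.org."
    )
    for label, matches in extract_pii(sample).items():
        print(label, matches)

Pointed at harvested social-media data at scale, the same loop becomes exactly the mining the report warns about, which is why auditing public exposure is a sensible first step.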

“AI technologies can also be used to automatically monitor emails and text messages from hacked accounts and create personalised phishing emails for social engineering attacks at a massive scale,” the authors state.

Creating and targeting fake news is one of the ways AI can be used for malicious purposes.

Fake news has been proven to have powerful effects on people and can lead to mistrust of all media and to civic unrest, according to the authors of the report, Chase Cunningham and Joseph Blankenship.

The report cites how the data Cambridge Analytica took from Facebook, and the publicly available voter data AggregateIQ used, allowed nation-states and hacktivists to target specific groups with AI-generated fake news.

“With face-swapping technology and voice synthesis, attackers can create realistic-looking videos and audio featuring powerful figures such as politicians, civic leaders, or entertainers,” reports Forrester.

“AI can also generate text for social media, blog posts and fake news sites. It may also be possible to launch a digital extortion attack using fake news to discredit a company, its executives, or its products, thereby holding the company hostage.”

“Don’t believe every overhyped vendor pitch you hear, but understand the current state of the art and experiment with those tools in your own environment,” advises Forrester.

Source: Forrester

Organisations can hire or contract an AI expert onto the security team. Security leaders must also make the effort to be involved in initiatives using AI and provide guidance on how the organisation builds, uses and secures these technologies, the report states.

These initiatives include marketing departments using AI to understand customers and improve customer experience, and business units investing in physical robots or robotic process automation.

“Don’t let security become an afterthought as it usually is with emerging technology adoption,” says Forrester.
