Phishing emails used to be easy to spot, riddled with grammar and spelling mistakes. But thanks to AI, even hackers with poor language skills can now send professional-looking messages.

In a series of scenes straight out of a sci-fi movie, Hong Kong police explained how a bank employee in the city paid out $US25 million (about $37.7 million) in an elaborate deepfake AI scam last month.

The worker, whose name and employer police declined to reveal, was suspicious of an email purportedly sent by the UK-based company's chief financial officer requesting a transfer, and asked for a video conference to confirm the request. But police said even that step wasn't enough: the hackers created deepfake AI versions of the man's colleagues to trick him during the call.

“[In the] multi-person video conference, it turns out that everyone [he saw] was fake,” Senior Superintendent Baron Chan Shun-ching said in remarks reported by broadcaster RTHK and CNN.

It is not clear how the hackers were able to create convincing AI versions of the unnamed company's executives.

However, this is not the only worrying case. As documented by The New Yorker, an American woman received a middle-of-the-night call from her mother-in-law, who was sobbing and saying, “I can't do it.”

A man then came on the line, threatened her life and demanded money. The ransom was paid. A subsequent call to the mother-in-law revealed that she was safely in bed. The scammers had been using an AI clone of her voice.

Fraudsters have used an AI-generated “deepfake” image of Commonwealth Bank chief executive Matt Comyn.

50 million hacking attempts

But fraud, whether against individuals or businesses, is different from the kind of hacking that befell companies such as Medibank and DP World.

One reason pure AI attacks are so poorly documented is that a hack involves so many different components. Businesses use a variety of IT products, often in multiple versions of the same product, which work together in different ways. And even after a hacker infiltrates an organization or defrauds its employees, they still need to move the stolen funds or convert them into other currencies. All of that requires human work.

While AI-powered deepfakes remain a potential risk for now, more mundane AI-based tools have been used in cybersecurity defenses by large enterprises for years. “We've been working on this for quite some time,” said National Australia Bank chief security officer Sandro Bucchianelli.

For example, NAB says it is probed 50 million times a month by hackers looking for vulnerabilities. These “attacks” are automated and relatively unsophisticated. But if a hacker discovers a flaw in the bank's defenses, the consequences could be serious.

Microsoft research shows that it takes a hacker an average of 72 minutes to access corporate data after a target clicks a malicious link. From there, it is not far to the kind of damage seen in last year's massive attacks on Optus and Medibank, where private information was leaked online, or on DP World, where systems as critical as ports were taken down.

This requires banks such as NAB to act quickly on potential breaches, and Bucchianelli says AI tools are helping staff do just that. “When you think of threat analysts and cyber responders, they're looking at hundreds of lines of logs every day, and they need to find anomalies in them,” Bucchianelli says. “[AI] aids threat-hunting functions that need to find the proverbial needle in the haystack faster.”
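NAB has not detailed how its tooling works, but anomaly detection over security logs is a standard version of the technique Bucchianelli describes. Below is a minimal, hypothetical sketch using scikit-learn's IsolationForest to flag unusual login events; the features and numbers are invented for illustration, not drawn from any bank's systems.

```python
# Hypothetical sketch of log anomaly detection; not NAB's actual tooling.
# Requires: pip install numpy scikit-learn
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Invented features per login event: [hour_of_day, failed_attempts, megabytes_transferred]
normal_logins = np.column_stack([
    rng.normal(13, 3, 1000),   # mostly business-hours activity
    rng.poisson(0.2, 1000),    # the occasional mistyped password
    rng.normal(5, 2, 1000),    # modest data transfers
])

# A couple of events a threat analyst would care about: 3am logins,
# many failed attempts, unusually large transfers.
suspicious = np.array([
    [3.0, 12, 800.0],
    [2.5, 9, 650.0],
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_logins)

# predict() returns -1 for anomalies and 1 for inliers.
print(model.predict(suspicious))         # typically [-1 -1] for outliers this extreme
print(model.predict(normal_logins[:5]))  # mostly [1 1 1 1 1]
```

The “needle in the haystack” maps to exactly this: the model learns what routine activity looks like and surfaces the few events that don't fit, so analysts read a handful of flagged lines instead of millions of raw ones.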

Mark Anderson, national security officer at Microsoft Australia, agrees that if malicious groups wield AI as a sword, defenders must also use it as a shield.

“Over the past year, we have witnessed a tremendous number of technological advances, but these advances have been met with an explosion of equally aggressive cyber threats.

“On the attacker side, we are seeing AI-enabled fraud such as text-to-speech and deepfakes, as well as nation-state adversaries using AI to enhance their cyber operations.”

He says it's clear that AI is an equally powerful tool for attackers and defenders. “As defenders, we must maximize our potential in the asymmetric battle of cybersecurity.”

Beyond AI tools, NAB's Bucchianelli says, staff also need to be wary of requests that don't make sense. For example, banks will never ask for a customer's password. “Urgency in an email is always a red flag,” he says.
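As a toy illustration of that red-flag heuristic (not a description of any bank's actual filter), an email pipeline can surface urgency language for extra scrutiny; the phrase list below is invented for the example.

```python
# Toy urgency flagger; real mail filters use far richer signals than keywords.
URGENCY_PHRASES = [
    "urgent", "immediately", "right away", "before end of day",
    "account will be closed", "verify your password",
]

def flag_urgency(email_body: str) -> list[str]:
    """Return the urgency phrases found in an email body, if any."""
    body = email_body.lower()
    return [phrase for phrase in URGENCY_PHRASES if phrase in body]

print(flag_urgency(
    "Please verify your password immediately or your account will be closed."
))  # ['immediately', 'account will be closed', 'verify your password']
```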

Thomas Seibold, a security executive at IT infrastructure firm Kyndryl, says similar basic practical advice applies to staff facing emerging AI threats, alongside more technical solutions.

“Switch on your critical faculties and don't take everything at face value,” Seibold says. “Don't be afraid to verify authenticity through company-approved messaging platforms.”

Harriet Farlow, founder of Mileva Security Labs, remains optimistic about AI despite the risks.

Even as humans learn to recognize the signs of AI-enabled hacking, AI systems themselves can be attacked. Farlow, whose company specializes in AI security, says the field known as “adversarial machine learning” is growing.

As AI is used in more places, such as self-driving cars, its security risks are often overshadowed by ethical concerns that AI systems may be biased or take jobs from humans. But the potential security risks are obvious.

“We can create a custom stop sign that the [autonomous] car doesn’t see, and it just goes straight on,” Farlow said.
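Farlow doesn’t spell out the technique, but the best-known adversarial machine learning attack of this kind is the fast gradient sign method (FGSM), which nudges each pixel of an image in the direction that most increases a classifier’s error. The sketch below is a generic PyTorch illustration with a stand-in, untrained model; it shows the mechanics only and is not a working stop-sign attack.

```python
# Generic FGSM sketch (illustrative; the model and image are stand-ins).
# Requires: pip install torch
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier over 32x32 RGB images; classes: 0 = stop sign, 1 = other.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 2),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in photo of a sign
true_label = torch.tensor([0])                        # it really is a stop sign

# Gradient of the loss with respect to the input pixels.
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# FGSM: step each pixel by epsilon in the sign of its gradient, which raises
# the loss while keeping the perturbation too small for a human to notice.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Against a trained model, a perturbation like this often flips the label;
# here the model is untrained, so the output only demonstrates the mechanics.
print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```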

But despite the risks, Farlow remains an optimist. “I think it’s great,” she says. “I personally use ChatGPT all the time.” If companies deploy AI correctly, she says, the risks need never materialize.
