Transcript
The following transcript of this discussion was edited for clarity.
Cyber threats are getting smarter every day. And with AI (artificial intelligence) helping attackers create impersonations, it’s getting harder than ever for companies to know what’s real. We’re even seeing surprising tactics, like North Korean IT workers posing as remote software developers. It’s a reminder of just how quickly things are changing and how easy it is for companies to be caught off guard.
I’m Sarah Cambon, and today I’m joined by Pete Marta, a partner at Goodwin, to dig into his article on how cyberattacks are evolving and what companies can do about it. His piece is part of Goodwin’s Forces of Law series.
Sarah Cambon: Pete, thanks so much for being here.
Pete Marta: Thanks for having me.
Cyberattacks aren’t new. We’ve been hearing about breaches for, really, decades now. But challenges have increased significantly. What changed?
For many years, cyber was viewed largely as an IT issue. Thankfully, that era is gone. Cybersecurity is now widely considered to be among the most significant enterprise risks facing organizations today. It’s up there with operational risk, business interruption, and supply chain risk. I do a lot of work with banks and other financial institutions; there, it’s taken as seriously as liquidity risk, which is saying something.
C-suites and boards have changed how they view cyber risk because the cyber threat landscape has shifted so significantly, particularly in the past five to six years. Over that period, we’ve seen an enormous increase in the frequency, the sophistication, and the overall impact of cyberattacks.
There’s been a steady increase in the sheer volume of attacks. Some are reported in the media, but many others are not. I don’t think I recall a busier time than the past three months.
We’ve been living in what I call the era of double extortion since late 2019. Today, a cyberattack or a cyber extortion typically includes both encryption, meaning victims can’t access their data, and exfiltration, meaning the threat actor actually steals that data.
The sophistication of cyberattacks has steadily risen over the past decade. That’s to be expected with the passage of time.
We’re now going through a paradigm shift, with cyber threat actors leveraging AI. This is likely to result in the complexity of cyberattacks increasing exponentially. Of particular concern is threat-actor use of agentic AI, or AI agents. Just a few weeks ago, a prominent AI company released a report that claimed it detected the first documented case of a cyberattack largely executed without human intervention. So that’s the future.
One anecdote that really surprised me in your article was that North Korean IT workers were infiltrating Western companies by posing as remote software developers. How does something like that happen?
It’s quite widespread. And as with cyber threats more generally, it’s industry agnostic, meaning we’ve seen this impact organizations in numerous sectors. Hundreds of companies have been impacted, in fact. Some are aware that this has happened to them. It’s very likely others are not.
I think it’s important to point out that the FBI (Federal Bureau of Investigation) and DOJ (U.S. Department of Justice) still consider this to be an active threat, even though there have been some arrests.
I wouldn’t necessarily classify these as cybersecurity incidents. Here, the threat actors are using stolen identities of individuals in the US to obtain jobs at victim companies. They’re not hacking into organizations’ systems. They have valid access because the victim companies have hired them.
Interestingly, the primary motivation of the threat actors behind these incidents is different from what we typically see. More commonly, a nation-state threat actor compromises an organization to steal its IP (intellectual property) or to pre-position access that could be leveraged for a disruptive or destructive attack against critical infrastructure. Volt Typhoon was a good example of that. Or we see threat actors stealing or extorting money for their own financial gain.
Here, North Korean IT workers are obtaining jobs at US companies, and they’re actually working there. These are always fully remote positions. And they’re sending their wages to the North Korean regime. So it’s a fundraising mechanism, in essence, for North Korea’s nefarious activities, including its nuclear weapons program.
Like many threats, this one has evolved over the past few years. Initially, we rarely saw these IT workers do anything other than work at these companies. And ironically, many were reportedly pretty good at their jobs. They were simply working and sending their paychecks to North Korea. That changed in the fall of 2024, when we started to see evidence that these IT workers would try to steal a company’s data and attempt to extort the company once they were caught.
You mentioned AI earlier in the conversation. We’re seeing AI drive new kinds of attacks, including impersonation via deepfakes, spear phishing, and enhanced translations. What can companies do to protect themselves against these AI-augmented threats?
Today, threat actors are able to leverage AI to create extremely believable deepfake attacks that impersonate real individuals, often senior executives of a target organization. A well-publicized example occurred in early 2024: an attack on a global design company in which the threat actor used a realistic, real-time deepfake to convince a Hong Kong–based employee in the company’s finance department to wire $25 million to accounts the threat actor controlled. The attack was successful, and the threat actor got away with the stolen funds.
Anyone can be a victim. Imagine sitting at your desk and receiving a video call from a senior executive at your company. You hit “accept” and her image pops up. She says, “Hello,” and it looks exactly like her. You may have been in her office before, and her background looks exactly like her office. Everything seems legitimate. She speaks with urgency, referring to a sensitive or secret transaction. She directs you to take a certain action, such as approving something or initiating some process. What would you do? That’s what we’re dealing with here.
Coming back to your question, employee training and awareness initiatives are among the most effective techniques for preventing these attacks. There are also a variety of technical controls that companies’ IT departments can consider to enhance their preparedness.
Finally, every company should be conducting an annual cyber tabletop exercise with its outside counsel and consider incorporating a deepfake into the scenario. We started doing that with clients last year, and it’s proven to be very eye-opening.
In terms of just realizing how convincing a lot of these deepfakes are?
Precisely. So, we will often pick a particular individual ahead of time. We did one where the CEO’s image came up. It looked like a (Microsoft) Teams call. He was sitting in his office, and he directed the recipient to take a certain action. It was just very eye-opening.
Well, Pete, thanks so much for your time. I really appreciate it.
Thanks for having me.
This informational piece, which may be considered advertising under the ethical rules of certain jurisdictions, is provided on the understanding that it does not constitute the rendering of legal advice or other professional advice by Goodwin or its lawyers. Prior results do not guarantee similar outcomes.
Contacts

Peter M. Marta
Partner
