U.S. companies are increasingly discovering that their remote IT hires aren't who they claim to be. Over the past two years, reports of North Korean nationals caught working for U.S. companies, particularly in IT and remote roles, have surged. These operations have been supercharged by innovations in real-time AI voice and video: operatives use AI face-swapping software to mask their appearance on video calls, AI tools to edit and forge documents, and large language models to answer technical questions and fulfill the duties of the job.

Many of these employees simply do their jobs, collecting paychecks that ultimately flow back to Pyongyang. Others are far more malicious, stealing intellectual property, planting malware, and feeding access to North Korea's cyber units. Many are instruments of the Reconnaissance General Bureau, North Korea's primary intelligence agency. They are trained by the state, closely supervised, and forced to meet quotas.

Often, these schemes are only possible with external help from American citizens. For example, Christina Chapman, a 50-year-old woman from Arizona, was sentenced to 8.5 years in prison for operating a laptop farm. The farm, which consisted of 90 laptops running remote-access software hosted out of her home, allowed North Korean operatives to appear as domestic U.S. workers by masking their true overseas location. Employees placed through her network secured jobs at Fortune 500 companies, including Nike.

"Authorities revealed Chapman's operation alone helped the workers get 309 jobs that generated $17.1 million in revenue through their salaries. Nearly 70 Americans had their identities stolen in the operation, authorities said." (Gerut, Fortune)

The Scale of the Issue

The scale of the issue is difficult to map out in its entirety, but there are enough resources to paint a reasonably accurate picture. CrowdStrike's 2025 Threat Hunting Report documented a 220% year-over-year increase in activity from FAMOUS CHOLLIMA, its designation for North Korean IT worker operatives. The surge was largely attributed to their use of generative AI tools, including Google Gemini, OpenAI's models, and real-time deepfake software, to obtain and maintain employment. In all, CrowdStrike found that 320 different companies were infiltrated by North Korean operatives in the last 12 months.

The United Nations estimates roughly 100,000 North Korean workers are deployed across 40 countries, although that figure spans all industries. The same UN report estimated approximately 4,000 IT workers specifically, generating between $250–600 million annually for the regime. Isolating the remote IT worker population precisely is difficult, as no single body tracks them comprehensively, but between the UN and CrowdStrike's findings, the picture is clear enough: this is not a fringe operation.

Detecting Fraudulent Employees

So, how does an employer detect a North Korean applicant? One almost comical example comes from Taro Aikuchi's interview. The interviewer was already suspicious of the applicant and asked him to say something disparaging about the Supreme Leader, Kim Jong Un, to prove he wasn't from North Korea. Because of North Korea's extremely harsh laws against criticizing the regime, offenses that can carry the death penalty and punishment for the offender's entire family, Aikuchi wisely refused and disconnected from the call.

CrowdStrike recommends a more rigorous approach, starting with vetting applicants' social media accounts. During a video call, interviewers can ask the applicant to turn their head or pick up an object, movements that tend to break AI deepfake overlays. Once hired, IT staff should validate USB and peripheral devices, which can reveal whether remote-access hardware is in use. Mistakes in translation can be telling, and unusual gaps in availability may indicate that an employee is working multiple jobs or living in a different time zone. Crucially, suspicion should be elevated if the employee is using an unauthorized VPN or proxy.
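That last check, flagging logins that originate from unauthorized VPN or proxy infrastructure, is one of the few that lends itself to simple automation. The sketch below is illustrative only and not from any vendor's tooling: it assumes a security team maintains a blocklist of known VPN/proxy egress CIDR ranges (the ranges and log entries here are placeholder documentation addresses) and flags any login whose source IP falls inside one.

```python
# Minimal sketch (hypothetical, not a vendor tool): flag logins whose
# source IP falls inside a blocklist of known VPN/proxy egress ranges.
import ipaddress

# Placeholder blocklist using RFC 5737 documentation ranges, standing in
# for real VPN/proxy provider CIDRs a security team would maintain.
VPN_RANGES = [ipaddress.ip_network(cidr) for cidr in (
    "203.0.113.0/24",
    "198.51.100.0/24",
)]

def is_flagged(ip: str) -> bool:
    """Return True if the source IP belongs to a blocklisted range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in VPN_RANGES)

def flag_logins(logins):
    """Yield (user, ip) pairs whose source IP matches the blocklist."""
    for user, ip in logins:
        if is_flagged(ip):
            yield user, ip

# Illustrative log: one ordinary login, one from a blocklisted range.
logins = [("alice", "192.0.2.10"), ("bob", "203.0.113.45")]
print(list(flag_logins(logins)))  # [('bob', '203.0.113.45')]
```

In practice this would be one signal among many, fed by commercial VPN/proxy intelligence feeds rather than a static list, and correlated with the other indicators above before escalating.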

With the latest advancements in generative AI, we're in an entirely new era of fraudulent employment. Documents will need to be more thoroughly verified, and a video call is no longer enough to truly know who you're speaking to. These challenges will only grow as the tools available to threat actors become more sophisticated, and the burden falls on companies and governments to keep pace.