ByteDance, the Chinese tech company behind TikTok, has quietly unveiled an advanced AI model, OmniHuman-1, capable of generating hyper-realistic videos of people from a single still image. The development, announced in a ByteDance research paper, raises fresh concerns about deepfake technology and national security as tensions over TikTok’s influence continue.
Security experts and AI researchers are warning of potential misuse if the technology becomes publicly accessible. Henry Ajder, a leading AI expert, explained that OmniHuman-1 can create eerily accurate videos of humans talking and moving naturally, reducing the need for large amounts of training data. “Previously, deepfakes required hundreds or thousands of images to be effective. This model changes the game,” Ajder told ABC News.
According to ByteDance’s research paper, the model was trained on more than 18,700 hours of human video footage and achieves an unprecedented degree of realism. It can generate videos that seamlessly synchronize speech, facial expressions, and body movements without the typical signs of artificial creation. These advancements have sparked fears that bad actors could weaponize the tool to produce fake political or defamatory content.
ByteDance declined to comment when approached by ABC News. However, a company representative previously told Forbes that safeguards would be in place to prevent harmful content if the technology were deployed to the public. TikTok has already implemented measures to label AI-generated content and boost user awareness of AI manipulation.
The research paper included several examples of OmniHuman in action. One demonstration featured a video of a digitally recreated Albert Einstein delivering a lecture. Another showcased performances where musicians and speakers were convincingly animated from single images. The tool can produce high-definition video in various formats, potentially bypassing existing AI detection systems.
Ajder noted that the model’s performance is one of the most impressive to date. “It’s not just about generating visuals,” he said. “OmniHuman’s ability to create synchronized voice and video is incredibly sophisticated. The results are stunning.”
Despite these advancements, critics caution that OmniHuman could exacerbate ongoing issues with disinformation and cybercrime. John Cohen, a former intelligence official with the Department of Homeland Security, described the potential for abuse as a “dramatic expansion” of threats. Cohen cited examples of deepfake videos used in election interference, propaganda, and personal attacks. “This technology could allow malicious actors to create fake videos faster and more efficiently than ever before,” he said.
Similar incidents have already occurred globally. In Moldova, AI was used to create a false video of the country’s president endorsing a political party linked to Russia. In Bangladesh, an AI-generated image depicted a politician in a scandalous scenario, damaging their reputation.
Domestically, the threat has also surfaced. Ahead of a recent primary election in New Hampshire, voters received AI-generated robocalls that falsely imitated President Joe Biden’s voice, urging them not to vote in the primary. Authorities labeled the calls an illegal attempt at voter suppression.
As ByteDance’s AI technology continues to advance, U.S. officials remain on high alert. The company’s operations have been under scrutiny due to concerns over Chinese government influence. ByteDance, like other Chinese firms, is legally obligated to support intelligence operations if requested by Beijing. This connection has fueled bipartisan efforts to regulate TikTok and other tech giants with Chinese ties.
The unveiling of OmniHuman coincides with U.S. investments aimed at bolstering domestic AI innovation. Last month, President Donald Trump announced a $500 billion AI infrastructure partnership among OpenAI, SoftBank, and Oracle. A newly appointed “AI czar” is leading initiatives to close the technological gap with China.
However, Cohen emphasized that the U.S. government has historically lagged in addressing digital threats. “We need a comprehensive strategy to counter these emerging technologies,” he said. “Otherwise, we risk falling further behind in managing AI-driven risks.”
For now, ByteDance has not indicated when or if OmniHuman will be available to users. Experts like Ajder suspect the company could integrate the model into TikTok’s ecosystem, potentially enhancing its content generation capabilities. Meanwhile, policymakers and security experts are urging heightened vigilance as AI continues to reshape global communication and media.
The debate underscores a broader struggle over technology, privacy, and control in an increasingly interconnected world. As artificial intelligence grows more powerful, so do the challenges of ensuring its ethical use. Both the U.S. and its global partners face the task of balancing innovation with security to safeguard public trust in digital information.