DeepBrain AI and the Korean National Police Agency have introduced a GenAI deepfake detection system to combat rising digital crimes.

DeepBrain AI has partnered with the Korean National Police Agency to launch a GenAI deepfake detection system. This initiative addresses growing concerns over deepfake crimes, including phishing and election interference, which pose significant risks to society. The new system is designed to quickly and accurately analyze digital content, helping users verify its authenticity.

The deepfake detection system comprises two core components: comprehensive detection and voice detection. The comprehensive detection component evaluates behavioral patterns such as head angles, lip movements, and facial muscle changes to judge whether visual content is authentic, while the voice detection component examines audio characteristics such as frequency and noise for signs of manipulation. The full analysis takes just 5 to 10 minutes to classify content as “real” or “fake” and works on both videos and static images.
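
DeepBrain AI has not published implementation details, but the two-branch design described above can be illustrated with a minimal sketch. Everything below is hypothetical: the cue types (VisualCues, AudioCues), the placeholder scores, and the equal-weight fusion in classify are assumptions made for illustration, not the company’s actual models.

```python
from dataclasses import dataclass
from typing import List, Optional


# Hypothetical per-frame visual cues mirroring those named in the article
# (head angle, lip movement, facial muscle changes). A real system would
# derive these from trained models; here they are placeholder scores in
# [0, 1], where higher means "looks more synthetic".
@dataclass
class VisualCues:
    head_angle_score: float
    lip_sync_score: float
    facial_muscle_score: float


# Hypothetical audio cues (frequency and noise artifacts), also in [0, 1].
@dataclass
class AudioCues:
    frequency_artifact_score: float
    noise_artifact_score: float


def comprehensive_detection(frames: List[VisualCues]) -> float:
    """Average the behavioral-pattern scores across sampled frames."""
    if not frames:
        return 0.0
    per_frame = [
        (f.head_angle_score + f.lip_sync_score + f.facial_muscle_score) / 3
        for f in frames
    ]
    return sum(per_frame) / len(per_frame)


def voice_detection(audio: Optional[AudioCues]) -> float:
    """Score audio-manipulation cues; static images have no audio track."""
    if audio is None:
        return 0.0
    return (audio.frequency_artifact_score + audio.noise_artifact_score) / 2


def classify(frames: List[VisualCues],
             audio: Optional[AudioCues],
             threshold: float = 0.5) -> str:
    """Fuse both branches into the binary "real"/"fake" verdict."""
    visual = comprehensive_detection(frames)
    voice = voice_detection(audio)
    # Assumption: weight the branches equally when audio exists;
    # otherwise rely on the visual branch alone (e.g. for still images).
    combined = (visual + voice) / 2 if audio is not None else visual
    return "fake" if combined >= threshold else "real"


if __name__ == "__main__":
    frames = [VisualCues(0.8, 0.7, 0.9), VisualCues(0.6, 0.8, 0.7)]
    audio = AudioCues(frequency_artifact_score=0.75, noise_artifact_score=0.6)
    print(classify(frames, audio))  # -> "fake" with these illustrative scores
```

The same classify call handles still images by passing the frames from a single image and audio=None, which matches the article’s note that the system covers both videos and static images.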

Unlike most existing detection models, which rely on Western-centric training data, DeepBrain AI’s solution is trained with over one million Korean data points and 130,000 additional data points covering other Asian populations, significantly increasing detection accuracy.

Following its success with the Korean National Police Agency, DeepBrain AI plans to offer the system to other organizations, aiming to curb the spread of false information and digital deception. The service is available both as a SaaS product and as an on-premise solution.