With the large-scale adoption of artificial intelligence, AIGC (AI-Generated Content) is profoundly transforming content creation. However, while convenience and efficiency continue to improve, the security risks introduced by AIGC are also escalating rapidly. Among them, deepfake technology has become a key driver of online fraud, misinformation, and identity impersonation. Increasingly realistic forged facial images and videos are placing unprecedented pressure on enterprise identity verification, risk control processes, and information security systems.
Against this backdrop, Yuanli Technology has continued to focus on deepfake detection research. Recently, a research paper on AIGC deepfake liveness detection, authored by the FinAuth algorithm team, was accepted by the 2025 International Joint Conference on Neural Networks (IJCNN) and has been indexed by IEEE.
IJCNN is one of the most authoritative international conferences in the field of neural networks and computational intelligence. This acceptance not only represents international recognition of our research achievements, but also demonstrates our company’s continuous innovation and technical strength in deepfake detection technologies.
Escalating Deepfake Risks: Traditional Detection Technologies Struggle with Emerging Attacks
In recent years, the rapid development of GANs, diffusion models, and other generative technologies has significantly improved the quality of forged images. Along with this progress, two major challenges have emerged across the industry:
1. Strong Dependence on Specific Forgery Techniques
Many existing detection methods perform well only on specific types of forged data; when faced with unknown or newly emerging generation models, their accuracy drops sharply. Enterprises are therefore often left reacting to new deepfake attacks, continually retraining models and adapting to each new forgery method, which makes long-term, stable security protection difficult to achieve.
2. Difficulty in Identifying Advanced Forgeries
As forged images become increasingly realistic, detection methods that rely on explicit features such as facial expressions or skin tone are gradually losing effectiveness. Many advanced deepfakes achieve near-perfect visual disguise, making it difficult for traditional algorithms to capture the subtle anomalies hidden within fine-grained texture details.
These challenges significantly increase the risks associated with identity fraud and verification processes, driving an urgent industry need for more generalizable and robust detection methods.
A Frequency-Domain Self-Supervised Deepfake Detection Framework Proposed and Accepted by IJCNN 2025
The IEEE-accepted paper introduces a forged face detection technology based on frequency-domain self-supervised learning. The core idea of this approach is to analyze an image’s frequency information by decomposing it into high-frequency and low-frequency components, with particular emphasis on high-frequency components, which typically represent fine textures and edge details within images.
Through this method, the algorithm can more accurately identify abnormal features in forged images, thereby improving overall detection accuracy.
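To make the core idea concrete, the sketch below splits a grayscale image into low- and high-frequency components using a circular mask in the 2-D Fourier domain. The function name and the `cutoff_ratio` parameter are illustrative choices, not details taken from the paper:

```python
import numpy as np

def split_frequency_bands(image, cutoff_ratio=0.1):
    """Split a grayscale image into low- and high-frequency components
    via a circular low-pass mask in the 2-D Fourier domain.
    cutoff_ratio (illustrative, not from the paper) sets the mask
    radius as a fraction of the shorter image side."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))

    # Circular low-pass mask centered on the DC component
    yy, xx = np.ogrid[:h, :w]
    radius = cutoff_ratio * min(h, w)
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_mask = dist <= radius

    low = np.fft.ifft2(np.fft.ifftshift(spectrum * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(spectrum * ~low_mask)).real
    return low, high
```

Because the Fourier transform is linear, the two components sum back to the original image; the high-frequency component isolates the fine textures and edge details the article describes.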
Compared with existing methods, the proposed approach demonstrates significant advantages in the following aspects:
1. Focusing on high-frequency image information to reduce interference from facial expressions and skin tone, thereby improving accuracy
Traditional forged-image detection methods can be affected by factors such as facial expressions or skin tone. In contrast, this approach uses self-supervised learning to focus on reconstructing high-frequency information, concentrating on subtle details that are difficult to perceive. Because it does not rely on prominent appearance cues, the method is less easily misled by an image's surface-level realism, improving detection accuracy.
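To illustrate what such a reconstruction-based pretext task could look like, here is a minimal sketch: strip an image's high-frequency residual, ask a model (pretrained on real faces) to predict that residual from the remaining low-frequency content, and use the prediction error as an anomaly score. The box-blur residual and all names below are simplified stand-ins, not the paper's actual architecture:

```python
import numpy as np

def high_freq_residual(image, ksize=5):
    """High-frequency residual via subtracting a box-blurred copy
    (a simple stand-in for a frequency-domain split)."""
    pad = ksize // 2
    padded = np.pad(image, pad, mode="edge")
    h, w = image.shape
    blurred = np.zeros_like(image, dtype=float)
    for dy in range(ksize):
        for dx in range(ksize):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= ksize * ksize
    return image - blurred

def anomaly_score(image, reconstruct):
    """Score an image by how poorly `reconstruct` (standing in for a
    model pretrained on real faces) predicts its high-frequency
    residual from the low-frequency content. Higher suggests forgery."""
    target = high_freq_residual(image)          # what the model must predict
    low = image - target                        # what the model is given
    pred = reconstruct(low)
    return float(np.mean((pred - target) ** 2))
```

The intuition: a model trained only on real faces learns what genuine high-frequency texture looks like, so images whose fine details it cannot reproduce score as anomalous.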
2. Pretrained on real facial data, exhibiting strong generalization across different diffusion-based forgeries
By transforming images from the spatial domain to the frequency domain, this method enables more effective differentiation between real facial images and forged ones. Its key advantage lies in pretraining on real facial datasets, allowing the model to maintain strong generalization performance even when encountering previously unseen forged data generated by different diffusion models.
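Because the model is pretrained only on real facial data, a decision threshold can be calibrated from real samples alone, without ever seeing a given forgery generator during training. A minimal sketch of such one-class thresholding (the 95th-percentile cutoff is an assumed choice for illustration, not a value from the paper):

```python
import numpy as np

def fit_threshold(real_scores, quantile=0.95):
    """Calibrate a decision threshold from anomaly scores of held-out
    real faces only; no forged data is required. The quantile is an
    illustrative hyperparameter."""
    return float(np.quantile(real_scores, quantile))

def is_forged(score, threshold):
    """Flag an image whose anomaly score exceeds the real-data threshold."""
    return score > threshold
```

This one-class setup is why unseen diffusion-based forgeries can still be caught: anything whose high-frequency statistics deviate from real faces scores above the calibrated threshold, regardless of which generator produced it.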
AIGC technologies bring innovation and efficiency to society, but they also introduce new security challenges. Our company remains committed to independent technological research combined with academic openness, continuously investing in areas such as deepfake detection and defense algorithms.
The paper's acceptance by IJCNN and indexing by IEEE represent international recognition of our technical capabilities and mark an important step in promoting industry-wide efforts to address AIGC-related risks. Looking ahead, we will continue to explore more advanced deepfake detection solutions, safeguard digital identity security, and promote the healthy and responsible development of AI technologies.
This article provides only a summary of the core research content. Full technical details, experimental design, and result analysis are available on IEEE Xplore:


