Author: imToken
Looking back at 2025, if you feel that on-chain scams are becoming increasingly "tailored to you," it is not an illusion.
With the widespread adoption of LLMs, hackers' social engineering attacks have evolved from clumsy mass emails to "precision targeting": AI can automatically generate highly enticing, customized phishing content by analyzing your on-chain and off-chain preferences, and can even convincingly mimic the tone and reasoning of your friends on social channels such as Telegram.
It can be said that on-chain attacks are entering a truly industrialized stage. In this context, if the shields we hold are still in the "manual era," security itself will undoubtedly become the biggest bottleneck for the large-scale adoption of Web3.
I. Web3 Security Falls Behind: When AI Joins On-Chain Attacks
If the past decade saw Web3 security issues primarily stemming from code vulnerabilities, a significant shift since 2025 is the "industrialization" of attacks, while security measures have not kept pace.
Phishing websites can be mass-generated using scripts, and fake airdrops can be automatically and precisely delivered, making social engineering attacks no longer reliant on hackers' deceptive skills but rather on model algorithms and data scale.
To understand the severity of this threat, we can break down a simple on-chain swap. You will find that risk runs through the entire lifecycle of the transaction, from creation to final confirmation:

- Before interaction: you may have landed on a phishing page disguised as the official website, or used a DApp frontend carrying a malicious backdoor;
- During interaction: the token contract you call may contain "backdoor logic," or the counterparty itself may be a tagged phishing address;
- During authorization: hackers often trick users into signing seemingly harmless signatures that actually grant "unlimited deduction permissions";
- After submission: even if every step was performed correctly, MEV searchers may still be lying in wait in the mempool, ready to capture your gains with a sandwich attack.

This extends beyond swaps to every interaction type: transfers, staking, minting, and more. Risk is present at every stage of the pipeline, from creation and signing through broadcasting, on-chain execution, and final confirmation, and a failure at any single point renders an otherwise secure interaction futile.

Under the current account model, even the most secure private-key protection cannot withstand a single accidental click by a user; even the most rigorous protocol design can be bypassed by one authorized signature; and even the most decentralized system remains most vulnerable to human error.

This exposes a fundamental problem: if attacks have entered the stage of automation and intelligence while defense remains at the level of human judgment, security itself becomes the bottleneck. Ordinary users still lack a one-stop solution that protects the entire transaction process.
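The "unlimited deduction permissions" above have a concrete on-chain form: an ERC-20 approve() call whose amount is the maximum uint256. Below is a minimal, hypothetical sketch of a wallet-side check; the function selector and calldata layout are the standard ERC-20 ABI, while `describe_approval` and its warning strings are illustrative assumptions, not any wallet's real API:

```python
# Hypothetical wallet-side check: decode ERC-20 approve() calldata and warn
# when the allowance being granted is effectively unlimited.
APPROVE_SELECTOR = "095ea7b3"  # first 4 bytes of keccak256("approve(address,uint256)")
MAX_UINT256 = 2**256 - 1

def describe_approval(calldata_hex: str) -> str:
    """Translate approve() calldata into a plain-language description."""
    data = calldata_hex.removeprefix("0x")
    if data[:8] != APPROVE_SELECTOR:
        return "not an approve() call"
    spender = "0x" + data[8 + 24 : 8 + 64]    # last 20 bytes of the first 32-byte word
    amount = int(data[8 + 64 : 8 + 128], 16)  # second 32-byte word
    if amount == MAX_UINT256:
        return f"WARNING: unlimited allowance granted to {spender}"
    return f"allowance of {amount} granted to {spender}"
```

A phishing signature request for "unlimited deduction" shows up here as the maximum amount; a wallet that surfaces this in plain language turns an opaque hex blob into a decision the user can actually make.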
AI, however, has the potential to help us build a security solution for end users that covers the entire transaction lifecycle, providing 24/7 defense of user assets.

II. What can AI × Web3 do?

Let's look at the theoretical possibilities: in this game of technological asymmetry, how can the combination of AI and Web3 reconstruct the paradigm of on-chain security?

First, for ordinary users, the most direct threat is usually not a protocol vulnerability but social engineering and malicious authorization. At this level, AI plays the role of a tireless, 24/7 security assistant.

For example, AI can use natural language processing (NLP) to flag conversation scripts on social media or in private chat channels that are highly likely to be fraudulent. When you receive a "free airdrop" link, the AI security assistant will not only check the site against blacklists, but also analyze the project's social media footprint, the age of its domain registration, and the fund flows of its smart contract. If the link is backed by a newly created fake contract with no funds behind it, the AI marks a large red cross on your screen.

Malicious authorization is currently the leading cause of asset theft: hackers induce users to sign seemingly harmless signatures that actually grant "unlimited deduction permissions." When you click to sign, AI first runs a transaction simulation in the background and then tells you bluntly: "If you perform this operation, all the ETH in your account will be transferred to address A." This ability to translate obscure code into intuitive consequences is the strongest barrier against malicious authorization.

Second, on the protocol and product side, AI can move security from static auditing to real-time defense. In the past, Web3 security relied mainly on periodic manual audits, which were static and lagging.
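The "simulate first, then explain in plain language" step described above can be sketched in a few lines. A real implementation would fork chain state through a node's simulation or trace RPC; in this sketch the simulated balance changes are passed in directly, and all names are illustrative assumptions:

```python
# Hypothetical sketch: turn simulated balance changes into user-facing warnings.
from dataclasses import dataclass

@dataclass
class BalanceChange:
    asset: str
    delta: float  # negative means the asset leaves the user's account

def explain(changes: list[BalanceChange]) -> list[str]:
    """Translate a simulated transaction's balance diffs into intuitive messages."""
    messages = []
    for c in changes:
        if c.delta < 0:
            messages.append(f"If you sign, {abs(c.delta)} {c.asset} will LEAVE your account.")
        else:
            messages.append(f"You will receive {c.delta} {c.asset}.")
    return messages
```

For the "all ETH transferred to address A" scenario, the simulator would report a single large negative ETH delta, and the assistant's job is simply to say so before the user signs.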
AI is now being embedded in real-time security processes, starting with the familiar case of automated auditing. Where traditional auditing requires human experts to spend weeks reviewing code, AI-driven auditing tools (such as smart contract scanners built on deep learning) can complete logical modeling of tens of thousands of lines of code in seconds. On top of that model, AI can simulate thousands of extreme transaction scenarios and surface subtle "logic traps" or reentrancy vulnerabilities before the code is deployed. Even if a developer inadvertently leaves a backdoor, an AI auditor can raise the alarm before assets come under attack.

In addition, security tools like GoPlus can intercept transactions before hackers can act. GoPlus SecNet, for example, lets users configure an on-chain firewall that checks transaction security in real time through an RPC service: it screens the addresses and assets involved before a transaction is sent and proactively intercepts it if risks are found, covering transfer protection, authorization protection, protection against buying scam tokens, and MEV protection.

Furthermore, I am optimistic about GPT-style AI services: a 24/7 on-chain security assistant that guides novice users through common Web3 security problems and quickly provides a response plan for sudden incidents. The core value of such systems is not being "100% correct," but shifting risk detection from "after the fact" to "during" or even "before" the incident.

III. Where are the Boundaries of AI × Web3?

Of course, the right attitude remains cautious optimism. When discussing the new potential that AI × Web3 brings to fields such as security, we need to stay restrained: in the end, AI is just a tool.
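The "check before broadcast" pattern behind on-chain firewalls like the GoPlus SecNet service described above can be made concrete. In this minimal sketch the hard-coded risk list stands in for a live risk API, and `firewall_send` is an illustrative name, not a real endpoint:

```python
# Hypothetical sketch of an RPC firewall: screen a transaction's destination
# against a risk list before forwarding it to the upstream node.
RISK_LIST = {
    "0xdeadbeef00000000000000000000000000000000": "tagged phishing address",
}

class BlockedTransaction(Exception):
    pass

def firewall_send(tx: dict, forward) -> str:
    """Forward the transaction only if its destination is not flagged."""
    reason = RISK_LIST.get(tx.get("to", "").lower())
    if reason:
        raise BlockedTransaction(f"blocked: {tx['to']} is a {reason}")
    return forward(tx)  # e.g. the real JSON-RPC submission to the node
```

Even this toy gate shows both the value and the limit of the approach: it stops known bad destinations cold, but it can only block what its risk data already knows about.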
AI should not replace user sovereignty, it cannot unconditionally guarantee the safety of user assets, and it certainly cannot automatically "intercept all attacks." Its reasonable positioning is to minimize the cost of human error while preserving decentralization.

In other words, AI is powerful but not omnipotent. A truly effective security system is the combined result of AI's technical strengths, users' own clear security awareness, and well-designed tooling, rather than a bet placed entirely on a single model or system. Just as Ethereum has always upheld the value of decentralization, AI should exist as an auxiliary tool: its goal is not to make decisions for people, but to help people make fewer mistakes.

Looking back at the evolution of Web3 security, a clear trend emerges. Early on, security meant "keep your mnemonic phrase safe"; in the middle stage it meant "don't click unfamiliar links, and promptly revoke stale authorizations"; today, security is becoming a continuous, dynamic, and intelligent process. In this process, the introduction of AI has not diminished the significance of decentralization; on the contrary, it makes decentralized systems more suitable for long-term use by ordinary users. It hides complex risk analysis in the background, turns key judgments into intuitive prompts, and gradually converts security from an extra burden into a "default capability."

This echoes an assessment I have made repeatedly: AI and Web3/Crypto are mirror images of the new era's "productive forces" and "relations of production." If we view AI as an ever-evolving "spear," one that greatly improves efficiency but can also be wielded maliciously at scale, then the decentralized system Crypto is building is a "shield" that must evolve in parallel.
From the d/acc perspective, the goal of this shield is not absolute security, but ensuring that even in the worst case the system remains trustworthy, leaving users options for exit and self-rescue.

In conclusion, the ultimate goal of Web3 has never been to make users understand more technology, but to protect them without their even noticing. So when attackers begin using AI, a defense system that refuses to become intelligent is itself a risk. Protecting assets is an endless game, and in this era, users who know how to arm themselves with AI will be the hardest fortress to breach. The significance of AI × Web3 may lie precisely here: not in creating absolute security, but in making security a capability that can be scaled and replicated.