Author: Haotian
Lately I've been watching the AI industry trend "downward": from the earlier mainstream consensus around concentrated compute and ever-larger models, a branch has emerged that favors small local models and edge computing.
You can see it in Apple Intelligence covering 500 million devices, Microsoft shipping the 330-million-parameter small model Mu for Windows 11, and Google DeepMind's robots running "offline".
What changes? Cloud AI competes on parameter scale and training data, where the ability to burn money is the core advantage; local AI competes on engineering optimization and scenario fit, and goes further on privacy, reliability, and practicality. (Hallucination in the mainstream general-purpose models will seriously limit their penetration into vertical scenarios.)
This actually opens a bigger opportunity for web3 AI. When everyone was competing on "generalized" capabilities (compute, data, algorithms), those were naturally monopolized by the traditional giants. Trying to take on Google, AWS, or OpenAI with the concept of decentralization alone was a pipe dream: no resource advantage, no technical advantage, and no user base.
But in a world of localized models + edge computing, the situation facing blockchain-based services looks very different.
When an AI model runs on the user's own device, how do you prove its output hasn't been tampered with? How do you coordinate across models while still protecting privacy? These are exactly the problems blockchain technology is good at...
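To make the "tamper-proof output" point concrete, here is a minimal sketch of my own (not the design of any project mentioned here, and names like attest_output are hypothetical): the device signs a hash of the model's output with a local key, so anyone holding the public key, or a chain that anchors the digest, can later check that the result wasn't altered.

```python
# Minimal sketch of device-side attestation for a local model's output.
# Assumes the `cryptography` package; all function names are illustrative.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A per-device keypair; in practice this would live in a secure enclave.
device_key = Ed25519PrivateKey.generate()
device_pubkey = device_key.public_key()

def attest_output(model_id: str, prompt: str, output: str) -> dict:
    """Hash the (model, prompt, output) tuple and sign the digest locally."""
    digest = hashlib.sha256(f"{model_id}|{prompt}|{output}".encode()).digest()
    return {"digest": digest.hex(), "signature": device_key.sign(digest).hex()}

def verify_attestation(model_id: str, prompt: str, output: str, att: dict) -> bool:
    """Recompute the digest and check the signature (raises if forged)."""
    digest = hashlib.sha256(f"{model_id}|{prompt}|{output}".encode()).digest()
    if digest.hex() != att["digest"]:
        return False
    device_pubkey.verify(bytes.fromhex(att["signature"]), digest)
    return True

# The attestation (or just its digest) could be anchored on-chain,
# while the prompt and output themselves stay on the device for privacy.
att = attest_output("local-slm-v1", "summarize my notes", "...summary...")
assert verify_attestation("local-slm-v1", "summarize my notes", "...summary...", att)
```

This only covers output integrity; privacy-preserving collaboration between models would need additional machinery on top.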
I've noticed some new web3 AI projects in this vein: Pantera-backed @Gradient_HQ (a $10M raise) recently launched the data communication protocol Lattica to tackle the data monopoly and black-box problems of centralized AI platforms; @PublicAI_'s EEG device HeadCap collects real human data to build a "human verification layer" and has already reached $14M in revenue. In effect, they are all attacking the "trustworthiness" problem of local AI.
In a word: only when AI truly "sinks" down to every device will decentralized collaboration go from a concept to a hard requirement.
Rather than keep grinding it out on the general-purpose track, why not think seriously about how to provide infrastructure for the localized AI wave?