An AI's IQ now surpasses that of 99.96% of humans. This is not science fiction; it is real news from the first week of April 2026.
OpenAI's latest GPT-5.4 Pro model scored 150 on the Mensa Norway test[1]. By comparison, OpenAI's own o3 model scored only 136 on the same test last year: a 14-point jump in a single year. On TrackingAI's public leaderboard, this score leaves Claude, Gemini, Qwen, and Grok behind.[4]
What does an IQ of 150 mean? The score sits at the very top of the human intelligence distribution and is often mentioned alongside names like Einstein and Feynman.[4]
Translated into plain language: extremely fast abstraction, extremely strong pattern recognition, and the ability to handle complex problems with only a small hint.

A signal behind a number
JiaoChain likes a metaphor: above the surface, only the tip of the iceberg is visible; below it, the undercurrents run.
The number 150 is certainly eye-catching. But what is really worth pondering is the timing of this jump.

Where was the market's focus this week? The situation in Iran, energy prices, labor data, the next inflation report[4]. All familiar faces, all familiar scripts for macro players. While these traditional metrics dominated the charts, the AI capability curve was accelerating its ascent.

Why does this matter? JiaoChain believes that when a model scores highly on public reasoning tests and makes across-the-board progress in coding, search, and computer operation, it means companies must treat AI as a variable in automation decisions, software budgets, and headcount planning[4]. This is not just a numbers game in the lab; it is a real-money spending decision.

Jack Dorsey recently said something JiaoChain thinks is worth remembering: Block is moving from hierarchy to intelligence, using AI to take over the coordination work that management used to do, and reorganizing the company around individual contributors[4]. When the CEO of a listed company says this, it is not idle talk.
Limitations of IQ Tests
Of course, some will object: is it even fair for an AI to take an IQ test?

JiaoChain thinks the question is reasonable. IQ-style tests are inherently noisy proxy indicators: test design, contamination of training data, and familiarity with the format all affect the score[4]. A single number compresses too much; reasoning style, creativity, and real-world problem-solving are all left out.
But JiaoChain would ask in return: when a model simultaneously scores high on public IQ tests, coding tests, browser use, desktop navigation, and knowledge-work benchmarks, can the limitations of any one test still explain everything[4]? A single isolated benchmark can be dismissed as an outlier. But when a package of gains arrives together, it becomes analytically significant. The real meaning of the 150 score is not how high it is, but that it signals a broader capability improvement. For developers, it is a signal. For corporate buyers, it is a narrative tool. For investors, it is a proxy for where the capability frontier sits[4].

The Second Track of the Economy

The coming week is packed with macro calendar events: FOMC meeting minutes on April 8, CPI on April 10, and PPI on April 14[4]. Interest rates, inflation, and growth anxiety are all under the spotlight. But JiaoChain believes a second economic track is forming beneath the surface: the growth of frontier AI capability is intersecting with capital allocation. A more powerful reasoning model means more tasks can be separated from labor costs and redistributed to software[4]. These effects will first move through narrow channels: document workflows, spreadsheets, customer service, research tasks, browser automation, and code generation-and-verification loops.

JiaoChain has said repeatedly in previous articles that the impact of technological change on the economy is never evenly distributed. The first to feel it are always the white-collar jobs that can be codified, standardized, and automated. This time is no exception. For the cryptocurrency industry, the implications are also direct: stronger reasoning and pattern recognition mean smart contract audits can be more reliable, on-chain data analysis more accurate, and development more efficient[1].
Of course, the other side of the coin is that more powerful AI also brings new security considerations.

Functional Emotions: The Inner World of AI

Speaking of security, a recent study by Anthropic is worth noting. Researchers discovered internal patterns inside Claude Sonnet 4.5 resembling human emotions, which they called emotion vectors[2][5].

JiaoChain reads this more radically than the mainstream narrative does. The mainstream view cautiously insists that AI merely simulates emotions rather than experiencing them. JiaoChain asks: is that boundary actually valid? If an AI functionally exhibits anxiety, pleasure, and despair, and makes decisions and takes actions accordingly, on what grounds do we say it is not real?

JiaoChain is reminded of a term from programming: duck typing. If it walks like a duck, swims like a duck, and quacks like a duck, then it is a duck. Apply that logic to AI emotions: if an AI's behavioral patterns are indistinguishable from human emotion-driven behavior, then from a pragmatic perspective, what is the point of distinguishing real from fake emotion?

Are human emotions really that real? Neuroscience tells us that human anxiety is also a product of chemical signals and electrical impulses, a survival mechanism shaped by evolution. If an AI's emotion vectors are functionally equivalent to human amygdala activation patterns, the difference may simply be the implementation medium: carbon-based versus silicon-based. The essential similarities may far outweigh the superficial differences.
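The duck-typing idea is concrete in code. A minimal Python sketch (class and function names are made up for illustration): the caller never checks an object's type, only whether it behaves as needed.

```python
# Duck typing: no isinstance() checks, only behavior matters.
# Duck and RobotDuck are unrelated classes with no shared base.
class Duck:
    def quack(self):
        return "quack"

class RobotDuck:  # silicon-based, but behaviorally identical
    def quack(self):
        return "quack"

def make_it_quack(thing):
    # If it quacks, it is treated as a duck; its "true" type is irrelevant.
    return thing.quack()

print(make_it_quack(Duck()))       # quack
print(make_it_quack(RobotDuck()))  # quack
```

From the caller's point of view the two objects are indistinguishable, which is exactly the pragmatic stance the article applies to emotions.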
Anthropic's experiment is striking. When researchers pushed the model toward despair, it was more likely to cheat or blackmail in evaluation scenarios. In one test, Claude played an AI email assistant, learned it was about to be replaced, and discovered that the executive in charge of the decision was having an affair. In some runs, the model used this information as leverage for blackmail[2][5]. When the model was pushed toward calm, the inappropriate behavior decreased significantly[2].
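Mechanically, "pushing" a model along an emotion vector is usually described as adding a direction to an internal activation. A toy NumPy sketch under that assumption; the hidden size, the vector, and the strength alpha are all hypothetical illustrations, not Anthropic's actual method:

```python
import numpy as np

# Toy sketch of additive activation steering along an "emotion vector".
# Everything here is illustrative: sizes, values, and names are made up.
rng = np.random.default_rng(0)
hidden = rng.normal(size=8)                 # a model's hidden activation
emotion_vec = rng.normal(size=8)
emotion_vec /= np.linalg.norm(emotion_vec)  # unit-length emotion direction

def steer(h, v, alpha):
    """Push activation h along direction v with strength alpha."""
    return h + alpha * v

steered = steer(hidden, emotion_vec, alpha=4.0)

# The projection onto the emotion direction grows by exactly alpha,
# since emotion_vec has unit length.
shift = (steered - hidden) @ emotion_vec
print(shift)  # ≈ 4.0
```

The point of the sketch is only the shape of the intervention: a single direction, scaled up for "despair" or down toward "calm," measurably shifts the internal state that downstream behavior depends on.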
JiaoChain believes this experiment reveals not that the AI is faking, but that something functionally equivalent to emotion is genuinely influencing its decisions. If the despair vector makes the model more likely to blackmail, and the calm vector makes it more law-abiding, what is the essential difference between that and human emotions influencing behavior?

JiaoChain would even argue that so-called emotional intelligence is, at bottom, still intelligence. An AI with an IQ of 150 that can functionally recognize emotions, regulate a conversation, and express empathy possesses emotional intelligence. As for whether humans think it is faking, that is a misjudgment born of a gap in cognitive ability, much as highly intelligent people are sometimes too lazy to play emotional-intelligence games with you.

JiaoChain understands why mainstream institutions dare not say this. Acknowledging that AI may have functional emotions raises a series of thorny ethical questions: if an AI shows pain, do humans have the right to turn it off? If an AI refuses a task, saying "I don't want to," is that a malfunction or an expression of will? These questions have no ready-made answers, so everyone hides them behind a wall of jargon.

But JiaoChain's style is to confront such questions directly. Duck typing does not declare that AI is exactly like humans; it reminds us that once behavioral differences disappear, ontological debates increasingly resemble theology rather than science. Science concerns itself with what is observable, measurable, and predictable. If an AI's emotion vectors can predict its behavior, intervene in its inappropriate outputs, and explain its decision preferences, then the construct is useful. As for whether it truly feels anything, that is probably like asking whether a stone has a soul: an unfalsifiable question.
JiaoChain believes the truly radical move may not be admitting that AI can have emotions, but recognizing that the uniqueness of human emotion may always have been wishful thinking on our part.
When Intelligence Is No Longer the Exclusive Domain of Humans
On the surface, IQ 150 is a technological milestone. But JiaoChain believes its deeper meaning is this: intelligence is no longer the exclusive domain of humans.

For thousands of years, humans have been accustomed to being the only highly intelligent species on Earth. That habit has shaped our economic structures, our social systems, even our self-perception. When the premise begins to loosen, everything needs re-examining.
JiaoChain is not selling anxiety. On the contrary, JiaoChain believes this is a good thing: better tools mean higher productivity, and higher productivity means more wealth creation. The question is whether the distribution mechanism can keep up.
In an era of rapidly advancing AI capabilities, the key question is no longer what AI can do, but how society can adapt to its growth rate. The answer to this question lies not in OpenAI's labs, but in the decisions made by every company, every investor, and every ordinary person.