Author | Sid @IOSG
The Current State of Web3 Games
With the emergence of newer, more attention-grabbing narratives, Web3 gaming as an industry has taken a back seat in both primary and public market narratives. According to Delphi's 2024 report on the gaming industry, Web3 games have raised less than $1 billion in cumulative primary-market funding. This is not necessarily a bad thing: it suggests the bubble has subsided and that capital may now flow toward higher-quality games. The following figure is a clear indicator:

Throughout 2024, the number of users in gaming ecosystems such as Ronin soared, and with the arrival of high-quality new games such as Fableborn, activity is almost comparable to Axie's glory days in 2021.

Gaming ecosystems (L1s, L2s, RaaS) are increasingly becoming the Steam of Web3. They control distribution within the ecosystem, which gives game developers a reason to build there: it helps them acquire players. According to an earlier Delphi report, user acquisition for Web3 games costs roughly 70% more than for Web2 games.
Player Stickiness
Retaining players is just as important as attracting them, if not more so. While data on player retention for Web3 games is lacking, player retention is closely tied to the concept of "Flow," a term coined by Hungarian psychologist Mihaly Csikszentmihalyi.
The "flow state" is a psychological concept in which a player achieves a perfect balance between challenge and skill level. It's like "getting in the zone" — time seems to fly and you're completely immersed in the game.

Games that continuously create flow states tend to have higher retention rates due to the following mechanisms:
Progression design
Early game: simple challenges, build confidence
Mid-game: gradually increase difficulty
Late game: complex challenges, master the game
As players' skills improve, this fine-grained difficulty adjustment keeps them in their own rhythm, inside the flow channel
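The progression curve above can be sketched as a simple feedback loop. The following is a minimal, illustrative sketch of dynamic difficulty adjustment, not code from any real game: all class names and constants here are hypothetical, and real systems tune far more signals than win rate.

```python
# A toy "flow director": keep challenge proportional to a rolling
# estimate of player skill, so the player stays challenged but winning.

class FlowDirector:
    def __init__(self, target_win_rate=0.6, smoothing=0.1):
        self.target = target_win_rate    # the "flow channel" sweet spot
        self.smoothing = smoothing
        self.win_rate = target_win_rate  # rolling estimate of recent outcomes
        self.difficulty = 1.0

    def record_match(self, won: bool) -> float:
        # Exponential moving average of recent match results.
        self.win_rate += self.smoothing * ((1.0 if won else 0.0) - self.win_rate)
        # Winning too often? Raise difficulty. Losing too often? Ease off.
        self.difficulty *= 1.0 + 0.5 * (self.win_rate - self.target)
        self.difficulty = max(0.1, self.difficulty)  # never punish beginners
        return self.difficulty
```

A win streak nudges difficulty up gradually (early game: build confidence), while a losing streak pulls it back toward the player's actual skill, which is the balance the flow state depends on.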
Engagement loop

Game and AI Symbiosis
AI agents can help players achieve this state of flow. Before we explore how to achieve this goal, let's first understand what kind of agents are suitable for application in the field of games:
LLM and Reinforcement Learning
Game AI is all about speed and scale. With LLM-powered agents, every decision requires a call to a large language model. It's like putting a middleman before every step: the middleman is smart, but waiting for its response makes everything slow and painful. Now imagine doing this for hundreds of characters in a game. It would be not only slow but expensive. This is the main reason we haven't yet seen LLM agents in games at scale. The largest experiment so far is a 1,000-agent civilization in Minecraft; at 100,000 concurrent agents across different maps, the cost would be prohibitive. Players would also suffer interruptions, since each new agent adds latency. That breaks the flow state.
Reinforcement Learning (RL) takes a different approach. Think of it as training a dancer rather than feeding them step-by-step instructions through an earpiece. With RL, you spend time upfront teaching the AI how to "dance" and how to respond to different in-game situations. Once well trained, the AI is naturally fluid, making decisions in milliseconds without ever phoning home. You can have hundreds of these trained agents running in your game, each deciding independently based on what it sees and hears. They are not as eloquent or as flexible as LLM agents, but they act quickly and efficiently.
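The cost asymmetry described above can be made concrete with a toy sketch. This is not a real RL system: the "policy" below is a hand-written lookup table standing in for a trained model, and the scenario names are invented. The point is only that a trained policy's per-decision cost is a cheap local computation, while an LLM agent would pay a network round-trip per decision.

```python
# Toy illustration: all the expensive work happens at training time,
# so per-frame inference is a cheap local lookup with no network calls.
import random

def train_policy():
    # Stand-in for hours of RL training, compressed into a table:
    # for each (enemy_near, low_health) situation, the learned action.
    return {
        (True, True): "retreat",
        (True, False): "attack",
        (False, True): "heal",
        (False, False): "explore",
    }

def run_frame(policy, n_agents=500):
    # Every agent decides locally, every frame, with no round-trip.
    actions = []
    for _ in range(n_agents):
        state = (random.random() < 0.5, random.random() < 0.3)
        actions.append(policy[state])
    return actions

policy = train_policy()
actions = run_frame(policy)
```

Five hundred table lookups complete in well under a millisecond on commodity hardware; the equivalent LLM-backed loop would need 500 model calls, each typically taking hundreds of milliseconds, every frame.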
The real magic of RL appears when these agents must work together. Where LLM agents need lengthy "conversations" to coordinate, RL agents develop an implicit understanding during training, like a football team that has practiced together for months. They learn to anticipate each other's moves and coordinate naturally. This isn't perfect, and they sometimes make mistakes an LLM wouldn't, but they operate at a scale LLMs can't match. For gaming applications, that tradeoff is usually worth it.

Agents and NPCs
Agents as NPCs will solve the first core problem facing many games today: player liquidity. P2E was the first experiment in using cryptoeconomics to solve the player liquidity problem, and we all know how that turned out.
Pre-trained agents play two roles here: they fill matches during playtesting, and they provide baseline player liquidity when the game launches.
While this seems fairly obvious, it is hard to build. Indie and early Web3 studios don't have the budget to hire an AI team, which opens an opportunity for any agent-framework service provider with RL at its core.
Games can work with these service providers during playtesting and testing to lay the foundation for player liquidity when the game is released.
This allows game developers to focus on game mechanics and making their games more fun. As much as we like to incorporate tokens into games, games are games and they should be fun.
Proxy Players
League of Legends, one of the most played games in the world, has a black market where players pay to have characters trained up with the best attributes, a practice the game explicitly prohibits.
Representing game characters and their attributes as NFTs lays the foundation for a legitimate market to do exactly this.
What if a new subset of "players" emerged to serve as coaches for these AI agents? Players could coach agents and monetize them in different ways, for example by winning matches, or by hiring them out as "training partners" for esports players and passionate gamers.
The Return of the Metaverse?
Early versions of the Metaverse simply created an alternative reality rather than an ideal one, and therefore missed the mark. AI agents can help Metaverse residents build the ideal world they actually want to escape into.
This is where LLM-based agents can come in handy, in my opinion. Perhaps someone could populate their world with pre-trained agents that are domain experts and can have conversations about things they like. If I create an agent trained on 1,000 hours of Elon Musk interviews, and users want to use an instance of this agent in their world, I can get rewarded for that. This would create a new economy.
With Metaverse games like Nifty Island, this could become a reality.
In Today: The Game, the team has created an LLM-based AI agent called "Limbo" (a speculative token has already been launched), with the vision of multiple agents interacting autonomously in this world while we watch a 24/7 live stream.

How does Crypto integrate with this?
Crypto can help solve these problems in different ways:
Players contribute their game data to improve models, get better experiences, and get rewarded for it
Coordinate multiple stakeholders such as character designers, trainers, etc. to create the best in-game agents
Create a marketplace to own and monetize in-game agents
One team is doing all of this and more: ARC Agents.
They have built the ARC SDK, which lets game developers create human-like AI agents from game parameters. With a very simple integration, it addresses player liquidity, cleans game data and turns it into insights, and helps players stay in flow by adjusting difficulty levels. To do this, they use Reinforcement Learning. They initially developed a game called AI Arena, in which you train your AI character to fight. This gave them a baseline learning model that became the foundation of the ARC SDK, forming a DePIN-like flywheel. All of this is coordinated by their ecosystem token, $NRN. The Chain of Thought team explains this well in their article on ARC Agents.

Games like Bounty are taking an agent-first approach, building agents from the ground up in a wild-west world.

Conclusion
The convergence of AI agents, game design, and Crypto is not just another tech trend: it has the potential to solve a variety of problems that plague indie games. The great thing about AI agents in gaming is that they enhance what makes games fun: good competition, rich interactions, and challenges that keep people coming back for more. As frameworks like ARC Agents mature and more games integrate AI agents, we'll likely see entirely new kinds of gaming experiences emerge. Imagine worlds that come alive not because of the other players in them, but because the agents in them learn and evolve with their community.
We're moving from a "play-to-earn" era to something much more exciting: games that are both genuinely fun and massively scalable. The next few years are going to be fantastic for the developers, players, and investors watching this space. The games of 2025 and beyond will not only be more technologically advanced, they will be fundamentally more engaging and more alive than anything we've seen before.