Author: Kydo, Head of Narrative at EigenCloud
Translation: Saoirse, Foresight News
From time to time, friends send me tweets mocking restaking, but these mockeries never quite hit the mark. So I decided to write a reflective "rant."
You might think I'm too close to the issue to remain objective, or too proud to admit that "we miscalculated." You might think that even if everyone agreed "restaking failed," I would still write a long explanation rather than utter the word "failure."
These views are all reasonable, and many have some merit.
This article simply aims to present the facts objectively: what happened, what was implemented, what failed, and what lessons we learned. I hope the experiences shared here are broadly applicable and offer some reference for developers in other ecosystems. After more than two years of helping onboard all the mainstream AVSs on EigenLayer and designing EigenCloud, I want to honestly review where we went wrong, where we got it right, and where we should go next.

What exactly is restaking?

The fact that I still need to explain "what is restaking" shows that we failed to explain it clearly back when restaking was the industry's focus. This is "Lesson 0": pick one core narrative and repeat it relentlessly.

The Eigen team's goal has always been easy to say and hard to do: to enable people to build applications more securely on-chain by improving the verifiability of off-chain computation. The AVS was our first and most clearly defined attempt at this.

An AVS (Actively Validated Service) is a proof-of-stake (PoS) network in which a decentralized set of operators performs off-chain tasks. The operators' actions are monitored, and their staked assets are penalized for violations. Implementing this penalty mechanism requires staked capital, and this is precisely where restaking's value lies: instead of building a security system from scratch for each AVS, restaking reuses staked ETH to secure multiple AVSs at once. This reduces capital costs and accelerates ecosystem launches.

The conceptual framework of restaking can therefore be summarized as follows:

AVS: the "service layer," the carrier of each new PoS crypto-economic security system;
Restaking: the "capital layer," which secures these systems by reusing existing staked assets.

I still think this concept is ingenious, but reality has not matched the diagram: many aspects have not achieved the expected results in practice.
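To make the service-layer/capital-layer split concrete, here is a minimal Python sketch of the idea. It is purely my own illustration, not EigenLayer's actual contract design; the names Operator, register, and slash are invented for this example. The point it shows is that one pool of staked ETH backs several AVSs at once, and any of them can slash that same shared capital.

```python
# A minimal sketch of the restaking idea (illustrative only; not
# EigenLayer's real contract interface). One operator's staked ETH is
# registered with several AVSs; any registered AVS that proves a fault
# can slash the same shared stake.

class Operator:
    def __init__(self, staked_eth: float):
        self.staked_eth = staked_eth              # capital layer: one reused stake
        self.avs_registrations: set[str] = set()  # service layer: AVSs secured

    def register(self, avs: str) -> None:
        # The existing stake now also backs this AVS; no new capital is raised.
        self.avs_registrations.add(avs)

    def slash(self, avs: str, fraction: float) -> float:
        # A fault proven by any one AVS burns part of the shared stake,
        # shrinking the security backing every other AVS as well.
        if avs not in self.avs_registrations:
            raise ValueError("operator does not secure this AVS")
        penalty = self.staked_eth * fraction
        self.staked_eth -= penalty
        return penalty

op = Operator(staked_eth=32.0)
op.register("oracle-avs")
op.register("da-avs")
print(op.slash("oracle-avs", 0.10))  # 3.2 ETH slashed; both AVSs' backing shrinks
```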
Things That Didn't Meet Expectations
1. We chose the wrong market: too niche
What we wanted wasn't "any kind of verifiable computation," but rather a system that was "decentralized from day one, based on a penalty mechanism, and fully crypto-economically secure."
We hoped the AVS would become a kind of "infrastructure service": just as developers can build SaaS (Software as a Service) products, anyone could build an AVS.
This positioning, while seemingly principled, greatly narrowed the scope of potential developers.
The end result was that we faced a market that was "small in scale, slow in progress, and high in barriers to entry": few potential users, high implementation costs, and long development cycles for both sides (teams and developers).
Whether it's EigenLayer's infrastructure and development tools or each AVS built on top of them, everything takes months or even years to build. Fast forward nearly three years: we currently have only two mainstream AVSs running in production, Infura's DIN (Decentralized Infrastructure Network) and LayerZero's EigenZero. That adoption rate is far from "widespread."

Frankly, our initial design assumed that teams wanted crypto-economic security and a decentralized operator set from day one, but the actual market demand is for more progressive, more application-centric solutions.

2. Constrained by the regulatory environment, we were forced to stay silent

When we launched, it was the height of the "Gary Gensler era" (note: Gary Gensler was the chairman of the U.S. SEC and took a strict regulatory approach to the crypto industry). Many staking companies were facing investigations and lawsuits at the time. As a restaking project, almost anything we said in public could be interpreted as an "investment promise" or "yield advertisement," and could even draw subpoenas.

This regulatory fog dictated our communication style: we couldn't speak freely, and even when faced with overwhelming negative coverage, blame from partners, and public attacks, we couldn't clarify misunderstandings in real time. We couldn't even casually say "that's not how it is," because we had to weigh the legal risks first. As a result, we launched the token in a locked state without sufficient communication. Looking back, that was indeed somewhat risky. If you've ever felt that the Eigen team was evasive or unusually silent on some issue, it was most likely the regulatory environment: even one wrong tweet could carry significant risk.

3. Early AVSs diluted the brand

Eigen's early brand influence largely stemmed from Sreeram (a core team member): his energy, optimism, and belief that "systems and people can become better" earned the team a great deal of goodwill. The billions of dollars in staked capital further strengthened that trust.

However, the initial batch of AVSs we jointly promoted failed to live up to this brand. Many early AVS projects made a lot of noise but merely chased industry trends; they were neither the most technologically advanced nor the most trustworthy examples of an AVS. Over time, people began to associate "EigenLayer" with "the latest liquidity mining and airdrops." Much of the skepticism, fatigue, and even aversion we face today can be traced back to this stage.

If we could do it over, I would want us to start with fewer but higher-quality AVSs, be more selective about which partners received our brand endorsement, and accept a slower, less hyped promotional approach.

4. Overemphasizing "trust minimization" led to design redundancy

We tried to build a "perfect universal penalty system": universal, flexible, and able to cover every slashing scenario, thereby achieving trust minimization. In practice, this made product iteration slow and forced us to spend enormous amounts of time explaining a mechanism most people weren't ready to understand. Even now, we still have to repeatedly explain the slashing system we shipped nearly a year ago.

In hindsight, a more reasonable path would have been to start with a simple penalty scheme, let different AVSs try more focused models, and then gradually increase the system's complexity. But we prioritized complex design and ultimately paid the price in speed and clarity.
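To illustrate the trade-off, here is a hedged sketch, again my own illustration rather than Eigen's shipped slashing design, contrasting the deliberately simple V1 penalty scheme described above with the "universal" approach we actually pursued:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SimpleSlashing:
    """V1 idea: one narrowly defined, objectively provable fault and one
    fixed penalty. Easy to audit, easy to explain, fast to ship."""
    penalty_fraction: float = 0.05

    def apply(self, stake: float, fault_proven: bool) -> float:
        return stake * self.penalty_fraction if fault_proven else 0.0

class UniversalSlashing:
    """The 'perfect universal' approach: arbitrary programmable conditions.
    Maximally flexible, but every condition must be specified, audited,
    and explained before anyone will trust the system."""
    def __init__(self) -> None:
        # Each condition maps evidence to a penalty fraction in [0, 1].
        self.conditions: list[Callable[[dict], float]] = []

    def add_condition(self, cond: Callable[[dict], float]) -> None:
        self.conditions.append(cond)

    def apply(self, stake: float, evidence: dict) -> float:
        return sum(stake * cond(evidence) for cond in self.conditions)

# The simple scheme is one line to use; the universal one needs design
# work before it can penalize anything at all.
print(SimpleSlashing().apply(stake=100.0, fault_proven=True))  # 5.0
```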
Those Things We Actually Got Right

People tend to label things "failures" immediately, which is far too hasty. In the restaking chapter, many things were actually done very well, and these achievements are crucial to our future direction.

1. We proved we can win tough battles in a fiercely competitive market

We prefer win-win scenarios, but we are not afraid of competition: once we choose to enter a market, we aim to lead it. In the restaking space, Paradigm and Lido once joined forces to back our direct competitor. At the time, EigenLayer's total value locked (TVL) was under $1 billion. Our competitors had narrative advantages, distribution channels, capital support, and built-in default trust. Many people told me, "Their combination will crush you." The reality turned out differently: today we hold 95% of the restaking capital market and have attracted 100% of the top-tier developers.

In data availability (DA), we started later, with a smaller team and less funding, while the industry pioneers already had a first-mover advantage and a strong marketing machine. Yet by any key metric, EigenDA (Eigen's data availability solution) holds a significant share of the DA market, and that share will grow substantially as our largest partners go live.

Competition in both markets was fierce, but we ultimately came out ahead.

2. EigenDA became a mature product that changed the ecosystem

Launching EigenDA on top of the EigenLayer infrastructure turned out to be a pleasant surprise. It became the cornerstone of EigenCloud and gave Ethereum something it desperately needed: a high-bandwidth DA channel. With it, rollups can run at high speed without leaving the Ethereum ecosystem for some other new chain in the name of "speed." MegaETH launched because the team believed Sreeram could help them break the DA bottleneck; Mantle's original pitch to BitDAO for building an L2 rested on the same trust.

EigenDA has also become a defensive shield for Ethereum: once the ecosystem has a high-throughput native DA solution, it becomes much harder for external chains to ride the Ethereum narrative while siphoning off ecosystem value.

3. Advancing the preconfirmation market

One of EigenLayer's early core questions was how to unlock Ethereum preconfirmations through EigenLayer. Since then, preconfirmations have gained significant attention through the Base network, though implementation still faces challenges. To drive ecosystem growth, we also co-launched Commit-Boost, a program aimed at addressing client lock-in around preconfirmations and building a neutral platform on which anyone can innovate through validator commitments. Today, billions of dollars have flowed through Commit-Boost, and over 35% of validators have joined. That percentage will rise further as mainstream preconfirmation services launch in the coming months. This matters for the resilience of the Ethereum ecosystem and lays the foundation for continued innovation in the preconfirmation market.

4. Keeping assets safe, always

Over the years, we have kept hundreds of billions of dollars in assets secure.
This may sound mundane, even boring, but consider how many crypto infrastructure projects have blown up in various ways and you'll see how precious "mundane" is. To mitigate risk, we built a robust operational security system, recruited and cultivated a world-class security team, and baked adversarial thinking into our team culture. That culture is essential for any business touching user funds, AI, or real-world systems, and it cannot be retrofitted later; the foundation must be laid from the very beginning.

5. Keeping Lido from holding over 33% of the staking share for an extended period

The restaking era had an underappreciated effect: a large amount of ETH flowed to LRT providers, preventing Lido's staking share from consistently exceeding 33%. This matters for Ethereum's social balance. Had Lido held a stable 33% of stake with no credible alternative, it would inevitably have triggered major governance controversy and internal conflict. Restaking and LRTs haven't magically achieved full decentralization, but they did change the trend toward centralized staking, and that is no trivial achievement.

6. Clarifying where the true frontier lies

The biggest gain is actually conceptual: we validated the core thesis that the world needs more verifiable systems, but we also learned that our previous implementation path was misguided. The right path is definitely not "start with general crypto-economic security, insist on a fully decentralized operator set from day one, and wait for every business to integrate at that level." The real way to push the frontier forward is to give developers direct tools for making their specific applications verifiable, matched with the appropriate verification primitives. We need to meet developers where they are, rather than require them to become protocol designers on day one.

To this end, we have begun building modular services in-house: EigenCompute (verifiable compute) and EigenAI (verifiable AI). Some capabilities that would take other teams hundreds of millions of dollars and years to build, we can launch in months.

Moving Forward

So how should we respond to all of this: the timing, the successes, the failures, and the scars on the brand? Here is a brief outline of our next steps and the logic behind them.

1. Make the EIGEN token the core of the system

Going forward, the entire EigenCloud ecosystem and every product we build around it will revolve around the EIGEN token. The token is positioned as: the core economic security driver of EigenCloud; the asset that underwrites the various risks the platform takes on; and the core value-capture instrument across all of the platform's fee flows and economic activity.

Early on, many people's expectations of what value the EIGEN token could capture diverged from the actual mechanism, causing considerable confusion. In the next phase we will close this gap through concrete design and implementation; more details will be announced later.

2. Enable developers to build verifiable applications, not just AVSs

Our core thesis is unchanged: by improving the verifiability of off-chain computation, we enable people to build applications more securely on-chain.
However, the tools for achieving verifiability will no longer be limited to one. Sometimes it will be crypto-economic security; sometimes ZK proofs, TEEs (Trusted Execution Environments), or hybrid schemes. The point is not to promote any particular technology, but to make verifiability a standard primitive that developers can drop directly into their stack. Our goal is to bridge two states: from "I have an application" to "I have an application that users, partners, or regulators can verify."

Given the current state of the industry, "crypto-economics + TEE" is the best available choice: it strikes the best balance between developer programmability (what developers can build) and security (not theoretical security, but practical, deployable security). When ZK proofs and other verification mechanisms mature enough to meet developer needs, we will integrate them into EigenCloud as well.

3. Go deep on AI

The biggest transformation in computing today is AI, especially AI agents, and the crypto industry is no exception. An AI agent is essentially a set of tools wrapped around a language model, taking actions in some environment. Today, not only are the language models black boxes, but the agents' operational logic is opaque as well, which is why "having to trust the developer" has already led to hacks. If AI agents were verifiable, people would no longer need to rely on trust in developers.

Making an AI agent verifiable requires three things: the LLM's (Large Language Model's) inference must be verifiable; the compute environment in which actions execute must be verifiable; and the data layer that stores, retrieves, and understands context must be verifiable. EigenCloud is designed precisely for these scenarios:

EigenAI: deterministic, verifiable LLM inference;
EigenCompute: a verifiable execution environment for actions;
EigenDA: verifiable data storage and retrieval.

We believe verifiable AI agents are one of the most competitive applications of our verifiable cloud, so we have assembled a dedicated team to pursue this area. A sketch of how these three layers might compose follows below.
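The following Python sketch shows how the three requirements could fit together in one agent step. Every class and method name here (EigenAIClient, EigenComputeClient, EigenDAClient, Attested) is invented for illustration and is not a real EigenCloud API; it only shows the shape of the idea: every stage returns its result together with a proof a third party could check.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the three verifiable layers described above.
# None of these classes or methods are real EigenCloud APIs.

@dataclass
class Attested:
    value: str
    proof: str  # e.g. a TEE attestation or a crypto-economic claim

class EigenAIClient:
    def infer(self, prompt: str) -> Attested:
        # Deterministic, verifiable LLM inference would return the output
        # together with a proof that this model produced it.
        return Attested(value=f"plan for: {prompt}", proof="inference-proof")

class EigenComputeClient:
    def execute(self, plan: str) -> Attested:
        # The action runs inside a verifiable execution environment.
        return Attested(value=f"executed: {plan}", proof="compute-proof")

class EigenDAClient:
    def store(self, record: str) -> Attested:
        # Context is stored so that retrieval itself is verifiable.
        return Attested(value=f"stored: {record}", proof="da-proof")

def run_verifiable_agent(task: str) -> list[str]:
    """One agent step where every stage carries a checkable proof,
    instead of requiring trust in the agent's developer."""
    ai, compute, da = EigenAIClient(), EigenComputeClient(), EigenDAClient()
    plan = ai.infer(task)                 # 1. verifiable inference
    result = compute.execute(plan.value)  # 2. verifiable execution
    record = da.store(result.value)       # 3. verifiable context/data layer
    return [plan.proof, result.proof, record.proof]

print(run_verifiable_agent("rebalance the treasury"))
```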
4. Reshape the narrative around staking and yield

Real returns require bearing real risk. We are exploring broader staking applications so that staked capital can underwrite risks such as: smart contract risk; the risks of different kinds of computation; and any risk that can be clearly described and quantified. Future yield should genuinely reflect transparent, understandable risks being taken, rather than chase whatever liquidity-mining model happens to be popular. This logic will also flow naturally into the EIGEN token's use cases, the scope of what it underwrites, and its value-transfer mechanisms.

In conclusion: restaking didn't become the universal layer I (and others) hoped for, but it didn't disappear either. Over its development it became what most first-generation products become: an important chapter, a set of hard-won lessons, and the infrastructure that now supports a broader business. We still maintain and value restaking; we just refuse to be confined by the original narrative.

If you are a community member, an AVS developer, or an investor who still thinks of Eigen as "that restaking project," I hope this article has given you a clearer picture of what happened and where we are heading. Today we are entering a space with a far larger total addressable market (TAM): cloud services on one side, developer-facing application-layer needs on the other, plus the untapped AI frontier, and we will keep pushing in these directions with our consistently high-intensity execution. The team remains highly motivated, and I can't wait to prove the skeptics wrong. I have never been more bullish on Eigen than I am now; I have been adding to my EIGEN holdings and will continue to do so. We are still early.