Author: Zack Pokorny, Assistant Researcher at Galaxy Digital | Source: Galaxy Digital | Translated by: Shaw, Jinse Finance
The application scenarios and capabilities of artificial intelligence agents (AI agents) have begun to evolve. They are starting to execute tasks autonomously, and ongoing research and development is extending them toward holding and allocating funds and discovering trading and profit strategies. Although this shift is still at a very early, experimental stage, its direction differs completely from the past, when AI agents were used mostly as social and analytical tools.
Blockchain is providing a natural testing ground for this evolutionary process.
Blockchain possesses permissionless access, composability (the same execution framework can support many different financial infrastructure components), an open-source application ecosystem, and data that is equally visible to all participants. Furthermore, all on-chain assets are programmable by default. This raises a structural question: if blockchain is programmable and permissionless, why do autonomous agents still face operational obstacles? The answer lies not in the feasibility of execution, but in the significant semantic-interpretation and coordination costs incurred at the execution layer. While blockchain guarantees the correctness of state transitions, it typically does not provide native economic interpretation, standard identity, or goal-level scheduling. Some of these obstacles stem from the architectural characteristics of permissionless systems; others arise from the current state of tooling, information filtering, and market infrastructure. In practice, many higher-level functions still rely on software and workflows designed around human operators.

Blockchain Architecture and Artificial Intelligence Agents

The core design of blockchain revolves around consensus and deterministic execution, not semantic interpretation. It provides low-level infrastructure components such as storage slots, event logs, and call traces, rather than standardized economic objects. Abstract concepts such as position size, yield, health factor, and liquidity depth therefore typically need to be reconstructed off-chain by indexers, analytics layers, front-end interfaces, and application programming interfaces (APIs), which transform protocol-specific state into more usable forms.
Many mainstream decentralized finance (DeFi) operations, especially those involving ordinary users and subjective decision-making, still rely on interaction through a front-end interface and signature confirmation of individual transactions. This user-centric model scaled alongside mainstream adoption, even though a significant portion of on-chain activity is now machine-driven. The dominant interaction logic remains: intent → user interface → transaction initiation → confirmation. Programmatic operation takes a different path but has its own limitations: developers must select the scope of contracts and assets at build time, and the algorithm then runs within that fixed scope. Neither model suits systems that need to autonomously discover, evaluate, and compose actions around dynamic objectives at runtime.

Operational obstacles appear when infrastructure optimized for transaction verification is used by systems that must simultaneously interpret economic conditions, assess trust, and optimize behavior toward specific objectives. The gap stems partly from the permissionless and heterogeneous design of blockchains, and partly from the fact that existing tools still wrap blockchain interaction around human review and front-end intermediaries.

Intelligent Agent Workflows and Traditional Algorithmic Strategies

Before analyzing the gap between blockchain infrastructure and agent systems, it is necessary to clarify the core difference between higher-level agent workflows and traditional on-chain algorithmic systems. The difference is not in the degree of automation, technical complexity, parameterization, or even dynamic adaptability.
Traditional algorithmic systems can achieve high parameterization, automatically discover new contracts and tokens, allocate funds across multiple strategies, and rebalance based on performance. The core difference lies in the system's ability to handle scenarios not pre-defined during the development phase. Traditional algorithmic systems, no matter how complex, can only execute predetermined logic for patterns pre-defined during the development phase. These systems require pre-defined interface parsers for each protocol type, pre-defined evaluation logic to translate contract states into economic meaning, explicit credit and standard judgment rules, and hard-coded rules for all decision branches (regardless of how dynamic or flexible the algorithm itself is). If a situation deviates from the pre-defined pattern, the system either skips that scenario or fails immediately. It cannot reason and judge unfamiliar scenarios; it can only verify whether the current scenario matches a known template.
Automata like this "Digesting Duck" can mimic lifelike behavior, but every action is pre-programmed. (Scientific American, January 1899)
Traditional algorithms, when scanning new lending markets, can identify deployment contracts that emit familiar events or match known factory patterns. However, if a new lending infrastructure component with an unfamiliar interface appears, the system cannot evaluate it.
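The template-matching style of discovery described above can be sketched as follows. This is a hypothetical illustration (the event-topic hashes and protocol labels are invented): the scanner admits only deployments whose factory event signature it already knows, and silently skips everything else.

```python
from typing import Optional

# Hypothetical sketch of template-based discovery in a traditional
# algorithmic system. Topic hashes and labels below are illustrative.
KNOWN_FACTORY_EVENTS = {
    "0xaaaa...": "amm_pool_factory",        # illustrative topic0 hash
    "0xbbbb...": "lending_market_factory",  # illustrative topic0 hash
}

def classify_deployment(topic0: str) -> Optional[str]:
    """Admit a deployment only if its event matches a known template."""
    return KNOWN_FACTORY_EVENTS.get(topic0)

# Familiar patterns enter the opportunity set; an unfamiliar interface is
# skipped rather than evaluated: the system cannot reason about it.
known = classify_deployment("0xaaaa...")    # matched template
unknown = classify_deployment("0xcccc...")  # None: skipped, not assessed
```

The lookup either matches a pre-built template or returns nothing; there is no middle ground where the system evaluates an unrecognized interface on its merits.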
Humans must manually examine contracts, understand their mechanisms, decide whether they belong in the opportunity pool, and write integration logic; only then can the algorithm interact with them. Humans are responsible for interpretation, and the algorithm is responsible for execution.

Agent systems built on foundation models break this boundary. Through learned reasoning, they can:

**Interpret ambiguous or underspecified objectives.** An instruction like "maximize gains while avoiding excessive risk" requires interpretation: what counts as "excessive"? How should gain and risk be traded off? Traditional algorithms need these conditions defined precisely in advance; foundation models can interpret intent, make judgments, and refine their understanding from feedback.

**Generalize to new interfaces.** An agent can read unfamiliar contract code, parse documentation, or examine a never-before-seen Application Binary Interface (ABI) and infer the system's economic function, without a pre-built parser for every protocol type. This capability is currently imperfect and the agent may misjudge, but it can attempt to interact with systems not anticipated during development.

**Infer trustworthiness and normativity under uncertainty.** When trust signals are ambiguous or incomplete, a foundation model can weigh judgments probabilistically rather than apply binary rules. Is this contract the official version? Is this token most likely legitimate given the available evidence? Traditional algorithms have only two states, "rule exists" or "no rule"; an agent can reason with degrees of confidence.
**Interpret errors and adapt.** When the unexpected occurs (a transaction reverts, output does not match expectations, state changes between simulation and execution), an agent can reason about the cause and decide how to respond. A traditional algorithm only runs its exception-handling blocks, routing exceptions rather than interpreting them.

These capabilities exist, but they remain imperfect. Foundation models can hallucinate, misread information, and make confident but flawed judgments. In adversarial, high-stakes financial environments (where code controls or receives assets), "attempting to interact with an unfamiliar system" can mean direct losses. The core argument of this piece is not that agents can reliably perform these functions today, but that they can attempt them in ways traditional systems cannot, and that future infrastructure promises to make those attempts safer and more reliable.

It is easier to understand this as a continuum rather than an absolute classification: some traditional systems incorporate learned reasoning, and some agents rely on hard-coded rules on critical paths. The difference is directional, not binary. Agent systems place more of the interpretation, evaluation, and adaptation work in runtime reasoning rather than fixing it at development time. This connects directly to the obstacles discussed earlier, because agents attempt exactly what traditional algorithms avoid. Traditional algorithms sidestep discovery costs by hand-picking contract sets during development; control-layer costs by relying on operator-maintained whitelists; data costs by pre-configuring parsers for known protocols; and execution costs by operating within pre-defined safety boundaries.
Humans pre-process the semantic, trust, and policy-level work, and the algorithm executes only within those boundaries. Early versions of on-chain agent workflows may continue this model, but their core logic lies in shifting discovery, trust, and policy evaluation to runtime reasoning rather than fixing them at development time. Agents may attempt to discover and evaluate unfamiliar opportunities, determine contract canonicality without hard-coded rules, resolve heterogeneous state without pre-configured parsers, and execute policies toward potentially ambiguous goals. It is precisely at these stages that infrastructure shortcomings surface. The obstacle is not that agents do the same things as algorithms but with greater difficulty; it is that they attempt something entirely different: operating in an open, dynamically interpreted action space rather than a closed, pre-integrated, fixed one.

The friction originates not in flaws of blockchain consensus, but in how the interaction stack around it has evolved. Blockchain guarantees deterministic state transitions, consensus on the resulting state, and finality. It does not encode economic interpretation, intent verification, or goal tracking at the protocol layer. Those responsibilities have historically been handled by front-ends, wallets, indexers, and other off-chain coordination layers, with humans always in the loop. Mainstream interaction models reflect this design, even among professional participants: ordinary users interpret state through dashboards, select operations through interfaces, sign transactions through wallets, and informally verify results; algorithmic trading desks automate execution but still rely on manual screening of protocol sets, anomaly checks, and integration updates when interfaces change.
In both scenarios, the protocol only guarantees correct execution; intent interpretation, anomaly handling, and adaptation to new opportunities remain manual. Agent systems compress or even eliminate this division of labor. They must programmatically reconstruct economically meaningful state, assess whether objectives are progressing, and verify results beyond simple confirmation of transaction inclusion. These burdens are especially pronounced on blockchains because agents operate in open, adversarial, rapidly changing environments where new contracts, assets, and execution paths can appear without centralized vetting. Protocols guarantee only that transactions execute correctly, not that economic state is easily interpretable, that contracts are official and legitimate, that execution paths match user intent, or that relevant opportunities are programmatically discoverable. The following sections analyze these obstacles across the stages of an agent's operating cycle: discovering contracts and opportunities, verifying their legitimacy, acquiring economically meaningful state, and executing against objectives.

Discovery Friction

Discovery costs arise because the decentralized finance (DeFi) space keeps expanding in an open, permissionless environment, and relevance and legitimacy are filtered by humans through social, market, and tooling layers. New protocols surface through announcements and research, and are further filtered through front-end integrations, token lists, analytics platforms, and liquidity formation. Over time these signals form a working set of criteria for identifying the economically valuable and sufficiently credible parts of the space, though the process is informal, uneven, and partly reliant on third parties and human screening.
While agents can access this filtered data and these trust signals, they lack the natural shortcuts humans use to interpret them. From an on-chain perspective, every deployed contract is equally discoverable. Legitimate protocols, malicious forks, test deployments, and abandoned projects all exist as callable bytecode, and nothing on-chain labels which are important or which are safe. Agents must therefore build their own discovery mechanisms: scanning deployment events, recognizing interface patterns, tracking factory contracts (contracts that programmatically deploy other contracts), and monitoring liquidity formation to decide which contracts belong in the decision process. The task is not just finding contracts, but deciding whether they deserve a place in the agent's action space.

Identifying candidate contracts is only the first step. After initial screening, contracts must undergo the canonicality and authenticity verification described in the next section: agents must confirm that discovered contracts match their claims before admitting them into the decision space.

Policy-bound discovery differs from open discovery. Discovery cost does not refer to detecting new deployments; mature algorithmic systems already do this within their own policy scope. A searcher that monitors Uniswap factory events and automatically adds new liquidity pools is performing dynamic discovery. The obstacles arise at two higher levels: determining whether discovered contracts are legitimate (a canonicality question, discussed in the next section), and determining whether they serve an open-ended objective rather than merely fitting a predefined policy type.
The searcher's discovery logic is tightly bound to its policy; it knows which interface patterns to look for because the policy is predefined. Agents undertaking broader tasks such as "find the best risk-adjusted opportunities" cannot rely solely on policy-derived filters. They must evaluate newly encountered opportunities against the objective itself, which requires parsing unfamiliar interfaces, inferring economic function, and deciding whether the opportunity belongs in the decision space. This is part of the general autonomy problem, but blockchain sharpens it: unfamiliar code is directly executable, carries funds, and is hard to classify using protocol-native signals alone.

Control Layer Friction

Control-layer costs arise because identity and legitimacy are typically determined externally, through screening, governance, documentation, interfaces, and operator judgment; in most current workflows, human intervention remains central to that determination. Blockchain guarantees deterministic execution and finality, not that the caller is interacting with the intended contract. Intent resolution is outsourced to social context, websites, and human screening. Today, humans use the web trust layer as an informal verification tool: finding the official domain through aggregators like DeFiLlama or verified social media accounts, and treating the website as the official mapping between human concepts and contract addresses. The front-end then encodes an effective source of truth: which addresses are official, which tokens are used, which entry points are safe.

The Mechanical Turk, unveiled in 1770, was a chess-playing machine that appeared to operate autonomously but actually relied on a hidden human operator. (Humboldt University Library)
Agents do not, by default, interpret brand identity, verified social signals, or "officialness" through social context. We can feed agents filtered inputs derived from these signals, but turning them into stable, usable, machine-executable trust assumptions requires explicit registry entries, policy rules, or verification logic.

This information may appear to establish identity, but it does not.
Any contract can return the following:

`name() = "Wrapped Ether"`
`symbol() = "WETH"`
`decimals() = 18`

`name()`, `symbol()`, and `decimals()` are simply public read-only functions; whatever they return is entirely up to the deployer. In fact, there are nearly 200 tokens on Ethereum named "Wrapped Ether," all with the symbol "WETH" and all with 18 decimals. Without checking CoinGecko or Etherscan, can you tell which "WETH" is the official one? (The answer is number 78 in the list.)
This is the predicament agents face. The blockchain does not verify uniqueness, does not check any registry, and does not care. You could deploy 500 contracts today, all returning exactly the same metadata. Some on-chain heuristics exist (comparing ETH balance with total supply, checking liquidity depth on major decentralized exchanges, verifying whether major lending protocols accept the token as collateral), but none is conclusive. Each either relies on threshold assumptions (no one can fake billions in paired liquidity) or recursively requires first verifying the canonicality of other contracts.
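A minimal Python sketch of the point, with the resolution step supplied by an off-chain registry. The impostor address is invented; the canonical address shown is mainnet WETH, included for illustration:

```python
# Metadata alone cannot distinguish canonical WETH from a clone.
CANONICAL_WETH = "0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2"  # mainnet WETH
IMPOSTOR = "0x000000000000000000000000000000000000dEaD"        # illustrative

def erc20_metadata(_address: str) -> dict:
    # Any deployer can make name()/symbol()/decimals() return exactly this.
    return {"name": "Wrapped Ether", "symbol": "WETH", "decimals": 18}

# Indistinguishable by on-chain metadata alone:
assert erc20_metadata(CANONICAL_WETH) == erc20_metadata(IMPOSTOR)

# Resolution requires an off-chain trust layer: a registry mapping the
# human concept "WETH" to one specific address.
REGISTRY = {"WETH": CANONICAL_WETH.lower()}

def is_canonical(symbol: str, address: str) -> bool:
    return REGISTRY.get(symbol) == address.lower()
```

The registry is exactly the kind of non-protocol-native trust layer discussed below: the chain itself offers no equivalent of `is_canonical`.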
Like a maze, identifying the "real" path on-chain requires external guidance; no standard signals are provided. (Birmingham Museum and Art Gallery)

This is why token lists and registries exist as off-chain filtering layers: they provide a mechanism for mapping the concept "WETH" to a specific address, and it is why wallets and front-ends maintain whitelists or rely on trusted aggregators. For agents, the core issue is not just that metadata is unreliable, but that canonical identity is typically established socially and institutionally rather than natively in the protocol. The only reliable unique identifier on-chain is the contract address; mapping easily understood human intent (such as "swap into USDC") to the correct address still depends heavily on non-protocol-native filtering, registries, whitelists, and other trust layers.

Data Friction

An agent optimizing across multiple DeFi protocols needs to abstract each opportunity into a unified economic object: yield, liquidity depth, risk parameters, fee structure, oracle source, and so on. In one sense this is an ordinary systems-integration problem. On blockchains, however, the burden is amplified by protocol heterogeneity, direct financial risk, multi-call state stitching, and the absence of a shared underlying economic schema. And these are precisely the inputs needed to compare opportunities, simulate allocations, and monitor risk. Blockchains typically expose no standardized economic objects at the protocol layer, only storage slots, event logs, and function return values; economic objects must be derived or reconstructed from them.
Protocols only guarantee that contract calls return correct state values; they do not guarantee that those values map cleanly onto understandable economic concepts, nor that the same concept can be read through a consistent interface across protocols. Abstract concepts such as market, position, and health factor are therefore not native protocol components but are reconstructed off-chain by indexers, analytics platforms, front-ends, and APIs, which normalize heterogeneous protocol state into a usable abstraction layer. Human users typically only ever see this normalized data. Agents can use it too, but then inherit third-party schemas, latency, and trust assumptions; otherwise they must rebuild these abstractions themselves.

The problem compounds across protocols. Vault share prices, lending-market collateral ratios, DEX pool liquidity depth, and staking reward rates are all economically fundamental indicators, yet none is exposed through a standardized interface. Each protocol family has its own read methods, struct formats, and unit conventions; even within the same category, implementations differ.

The lending market: a typical example of fragmented data reads

Lending markets illustrate the problem clearly. The economic concepts are broadly shared and universal (supply and borrow liquidity, interest rates, collateral ratios, borrow caps, liquidation thresholds), but the read paths are completely different. Taking Aave v3 as an example, enumerating markets and reading reserve state are separate steps. Reserve assets are enumerated with a single call to `getReservesList()` on the Pool contract, which returns an array of token addresses.
For each asset, basic liquidity and rate data come from `getReserveData(asset)`, which returns a struct containing total liquidity, rate indexes, and a configuration bitmap in a single call.

In contrast, in Compound v3 (Comet), each deployment corresponds to a single market (USDC, USDT, ETH, and so on), and there is no unified reserve struct. A complete market snapshot must be pieced together from multiple calls:

Base utilization, via `getUtilization()`
Totals, via `totalSupply()` and `totalBorrow()`
Collateral asset configuration, via `getAssetInfo(i)` for each listed asset
Global configuration parameters, via individual getters

Each call returns a different fragment of the economic state. The "market" is not a first-class object but an inferred structure assembled by the caller.
From the agent's perspective, both protocols belong to the lending market. However, from an integration perspective, they are data acquisition systems with completely different structures.
There is no universal data structure like the one shown below:
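A purely hypothetical illustration, where every field name and unit convention is invented, of what such a protocol-agnostic market object might contain:

```python
from dataclasses import dataclass

@dataclass
class Market:
    """Hypothetical unified lending-market object. No such on-chain
    standard exists; all field names and units here are illustrative."""
    protocol: str                 # e.g. "aave_v3", "compound_v3"
    asset: str                    # underlying token address
    supply_apr: float             # annualized, as a fraction (0.05 == 5%)
    borrow_apr: float             # annualized, as a fraction
    total_supplied: float         # whole-token units
    total_borrowed: float         # whole-token units
    collateral_factor: float      # max loan-to-value, as a fraction
    liquidation_threshold: float  # as a fraction

    @property
    def utilization(self) -> float:
        if not self.total_supplied:
            return 0.0
        return self.total_borrowed / self.total_supplied
```

With an object like this, opportunities from different protocols would be directly comparable; in reality every field must be assembled per protocol.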
Instead, agents must use different asset-enumeration methods per protocol, stitch together state through multiple calls, normalize units and conversion conventions, and reconcile derived values with directly exposed primitives.
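Interest rates are one concrete instance of that normalization work. Aave v3 reports an already-annualized rate scaled by 1e27 (ray), while Compound v3 reports a per-second rate scaled by 1e18. A sketch of the unit reconciliation, with illustrative raw readings:

```python
# Normalizing protocol-specific rate encodings into one comparable APR.
RAY = 10**27                      # Aave v3 rate scaling
WAD = 10**18                      # Compound v3 rate scaling
SECONDS_PER_YEAR = 365 * 24 * 3600

def aave_supply_apr(current_liquidity_rate: int) -> float:
    # Aave v3's currentLiquidityRate is already annualized, in ray.
    return current_liquidity_rate / RAY

def comet_supply_apr(per_second_rate: int) -> float:
    # Compound v3's supply rate is per second, 1e18-scaled.
    return per_second_rate / WAD * SECONDS_PER_YEAR

# Two raw readings that both encode roughly a 5% annual rate:
aave_raw = 5 * 10**25             # 0.05 * 1e27
comet_raw = 1_585_489_599         # ~0.05 / SECONDS_PER_YEAR * 1e18
```

Until both readings pass through their adapters, the integers are not comparable at all; after normalization they land on the same scale.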
Beyond structural inconsistency, this fragmentation creates latency and consistency risk. Because economic state is not exposed as a single atomic market object, agents must issue multiple Remote Procedure Calls (RPCs) to multiple contracts to reconstruct a snapshot. Each additional call adds latency, raises the chance of hitting rate limits, and increases the risk of block inconsistency. In volatile markets, utilization may have changed by the time the supply rate is computed; without explicitly pinning a block height, configuration parameters and total liquidity may not come from the same block. Human users implicitly mitigate this through front-end caches and aggregation back-ends; agents calling raw RPC interfaces must explicitly handle data synchronization, request batching, and temporal consistency. Non-standardized data access is thus not merely an integration inconvenience, but a constraint on performance, synchronization, and execution correctness. The lack of a shared standard for economic data means that even when protocols implement nearly identical financial primitives, their state exposure remains contract-specific and composition-dependent. This structural difference is the core cause of data friction.

Data Flow Mismatch

Access to economic state on blockchains is essentially a pull model, even though execution signals can be streamed. External systems must actively query nodes for the state they need, rather than receiving continuous, structured updates. This reflects the blockchain's core function, on-demand verification, rather than maintaining a persistent application-level state view. Basic components of a push model do exist on-chain.
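The block-pinning point can be shown with a toy model. The chain state below is mocked; in practice each read would be an `eth_call` with an explicit block identifier:

```python
# Mocked per-block contract state: both totals move at block 101.
CHAIN = {
    100: {"total_supply": 1_000_000.0, "total_borrow": 800_000.0},
    101: {"total_supply": 1_200_000.0, "total_borrow": 900_000.0},
}

def read(field: str, block: int) -> float:
    """Stand-in for an RPC read pinned to an explicit block height."""
    return CHAIN[block][field]

def utilization_at(block: int) -> float:
    # Both reads pinned to the same height: internally consistent.
    return read("total_borrow", block) / read("total_supply", block)

# Mixing heights yields a figure that never existed at any single block:
mixed = read("total_borrow", 101) / read("total_supply", 100)
```

Here `utilization_at(100)` is 0.8 and `utilization_at(101)` is 0.75, but the mixed-height read reports 0.9, a utilization the market never actually had.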
WebSocket subscriptions can push new blocks and event logs in real time, but these do not include the storage state that carries most of the economic meaning, unless a protocol deliberately logs it redundantly. Agents cannot directly subscribe to lending-market utilization, pool reserves, or position health factors. Those values live in contract storage, and most protocols provide no native mechanism for pushing storage changes downstream. The most practical current approach is to subscribe to new block headers and re-query storage each block; even when triggered by streamed events, state access remains fundamentally a pull. Logs only signal that data may have changed; they do not encode the resulting economic state, and reconstructing it still requires explicit reads.

Agent systems are better suited to the reverse data flow. Instead of polling hundreds of contracts for changes, agents could receive structured, pre-computed state updates pushed into their runtime (updated utilization, health factors, or position changes). A push architecture reduces redundant queries, shortens the latency between a state change and the agent noticing it, and lets intermediate layers package state as semantically clear updates rather than leaving agents to interpret raw storage. The shift is not trivial: it requires subscription infrastructure, relevance filtering, and the translation of storage changes into economic event specifications agents can act on. But as agents become continuously online participants rather than intermittent queriers, the inefficiencies of the pull model (rate limiting, synchronization overhead, duplicated queries across agents) grow increasingly severe.
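Such an intermediate layer might diff raw state block by block and emit only semantically meaningful events. A hypothetical sketch, with event names, fields, and thresholds all invented:

```python
# Hypothetical push-layer translator: raw per-block state in, semantic
# economic events out. All names and thresholds are illustrative.

def state_to_events(prev: dict, curr: dict, util_threshold: float = 0.01) -> list:
    """Compare consecutive block snapshots and emit only meaningful changes."""
    events = []
    if abs(curr["utilization"] - prev["utilization"]) >= util_threshold:
        events.append(("utilization_changed", curr["utilization"]))
    # Emit a warning only when the health factor crosses below 1.1.
    if curr["health_factor"] < 1.1 <= prev["health_factor"]:
        events.append(("health_factor_warning", curr["health_factor"]))
    return events

prev = {"utilization": 0.80, "health_factor": 1.30}
curr = {"utilization": 0.86, "health_factor": 1.05}
```

An agent subscribed to this stream would receive `utilization_changed` and `health_factor_warning` events rather than raw storage diffs, which is the "semantically clear updates" role described above.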
Treating agents as continuous consumers rather than intermittent clients may better fit autonomous operation. Whether push-based infrastructure is genuinely superior remains an open question. A stream of all state changes creates a filtering problem: agents still have to decide what is relevant, which reintroduces pull logic at another level. The core issue is not the pull model itself, but that the existing architecture was not designed for persistent machine users. As agent usage scales, alternatives are worth exploring.

Execution Friction

Execution friction arises because today's interaction layers wrap intent translation, transaction verification, and result validation in workflows centered on front-ends, wallets, and human oversight. In scenarios involving ordinary users and subjective decisions, that oversight is typically performed by humans; for autonomous systems, these functions must be formalized and encoded directly. Blockchain guarantees deterministic execution of contract logic, not that a transaction matches user intent, satisfies risk constraints, or achieves the expected economic outcome. In the current process, front-ends and humans fill this gap. The front-end orchestrates the sequence of operations (swap, approve, deposit, borrow), and the wallet provides a final "review and send" checkpoint. Users or operators make informal strategic judgments at that last step, often deciding whether a transaction is safe and a quoted price acceptable with incomplete information. If a transaction fails or produces an unexpected result, the user retries, adjusts slippage, changes the route, or abandons the operation.
Agent systems remove humans from this execution loop, which means the system must replace three human functions with machine-native logic:

**Intent Compilation.** A human goal such as "allocate my stablecoins to the best risk-adjusted yield" must be compiled into a concrete action plan: which protocol, which market, which token path, what size, what approval method, and in what order. For humans this happens implicitly through the front-end; for agents it must be formalized.

**Constraint Verification.** Clicking "send transaction" is not only a signing action but an implicit check that the transaction satisfies constraints: slippage tolerance, leverage limits, minimum health factor, whitelisted contracts, or "no upgradeable contracts," and so on. Agents need these policy constraints encoded as machine-verifiable rules, and the execution system must verify that the planned call graph satisfies them before broadcasting the transaction.
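Such rules might be checked along these lines; a hypothetical sketch in which the policy fields, plan structure, and addresses are all invented:

```python
# Hypothetical pre-broadcast policy check. All fields are illustrative.
POLICY = {
    "max_slippage": 0.005,            # 0.5%
    "max_leverage": 3.0,
    "min_health_factor": 1.5,
    "allowed_contracts": {"0xPOOL", "0xROUTER"},  # illustrative addresses
}

def policy_violations(plan: dict) -> list:
    """Return every rule the planned call graph would break; empty means OK."""
    violations = []
    if plan["expected_slippage"] > POLICY["max_slippage"]:
        violations.append("slippage")
    if plan["resulting_leverage"] > POLICY["max_leverage"]:
        violations.append("leverage")
    if plan["resulting_health_factor"] < POLICY["min_health_factor"]:
        violations.append("health_factor")
    if not set(plan["contracts"]) <= POLICY["allowed_contracts"]:
        violations.append("unlisted_contract")
    return violations
```

A plan would be broadcast only when `policy_violations` returns an empty list, making explicit the check a human otherwise performs at the wallet prompt.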
**Result Verification.** Recording a transaction on the blockchain does not equal task completion. Even a successfully executed transaction may miss its objective: slippage may exceed tolerance, the target position size may not be reached due to caps, or rates may change between simulation and inclusion.
Humans verify results informally through a front-end; agents must evaluate post-execution state conditions programmatically. This raises the bar for completion verification well beyond confirming that the transaction landed on-chain. Intent-centric architectures may offer a partial solution by shifting more of the "how to execute" burden from the agent to a specialized solver: instead of sending raw calldata, the agent broadcasts a signed intent specifying outcome-based constraints, and a solver or protocol-layer mechanism must satisfy those constraints for execution to count as valid.

Multi-Step Workflows and Failure Modes

Much of DeFi execution is inherently multi-step. A yield allocation may require sequential steps: approve → swap → deposit → borrow → stake. Some steps are independent transactions; others can be batched through multicalls or routing contracts. Humans tolerate incomplete flows and simply return to the interface to continue; agents need deterministic orchestration: if any step fails, they must decide whether to retry, reroute, unwind, or pause. This surfaces a class of failure modes that human workflows usually mask:

**State drift between decision and execution.** Between simulation and on-chain inclusion, rates, utilization, or liquidity may change. Humans absorb such fluctuations; agents must define acceptable ranges and enforce them strictly.

**Non-atomic execution and partial fills.** Some operations span multiple transactions or achieve only part of the intended result. Agents must track intermediate state and confirm the final state meets the objective.
Authorization limits and approval risk: humans habitually sign approvals through interfaces, but agents must reason about the authorization scope (limit, spender, validity period) as part of their security policy rather than treating it as an interface step.

Path selection and implicit execution costs: humans rely on routing tools and default interface configurations, but agents must model slippage, miner-extractable value (MEV) risk, gas fees, and price impact directly in their objective function.
The core of the execution-friction argument is that the DeFi interaction layer treats the human wallet signature as the final control point. Intent verification, risk-tolerance judgment, and the informal "does this look reasonable?" check are all concentrated in that step. Remove the human and execution becomes a control problem: the agent must translate a goal into a chain of operations, enforce policy constraints automatically, and verify results under uncertainty. This challenge exists in many autonomous systems, but it is especially harsh in the blockchain environment, where execution directly moves funds, may involve compiling calls to unfamiliar contracts, and faces adversarial state changes. Humans decide from experience and correct errors through trial and error; agents must do the equivalent programmatically, at machine speed, and often in a dynamically changing action space. The view that agents "only need to submit transactions" therefore badly underestimates the difficulty. Submitting the transaction is the easy part; what is missing is the interface and all the work humans perform around it: intent compilation, security checks, and goal-completion verification.

Conclusion

Blockchains, in their initial design, did not natively provide the semantic and coordination layers that autonomous agents require. Their design goal was deterministic execution and state-transition consensus in adversarial environments. The interaction layer built on top has always revolved around human users: interpreting state through interfaces, selecting operations through front-ends, and verifying results through manual checks. Agent systems break this architecture. They remove the human interpreter, approver, and verifier from the loop and require those functions to be machine-native.
This shift exposes structural friction along four dimensions: discovery, trust determination, data acquisition, and execution orchestration. The friction arises not from infeasibility, but from the fact that the infrastructure surrounding blockchains still, in most scenarios, presupposes a human in the loop between state interpretation and transaction submission.

Bridging these gaps may require new infrastructure across the stack: middleware that normalizes cross-protocol economic state into machine-readable specifications; indexing services or RPC extensions that expose semantic primitives such as positions, health factors, and opportunity sets rather than raw storage; registries that provide canonical contract mappings and token-authenticity verification; and execution frameworks that encode policy constraints, handle multi-step workflows, and programmatically verify goal completion.

Some gaps stem from the structural characteristics of permissionless systems: open deployment, weakly canonical identity, and heterogeneous interfaces. Others are constrained by existing tools, standards, and incentive designs, and should ease as agent usage scales and protocols compete to make integration easier for autonomous systems. As autonomous systems begin to manage funds, execute policies, and interact directly with on-chain applications, the architectural assumptions embedded in the current interaction layer become increasingly visible. Most of the friction described in this article stems from tooling and interaction models built around human-mediated workflows; some is a natural consequence of the openness, heterogeneity, and adversarial nature of permissionless systems; and some is shared by any autonomous system operating in a complex environment.
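The registry idea mentioned above can be illustrated with a tiny sketch. The registry contents here are hypothetical (the USDC address shown is the commonly cited Ethereum mainnet one, included only as an example); the point is that on a permissionless chain a token symbol is not unique, so an agent must anchor trust in a canonical address mapping rather than a display string:

```python
# Sketch: checking a token address against a canonical registry before
# interacting with it. Registry contents are illustrative assumptions.

CANONICAL_TOKENS = {
    "USDC": "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48",  # example mainnet address
}

def is_canonical(symbol: str, address: str) -> bool:
    """Anyone can deploy a token named 'USDC'; only the address anchored in
    a trusted registry distinguishes the real asset from an imitation."""
    return CANONICAL_TOKENS.get(symbol, "").lower() == address.lower()

print(is_canonical("USDC", "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48"))  # True
print(is_canonical("USDC", "0xDeadBeef00000000000000000000000000000000"))  # False
```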
The core challenge is not simply enabling agents to sign transactions, but giving them a reliable path to the semantic, trust, and policy functions that software and human judgment currently supply jointly between raw blockchain state and actual operation.