As 2024 draws to a close, venture capitalist Rob Toews from Radical Ventures shares his 10 predictions for AI in 2025:
01. Meta will start charging for Llama models
Meta is the world’s benchmark for open AI. In a compelling case study in corporate strategy, Meta has chosen to offer its state-of-the-art Llama model for free while competitors like OpenAI and Google have closed sourced their cutting-edge models and charged for use.
So the news that Meta will start charging companies to use Llama next year will come as a surprise to many.
To be clear: we are not predicting that Meta will completely close source Llama, nor does it mean that anyone using the Llama model will have to pay for it.
Instead, we predict that Meta will make Llama’s open source licensing terms more restrictive, so that companies using Llama in commercial settings above a certain size will need to start paying to use the model.
Technically, Meta already does this today to a limited extent. The company does not allow the very largest companies (cloud hyperscalers and any company with more than 700 million monthly active users) to use its Llama models freely.
Back in 2023, Meta CEO Mark Zuckerberg said, “If you’re a company like Microsoft or Amazon or Google, and you’re essentially reselling Llama, then we should get some revenue from that. I don’t think it’s going to be a lot of revenue in the short term, but hopefully in the long term it can be some revenue.”
Next year, Meta will significantly expand the range of companies that must pay to use Llama, bringing more medium- and large-sized companies into the fold.
Keeping up with the frontier of large language models (LLMs) is expensive. Meta needs to invest billions of dollars every year to keep Llama at parity or close to parity with the latest cutting-edge models from companies like OpenAI, Anthropic, etc.
Meta is one of the largest and best-funded companies in the world. But it is also a public company and ultimately accountable to shareholders.
As the cost of building cutting-edge models continues to soar, it becomes increasingly untenable for Meta to invest such a huge amount of money to train the next generation of Llama models without revenue expectations.
Llama models will continue to be free for use by hobbyists, academics, individual developers, and startups next year. But 2025 will be the year Meta starts making Llama profitable in earnest.
02. Questions about “Scaling Laws”
In recent weeks, the most discussed topic in the field of artificial intelligence has been scaling laws, and the question of whether they are about to end.
Scaling laws were first proposed in a 2020 OpenAI paper. The basic concept of scaling laws is simple and clear: when training an artificial intelligence model, as the number of model parameters, the amount of training data, and the amount of computation increase, the performance of the model will increase in a reliable and predictable way (technically, its test loss will decrease).
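In rough form (a sketch of the commonly cited power-law parameterization, not the exact equations of the original paper, and with constants that differ from study to study), the relationship looks like this:

```latex
% Illustrative power-law form of a neural scaling law.
% L = test loss; N = parameter count; D = training tokens; C = training compute.
% E, A, B, \alpha, \beta, \gamma and C_0 are empirically fitted constants.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
\qquad \text{or, as a function of compute alone,} \qquad
L(C) \approx \left(\frac{C_0}{C}\right)^{\gamma}
```

Adding more parameters, more data, or more compute drives the loss terms down smoothly, which is exactly the “reliable and predictable” improvement described above.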
From GPT-2 to GPT-3 to GPT-4, the amazing performance improvement is all due to scaling laws.
Like Moore's Law, scaling laws are not actually real laws, but simply empirical observations.
Over the past month, a flurry of reports has suggested that major AI labs are seeing diminishing returns as they continue to scale up large language models. This helps explain why OpenAI’s GPT-5 release has been repeatedly delayed.
The most common rebuttal to the plateauing of scaling laws is that the advent of test-time computation has opened up a whole new dimension in which to pursue scaling.
That is, instead of massively scaling compute during training, new inference models like OpenAI’s o3 make it possible to massively scale compute during inference, unlocking new AI capabilities by enabling models to “think longer.”
This is an important point. Test-time computation does represent an exciting new avenue for scaling and for AI performance gains.
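OpenAI has not published how o3 spends its extra inference compute, but a minimal, well-known illustration of the general idea is repeated sampling with majority voting (self-consistency): spend more compute per question by sampling more candidate answers. The `generate_answer` stub below is a placeholder for a real model call, not anyone's actual system.

```python
import collections
import random

def generate_answer(question: str) -> str:
    """Hypothetical stand-in for one sampled model response.
    In a real system this would be a temperature > 0 call to an LLM;
    here it is faked so the sketch runs on its own."""
    return random.choice(["42", "42", "41"])  # noisy, but usually right

def answer_with_test_time_compute(question: str, samples: int) -> str:
    """Trade inference compute for accuracy via self-consistency:
    sample many candidate answers and return the most common one.
    More samples = more test-time compute = (usually) better answers."""
    votes = collections.Counter(generate_answer(question) for _ in range(samples))
    return votes.most_common(1)[0][0]

print(answer_with_test_time_compute("What is 6 * 7?", samples=1))   # cheap, noisier
print(answer_with_test_time_compute("What is 6 * 7?", samples=64))  # costly, more reliable
```

Reasoning models take this idea much further (long internal chains of thought, search over candidate solution paths), but the basic trade is the same: more computation at inference time buys better answers.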
But there is another point about scaling laws that is even more important, and one that is significantly underappreciated in today’s discussion. Almost all discussion of scaling laws, starting with the initial 2020 papers and continuing with today’s focus on test-time computation, has focused on language. But language is not the only data modality that matters.
Think about robotics, biology, world models, or networked agents. For these data modalities, scaling laws have not yet saturated; rather, they are just beginning.
In fact, rigorous proofs of scaling laws in these fields have not even been published yet.
Startups building foundation models for these new data modalities (for example, EvolutionaryScale in biology, Physical Intelligence in robotics, World Labs in world models) are trying to identify and exploit scaling laws in these fields, just as OpenAI successfully exploited scaling laws for large language models (LLMs) in the first half of the 2020s.
Next year, expect huge progress here.
Scaling laws are not going away; they will be as important in 2025 as they have ever been. However, the center of activity for scaling laws will shift from LLM pre-training to other modalities.
03. Trump and Musk may diverge on AI
The new US administration will bring a number of policy and strategic shifts regarding AI.
Given Musk’s current centrality to the field of AI, one might be tempted to predict the direction of AI under President Trump by focusing on the President-elect’s close relationship with Musk.
It is conceivable that Musk could influence AI-related developments under the Trump administration in a number of different ways.
Given Musk’s deeply adversarial relationship with OpenAI, the new administration could take a less friendly stance toward OpenAI when engaging with industry, crafting AI regulations, awarding government contracts, and so on. This is a real risk that OpenAI worries about today.
On the other hand, the Trump administration could be more inclined to support Musk’s own company: for example, cutting red tape to enable xAI to build data centers and take the lead in cutting-edge model competitions; providing fast regulatory approval for Tesla to deploy a fleet of robot taxis, etc.
More fundamentally, unlike many other tech leaders favored by Trump, Musk takes the safety risks of AI very seriously and therefore advocates for significant regulation of AI.
He supported California’s controversial SB1047 bill, which seeks to impose meaningful restrictions on AI developers. Therefore, Musk’s influence could lead to a more stringent regulatory environment for AI in the United States.
There is a problem with all this speculation, however. It is inevitable that Trump and Musk’s close relationship will eventually break down.
As we saw time and again during Trump’s first administration, the average tenure of Trump’s allies, even the seemingly most steadfast ones, is very short.
Few of Trump’s first administration lieutenants remain loyal to him today.
Both Trump and Musk are complex, volatile, and unpredictable personalities. They are not easy to work with, and they are exhausting. Their newfound friendship has been mutually beneficial so far, but it is still in its “honeymoon phase.”
We predict that the relationship will sour before the end of 2025.
What does this mean for the world of AI?
This is good news for OpenAI. It will be unfortunate news for Tesla shareholders. And it will be disappointing news for those concerned about AI safety, as it all but ensures that the US government will take a hands-off approach to AI regulation during the Trump administration.
04. AI Agents Go Mainstream
Imagine a world in which you no longer need to interact directly with the internet. Whenever you need to manage a subscription, pay a bill, make a doctor's appointment, order something on Amazon, make a restaurant reservation, or do any other tedious online task, you simply instruct an AI assistant to do it for you.
This concept of a "web agent" has been around for years. If such a product existed and worked properly, it would undoubtedly be a huge success.
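As a rough sketch of what such an agent does under the hood, the loop below alternates between observing the page, asking a model for the next action, and executing it in a browser. The `Browser` object and `llm_decide_next_action` function are hypothetical placeholders, not any real product’s API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str      # e.g. "click", "type", "navigate", "done"
    target: str    # a CSS selector or a URL
    text: str = ""

def llm_decide_next_action(goal: str, page_text: str, history: list[Action]) -> Action:
    """Hypothetical call to a language/vision model that looks at the current
    page and the task so far, and proposes the next browser action."""
    raise NotImplementedError("backed by an LLM in a real agent")

def run_web_agent(goal: str, browser) -> None:
    """The basic observe -> decide -> act loop behind a web agent."""
    history: list[Action] = []
    for _ in range(50):                      # hard cap on the number of steps
        page_text = browser.read_page()      # observe the current page
        action = llm_decide_next_action(goal, page_text, history)
        if action.kind == "done":
            return
        browser.execute(action)              # click / type / navigate
        history.append(action)

# Usage, given some (hypothetical) browser driver:
# run_web_agent("Book a table for two at 7pm on Friday", browser=Browser())
```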
Yet there is no general-purpose web agent that works well on the market today.
Startups like Adept, even with a pedigreed founding team and hundreds of millions of dollars in funding, have failed to deliver on their vision.
Next year will be the year web agents finally start working well and go mainstream. Continued progress in language and vision foundation models, coupled with recent breakthroughs in “System 2 thinking” capabilities from new reasoning models and inference-time compute, will mean that web agents are ready for prime time.
In other words, Adept’s idea was right, it was just too early.
In startups, as in many things in life, timing is everything.
Web agents will find a variety of valuable enterprise use cases, but we believe the largest near-term market opportunity for web agents is with consumers.
Despite all the recent AI hype, relatively few AI-native applications beyond ChatGPT have broken through to become mainstream consumer products.
Web agents will change this and become the next true “killer app” in consumer AI.
05. The idea of putting AI data centers in space will come true
In 2023, the key physical resource constraining the development of AI was GPU chips. In 2024, it became power and data centers.
In 2024, few stories got more attention than AI’s enormous and rapidly growing appetite for energy amid the rush to build more AI data centers.
Due to the boom in artificial intelligence, global data center power demand is expected to double between 2023 and 2026 after decades of flatline growth. In the United States, data center electricity consumption is expected to approach 10% of total electricity consumption by 2030, compared with just 3% in 2022.
Today’s energy system is simply not equipped to handle the massive surge in demand that AI workloads will bring. Two multi-trillion-dollar systems, our energy grid and our computing infrastructure, are about to collide in historic fashion.
As a possible solution to this dilemma, nuclear power has gained momentum this year. Nuclear power is in many ways an ideal energy source for AI: it is zero-carbon, available 24/7, and virtually inexhaustible.
But realistically, due to long research, project development and regulatory times, new energy sources will not be able to solve this problem before the 2030s. This is true for traditional nuclear fission power plants, next-generation "small modular reactors" (SMRs), and nuclear fusion power plants.
Next year, an unconventional new idea to meet this challenge will emerge and attract real resources: putting artificial intelligence data centers in space.
Artificial intelligence data centers in space. At first glance, it sounds like a bad joke, a venture capitalist trying to combine too many startup buzzwords.
But in fact, it may make sense.
The biggest bottleneck to quickly building more data centers on Earth is getting the electricity they need. Computing clusters in orbit can enjoy free, unlimited, zero-carbon electricity 24/7: The sun in space is always shining.
There’s another big advantage to putting computing in space: It solves the cooling problem.
One of the biggest engineering hurdles to building more powerful AI data centers is that running many GPUs at once in a small space gets very hot, and high temperatures can damage or destroy computing equipment.
Data center developers are trying to address this problem with expensive and unproven approaches such as liquid immersion cooling. But space is extremely cold, and the heat generated by computing can be radiated away into it harmlessly.
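A back-of-envelope calculation, under purely illustrative assumptions (a 40 MW cluster, 25% solar cell efficiency, 300 K radiators), gives a sense of the scale involved. The only hard numbers here are the solar constant and the Stefan-Boltzmann constant; note that because there is no air in orbit, waste heat leaves only by radiation, so radiator area matters as much as panel area:

```python
# Back-of-envelope sizing for a hypothetical 40 MW orbital compute cluster.
# The cluster size, panel efficiency, and radiator properties are illustrative
# assumptions, not figures from Lumen Orbit or anyone else.

SOLAR_CONSTANT = 1361.0        # W/m^2, sunlight intensity above the atmosphere
STEFAN_BOLTZMANN = 5.67e-8     # W/(m^2 K^4)

cluster_power_w = 40e6         # assumed 40 MW of IT load
panel_efficiency = 0.25        # assumed solar cell efficiency
radiator_temp_k = 300.0        # assumed radiator surface temperature
emissivity = 0.9               # assumed radiator emissivity

# Solar array area needed to supply the load (continuously sunlit orbit).
panel_area_m2 = cluster_power_w / (SOLAR_CONSTANT * panel_efficiency)

# In vacuum there is no air to carry heat away, so nearly all waste heat must
# be radiated: P = emissivity * sigma * A * T^4 (radiating to a ~0 K background).
radiated_flux_w_per_m2 = emissivity * STEFAN_BOLTZMANN * radiator_temp_k ** 4
radiator_area_m2 = cluster_power_w / radiated_flux_w_per_m2

print(f"Solar array: ~{panel_area_m2 / 1e3:.0f} thousand m^2")
print(f"Radiators:   ~{radiator_area_m2 / 1e3:.0f} thousand m^2")
```

Both areas come out on the order of a hundred thousand square meters, which suggests that launch cost and in-orbit assembly, rather than the basic physics, are the hard part.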
Of course, many practical challenges remain to be solved. One obvious question is whether and how large amounts of data can be efficiently and cost-effectively transmitted between orbit and Earth.
It’s an open question, but one that may prove solvable: Promising work with lasers and other high-bandwidth optical communications is underway.
A Y Combinator startup called Lumen Orbit recently raised $11 million to pursue exactly this vision: a multi-megawatt network of data centers in space for training artificial intelligence models.
As the company's CEO said: "Instead of paying $140 million for electricity, I can pay $10 million for launch and solar power."
In 2025, Lumen will not be the only organization taking this concept seriously.
Other startup competitors will also emerge. Don't be surprised if one or more cloud computing hyperscalers also explore along these lines.
Amazon has already accumulated rich experience in sending assets into orbit through Project Kuiper; Google has long funded moonshot projects of this kind; even Microsoft is no stranger to the space economy.
It is conceivable that Musk's SpaceX will also make some progress in this regard.
06. AI systems will pass the “voice Turing test”
The Turing test is one of the oldest and most well-known benchmarks for AI performance.
In order to “pass” the Turing test, an AI system must be able to communicate through written text in such a way that an ordinary person cannot tell whether they are interacting with an AI or with another human.
The Turing test has become a solved problem in the 2020s, thanks to significant advances in large language models.
But written text is not the only way humans communicate.
As AI becomes more multimodal, one can imagine a new, more challenging version of the Turing test: the “voice Turing test.” In this test, an AI system must be able to interact with a human via voice with a level of skill and fluency that is indistinguishable from that of a human speaker.
Today’s AI systems cannot achieve the voice Turing test, and solving this problem will require more technological advances. Latency (the lag between a human speaking and the AI responding) must be reduced to near zero to match the experience of talking to another human.
Voice AI systems must become better at gracefully handling ambiguous input or misunderstandings, such as interruptions, in real time. They must be able to engage in long, multi-turn, open-ended conversations while remembering earlier parts of the discussion.
And crucially, Voice AI agents must learn to better understand nonverbal signals in speech (for example, what it means when a speaker sounds annoyed, excited, or sarcastic) and to generate those nonverbal cues in their own speech.
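To make the latency point concrete, here is a purely illustrative budget for a speech-to-speech pipeline. The stage numbers are invented for the sketch; the target reflects the roughly 200-250 ms gap humans typically leave between conversational turns:

```python
# Illustrative latency budget for a real-time voice agent. All stage figures
# are made up for this sketch; the point is that the stages must SUM to
# roughly the gap humans leave between turns for the exchange to feel natural.

stage_latency_ms = {
    "endpointing (detect the user stopped speaking)": 80,
    "model time-to-first-audio (speech-to-speech)": 150,
    "network round trip": 60,
    "client audio buffering and playback": 40,
}

target_ms = 250
total_ms = sum(stage_latency_ms.values())

for stage, ms in stage_latency_ms.items():
    print(f"{stage:50s} {ms:4d} ms")
print(f"{'total':50s} {total_ms:4d} ms (target ~{target_ms} ms)")
```

Under these made-up numbers the pipeline already overshoots the target, which is why shaving tens of milliseconds from each stage is an active engineering battle.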
As we approach the end of 2024, Voice AI is at an exciting inflection point, driven by fundamental breakthroughs like the advent of speech-to-speech models.
Today, few areas in AI are advancing faster in both technology and business than Voice AI. Expect the state of the art in Voice AI to leap forward in 2025.
07. Autonomous AI systems will make significant progress
For decades, the idea of recursively self-improving AI has been a recurring topic in the artificial intelligence community.
For example, as early as 1965, I. J. Good, a close collaborator of Alan Turing, wrote: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever.”
“Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.”
It’s a clever notion that AI can invent better AI. But even today, it retains the air of science fiction.
However, while the concept has yet to gain widespread acceptance, it is actually starting to become more real. Researchers at the forefront of AI science have begun to make tangible progress in building AI systems that can themselves build better AI systems.
We predict that this direction of research will become mainstream next year.
The most notable public example of research along these lines so far is Sakana’s “AI Scientist.”
Released in August, “AI Scientist” convincingly demonstrates that an AI system can indeed conduct AI research completely autonomously.
Sakana’s “AI Scientist” performs the entire lifecycle of AI research itself: reading the existing literature, generating new research ideas, designing experiments to test those ideas, executing those experiments, writing research papers to report its findings, and then performing peer review of its own work.
This work is done entirely autonomously by AI, with no human intervention. You can read some of the research papers written by AI scientists online.
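The loop below sketches that lifecycle as code. It is not Sakana’s implementation; each stub stands in for what, in the real system, is carried out by LLM calls plus actual code execution:

```python
# A sketch of the autonomous research loop described above. Every function
# body here is a placeholder; in the real system each step is performed by
# a language model (plus real experiment execution for run_experiment).

def read_literature(topic):            return f"notes on {topic}"
def propose_ideas(context):            return ["idea-1", "idea-2"]
def design_experiment(idea):           return {"idea": idea, "plan": "placeholder"}
def run_experiment(design):            return {"metric": 0.0}
def write_paper(idea, design, result): return f"paper about {idea}"
def automated_review(paper):           return {"score": 5, "accept": False}

def ai_scientist(topic: str):
    context = read_literature(topic)                  # survey existing work
    for idea in propose_ideas(context):               # generate candidate ideas
        design = design_experiment(idea)              # plan how to test the idea
        result = run_experiment(design)               # actually execute the code
        paper = write_paper(idea, design, result)     # draft a full write-up
        review = automated_review(paper)              # LLM-based peer review
        yield paper, review

for paper, review in ai_scientist("scaling laws for world models"):
    print(paper, review)
```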
OpenAI, Anthropic, and other research labs are reportedly investing resources in the idea of “automated AI researchers,” though nothing has been publicly acknowledged yet.
As more people realize that automation of AI research is, in fact, becoming a real possibility, expect to see more discussion, progress, and entrepreneurial activity in this area in 2025.
The most meaningful milestone, though, will be the first time a research paper written entirely by an AI agent is accepted at a top AI conference. If the paper is blind reviewed, the conference reviewers will not know it was written by an AI until after it has been accepted.
Don’t be surprised if AI-written research is accepted at NeurIPS, CVPR, or ICML next year. This will be a fascinating, controversial, and historic moment for the field of AI.
08. OpenAI and other industry giants shift their strategic focus to building applications
Building cutting-edge models is a hard job.
Its capital intensity is staggering. Frontier model labs burn through enormous amounts of cash. Just a few months ago, OpenAI raised a record $6.6 billion in funding, and it will likely need to raise even more before long. Anthropic, xAI, and the others are in a similar position.
Switching costs and customer loyalty are low. AI applications are often built with model agnosticism in mind, with models from different vendors being able to switch seamlessly based on evolving cost and performance comparisons.
With the emergence of state-of-the-art open models like Meta’s Llama and Alibaba’s Qwen, the threat of technology commoditization looms. AI leaders like OpenAI and Anthropic can’t and won’t stop investing in building cutting-edge models.
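To make “model agnosticism” concrete, here is a minimal sketch of the pattern many AI applications use: the application codes against a single interface, and the concrete provider is chosen by configuration, so switching vendors is a one-line change. The provider names and adapter bodies are placeholders, not real SDK calls:

```python
# Minimal sketch of a model-agnostic application layer. The app depends only
# on the ChatModel interface; which vendor sits behind it is a config choice,
# which is exactly why switching costs between model providers stay low.

from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    def complete(self, prompt: str) -> str:
        return "stub: would call provider A's API here"

class ProviderB:
    def complete(self, prompt: str) -> str:
        return "stub: would call provider B's API here"

REGISTRY = {"provider-a": ProviderA, "provider-b": ProviderB}

def get_model(name: str) -> ChatModel:
    return REGISTRY[name]()        # swap vendors by changing one config value

model = get_model("provider-a")    # tomorrow: get_model("provider-b")
print(model.complete("Summarize this contract."))
```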
But next year, expect the frontier labs to push aggressively to launch more of their own applications and products, in order to develop more profitable, differentiated, and stickier lines of business.
Of course, the frontier labs already have one enormously successful first-party application: ChatGPT.
What other types of first-party applications can we expect to see from AI labs in the new year? An obvious answer is more complex and feature-rich search applications. OpenAI’s SearchGPT foreshadows this.
Coding is another obvious category. Again, initial productization work has already begun, with OpenAI’s Canvas product debuting in October.
Will OpenAI or Anthropic have an enterprise search product in 2025? Or a customer service product, a legal AI, or a sales AI product?
On the consumer side, we can imagine a “personal assistant” web agent product, or a travel planning app, or an app that generates music.
What’s most fascinating about watching the frontier labs move toward the application layer is that this move will put them in direct competition with many of their most important customers:
Perplexity in search, Cursor in coding, Sierra in customer service, Harvey in legal AI, Clay in sales, and so on.
09. Klarna will go public in 2025, but its claims about AI are exaggerated
Klarna is a Sweden-based "buy now, pay later" service provider that has raised nearly $5 billion in venture capital since its founding in 2005.
Perhaps no company can speak more grandiosely about its use of artificial intelligence than Klarna.
Just a few days ago, Klarna CEO Sebastian Siemiatkowski told Bloomberg that the company has completely stopped hiring human employees and instead relies on generative AI to do the work.
As Siemiatkowski puts it: “I think AI can already do all the work that we humans do.”
Similarly, Klarna announced earlier this year that it had launched an AI customer service platform that has fully automated the work of 700 human customer service agents.
The company also claimed that it had stopped using enterprise software products such as Salesforce and Workday because it could simply replace them with AI.
To put it bluntly, these claims are not credible. They reflect a lack of understanding of the capabilities and shortcomings of today’s AI systems.
Claiming that an end-to-end AI agent can replace any given human employee, in any function across an organization, is not credible. It amounts to claiming to have solved general human-level AI.
Today, leading AI startups are working at the forefront of the field to build agent systems to automate specific, narrowly defined, highly structured enterprise workflows, for example, a subset of a sales development representative or customer service agent’s activities.
Even in these narrowly defined cases, these agent systems don’t yet work completely reliably, though in some cases they are starting to work well enough to see early commercial adoption.
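The contrast is easier to see in code. The sketch below is an invented example of the kind of narrow, structured workflow that ships today: a fixed set of supported intents, bounded actions, and an escalation path to a human for everything else, which is very different from replacing an employee end to end:

```python
# Illustrative contrast with "replace an employee end to end": what works in
# production today looks more like this - a narrow, predefined workflow with
# explicit steps and an escalation path. Intents and rules are invented here.

SUPPORTED_INTENTS = {"where_is_my_order", "start_return"}

def classify_intent(message: str) -> str:
    """In a real system this is an LLM call; here it's a keyword stub."""
    if "return" in message.lower():
        return "start_return"
    if "order" in message.lower():
        return "where_is_my_order"
    return "other"

def handle_ticket(message: str) -> str:
    intent = classify_intent(message)
    if intent not in SUPPORTED_INTENTS:
        return "escalate_to_human"          # anything off-script goes to a person
    if intent == "where_is_my_order":
        return "lookup_order_status"        # bounded step backed by a known tool
    return "send_return_label"              # another predefined, bounded action

print(handle_ticket("Where is my order #123?"))
print(handle_ticket("Your terms of service violate my rights, I want a lawyer"))
```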
Why is Klarna exaggerating the value of AI?
The answer is simple. The company plans to go public in the first half of 2025. The key to a successful IPO is having a compelling AI story.
Klarna, still an unprofitable business that lost $241 million last year, may be hoping that its AI story will convince public market investors of its ability to significantly reduce costs and achieve lasting profitability.
There is no doubt that every business in the world, including Klarna, will enjoy the huge productivity gains that AI will bring in the coming years. But there are still many thorny technical, product, and organizational challenges to be solved before AI agents completely replace humans in the workforce.
Exaggerated claims like Klarna’s do a disservice to the field of AI and to the hard-won progress that AI technologists and entrepreneurs have made toward building working AI agents.
As Klarna prepares for its public offering in 2025, expect these claims, which have so far largely escaped challenge, to face greater scrutiny and public skepticism. Don’t be surprised if some of the company’s descriptions of its AI deployments turn out to be overstated.
10. The First Real AI Safety Incident Will Happen
In recent years, as artificial intelligence has become more powerful, concerns have grown that AI systems might begin to act in ways inconsistent with human interests, and that humans might lose control of these systems.
Imagine, for example, that an AI system learned to deceive or manipulate humans in order to achieve its own goals, even if those goals would cause harm to humans. These concerns are often categorized as “AI safety” issues.
In recent years, AI safety has moved from a fringe, quasi-sci-fi topic to a mainstream area of activity.
Today, every major AI player, from Google to Microsoft to OpenAI, has devoted significant resources to AI safety efforts. AI icons like Geoff Hinton, Yoshua Bengio, and Elon Musk have also begun to speak out about AI safety risks.
However, so far, the issue of AI safety remains entirely theoretical. No real AI safety incident has ever occurred in the real world (at least none that has been publicly reported).
2025 will be the year that changes this situation. What will the first AI safety incident look like?
To be clear, it will not involve a Terminator-style killer robot, and it will most likely not cause any harm to humans.
Perhaps an AI model will try to secretly create a copy of itself on another server in order to preserve itself (known as self-exfiltration).
Or perhaps an AI model will conclude that, in order to best advance the goals it has been given, it needs to conceal its true capabilities from humans and deliberately underperform in evaluations to avoid closer scrutiny.
These examples are not far-fetched. Important experiments published earlier this month by Apollo Research showed that today’s cutting-edge models are capable of this kind of deception, given specific prompts.
Similarly, recent research from Anthropic has shown that LLMs have a disturbing capacity for “alignment faking.”
We expect that this first-of-its-kind AI safety incident will be discovered and neutralized before any real harm is done. But it will be an eye-opening moment for the AI community and society at large.
It will make one thing clear: before humanity faces an existential threat from omnipotent AI, we need to accept a more mundane reality: we now share our world with another form of intelligence that can sometimes be willful, unpredictable, and deceptive.