San Francisco Streets Become A Stage For AI Opposition
The quiet of San Francisco’s tech corridors was broken on 22 March 2026 as nearly 200 activists marched past the headquarters of the world’s most powerful artificial intelligence labs.
Starting at Anthropic before moving to OpenAI and xAI, the "Stop the AI Race" demonstration presented a unified front against the rapid expansion of frontier models.
Organized by Michael Trazzi, the group is calling for a global, conditional pause on building increasingly advanced systems.
The goal is to shift the industry’s focus from raw power to safety, provided that all major players agree to stop simultaneously.
Why Are Activists Demanding A Conditional Pause
The protesters, including researchers and academics from groups like PauseAI and the Machine Intelligence Research Institute, argue that current development is a "suicide race."
They fear that frontier AI could soon automate its own research and self-improve beyond human control.
Trazzi explained,
“The reason we are pausing AI is because we believe that building AI that can automate AI research and can self-improve is a danger to the human race, especially human extinction.”
He noted that even lab CEOs have acknowledged these risks, yet the competitive nature of the industry forces them to keep building.
Can Global Competitors Actually Agree To Stop
A central theme of the protest was the need for international treaties, particularly between the U.S. and China.
Trazzi believes that if these two superpowers reached an agreement, the focus could shift toward beneficial applications.
He said,
“If China and the U.S. agreed to stop building more dangerous models, they could focus on making the systems better for us, like medical AI. Everyone would be better off.”
To ensure such a pause is verifiable, the group suggests monitoring and limiting the amount of computing power used to train new models, effectively capping the growth of potential threats.
How Does The New Federal Framework Change The Game
The demonstration coincides with a new AI legislative framework from the Trump administration designed to keep the U.S. ahead of global rivals.
This federal approach emphasizes "winning the AI race" and proposes liability protections for AI companies, similar to Section 230 for social media.
This move has sparked concern among experts like Ahmed Banafa of San Jose State University, who warned of repeating history.
Banafa said,
“It is the closest thing to Section 230 that protected social media for years. Basically you can’t sue someone for posting something there.”
He cautioned that we are now dealing with the consequences of early social media regulation because "there was no accountability for the platforms."
Will Local Safety Laws Clash With Federal Policy
While the federal government pushes for deregulation to foster innovation, California officials are seeking tighter oversight.
State Senator Scott Wiener has been a vocal critic of the administration’s hands-off approach.
Wiener stated,
“He’s not interested in having a smart public policy approach to AI where we promote and foster innovation while we assess and try to get ahead of some of the risks.”
He continues to advocate for state-level requirements that would force companies to publish their safety protocols.
This tension highlights a growing divide between a city that is physically expanding to house 50,000 AI workers by 2030 and a local government trying to prevent that growth from becoming a liability.
Can Whistleblowers Impact The Future From Within
Despite the large-scale investment, including Anthropic’s recent 100,000 square foot lease and OpenAI’s plans to double its workforce, protesters believe the real power lies with the people writing the code.
Trazzi intends to continue holding demonstrations where employees work, in order to encourage internal dialogue.
He said,
“We want to show up where the employees are. We want to talk to them, and we want them to talk to their leadership and have things moving from inside.”
By engaging directly with those building the systems, the movement hopes to trigger a shift in priorities that corporate leadership has so far ignored.