Binance Founder Changpeng Zhao Denies Dating Rumours With Sydney Sweeney
Binance founder Changpeng Zhao has publicly dismissed claims linking him romantically to actress Sydney Sweeney, stating on X that he has never met her and labelling the speculation as “fake news.”
Addressing the rumours that circulated online, Zhao wrote,
“Poor Sydney Sweeney—I’ve never met her. I don’t socialize much. It’s becoming increasingly difficult to distinguish which news is unreliable, but if you can do that, you will become wealthier.”
The clarification follows Zhao's recent departure as Binance CEO. He emphasised that his role in the cryptocurrency world remains unaffected, and that market activity and asset values showed no disruption despite the misinformation.
How Fake News Can Be Identified Through Language
The case of Zhao and Sweeney highlights the ongoing challenge of navigating misinformation in a digital age.
Linguistic research has shown that fake news often contains specific traits that can distinguish it from genuine reporting.
At the University of Oslo in Norway, linguist Silje Susanne Alvestad and her colleagues have analysed English, Russian, and Norwegian texts through the Fakespeak project to uncover these patterns.
Their findings show that fabricated news often uses a more informal and conversational style, shorter words, and emphatic expressions such as “truly” or “really.”
The use of tense can also be telling: fabricated content tends to be written in the present tense, whereas genuine news often uses the past tense.
Fabricated articles also tend to be more categorical, with authors signalling “epistemic certainty” through words like “obviously” or “evidently.”
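The stylistic cues above lend themselves to simple counting. The sketch below is purely illustrative: the marker word lists and the average-word-length measure are assumptions chosen to mirror the traits the article mentions, not the Fakespeak project's actual feature sets or methodology.

```python
import re

# Hypothetical marker lists for illustration only; the Fakespeak project's
# real feature sets are not published in this article.
EMPHATIC = {"truly", "really"}
CERTAINTY = {"obviously", "evidently"}

def style_markers(text: str) -> dict:
    """Count simple stylistic cues associated here with fabricated news:
    emphatic expressions, epistemic-certainty words, and short words
    (approximated by average word length)."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return {"emphatic": 0, "certainty": 0, "avg_word_len": 0.0}
    return {
        "emphatic": sum(w in EMPHATIC for w in words),
        "certainty": sum(w in CERTAINTY for w in words),
        "avg_word_len": round(sum(map(len, words)) / len(words), 2),
    }

print(style_markers("Obviously this is truly a really big deal."))
# → {'emphatic': 2, 'certainty': 1, 'avg_word_len': 4.25}
```

In practice such counts would only be weak signals feeding a statistical classifier, not a verdict on any single text.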
AI Accelerates the Spread of Misinformation
The rise of artificial intelligence has transformed the landscape of disinformation.
Alvestad’s NxtGenFake project, running until 2029, studies AI-generated disinformation, showing that these texts often mix true and false information, sharpen details, misplace context, or overlap with propaganda.
AI-generated propaganda also differs from human-authored texts.
It often relies on generic “Appeal to Authority” references, such as “according to researchers” or “experts believe,” and concludes with “Appeal to Values” statements urging action for growth, fairness, or trust.
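The rhetorical patterns described above can likewise be flagged with basic phrase matching. This is a minimal sketch under stated assumptions: the phrase lists are hypothetical examples of the generic wording the article quotes, not a classifier from the NxtGenFake project.

```python
import re

# Hypothetical phrase patterns mirroring the article's examples of
# "Appeal to Authority" and "Appeal to Values" wording.
AUTHORITY_PHRASES = [r"according to researchers", r"experts believe"]
VALUES_PHRASES = [r"\b(growth|fairness|trust)\b"]

def rhetorical_flags(text: str) -> dict:
    """Flag whether a text contains generic authority or values appeals."""
    lowered = text.lower()
    return {
        "appeal_to_authority": any(re.search(p, lowered) for p in AUTHORITY_PHRASES),
        "appeal_to_values": any(re.search(p, lowered) for p in VALUES_PHRASES),
    }

print(rhetorical_flags("Experts believe we must act now for fairness."))
# → {'appeal_to_authority': True, 'appeal_to_values': True}
```

A human-written report would more often name a specific source, so the absence of an attributed expert alongside these generic phrases is the tell, not the phrases alone.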
In studies testing American readers’ reactions, AI-generated disinformation was rated as more credible and informative than human-written disinformation, although it scored lower on emotional appeal.
Alvestad noted,
“I was personally a little surprised that the AI-generated texts did not score highly on emotional appeal. Instead, they were perceived as both more informative and more credible than texts written by humans.”
Why Misinformation Spreads So Quickly
Recent events, such as the terrorist attack on Bondi Beach in Sydney, have shown how rapidly AI-powered misinformation can circulate.
Deepfake images and manipulated videos appeared online shortly after the attack, misleading the public and challenging authentic narratives.
Experiments like the social media wargame Capture the Narrative have demonstrated how AI-driven content can influence public opinion and even affect political outcomes.
These developments highlight the importance of digital literacy and careful verification to navigate the flood of online information effectively.
The combination of human and AI-driven disinformation makes distinguishing fact from fiction increasingly complex, leaving audiences reliant on both linguistic analysis and critical evaluation to separate truth from fabrication.