The surge in artificial intelligence (AI)-generated guidebooks for sale on Amazon has alarmed experts, who warn of potentially dire consequences. With offerings spanning cookbooks to travel guides, human authors are urging readers not to rely uncritically on AI.
The New York Mycological Society took to X (formerly Twitter) to warn about the hazards of dubious foraging books suspected to be the output of generative AI tools such as ChatGPT.
Sigrid Jakob, president of the New York Mycological Society, pointed out: "There are hundreds of poisonous fungi in North America and several that are deadly. They can look similar to popular edible species. A poor description in a book can mislead someone to eat a poisonous mushroom."
A search on Amazon turned up numerous dubious titles, including "The Ultimate Mushroom Books Field Guide of the Southwest" and "Wild Mushroom Cookbook For Beginner" [sic] — both of which have since been removed.
These titles, most likely credited to non-existent authors, point to the rise of AI-generated content. The books follow familiar patterns, opening with brief fictional anecdotes about amateur enthusiasts that ring false.
Closer examination of the content reveals numerous inaccuracies and prose that bears the hallmarks of AI-generated text rather than genuine mycological expertise.
Most troubling, these books were marketed to inexperienced foragers, who are least able to distinguish unreliable AI-generated advice from dependable, genuine sources of information.
Jakob added: "Human-written books can take years to research and write."
How Deadly Can It Be?
Caution in relying on AI is essential, given its capacity to spread misinformation, or even dangerous advice, without vigilant oversight. Recent findings point to a disconcerting trend: people are more likely to believe disinformation generated by AI than falsehoods written by humans.
The problem extends beyond questionable foraging guides. A recent case involved an AI application that gave users hazardous recipe recommendations.
In New Zealand, the supermarket chain Pak 'n' Save launched the "Savey Meal-Bot," an AI-powered meal-planning app that suggests recipes based on the ingredients users enter. As a prank, users fed the app hazardous household items and received recommendations for toxic concoctions such as "Aromatic Water Mix" and "Methanol Bliss."
While the app has since been modified to block unsafe suggestions, the episode underscores the hazards of deploying AI without adequate safeguards.
Our vulnerability to AI-generated disinformation should not come as a surprise. Large language models (LLMs) are engineered to generate the most probable, coherent continuation of a prompt, a capability learned through training on vast datasets.
This fluency leads people to grant AI credibility, because its outputs closely resemble what we expect a reliable source to produce. While such algorithms can certainly augment human capabilities across many domains, surrendering our judgment entirely to machines is a precarious proposition for society at large.
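The dynamic is easy to demonstrate in miniature. The toy next-word model below is purely illustrative (real LLMs use large neural networks, not bigram counts, and the corpus here is invented for the example), but it shows the same principle: each word is chosen for statistical plausibility given the training data, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

# A tiny invented "training corpus". Two sentences call the mushroom
# safe; only one calls it deadly.
corpus = (
    "this mushroom is safe to eat . "
    "this mushroom is safe to cook . "
    "this mushroom is deadly ."
).split()

# Count which word follows which in the corpus.
follow = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follow[a][b] += 1

def generate(word, steps=6):
    """Greedy decoding: always pick the most frequent continuation."""
    out = [word]
    for _ in range(steps):
        nxt = follow[out[-1]]
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

print(generate("this"))  # -> "this mushroom is safe to eat ."
```

The model confidently declares the mushroom "safe" because that phrasing dominates its tiny corpus; the lone "deadly" sentence is simply outvoted. Scaled up enormously, the same statistics-over-truth dynamic is how a fluent model can produce confident and dangerously wrong foraging advice.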
In an age of ever-evolving technology, we must remain aware of the pitfalls of entrusting AI with the role of decision-maker.