How real is the threat of AI washing?

Companies are leveraging AI for personalized health solutions, but concerns about AI-washing raise questions about transparency and regulation. (Getty Images/Userba011d64_201)

Artificial intelligence (AI) is transforming nutrition, with new platforms offering personalized health solutions. But as apps and websites claim AI-powered advice, how real are these claims—and what risks does AI-washing pose?

AI-washing is a deceptive marketing tactic in which companies overstate the role AI plays in their products or services.

This poses a hazard for several reasons, explained food risk scientist and food regulations expert Luca Bucchini.

Regulators and consumers alike should approach AI-powered nutrition solutions with caution, he told NutraIngredients.

AI’s growing use in the nutrition space

AI is increasingly being leveraged in the nutrition space to improve health outcomes. Some AI-powered apps and platforms are now able to provide individualized nutrition advice based on factors like genetic makeup (Zoe), metabolic health (Lumen), digestive health (Foodmarble), and activity levels (MyFitnessPal).

Machine learning algorithms can also now analyze massive datasets from clinical studies, food diaries and health metrics to identify patterns in how certain foods affect specific populations. AI can then use these patterns to suggest more effective and science-backed dietary interventions for individuals, which is what the brand InsideTracker does, offering personalized advice based on biological data tests.

AI image recognition technology has also evolved to the point where it can identify food items from photos. Apps such as Yuka use related technology to let users scan product barcodes, from which the app estimates a nutritional breakdown.

The technology is also being used to predict an individual's nutritional needs based on their health status, such as nutrient deficiencies in infants, as the app Alba Health does. Brands are also using it to flag potential illness, as with Withings' Health Mate, which is designed to "perform a comprehensive body checkup with ease."

Furthermore, some AI-driven virtual assistants now offer real-time nutrition advice, reminding users about their diet, answering food-related queries and even helping people with food allergies or intolerances make safe choices.

So what’s the problem?

Established brands in the industry recognize the transformative potential of AI and are actively investing in it to enhance efficiency, reduce costs and improve product quality. Despite their awareness and investment, however, many companies face significant challenges in successfully implementing AI, Bucchini said.

If brands claim their products or services are powered by AI without actually using meaningful AI, this can mislead consumers into thinking a product is more advanced, accurate or scientifically backed than it actually is.

“It is my impression that often claims of AI-powered or similar are exaggerated,” Bucchini said.

Additionally, if AI generates complex content that humans cannot fully review, it may fail to comply with regulations, which could spread misinformation and erode trust in AI-generated content.

However, being overcautious could make AI less useful: a system that applies excessive restrictions may refuse to generate health and dietary advice even when the information is valid and compliant, Bucchini explained.

“There is a duty to assess the risk of AI and, after this assessment, companies may need to adjust their approach,” he said.

“For example, if you plan to use a chatbot to guide consumers in choosing supplements, some obligations arise, including transparency. Some companies are tempted to give consumers the impression they are interacting with humans when they are not; this will no longer be possible.”

The EU AI Act

The EU AI Act is a European Union regulation governing AI technologies, intended to ensure they are used in a manner that is ethical, safe and transparent.

The purpose of the act is to ensure that the benefits of AI are maximized while minimizing the risks and to establish Europe as a global leader in regulating AI technologies.

As Bucchini explained, the EU AI Act regulates the use of AI across all industries, including the food and nutrition sector.

Most AI tools used in this industry are low-risk and do not require specific authorization; however, the act establishes rules on transparency, risk classification and certain prohibitions, he said.

For instance, it requires disclosure when virtual influencers resemble real people, a practice that is becoming more common, particularly in supplement promotion.

AI-powered medical devices used by consumers to guide food or supplement choices are also regulated.

“Overall, food and nutrition businesses must ensure their AI tools comply with the AI Act’s guidelines,” Bucchini said.

Advice to brands

Regulators are still in the early stages of assessing AI systems in the nutrition space for compliance with the Act.

“The act is not overcomplicated, and there is guidance, although some issues are still pending clarification,” he said.

“If a brand or company intends to deploy AI beyond using ChatGPT or Gemini internally, they should take a close look at the act, whether by training staff or through professionals.”

Furthermore, both AI developers and deployers must understand their obligations, especially when using AI developed outside the EU, which can introduce additional challenges, he explained.

Companies using AI-generated health or nutrition advice without proper oversight face legal risks, as the AI Act includes enforcement mechanisms and penalties.

“In general, it’s early days, so skepticism is a wise strategy,” Bucchini said.