The Consumer Electronics Show (CES) 2026 showcased a surge of AI-powered health gadgets making bold promises, yet a growing chorus of experts voiced significant concerns regarding their accuracy, data privacy, and the implications of relaxed federal oversight, as reported by Fast Company.

Products ranged from smart scales scanning feet for heart health to egg-shaped hormone trackers using AI for conception timing. These innovations, while seemingly beneficial, sparked debates among tech and health professionals about their reliability and the security of sensitive personal data in an evolving regulatory landscape.

Adding to the complexity, the Food and Drug Administration (FDA) announced during the Las Vegas event that it would relax regulations on “low-risk” general wellness products, including heart monitors. This move aligns with broader administration efforts to remove barriers for AI innovation, following the repeal of previous executive orders and a new Department of Health and Human Services strategy to expand AI use.

Accuracy, bias, and the limits of AI in health

While AI technologies offer considerable benefits within the over $4.3 trillion healthcare industry, experts highlight crucial limitations. Marschall Runge, a professor of medical science at the University of Michigan, notes AI’s proficiency in analyzing medical imaging and streamlining doctors’ schedules.

However, Runge also warns that these systems can perpetuate biases and “hallucinate,” presenting incorrect information as fact. Cindy Cohn, executive director of the digital rights group Electronic Frontier Foundation (EFF), cautions against treating consumer gadgets as substitutes for professional medical care.

“I would urge people not to think that the technology is the same as a well-resourced, thoughtful, research-driven medical professional,” Cohn stated. Her point underscores that these gadgets should serve as tools, not as substitutes for diagnosis or treatment.

Navigating data privacy and regulatory gaps

A significant concern revolves around data privacy. Information collected by consumer health devices often falls outside the protective umbrella of the Health Insurance Portability and Accountability Act (HIPAA), leaving users vulnerable. Cohn points out that companies could potentially use this data to train their AI models or even sell it to other businesses.

Determining where personal health information ends up can be challenging. “You have to dig down through the fine print to try to figure that out, and I just don’t think that’s fair or right for the people who might rely on it,” Cohn explained. This opacity raises serious questions about user consent and control over health data.

Conversely, product creators argue their innovations fill critical healthcare gaps, particularly in rural areas or under-researched fields like women’s health. Sylvia Kang, founder of Mira, created her hormone tracker to address a lack of hormonal health knowledge, asserting her company protects customer privacy and stores data securely in the cloud.

Other innovations, like the Peri device for perimenopause monitoring and the 0xmd AI chatbot for medical information access, aim to improve accessibility. Allen Au, 0xmd’s founder, believes these tools provide cost-effective alternatives and second opinions, though he stresses they won’t replace doctors.

The tension between groundbreaking innovation and the imperative for user safety and data security dominated discussions around AI-powered health gadgets at CES 2026. While these devices promise to democratize health insights and address care deficiencies, users must remain vigilant about their accuracy and the privacy implications.

The ongoing debate highlights the need for a balanced approach, in which technological advancement is paired with robust ethical safeguards and clear regulatory frameworks. As Cohn put it, “People need to remember that these are just tools; they’re not oracles who are delivering truths.”