OpenAI launches ChatGPT Health, encouraging users to connect their medical records

OpenAI has been dropping hints this week about AI’s role as a “healthcare ally” — and today, the company is announcing a product to go along with that idea: ChatGPT Health.

ChatGPT Health is a sandboxed tab within ChatGPT that’s designed for users to ask health-related questions in a more secure and personalized environment, with a chat history and memory feature kept separate from the rest of ChatGPT. The company is encouraging users to connect their personal medical records and wellness apps, such as Apple Health, Peloton, MyFitnessPal, Weight Watchers, and Function, “to get more personalized, grounded responses to their questions.” It suggests connecting medical records so that ChatGPT can analyze lab results, visit summaries, and clinical history; MyFitnessPal and Weight Watchers for food guidance; Apple Health for health and fitness data, including movement, sleep, and activity patterns; and Function for insights into lab tests.

“ChatGPT can help you understand recent test results, prepare for appointments with your doctor, get advice on how to approach your diet and workout routine, or understand the tradeoffs of different insurance options based on your healthcare patterns,” OpenAI writes in a blog post. On the medical records front, OpenAI says it has partnered with b.well, which works with about 2.2 million providers, to supply the back-end integration that lets users upload their medical records.

For now, ChatGPT Health requires users to join a waitlist to request access, as it’s starting with a beta group of early users, but the product will roll out gradually to all users regardless of subscription tier.

In the blog post, OpenAI wrote that, based on its “de-identified analysis of conversations,” more than 230 million people around the world already ask ChatGPT questions related to health and wellness each week. OpenAI also said that over the past two years, it has worked with more than 260 physicians, who have provided feedback on model outputs more than 600,000 times across 30 areas of focus, to help shape the product’s responses.

The company makes sure to mention in the blog post that ChatGPT Health is “not intended for diagnosis or treatment,” but it can’t fully control how people end up using AI when they leave the chat. By the company’s own admission, users in underserved rural communities send nearly 600,000 healthcare-related messages weekly, on average, and seven in 10 healthcare conversations in ChatGPT “happen outside of normal clinic hours.” In August, physicians published a report on a man who was hospitalized for weeks with an 18th-century medical condition after following ChatGPT’s alleged advice to replace the salt in his diet with sodium bromide. Google’s AI Overviews made headlines for weeks after launch over dangerous advice, such as putting glue on pizza, and a recent investigation by The Guardian found that the dangerous health advice has continued, including false advice about liver function tests, women’s cancer tests, and recommended diets for people with pancreatic cancer.

One part of health that OpenAI seemed to carefully avoid mentioning in its blog post: mental health. There are a number of examples of adults and minors dying by suicide after confiding in ChatGPT, and in the blog post, OpenAI stuck to a vague mention that users can customize instructions in the Health product “to avoid mentioning sensitive topics.” When asked during a Wednesday briefing with reporters whether ChatGPT Health would also summarize mental health visits and provide advice in that realm, OpenAI’s CEO of Applications Fidji Simo said, “Mental health is certainly part of health in general, and we see a lot of people turning to ChatGPT for mental health conversations.” She added that the new product “can handle any part of your health including mental health … We are very focused on making sure that in situations of distress we respond accordingly and we direct toward health professionals,” as well as loved ones or other resources.

It’s also possible that the product could worsen health anxiety conditions, such as hypochondria. When asked whether OpenAI had introduced any safeguards to help prevent people with such conditions from spiraling while using ChatGPT Health, Simo said, “We have done a lot of work on tuning the model to make sure that we are informative without ever being alarmist and that if there is action to be taken we direct to the healthcare system.”

When it comes to security concerns, OpenAI says that ChatGPT Health “operates as a separate space with enhanced privacy to protect sensitive data” and that the company introduced several layers of purpose-built encryption (but not end-to-end encryption), according to the briefing. Conversations within the Health product aren’t used to train its foundation models by default, and if a user begins a health-related conversation in regular ChatGPT, the chatbot will suggest moving it into the Health product for “additional protections,” per the blog post. But OpenAI has had security lapses in the past, most notably a March 2023 bug that allowed some users to see other users’ chat titles, initial messages, names, email addresses, and payment information. And in the event of a court order, OpenAI would still need to provide access to the data “where required through valid legal processes or in an emergency situation,” OpenAI head of health Nate Gross said during the briefing.

When asked if ChatGPT Health is compliant with the Health Insurance Portability and Accountability Act (HIPAA), Gross said that “in the case of consumer products, HIPAA doesn’t apply in this setting — it applies toward clinical or professional healthcare settings.”
