The dangerous trend of patients relying on ChatGPT for self-diagnosis is worrisome
Raising a red flag over the dangerous trend of patients using AI chatbots, our Community Health expert *Dr. Naresh Purohit (Executive Member, Federation of Hospital Administrators) warns that replacing a family doctor with an AI tool can be harmful!
Bhopal/New Delhi: The dangerous trend of many patients relying on AI (Artificial Intelligence) chatbots like ChatGPT for self-diagnosis before visiting doctors is worrisome.
AI chatbots like ChatGPT and Gemini (Google) are developed by American companies and have been fed data largely drawn from American populations. When an Indian asks an AI about his health problem, the software will answer based on American data; it may suggest medicines available in the USA but not in India. AI gives suggestions based on the data fed into it and on its algorithms. If there is no data on, for example, Indian health issues, AI will not be able to give correct answers to Indian patients.
AI is a powerful tool in healthcare, but it should never replace human doctors. While it excels in medical imaging, drug discovery, and administrative support, it fails at accurate self-diagnosis, mental health support, and complex medical decision-making. Patients should use AI cautiously and always consult a qualified doctor for serious health concerns. AI may assist in medicine, but human expertise, empathy, and judgment remain irreplaceable in ensuring safe and ethical healthcare.
AI is rapidly transforming healthcare, from diagnosing diseases to recommending treatments. Chatbots, symptom checkers, and AI-driven virtual assistants have made health information more accessible than ever. However, as more people turn to AI for medical advice, concerns about misdiagnosis, misinformation, and the risks of self-diagnosis are growing.
Patients should go to doctors, particularly family doctors, who have known their patients' histories for decades and can offer the best medical advice. Doctors are also the experts who can carry out physical examinations, take a history, and analyse a patient's health problems.
According to a recently published study in the British Medical Journal, patients using ChatGPT may not get the right diagnosis or treatment guidance for their medical condition, leading to dangerous health decisions.
Unlike human doctors, AI lacks access to a patient’s full medical history, lifestyle, or genetic factors. This means it cannot make fully informed decisions about a person’s health.
It is to be noted that unlike licensed medical professionals, AI chatbots are not legally accountable for misdiagnosis or harm caused by their recommendations.
AI has no empathy or morality of its own; it works on the data humans feed into it. If someone tells an AI that he or she intends to commit suicide, the AI may give methods to do so, and this has actually happened, with people attempting suicide based on AI suggestions. AI is an unreliable tool that imitates a human but is not human. People need to see doctors when they have problems.
Different people have different stresses and different problems, and hence different symptoms. An AI chatbot can give generalised answers, but it cannot examine patients and identify the exact problem. Nowadays, people use AI chatbots to express themselves, and that is fine, but they should not be used as doctors.
*Dr. Naresh Purohit, MD, DNB, DIH, MHA, MRCP (UK), is an Epidemiologist and Advisor to the National Communicable Disease Control Program of the Govt. of India, Madhya Pradesh, and several state health organizations. He is the Principal Investigator of the Association of Studies in Behavioural Science, and also Advisor to the National Mental Health Program.