A College Student’s Perspective on AI in Research and Healthcare
Artificial intelligence (AI) is having an undeniable impact on society. When ChatGPT was released in 2022, 1 million people signed up to use it within just 5 days. That number has continued to grow, and the site is now estimated to have over 120 million daily users. Other AI tools, such as Microsoft Copilot, Google Gemini, and Meta AI, also see millions of users daily. AI is being designed to assist in all aspects of life. For example, AI software that performs natural language processing (NLP) is being used to filter spam and to analyze the emotions and attitudes expressed by social media users. Additionally, machine learning and computer vision are being used to detect fraud, develop autonomous vehicles, detect objects, recognize faces, and even make medical diagnoses.
It shouldn’t be surprising that AI has made its way into healthcare; it’s part of everything else, so why not? But medicine is complicated: it’s personal, sensitive, and interdisciplinary. AI use in healthcare needs to be approached thoughtfully, because while it may seem purely beneficial, the reality is more complicated. Rapid integration invites mistakes. AI is trained on existing data, and that data can reflect human biases, which the models then reproduce and sometimes amplify. AI systems also make errors of their own, and if humans over-rely on them, those errors may go uncaught. These complications make implementing AI in healthcare far less straightforward than it might appear.
On June 5, 2025, NewYork-Presbyterian held its second annual Quality in Care Symposium, which highlights current advances in medical care. The 237 posters presented covered a wide variety of issues, from errors in death certificates to the implementation of in-office Botox injections for overactive bladder. The keynote speaker was David W. Bates, MD, MSc, a professor at Harvard Medical School and the Harvard T.H. Chan School of Public Health. Dr. Bates spoke about the potential of AI to improve the safety and quality of healthcare. I found the presentation extremely interesting, but what I found especially fascinating were the questions from an audience full of clinicians, students, and trainees. While I expected AI would be met with unconditional excitement, the audience’s thoughtful questions reflected a conviction that more safeguards are needed to prevent errors in diagnosis and treatment, and that more work needs to be done to educate patients about the role of AI in their care.
Max Flanagan, a Research Coordinator in the Columbia Research Unit in the Division of Infectious Diseases, presented a poster titled ‘Healthcare Technology Use and Attitudes Towards AI-Assisted Diagnosis Among Individuals With and Without Long Covid in New York City’. The analysis was part of the larger CU-COMMITS cohort study led by Delivette Castor, PhD, and Magdalena Sobieszczyk, MD. The project aimed to characterize digital healthcare technology use and to evaluate attitudes toward, and barriers to, AI use in healthcare, specifically among patients with and without long COVID. People with long COVID, a condition in which symptoms persist long after the initial COVID-19 infection, often require complex, multidisciplinary care that frequently involves digital healthcare technology, sometimes including AI. The problem is that limited data exist on patients’ perceptions of and comfort with AI, especially among those who are vulnerable or have complex healthcare needs. To address this gap, the research team analyzed data from 658 participants who completed a one-time survey between August 2023 and August 2024.
Participants were split into three groups: no history of COVID-19, long COVID (LC), and a history of COVID-19 without long COVID (it’s important to note that group assignment was self-reported). After providing consent, participants answered a large set of questions covering their social demographics and their healthcare technology use, including AI use and knowledge. It quickly became clear that participants with a history of COVID were more likely to use healthcare technology than those with no history of COVID. Specifically, 77% of participants with past COVID said they had used a tablet or smartphone to help make a decision about how to treat an illness or condition, compared to just 60% of those with no COVID history. Additionally, 77% of participants with LC found online medical records useful for monitoring their health, compared to 66% of participants with no COVID history. This makes sense: participants with a history of COVID or with LC need more integrated care over a longer period, so their circumstances alone make them more likely to use healthcare technology.
When it comes to healthcare technology involving AI, however, participants had less knowledge and expressed a need for transparency about how AI assists in their diagnosis and treatment. Only 38% of participants said they knew either quite a lot or a fair amount about how AI can change medicine. AI in healthcare has a variety of uses, from transcribing patient conversations and flagging early cancers missed on scans to supporting training and medical education.
Because of this limited understanding of how AI is used in medicine, communication between healthcare workers and patients is crucial to easing concerns. Approximately 65% of all participants said transparency (meaning that medical practitioners tell their patients when AI was used as part of their diagnosis and treatment) is either very or somewhat important. This doesn’t necessarily mean that patients are anti-AI. In fact, 45% of participants said that AI will somewhat or significantly improve healthcare in the next five years. But many (72%) are concerned that AI will make an incorrect diagnosis, and this is why transparency and education are so important! Individuals with long COVID expressed concern about integrating AI into clinical care and health communication, fearing it would decrease the focus on patient-centered communication. This is consistent with the concerns raised by healthcare workers at the symposium.
AI is a powerful tool that, when used with care and attention, will likely benefit many aspects of life, including healthcare. As one audience member at the symposium put it, "AI is like a firehose." When misdirected or overused, AI can overwhelm rather than assist. Society mostly focuses on what AI can do, but we must also consider what happens when it does too much or is misunderstood. Healthcare is a system already riddled with complexity, and more is not always better. AI is already being used in healthcare in many important ways, but its true value will depend on how thoughtfully we choose to integrate it and how well we use it to communicate with patients.