Elon Musk is once again generating controversy, this time in the realm of health and data privacy. The entrepreneur publicly promoted the idea that users of X (formerly Twitter) upload medical images — such as X-rays, magnetic resonance imaging (MRI) or computed tomography scans (CT) — so that Grok, the artificial intelligence developed by xAI, can learn to interpret them and improve its diagnostic ability.
The proposal, presented as a technological advance, raised alarms among specialists due to the risks of handling sensitive medical information on a social network. The initiative was promoted directly by Musk, who claimed that Grok can analyze clinical images and offer rapid diagnoses, and even asserted that he had seen cases in which the system outperformed human doctors. According to his explanation, the goal is for the AI not only to "read" these studies, but also to train on the material that users themselves decide to share, incorporating their corrections and comments to refine its performance.
In that context, Musk acknowledged that he himself recently uploaded a personal MRI to Grok, although he did not detail his reasons or clarify what was subsequently done with that information. The message was interpreted as an open call for large-scale collection of medical data, something unusual outside regulated clinical environments.

From a technological perspective, the promise is attractive. Millions of people receive medical studies accompanied by technical reports that are difficult to understand. An AI capable of translating those results into clear language could improve health literacy and serve as informational support. However, specialists warn that this potential benefit does not eliminate the risks.

Grok's performance in the medical field has shown mixed results. Some users claim the tool identified relevant anomalies in clinical analyses, while others reported serious errors. Cited cases include misinterpretations of images, such as confusing signs of tuberculosis with spinal problems, or misidentifying a mammogram as an image of another part of the body. This type of failure, experts point out, can have serious consequences if users make decisions based on incorrect diagnoses.

A study published in May 2025 analyzed the performance of different AI models in detecting pathologies across more than 35,000 brain MRI slices. In that work, Grok showed better results than Google's Gemini and OpenAI's GPT-4o on certain specific tasks. However, radiologist Laura Heacock, of NYU Langone, clarified that while these tools show technical capability, traditional, non-generative methods remain more reliable for medical imaging.

The biggest concern is privacy and data handling. Ryan Tarzy, CEO of Avandra Imaging, explained that asking users to upload their medical information directly speeds up model training, but also introduces bias, since the data will come only from people willing to share it. This can exclude large sectors of the population and degrade the quality of the results.