Final Pr. ID: Poster #: EDU-004
Less than a year has passed since ChatGPT was released and changed the world. Yet today we already have Large Multimodal Models (LMMs), similar to ChatGPT, that can process images and voice as well as text.
The capabilities of these models have particularly piqued the interest of radiologists, who have been exploring use cases in both academic research and clinical practice. Researchers at the NIH have developed a GPT that can extract result findings from radiology reports while running locally. This means all patient health information stays within your network, so GPTs can be used without compromising HIPAA compliance or patient safety.
What is even more notable about the local GPT the NIH researchers developed is that it requires no costly, time-consuming fine-tuning. Using only a handful of prompting techniques, its performance was on par with the state-of-the-art tool developed at Stanford.
The purpose of this educational exhibit is to highlight the capabilities of LMMs in diagnosis and in augmenting the radiologist workflow.
I will discuss how there is no need to become a technical expert to 'fine-tune' models, and how LMMs can be implemented locally to protect health information. I will highlight the advanced LLM techniques that are used to improve the accuracy of model outputs, and convey why we should embrace these technologies, which are improving at an exponential rate.
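As a minimal sketch of what fine-tuning-free, prompt-based findings extraction might look like with a locally hosted model, the helper below builds a few-shot prompt that asks for structured JSON output. The prompt wording, few-shot example, and JSON field names are illustrative assumptions, not the NIH tool's actual prompts.

```python
# Illustrative sketch only: a prompt-construction helper for structured
# findings extraction with a locally hosted LLM. Field names and wording
# are hypothetical examples, not the NIH researchers' implementation.

FEW_SHOT_EXAMPLE = (
    "Report: Chest X-ray shows a right lower lobe opacity. No effusion.\n"
    'Findings JSON: {"opacity": "right lower lobe", "effusion": "absent"}'
)

def build_extraction_prompt(report_text: str) -> str:
    """Assemble a few-shot prompt asking the model to return findings as JSON.

    Few-shot prompting and constrained output formats are examples of the
    fine-tuning-free techniques this exhibit refers to.
    """
    return (
        "You are a radiology report assistant. Extract the key findings "
        "from the report below and answer ONLY with a JSON object.\n\n"
        f"{FEW_SHOT_EXAMPLE}\n\n"
        f"Report: {report_text}\n"
        "Findings JSON:"
    )

prompt = build_extraction_prompt("CT abdomen: no acute findings.")
# This prompt would then be sent to a locally hosted inference server,
# so the report text never leaves the institution's network.
```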
Ultimately, I will tie this into how pediatric radiologists can use these techniques to conduct academic research, and prompt attendees to consider how they might use their own local GPTs for various applications, both in augmenting the clinical workflow and in academics.
Authors: Jung Daniel