CIO Corner: Responsibly Exploring New Technologies
One of the hottest technology topics today is artificial intelligence (AI). This groundbreaking technology continues to open opportunities across many industries, particularly health care, and our goal is to identify and adopt the best and safest AI tools to better serve our patients and support our clinicians.
It’s important to note that AI is a broad field, with different types of AI categorized by their capabilities and functions. Houston Methodist currently uses AI in areas such as imaging, and we’re actively researching other areas where AI could streamline processes, increase efficiency and reduce administrative burden. These applications use embedded AI, a safer model in which AI capabilities are integrated directly into devices or systems to perform specific tasks in a private, controlled environment. This type of AI is well suited to health care: it gives us the immense benefits of AI while keeping our patient data and systems safe.
With great power comes great responsibility
As with any new, groundbreaking technology, some areas of AI pose challenges that we must address responsibly, specifically the use of public generative AI tools. Generative AI is a type of AI that can create new content, such as images, text and music. It analyzes large amounts of data to identify patterns and trends, then uses those patterns to generate new content similar to the original data. Generative AI can be deployed in a safe environment (as embedded AI), but there are also public generative AI tools, such as ChatGPT and Bard, that anyone can access on the internet. They’re interactive and can help you do things like plan a trip, write a poem or solve a math problem. When you ask a question, the tool will confidently give you an answer.
While generative AI has massive potential, it’s still in its early stages, so it’s important to be careful when using any of these publicly available tools. Much of the data these public tools were trained on dates back to September 2021, so their answers may be outdated. They can also learn biases from that data, which can result in one-sided or inaccurate answers.
Data privacy and security are paramount concerns with public generative AI tools. When you enter data into a public-facing generative AI tool, like ChatGPT, that information is no longer private or secure. The AI model records and stores transcripts of your conversation, and that data can then be used to train future AI models, creating a privacy risk.
How you can help
As custodians of health care data, we have an obligation to protect HM patient information and intellectual property and to adhere to stringent data protection standards. To support this, HM System Policy IM01 Acceptable Use of Computing Resources (Policy IM01) was recently updated with new HM guidelines on using generative AI tools (ChatGPT, Bard, etc.). These tools may be used only for authorized HM business purposes, and confidential information must never be entered into them, including patients’ Protected Health Information (PHI), Personally Identifiable Information (PII) or any HM proprietary information or trade secrets.
These tools can be a great personal resource, but if you decide to use one of these sites at home, never enter any personal or confidential information into your prompts. Remember: once you share information with a public generative AI tool, you’ve essentially released it publicly.
Final thoughts
AI-driven platforms aren’t just another technology fad — they’re here to stay. By prioritizing proper use and security, together we can navigate this evolving AI landscape with confidence, leveraging these tools to provide better care for our patients.