AI and Generative Technology Use at CUIMC
As we embrace innovative technologies to support the Medical Center’s mission, it is crucial to address the responsible use of Artificial Intelligence (AI) and generative technology tools at CUIMC to ensure compliance with healthcare regulatory standards and to safeguard sensitive information.
This document outlines the Medical Center’s supplemental IT guidance on the use of AI platforms while adhering to Columbia’s university-wide AI policy, https://provost.columbia.edu/content/office-senior-vice-provost/ai-policy.
Prohibited Uses
As a standard practice, personal or free online software tools should not be used to handle institutional data. These applications, often accessible through self-signup or as part of software-as-a-service (SaaS) platforms, are explicitly not approved for use with Columbia ‘Sensitive’ data.
Notably, using unapproved cloud-based AI tools, such as commercial ChatGPT, Copilot, DeepSeek, Gemini, Claude, and similar services, with ‘non-public’ CUIMC data is not permitted. Users who violate this policy may be subject to sanctions consistent with the CUIMC Privacy and Information Security Sanction Policy.
Service Offerings
Columbia University provides access to compliant versions of OpenAI’s ChatGPT and Microsoft Copilot (the latter available only to Medical Center staff), enabling our workforce to use these AI tools responsibly and in a compliant manner.
Workforce members must access CUIMC-approved enterprise versions of AI tools using their CUIMC-issued accounts. This authentication ensures a secure and compliant environment, aligning with security and privacy standards.
To help users confirm they are using the approved software versions, visual indicators are displayed within the interfaces of CUIMC-sanctioned AI tools:
- ChatGPT Education: Displays the Columbia University logo at the top of the interface.
- Microsoft Copilot: Shows a green security shield with the CUIMC identifier in the upper corner of the chat interface. Additionally, confirm that your CUIMC email address is displayed, indicating you are logged into the CUIMC account and not that of another organization.

Users should always verify these indicators before entering any CUIMC-related data. If these elements are missing, you may not be using the approved instance. In such cases, stop immediately and report the issue to the CUIMC IT Security Office (security@cumc.columbia.edu) to ensure compliance and prevent unauthorized data exposure.
Important: Columbia University offers two distinct ChatGPT AI-powered tools, ChatGPT Education and CU-GPT, each designed to support different academic, research, and administrative needs while maintaining security and compliance standards.
Columbia ChatGPT Education
Columbia ChatGPT Education is a user-friendly AI platform, similar to the commercial version of OpenAI’s ChatGPT but designed to provide secure and compliant access to advanced AI features. Available to the Columbia University community, including CUIMC, the tool is offered on a subscription basis, with costs billed annually to your department. It enhances security and privacy while granting access to the latest ChatGPT models. Key features include Advanced Data Analysis, the GPT Builder tool, and DALL·E for image creation and editing. The service is available via the CUIT website: https://www.cuit.columbia.edu/content/chatgpt-education.

Screenshot of Columbia’s approved ChatGPT page with Columbia Logo
PHI, including coded or limited data sets, may be used in Columbia’s instance of ChatGPT Education.
Workforce members should follow the principle of ‘Minimum Necessary’, limiting PHI data attributes and ensuring input data is de-identified where necessary. Where possible, direct identifiers such as names, MRNs, addresses, phone numbers, and Social Security numbers should be removed before storing or sharing information in ChatGPT Education. Following these practices helps reduce potential IT risk, particularly in the event of a system compromise, where past interactions could be exposed.
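As an illustration only, the sketch below shows the kind of pattern-based pre-submission redaction a department might script before pasting free text into an approved tool. The patterns, placeholder labels, and MRN format here are assumptions for demonstration, not an approved CUIMC utility, and pattern matching alone does not satisfy HIPAA de-identification requirements.

```python
import re

# Illustrative, assumed patterns only (not an approved CUIMC utility).
# Names, addresses, and other free-text identifiers cannot be reliably
# caught by regexes and still require manual review or dedicated
# de-identification tooling.
REDACTION_PATTERNS = {
    "MRN": re.compile(r"\bMRN[:#\s]*\d{6,10}\b", re.IGNORECASE),  # assumed MRN format
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact_direct_identifiers(text: str) -> str:
    """Replace pattern-matched direct identifiers with labeled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    note = "Pt MRN: 12345678, call (212) 555-0147 or jd1234@cumc.columbia.edu."
    print(redact_direct_identifiers(note))
    # -> Pt [MRN REDACTED], call [PHONE REDACTED] or [EMAIL REDACTED].
```

A scrubbing step like this reduces, but does not eliminate, the identifiers exposed if past interactions are ever compromised; it complements, rather than replaces, the ‘Minimum Necessary’ judgment of the workforce member entering the data.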
When PHI sourced from any clinical system used at Columbia is entered into ChatGPT Education for research protocols, prior authorization is required from the IRB and the data owner. Workforce members must obtain approval through the TRAC and ACORD process to ensure compliance with institutional guidelines on clinical data access for research. Requests must detail the intended use of the data, with access facilitated through designated data navigators.
Note: AI algorithms can assist clinicians in critical analysis, thereby enhancing treatment decision-making and patient outcomes. This underscores the potential benefits of AI in a clinical setting. However, it also highlights the necessity for rigorous oversight and approval processes to ensure patient safety.
Using Artificial Intelligence (AI) in learning and training activities, in research, and in the practice of medicine and health care delivery carries immense promise, which must be balanced with acknowledgement of unknowns and new potential risks. As AI tools such as ChatGPT Education and CU-GPT become available, there is a new set of competencies and considerations that providers and care teams, researchers, educators, learners, staff, and administrators must learn and incorporate into their use of AI.
We must strive to use AI in ways that ensure safety, privacy, efficiency, and effective outcomes for all. We must all recognize that the content, outputs, suggestions, analyses, recommendations, and information produced by AI, including generative AI, may be false, misleading, or untrue. Everything must be examined for hallucination, bias, mistakes, and substandard recommendations.
For those of us practicing patient care, we must all acknowledge that medical decision-making, care delivery and diagnosis or treatment recommendations are the sole responsibility of the provider and the care team. AI does not replace the provider's sole and ultimate responsibility to the patient to do no harm and to deliver the standard of care.
Columbia CU-GPT
Developed in-house, CU-GPT is a streamlined chat interface designed for various language tasks, including writing assistance, summarization, translation, and conversational engagement. The offering is currently in its pilot phase. It provides a cost-efficient alternative to ChatGPT Education through a per-use charge model: users pay only for the AI interactions they actually use. This removes the need for an annual subscription commitment, making it a more flexible and budget-conscious option for departments and individuals who require AI support without the fixed costs of a dedicated ChatGPT Education account. The service is available via https://www.cuit.columbia.edu/content/cu-gpt.
Unlike ChatGPT Education, CU-GPT is not approved for PHI use. However, CUIMC users can request access to the service for processing non-sensitive data in accordance with institutional guidelines.

Screenshot of Columbia’s approved CU-GPT webpage
Copilot Chat
CUIMC’s Microsoft license includes Copilot Chat, a generative AI tool with built-in contractual data protections that safeguard CUIMC information. When used correctly, all prompt data remains within the CUIMC tenant, ensuring it is not shared externally or used for AI model training. These protections are active only when the tool is accessed with your CUIMC-issued Microsoft 365 account, so please ensure you are logged in with your UNI@cumc.columbia.edu account.

Screenshot of Columbia’s approved Microsoft Copilot page with the Green Shield verification badge
Confidential and Internal CUIMC data is approved for use by Medical Center staff on Microsoft Copilot when logged into a CUIMC Microsoft 365 account.
You may use this Columbia-approved tool to analyze non-PHI and non-PII data (i.e., de-identified data) and to support administrative processes. This includes drafting general communications, documentation, training, and scheduling; creating non-sensitive materials; and conducting preliminary research tasks.
The use of third-party plugins and externally accessible prompts is currently restricted on CUIMC’s Microsoft Copilot. You will not be able to use prompts with intermediary data inputs that could expose Columbia information to external services.
Additional Guidance
All use of AI, Large Language Model (LLM), Natural Language Processing (NLP), or Machine Learning (ML) systems at the Medical Center must comply with HIPAA and other relevant healthcare and IT regulations to uphold the highest standards of patient privacy and data security. For locally installed LLM, NLP, ML, or other AI models, a formal IT Risk Assessment review is required before deployment to evaluate potential security, privacy, and compliance risks. CUIMC-IT may monitor AI software usage to ensure compliance with these guidelines.
For review of any AI technology use case at the Medical Center that falls outside the approved scenarios, please submit a request to CUIMC-IT through the IDEAS technology project proposal request form.
When generating AI content for audiences beyond your immediate use, it is imperative to verify all AI-generated information through authoritative sources and report its use in approved research activities. This practice ensures information integrity and helps avoid potential AI-related negative outcomes such as the dissemination of inaccuracies or biases.
We encourage our staff to explore the CU learning content at https://www.cuit.columbia.edu/content/chatgpt-education as well as general learning material on AI technologies, available for all levels through Columbia’s LinkedIn Learning resource portal.
Should you have any questions regarding AI guidance, or become aware of any data exposure or misuse of sensitive information, please reach out to the CUIMC Information Security Office (security@cumc.columbia.edu).
Data Classification and AI Use
Data Type | Definition | AI Use Guidance
Sensitive Data | Information requiring protection due to privacy, regulatory, or security concerns (PHI, RHI, PII), including coded/limited data sets | Permitted only on the approved ChatGPT Education platform. Research protocol use requires IRB and TRAC/ACORD approval.
Confidential and Internal Data | Contractually protected or proprietary data, such as internal policies, drafts, student or financial records, and unpublished reports | Permitted on approved ChatGPT (Columbia ChatGPT Education and CU-GPT) and MS Copilot Chat.
Non-Public Data | General CUIMC-related data classified as sensitive, confidential, or internal | Not allowed on unapproved AI tools (e.g., free/personal/commercial ChatGPT, Gemini, Claude, etc.).