A GRC framework for securing generative AI

From automating workflows to unlocking new insights, generative AI models like OpenAI’s GPT-4 are already delivering value in enterprises across every industry. But with this power comes a critical challenge for organizations: How do they secure and manage the expanding ecosystem of AI applications that touch sensitive business data? Generative AI solutions are popping up everywhere—embedded in platforms, integrated into products, and accessible via public tools.

In this article, we introduce a practical framework for categorizing and securing generative AI applications, giving businesses the clarity they need to govern AI interactions, mitigate risk, and stay compliant in today’s rapidly evolving technology landscape.

Types of AI applications and their impact on enterprise security

AI applications differ significantly in how they interact with data and integrate into enterprise environments, making categorization essential for organizations aiming to evaluate risk and enforce governance controls. Broadly, there are three main types of generative AI applications that enterprises need to focus on, each presenting unique challenges and considerations.

Web-based AI tools – Web-based AI products, such as OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude, are widely accessible via the web and are often used by employees for tasks ranging from content generation to research and summarization. The open and public nature of these tools presents a significant risk: Data shared with them is processed outside the organization’s control, which can lead to the exposure of proprietary or sensitive information. A key question for enterprises is how to monitor and restrict access to these tools, and whether data being shared is adequately controlled. OpenAI’s enterprise features, for instance, provide some security measures for users, but these may not fully mitigate the risks associated with public models.

AI embedded in operating systems and productivity suites – Embedded AI products, such as Microsoft Copilot and the AI features within Google Workspace or Office 365, are tightly integrated into the systems employees already use daily. These embedded tools offer seamless access to AI-powered functionality without requiring a switch to another platform. However, deep integration poses a challenge for security, as it becomes difficult to delineate safe interactions from those that may expose sensitive data. The crucial consideration here is whether data processed by these AI tools adheres to data privacy laws, and what controls are in place to limit access to sensitive information. Microsoft’s Copilot security protocols offer some reassurance but require careful scrutiny in the context of enterprise use.

AI integrated into enterprise products – Integrated AI products, like Salesforce Einstein, Oracle AI, and IBM Watson, tend to be embedded within specialized software tailored for specific business functions, such as customer relationship management or supply chain management. While these proprietary AI models may reduce exposure compared to public tools, organizations still need to understand the data flows within these systems and the security measures in place. The focus here should be on whether the AI model is trained on generalized data or tailored specifically for the organization’s industry, and what guarantees are provided around data security. IBM Watson, for instance, outlines specific measures for securing AI-integrated enterprise products, but enterprises must remain vigilant in evaluating these claims.

Classifying AI applications for risk management

Understanding the three broad categories of AI applications is just the beginning. To effectively manage risk and governance, further classification is essential. By evaluating key characteristics such as the provider, hosting location, data flow, model type, and specificity, enterprises can build a more nuanced approach to securing AI interactions.

A crucial factor in this deeper classification is the provider of the AI model. Public AI models, like OpenAI’s GPT and Google’s Gemini, are accessible to everyone, but with this accessibility comes less control over data security and greater uncertainty around how sensitive information is handled. In contrast, private AI models, often integrated into enterprise solutions, offer more control and customization. However, these private models aren’t without risk. They must still be scrutinized for potential third-party vulnerabilities, as highlighted by PwC in their analysis of AI adoption across industries.

Another key aspect is the hosting location of the AI models—whether they are hosted on premises or in the cloud. Cloud-hosted models, while offering scalability and ease of access, introduce additional challenges around data residency, sovereignty, and compliance. Particularly when these models are hosted in jurisdictions with differing regulatory environments, enterprises need to ensure that their data governance strategies account for these variations. NIST’s AI Risk Management Framework provides valuable guidance on managing these hosting-related risks.

The data storage and flow of an AI application are equally critical considerations. Where the data is stored—whether in a general-purpose cloud or on a secure internal server—can significantly impact an organization’s ability to comply with regulations such as GDPR, CCPA, or industry-specific laws like HIPAA. Understanding the path that data takes from input to processing to storage is key to maintaining compliance and ensuring that sensitive information remains secure. The OECD AI Principles offer useful guidelines for maintaining strong data governance in the context of AI usage.

The model type also must be considered when assessing risk. Public models, such as GPT-4, are powerful but introduce a degree of uncertainty because of their general-purpose design and the breadth and opacity of the data they are trained on. Private models, tailored specifically for enterprise use, may offer a higher level of control but still require robust monitoring to ensure security. OpenAI’s research on GPT-4, for instance, illustrates both the advancements and the potential security challenges associated with public AI models.

Finally, model training has important risk implications. Distinguishing between generalized AI and industry-specific AI can help in assessing the level of inherent risk and regulatory compliance. Generalized AI models, like OpenAI’s GPT, are designed to handle a broad array of tasks, which can make it harder to predict how they will interact with specific types of sensitive data. On the other hand, industry-specific AI models, such as IBM Watson Health, are tailored to meet the particular needs and regulatory requirements of sectors like healthcare or financial services. While these specialized models may come with built-in compliance features, enterprises must still evaluate their suitability for all potential use cases and ensure that protections are comprehensive across the board.
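
Taken together, these characteristics amount to a small inventory record per application. As a rough illustration only, the Python sketch below models them as a data structure; the field names and enum values are assumptions chosen for readability, not a prescribed schema.

from dataclasses import dataclass
from enum import Enum

class ModelType(Enum):
    PUBLIC = "public"                # broadly accessible models, e.g., GPT-4
    PRIVATE = "private"              # models embedded in enterprise products

class Hosting(Enum):
    VENDOR_CLOUD = "vendor_cloud"
    ON_PREMISES = "on_premises"

class DataLocation(Enum):
    EXTERNAL = "external"            # data is processed or stored outside the organization's control
    INTERNAL = "internal"            # data stays on organization-controlled infrastructure

class Training(Enum):
    GENERALIZED = "generalized"
    INDUSTRY_SPECIFIC = "industry_specific"

@dataclass
class AIApplication:
    """One entry in an AI application inventory, capturing the risk-relevant characteristics."""
    name: str
    provider: str                    # who supplies the model: public vendor, enterprise vendor, or in-house
    model_type: ModelType
    hosting: Hosting
    data_location: DataLocation
    training: Training

Recording every AI application the organization touches in a structure like this makes the governance questions that follow (who may use it, and with what data) much easier to answer consistently.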

Establishing a governance framework for AI interactions

Classifying AI applications is the foundation for creating a governance structure that ensures AI tools are used safely within an enterprise. Here are five key components to build into this governance framework:

  1. Access control: Who in the organization can access different types of AI tools? This includes setting role-based access policies that limit the use of AI applications to authorized personnel (see the sketch after this list for how this and data sensitivity mapping might be enforced).
    Reference: Microsoft Security Best Practices outline strategies for access control in AI environments.
  2. Data sensitivity mapping: Align AI applications with data classification frameworks to ensure that sensitive data isn’t being fed into public AI models without the appropriate controls in place.
    Reference: GDPR Compliance Guidelines provide frameworks for data sensitivity mapping.
  3. Regulatory compliance: Make sure the organization’s use of AI tools complies with industry-specific regulations (e.g., GDPR, HIPAA) as well as corporate data governance policies.
    Reference: OECD AI Principles offer guidelines for ensuring regulatory compliance in AI deployments.
  4. Auditing and monitoring: Continual auditing of AI tool usage is essential for spotting unauthorized access or inappropriate data usage. Monitoring can help identify violations in real time and allow for corrective action.
    Reference: NIST AI Risk Management Framework emphasizes the importance of auditing and monitoring in AI systems.
  5. Incident response planning: Create incident response protocols specifically for AI-related data leaks or security incidents, ensuring rapid containment and investigation when issues arise.
    Reference: AI Incident Database provides examples and guidelines for responding to AI-related security incidents.
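
Of these components, access control and data sensitivity mapping lend themselves to a simple automated gate: before a request reaches an AI tool, check that the user's role is authorized for that category of tool and that the data's classification is permitted to flow to it. The sketch below is a minimal illustration under assumed role names, classifications, and policy tables; it is not drawn from any specific product.

# Illustrative policy gate combining access control (component 1) and
# data sensitivity mapping (component 2). All names and tables are hypothetical.

ALLOWED_ROLES = {
    "public_web_ai": {"marketing", "research"},
    "embedded_ai": {"marketing", "research", "finance"},
    "integrated_enterprise_ai": {"sales_ops", "support"},
}

# Highest data classification each category of AI tool may receive.
MAX_CLASSIFICATION = {
    "public_web_ai": "public",
    "embedded_ai": "internal",
    "integrated_enterprise_ai": "confidential",
}

SENSITIVITY_ORDER = ["public", "internal", "confidential", "restricted"]

def is_request_allowed(tool_category: str, user_role: str, data_classification: str) -> bool:
    """Return True if the role may use the tool and the data is not too sensitive for it."""
    if user_role not in ALLOWED_ROLES.get(tool_category, set()):
        return False  # access control: this role is not authorized for this tool category
    max_allowed = MAX_CLASSIFICATION.get(tool_category, "public")
    return (SENSITIVITY_ORDER.index(data_classification)
            <= SENSITIVITY_ORDER.index(max_allowed))

# A marketing user sending public data to a web AI tool is allowed;
# the same user sending confidential data is blocked.
assert is_request_allowed("public_web_ai", "marketing", "public")
assert not is_request_allowed("public_web_ai", "marketing", "confidential")

Auditing and monitoring (component 4) would then log every decision this gate makes, and incident response planning (component 5) would define what happens when blocked requests suggest attempted data exposure.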

Example: Classifying OpenAI GPT and IBM Watson Health

Let’s classify OpenAI ChatGPT and IBM Watson Health for risk management according to the characteristics we outlined above.
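
Based on the characteristics discussed above, and allowing that details vary by contract and deployment, a rough classification might read:

  OpenAI ChatGPT – Public model from a public provider; hosted in the vendor’s cloud; prompts and any data they contain are processed outside the organization’s control; trained on generalized data.

  IBM Watson Health – Private model integrated into an enterprise product; typically deployed in cloud or dedicated enterprise environments; data flows are contained within the product but still need review; tailored to healthcare, with built-in compliance features.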

Now that we have the classifications, let’s overlay our governance framework.
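
As an illustrative sketch rather than a prescription, the overlay might look like this:

  Access control – Limit ChatGPT to roles that do not routinely handle sensitive data; restrict Watson Health to clinical and other authorized staff.

  Data sensitivity mapping – Block confidential or regulated data from being submitted to ChatGPT; map Watson Health inputs to the organization’s health-data classifications.

  Regulatory compliance – Evaluate ChatGPT use against GDPR, CCPA, and corporate policy; evaluate Watson Health against HIPAA and other healthcare regulations.

  Auditing and monitoring – Log and review prompts sent to ChatGPT; monitor Watson Health usage through whatever audit capabilities the platform and the organization provide.

  Incident response planning – Define containment steps for sensitive data shared with ChatGPT; fold Watson Health incidents into existing health-data breach procedures.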

Reducing AI risks through AI governance

As AI technology advances, it brings both transformative opportunities and unprecedented risks. For enterprises, the challenge is no longer whether to adopt AI, but how to govern AI responsibly, balancing innovation against security, privacy, and regulatory compliance.

By systematically categorizing generative AI applications—evaluating the provider, hosting environment, data flow, and industry specificity—organizations can build a tailored governance framework that strengthens their defenses against AI-related vulnerabilities. This structured approach enables enterprises to anticipate risks, enforce robust access controls, protect sensitive data, and maintain regulatory compliance across global jurisdictions.

The future of enterprise AI is about more than just deploying the latest models; it’s about embedding AI governance deeply into the fabric of the organization. Enterprises that take a proactive, comprehensive approach will not only safeguard their business against potential threats but also unlock AI’s full potential to drive innovation, efficiency, and competitive advantage in a secure and compliant manner.

Trevor Welsh is VP of products at WitnessAI.

Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact [email protected].
