Secure MCP on AWS with TrueMCP

How can you benefit from the power of AI in secure environments, such as those of large enterprises? The Model Context Protocol (MCP), introduced by Anthropic, is gaining popularity; however, securely implementing remote MCP servers is still a challenging task, and it keeps many enterprises from adopting the protocol.

In this article, I review why MCP matters and how you can integrate AI into more aspects of your business without compromising the security and privacy of your data. The article is based on real-life implementations of MCP servers in such sensitive environments.

The Enterprise AI Dilemma: Promise vs. Reality

Large enterprises today are at a crossroads. They possess vast, invaluable data repositories, the result of decades of investment in CRMs, ERPs, and countless other business-critical systems. This data is the lifeblood of the organization, holding the potential to unlock unprecedented efficiency and innovation. Modern Artificial Intelligence promises to tap into this potential, turning siloed information into actionable intelligence.

However, the reality of AI adoption has been fraught with challenges. Many executive leaders are finding that the path to meaningful AI integration is far more complex than anticipated. The excitement around powerful new AI tools often gives way to the difficult realities of enterprise IT: stringent security requirements, complex compliance landscapes, and a heterogeneous technology stack.

Many emerging AI tools, while impressive in public demonstrations, are fundamentally immature. They are not built with the enterprise in mind, lacking the robust audit trails, granular access controls, and operational stability that large organizations demand. This leaves enterprises in a difficult position: how can they embrace the transformative power of AI without compromising the security and integrity of their core systems?

A New Paradigm for Solving Old Problems

For years, the default solution to data silos was the data warehouse. These were massive, multi-year projects, often costing millions of dollars, that aimed to create a single source of truth by migrating data from various systems into one central repository. While well-intentioned, these projects are notoriously slow, expensive, and inflexible. By the time the warehouse is built, the business’s needs have often changed.

A new architectural pattern, centered on the Model Context Protocol (MCP), offers a more agile and effective approach. Instead of moving data, MCP allows AI to securely access data where it lives. An MCP server acts as a secure, intelligent gateway in front of an existing data system. It doesn’t replace the system; it augments it with an AI-native interface.

Most MCP servers I see today are installed locally on developers’ machines. That setup is too technical for most people and, more importantly, exposes local environments to huge security risks, such as the injection of malicious code. Only remote MCP servers, installed through a rigorous process by IT professionals to serve the organization’s less technical users and secured as described in this article, can deliver the benefits without the major risks.

Example: Rethinking Data Harmonization

Consider a common business question at a large telecommunications company: “Of the users who signed up in the last three months using the ‘+50’ promotion, how many renewed their subscriptions, and how many upgraded to the family plan?”

Answering this requires data from at least five different systems: the marketing promotions database, the user registration system, the billing and subscription service, the customer relationship management (CRM) system, and the product catalog. The traditional approach would require a complex, brittle ETL (Extract, Transform, Load) pipeline to feed a data warehouse — a project that could take years to complete.

With an MCP-based architecture, the approach is fundamentally different. Each of the five data systems is fronted by its own MCP server. An AI-powered MCP client can then orchestrate a query across all of them in near real-time. It can intelligently access each system using its existing schema and APIs, retrieve the necessary pieces of information, and synthesize the final answer. The data isn’t moved or duplicated; it’s queried securely in place. This approach doesn’t just answer the question faster; it provides a flexible framework for answering new, unforeseen questions without re-architecting the entire data landscape.
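
To make this concrete, here is a minimal, hypothetical sketch of such an orchestration in Python. The tool names and the call_tool helper are illustrative placeholders, not a real MCP SDK API; they stand in for calls routed to the five MCP servers.

# Hypothetical orchestration sketch: each call is routed to a separate
# MCP server fronting one system of record. Tool names and the injected
# call_tool helper are illustrative, not a real SDK API.
def answer_promo_question(call_tool):
    # Marketing promotions DB: who signed up with the "+50" promotion?
    signups = call_tool("promotions.list_signups", promo="+50", months=3)
    user_ids = [u["id"] for u in signups]

    # Billing and subscription service: which of those users renewed?
    renewals = call_tool("billing.get_renewals", user_ids=user_ids)

    # CRM and product catalog: which of them upgraded to the family plan?
    upgrades = call_tool("crm.get_plan_changes", user_ids=user_ids,
                         target_plan="family")

    # The data never moves; only small result sets are synthesized here.
    return {"signed_up": len(signups), "renewed": len(renewals),
            "upgraded_to_family": len(upgrades)}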

Building on a Foundation of Trust: Secure Cloud Architecture

This new paradigm is only viable if it can be implemented with the same — or better — level of security and reliability as existing enterprise systems. This is where the importance of mature, trusted technologies comes into play. The goal is not to invent new security protocols, but to apply proven, industry-standard patterns to the new challenge of AI integration.

For any enterprise, the cornerstone of security is the identity and access management (IAM) system. Any solution that attempts to bypass or shadow the existing identity provider (like Active Directory or Okta) is a non-starter. This is why OAuth 2.0 is non-negotiable. It is the lingua franca of modern enterprise authentication, providing a robust framework for delegated, token-based access.

More importantly, this security model can be implemented using the very cloud services that enterprise IT teams already know how to operate and secure. By building on a foundation of AWS Lambda, API Gateway, and Amazon Cognito, we can construct an AI integration platform that is not only powerful but also deeply familiar.

  • Amazon API Gateway acts as the secure front door, handling all incoming requests, enforcing usage policies, and providing a centralized point of control and observation.
  • AWS Lambda provides the serverless compute layer, allowing the MCP server logic to scale automatically and efficiently, without the need to manage underlying infrastructure.
  • Amazon Cognito handles the critical task of user authentication and identity federation, ensuring that the AI tools operate with the same user tokens and permissions as any other enterprise application.

Crucially, in this architecture, security is not an afterthought applied to the application; it is baked into the infrastructure itself. A Lambda Authorizer, for instance, validates every single request’s OAuth 2.0 token at the gateway level, before it can ever reach the MCP server’s business logic. This zero-trust model ensures that every interaction is authenticated, authorized, and auditable, meeting the stringent demands of enterprise compliance.
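
As a concrete illustration, here is a minimal sketch of such a Lambda Authorizer in Python, using the PyJWT library against a Cognito JWKS endpoint. The environment variable name is an assumption for the example, and the audience/client_id check is provider-specific, so it is left as a comment.

# Minimal Lambda Authorizer sketch. Assumptions: PyJWT is packaged with
# the function and ISSUER is set as an environment variable.
import os
import jwt  # PyJWT

ISSUER = os.environ["ISSUER"]  # e.g. the Cognito user pool issuer URL
jwks_client = jwt.PyJWKClient(f"{ISSUER}/.well-known/jwks.json")

def handler(event, context):
    token = event["authorizationToken"].removeprefix("Bearer ").strip()
    try:
        signing_key = jwks_client.get_signing_key_from_jwt(token)
        claims = jwt.decode(
            token, signing_key.key, algorithms=["RS256"], issuer=ISSUER,
            # Cognito access tokens carry client_id rather than aud;
            # verify whichever claim your provider issues.
            options={"verify_aud": False},
        )
        effect = "Allow"
    except jwt.PyJWTError:
        claims, effect = {}, "Deny"

    # API Gateway expects an IAM policy document back from the authorizer.
    return {
        "principalId": claims.get("sub", "anonymous"),
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{"Action": "execute-api:Invoke", "Effect": effect,
                           "Resource": event["methodArn"]}],
        },
        "context": {"scope": claims.get("scope", "")},
    }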

Solution Architecture: A Deeper Look

To understand how these components work together, let’s examine the solution’s architecture. The system is designed with clear separation of concerns, ensuring that each component has a single, well-defined responsibility.

High Level MCP Secure Flow

This diagram illustrates the flow of a request from an AI client to the underlying enterprise data. The API Gateway, fortified by a Lambda Authorizer, serves as the single point of entry. It is here that the critical first step of authentication happens. Only requests with a valid, unexpired token are allowed to proceed. Once authenticated, the request is passed to the MCP Server, which translates the AI’s request into actions against the defined Business Tools.

These tools offer the flexibility to interact with enterprise systems in multiple ways, meeting the needs of traditional enterprises where they are. While a unified GraphQL API is the recommended approach for modern, consistent data access, the architecture also allows tools to connect directly to Legacy REST APIs (such as those for SAP or other internal services) or even perform direct database access to SQL databases or mainframes when necessary. This adaptability is crucial for integrating with a complex and heterogeneous IT landscape.
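
For example, a business tool fronting a legacy REST API could look like the following sketch, built with the FastMCP helper from the official MCP Python SDK; the internal billing endpoint and its response shape are assumptions.

# Sketch of an MCP business tool wrapping a legacy REST API. The billing
# endpoint URL and response shape are illustrative placeholders.
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("billing-tools")

@mcp.tool()
def get_subscription_status(customer_id: str) -> dict:
    """Look up a customer's subscription in the legacy billing service."""
    resp = requests.get(
        f"https://billing.internal.example.com/v1/subscriptions/{customer_id}",
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()  # stdio by default; a remote deployment would use an HTTP transport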

The Authentication Flow: A Modular Approach

The authentication process itself is designed to be both secure and flexible, leveraging the industry-standard OAuth 2.0 protocol. A key principle of this architecture is its modularity, particularly regarding the identity provider. While the reference implementation uses AWS Cognito, the design is not coupled to it.

MCP Detailed Secure OAuth2.0 Flow

As the diagram shows, the MCP Flow interacts with a “Pluggable Identity Provider.” This is a crucial design choice for enterprises. It means that an organization can easily replace AWS Cognito with its existing, preferred identity provider, such as Okta or Azure Active Directory. The Lambda Authorizer’s logic would be adapted to validate tokens issued by that provider, while the rest of the architecture — the API Gateway, the MCP Server, and the business logic — remains unchanged. This ensures that the AI integration effort leverages, rather than circumvents, the enterprise’s established investment in identity and access management.
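
In practice, swapping providers can be as small as changing the issuer the authorizer trusts. A hedged sketch, with placeholder issuer URLs:

# Hypothetical issuer configuration: replacing Cognito with Okta or Azure AD
# changes only these values; API Gateway, the MCP server, and the business
# logic stay untouched.
import requests

ISSUERS = {
    "cognito": "https://cognito-idp.us-east-1.amazonaws.com/<user-pool-id>",
    "okta": "https://<your-okta-domain>/oauth2/default",
    "azure-ad": "https://login.microsoftonline.com/<tenant-id>/v2.0",
}
issuer = ISSUERS["cognito"]

# Per OIDC discovery, each provider publishes its own JWKS location.
discovery = requests.get(f"{issuer}/.well-known/openid-configuration").json()
jwks_url = discovery["jwks_uri"]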

Enterprise Authentication Flow Components

1. OAuth2 Metadata Discovery

  • Purpose: AI clients discover enterprise OAuth2 capabilities
  • Endpoint: `/.well-known/openid-configuration`
  • Enterprise Features: Custom scopes, SAML federation support, MFA requirements
  • Response: Enterprise-specific OAuth2 configuration
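
From the client side, that discovery step is a single GET; a sketch with a placeholder domain:

# OAuth2/OIDC metadata discovery, as an AI client would perform it.
# The host name below is a placeholder for the enterprise's API domain.
import requests

config = requests.get(
    "https://mcp.example.com/.well-known/openid-configuration"
).json()

print(config["authorization_endpoint"])  # where users are sent to log in
print(config["token_endpoint"])          # where codes are exchanged for tokens
print(config["scopes_supported"])        # the enterprise's custom scopes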

2. Dynamic Client Registration (DCR)

  • Purpose: Automatic AI client onboarding without IT involvement
  • Client Detection: Automatically identifies public clients (Claude) vs confidential clients (Inspector)
  • Enterprise Integration: Creates clients in enterprise Cognito with proper permissions
  • Security: Client credentials are stored securely in the enterprise DynamoDB
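
A minimal sketch of the registration call (RFC 7591); the client metadata and callback URL are illustrative:

# Dynamic Client Registration sketch. The registration endpoint comes from
# the discovery document; the client metadata below is illustrative.
import requests

config = requests.get(
    "https://mcp.example.com/.well-known/openid-configuration"
).json()

registration = requests.post(
    config["registration_endpoint"],
    json={
        "client_name": "Example AI Client",
        "redirect_uris": ["https://client.example.com/callback"],
        "grant_types": ["authorization_code", "refresh_token"],
        "token_endpoint_auth_method": "none",  # public client: PKCE, no secret
    },
)
client_id = registration.json()["client_id"]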

3. Enterprise User Authorization

  • Integration: Seamless integration with existing enterprise identity providers
  • SSO Support: SAML, OIDC, Active Directory federation
  • MFA Enforcement: Enterprise MFA policies are automatically applied
  • Consent Management: Granular permission consent for AI access

4. Token Exchange with Enterprise Context

  • Enterprise Tokens: Cognito tokens include enterprise-specific claims
  • Permission Inheritance: AI tools inherit users’ enterprise permissions
  • Audit Trail: All token exchanges are logged with enterprise user attribution
  • Compliance: Token format supports SOC2, HIPAA, and other enterprise requirements
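
For illustration, a decoded access token in this model might carry claims like the following; all values are hypothetical:

# Hypothetical decoded access-token claims carrying enterprise context.
claims = {
    "sub": "a1b2c3d4-user-guid",  # the enterprise user's identity
    "iss": "https://cognito-idp.us-east-1.amazonaws.com/<user-pool-id>",
    "scope": "ai/basic-access ai/data-read data/customer-read",
    "custom:department": "analytics",         # enterprise-specific claim
    "custom:data_classification": "internal",
    "exp": 1735689600,                        # short-lived, hence auditable
}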

5. MCP Protocol with Enterprise Security

  • Authenticated Sessions: All MCP communications use enterprise tokens
  • Permission-Based Tools: Tool availability based on enterprise user roles
  • Data Sovereignty: Enterprise data never leaves the AWS environment
  • Real-time Validation: Continuous token validation for long-running sessions
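
A hedged sketch of an authenticated MCP call over HTTP; the endpoint URL and token plumbing are placeholders:

# Authenticated MCP request sketch: every JSON-RPC call carries the
# enterprise bearer token. The endpoint URL is a placeholder.
import requests

access_token = "<token obtained in step 4>"

resp = requests.post(
    "https://mcp.example.com/mcp",
    headers={
        "Authorization": f"Bearer {access_token}",
        "Accept": "application/json, text/event-stream",
    },
    json={"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
)
print(resp.json())  # only the tools this user's roles permit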

6. Enterprise Data Access

  • Same-Token Access: AI tools use enterprise tokens for data access on behalf of the authenticated and authorized users
  • Permission Enforcement: Data access respects enterprise RBAC policies
  • Audit Logging: Complete audit trail linking AI actions to enterprise users
  • Data Classification: Support for enterprise data classification and handling
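
Inside a tool, the same-token pattern is simple: the user’s own bearer token is forwarded downstream, so the data system enforces exactly that user’s permissions. The CRM URL below is a placeholder:

# Same-token data access sketch: the tool forwards the caller's own token,
# so the CRM enforces the user's RBAC policies, not a service account's.
import requests

def get_customer(customer_id: str, user_token: str) -> dict:
    resp = requests.get(
        f"https://crm.internal.example.com/v1/customers/{customer_id}",
        headers={"Authorization": f"Bearer {user_token}"},  # reuse user's token
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()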

Enterprise Security Architecture

Multi-Layer Security Model

Enterprise Security Layers

This diagram illustrates the multiple layers of security that the proposed MCP solution supports, meeting the strict security requirements of almost any enterprise that already operates in the cloud today.

Enterprise Permission Model Example:

Enterprise Resource Servers:
  enterprise-ai:
    scopes:
      - ai/basic-access: Basic AI tool access
      - ai/data-read: Read enterprise data
      - ai/data-write: Modify enterprise data
      - ai/admin: Administrative AI operations

  enterprise-data:
    scopes:
      - data/customer-read: Access customer data
      - data/financial-read: Access financial data
      - data/hr-read: Access HR data
      - data/analytics: Access business analytics

Enterprise User Groups:
  executives:
    scopes: [ai/*, data/*]  # Full access

  analysts:
    scopes: [ai/basic-access, ai/data-read, data/customer-read, data/analytics]

  developers:
    scopes: [ai/basic-access, ai/data-read, data/customer-read]

  contractors:
    scopes: [ai/basic-access]  # Limited access
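
A small sketch of enforcing this model at request time, including the wildcard grants used by the executives group; the helper is hypothetical:

# Hypothetical scope check matching the permission model above.
def has_scope(required: str, granted: set[str]) -> bool:
    for scope in granted:
        if scope == required:
            return True
        if scope.endswith("/*") and required.startswith(scope[:-1]):
            return True  # wildcard grant, e.g. "ai/*" covers "ai/data-read"
    return False

assert has_scope("ai/data-read", {"ai/*", "data/*"})        # executive
assert not has_scope("ai/data-write", {"ai/basic-access"})  # contractor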

The Path Forward: A Practical, Mature Approach

Acknowledging the need for both agility and robustness, a mature implementation of this architecture offers a flexible path from development to production. For example, the project started with an implementation in Python, a language known for its speed of iteration and rich ecosystem, making it ideal for prototyping and debugging.

As the use case matured and performance became critical, the system was seamlessly switched to a Rust implementation, which offers superior performance, memory efficiency, and compile-time safety for production workloads. This ability to use the right tool for the right job, without altering the client-facing contract, is a hallmark of a mature, enterprise-ready system.

The Power of Serverless: Security and Scalability by Default

A significant advantage of this architecture is its reliance on serverless technologies like AWS Lambda, API Gateway, Amazon Cognito, and DynamoDB. Traditional server-based infrastructure requires constant management — patching operating systems, configuring network security, and scaling capacity. Each of these tasks is a potential source of human error that can introduce security vulnerabilities.

Serverless computing abstracts away this entire layer of management.

  • Reduced Attack Surface: With no servers to manage, there are no servers to compromise. The cloud provider handles the security of the underlying infrastructure, including patching and maintenance, allowing development teams to focus on application-level security.
  • Cost Efficiency and Scalability: Serverless services operate on a pay-for-what-you-use model. This is incredibly cost-effective, especially for new initiatives where usage may be low and unpredictable. As demand grows, these services scale automatically to handle millions of requests without any need for manual intervention, ensuring both high availability and cost control.

By building on a serverless foundation, enterprises can create highly secure, scalable, and cost-efficient solutions without the operational overhead of traditional infrastructure.

Infrastructure as Code (IaC): A Blueprint for Security

In a modern enterprise environment, manual configuration of infrastructure is a significant liability. It is slow, prone to error, and nearly impossible to audit effectively. This is why embracing Infrastructure as Code (IaC) is not just a best practice, but a security imperative.

Using tools like the AWS Cloud Development Kit (CDK) or AWS CloudFormation, the entire architecture — from the API Gateway and Lambda functions to the IAM roles and database tables — is defined in code. This approach offers several critical benefits:

  • Automated Security Scanning: IaC templates can be integrated into CI/CD pipelines and automatically scanned for security misconfigurations before they are ever deployed. This proactive approach helps catch potential vulnerabilities, such as overly permissive IAM roles or unencrypted data stores, early in the development lifecycle.
  • Repeatability and Consistency: IaC eliminates configuration drift between environments. The exact same secure, tested, and compliant infrastructure can be deployed in development, staging, and production, ensuring consistency and reducing the risk of environment-specific vulnerabilities.
  • Auditability and Compliance: With the entire infrastructure defined as code, you have a version-controlled, auditable blueprint of your environment. This makes it dramatically simpler to demonstrate compliance with regulatory standards and to track every change made to the production system.

By codifying the infrastructure, security is no longer a manual checklist but an automated, repeatable, and auditable part of the development process, ensuring that the MCP servers and the data systems they protect are deployed into a secure and well-governed environment.
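
As a brief illustration, a CDK sketch (in Python) of the core pieces might look like the following; the construct names and asset paths are assumptions:

# CDK sketch wiring the API Gateway, Lambda Authorizer, and MCP server.
# Handler names and asset paths are illustrative.
from aws_cdk import Stack, aws_apigateway as apigw, aws_lambda as _lambda
from constructs import Construct

class McpStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # MCP server business logic
        mcp_fn = _lambda.Function(
            self, "McpServer",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="server.handler",
            code=_lambda.Code.from_asset("src/mcp_server"),
        )

        # Token authorizer validating OAuth 2.0 tokens at the gateway
        auth_fn = _lambda.Function(
            self, "TokenAuthorizerFn",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="authorizer.handler",
            code=_lambda.Code.from_asset("src/authorizer"),
        )
        authorizer = apigw.TokenAuthorizer(self, "McpAuthorizer", handler=auth_fn)

        # API Gateway front door: every route requires a valid token
        apigw.LambdaRestApi(
            self, "McpApi",
            handler=mcp_fn,
            default_method_options=apigw.MethodOptions(authorizer=authorizer),
        )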

Final Thoughts

Ultimately, the challenge of AI integration is not about chasing the newest, most hyped technology. It is about building a durable, secure, and scalable foundation upon which future innovations can be built.

By leveraging a standardized protocol like MCP and implementing it with mature, trusted cloud services that IT teams already master, enterprises can finally begin to bridge the gap between the promise of AI and the reality of their complex data landscapes. The future is not about replacing existing systems, but about augmenting them with intelligent capabilities, securely and effectively.

Take the Next Step with TrueMCP

If your organization is ready to move beyond experimentation and into real, secure, enterprise-grade AI adoption, TrueMCP offers a practical, proven path forward.
Whether you’re a CTO, architect, or innovation lead, now is the time to explore how TrueMCP can help you unlock the full potential of your data, without compromising on security, compliance, or control.
