Introduction to MCP and Industry Background
MCP serves as a "universal interface key" for AI assistants to access various internal content libraries, business systems, and development environments of enterprises, helping large models obtain the necessary external data and knowledge to generate more practical and evidence-based responses. In current SaaS applications, AI often faces the "data island" problem: even the most advanced models are constrained by data isolation, and each integration with a new enterprise system requires custom development, making it difficult for AI to form low-cost, scalable, and sustainable connections with business data. MCP was born precisely to address this pain point, replacing fragmented integration methods with a unified protocol, allowing AI systems to access the required data more simply and reliably.
[Figure: MCP provides a universal key for integrating LLMs with business systems]
As AI assistants become widespread in enterprises, the engineering and development cost of integrating them with the many real business systems they must touch (beyond model capability and prompt tuning) has grown increasingly prominent. This integration gap has long been the Achilles' heel restricting the large-scale deployment of AI.
MCP fills this gap—it clearly defines how to connect existing data sources (such as file systems, databases, APIs, etc.) into the AI workflow. Therefore, MCP is regarded as a key piece in building industrial application AI agents.
Since Anthropic announced the open-source MCP in November 2024, its ecosystem has rapidly developed in a matter of months: companies like Block (Square), Apollo, Zed, and Replit were the first to integrate MCP into their platforms. By February 2025, over 1000 community-built MCP servers (i.e., connectors) had emerged.
This thriving community has led to exponential growth in MCP's value—the more available tools, the greater the benefits of adopting this standard.
Moreover, MCP is open and model-agnostic: whether the model is Claude, GPT-4, or an open-source LLM, any developer or enterprise can build its own MCP integration without a license.
With the support of large AI vendors and the promotion of an open ecosystem, the industry generally believes that MCP is likely to become the de facto standard for AI access to external data, just like USB and HTTP are ubiquitous in their respective fields. For SaaS vendors, embracing MCP means that the AI capabilities of their products will be built on a universal standard, avoiding becoming "information islands" and allowing continuous expansion of accessible business data through community collaboration.
How MCP Works and Core Components
MCP Architecture Overview. MCP adopts a classic client-server model and establishes communication among three roles through JSON-RPC 2.0 messages:
Host, i.e., the LLM application or agent initiating the connection;
Client, embedded in the host application, managing a dedicated connection to a single server (the "MCP connector");
Server, implemented by external data sources or tools, providing contextual data and operational capabilities for AI use.
[Figure: MCP architecture, from Anthropic's official architecture diagram]
Because JSON-RPC 2.0 underpins the communication protocol, the host and server exchange structured requests, responses, and commands with schema-defined parameters and standard error codes, which is more rigorous and safer than plain text prompts.
It is worth noting that MCP connections are stateful and support capability negotiation between clients and servers, meaning that when the client connects to the server, both parties will exchange their supported sets of functions to ensure consistency and reliability in subsequent interactions. This handshake mechanism at the protocol level establishes a foundation of trust and understanding between AI and external systems.
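To make the handshake concrete, the exchange looks roughly like the following. This is a sketch of the JSON-RPC messages based on the public MCP specification, expressed here as Python dicts; the client and server names and versions are illustrative:

```python
# Sketch of the MCP initialize handshake as Python dict literals.
# Message shapes follow the public MCP specification; values are illustrative.

initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {"sampling": {}},  # features the client supports
        "clientInfo": {"name": "saas-ai-agent", "version": "1.0.0"},
    },
}

# The server replies with the capabilities it actually offers, so both sides
# know which features (tools, resources, prompts) can be used afterwards.
initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2024-11-05",
        "capabilities": {"tools": {}, "resources": {}, "prompts": {}},
        "serverInfo": {"name": "crm-connector", "version": "0.3.0"},
    },
}
```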
In the MCP protocol, the server can provide three main categories of functions to the client:
Resources, application-controlled data content available for the AI to retrieve and reference (such as file contents, database records, API responses);
Tools, functional operations the model itself can decide to invoke (e.g., "search data", "send message", "update records");
Prompts, prompt templates predefined by users or developers to guide the AI's interaction style or output format (e.g., document Q&A templates, summary report templates, JSON output formats).
Through these three types of interfaces, MCP enables the model to acquire static contextual information, perform dynamic operations, and standardize the language style or workflow when interacting with specific domains.
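As a concrete sketch of the three primitives, the FastMCP helper in the official Python SDK lets one server declare all of them with decorators. The server name, URI scheme, and business logic below are hypothetical placeholders:

```python
# Minimal MCP server sketch using the official Python SDK ("mcp" package).
# The order data and identifiers are hypothetical placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-business-server")

@mcp.resource("order://{order_id}")
def get_order(order_id: str) -> str:
    """Resource: a raw order record the model can read as context."""
    return f"Order {order_id}: status=shipped, qty=3"

@mcp.tool()
def update_priority(order_id: str, priority: int) -> str:
    """Tool: an operation the model may invoke to change state."""
    return f"Order {order_id} priority set to {priority}"

@mcp.prompt()
def summary_template(topic: str) -> str:
    """Prompt: a reusable template guiding the model's output format."""
    return f"Summarize {topic} as bullet points covering cost, status, and risk."

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```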
It is noteworthy that MCP supports dynamic discovery: AI agents do not need to hard-code access logic for every data source; whenever a new MCP-compliant server comes online (say, a newly integrated CRM or MES system), the AI client can recognize and use it through the standard protocol.
Traditional integration cannot match this: previously, every new integration meant developing and deploying new plugin or connection code, whereas with MCP, adding a data source feels more like plugging in a ready-made module, greatly improving scalability and flexibility. This turns an N×M problem into an N+M one: for example, connecting 5 AI applications to 20 systems drops from 100 bespoke integrations to 25 standardized components, significantly reducing development effort.
How SaaS Products Integrate MCP
From a developer's perspective, integrating MCP into a SaaS product mainly involves deploying the MCP server, integrating the MCP client, connecting proprietary data sources, and orchestrating tool calls.
These steps are walked through below:
Deploying the MCP server (connector): First, deploy an MCP server for the target data source or business system. This essentially creates (or installs) a "plugin" service conforming to the MCP protocol for your application.
The Anthropic team has open-sourced a series of MCP server implementations for commonly used systems, covering popular enterprise data sources such as Google Drive, Slack, GitHub, and PostgreSQL databases. Developers can install these pre-built servers directly and complete configuration by providing the corresponding credentials or API keys. For instance, to let AI access enterprise document storage, simply run the officially provided Google Drive MCP server and supply OAuth credentials to bring it online.
If integration with proprietary or special data sources is needed, you can utilize MCP's SDK to write your own server. In general, this only involves wrapping a thin layer around existing system APIs, exposing their functions in accordance with the MCP specifications.
Official SDKs are available in several languages (Python, TypeScript, Java, etc.) to accelerate development, and the community has built a rich set of examples and template projects for reference. Once deployed, the MCP server runs as an independent service, locally or in the cloud, waiting for client connections.
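For the "thin layer" case described above, a hedged sketch might look like this, forwarding one MCP tool call to an existing internal REST endpoint. The URL, response fields, and environment variables are made up for illustration:

```python
# Sketch: exposing an existing internal REST API as an MCP tool.
# The endpoint, response fields, and auth scheme are hypothetical.
import os

import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mes-connector")
MES_BASE = os.environ.get("MES_BASE_URL", "http://mes.internal.example.com")

@mcp.tool()
def trace_serial_number(sn: str) -> str:
    """Look up production traceability for a device serial number."""
    resp = requests.get(
        f"{MES_BASE}/api/trace/{sn}",
        headers={"Authorization": f"Bearer {os.environ['MES_TOKEN']}"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    return f"SN {sn}: line={data['line']}, operator={data['operator']}"

if __name__ == "__main__":
    mcp.run()
```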
Integrating the MCP client (connecting to AI models): Next, the MCP client feature needs to be enabled in the AI module of your SaaS application.
For scenarios using a ready-made MCP-capable AI platform such as Claude, this step can be very simple; in the Claude desktop application, for example, you can register the newly deployed server in its settings to complete the binding. If your SaaS uses a self-developed AI agent or another LLM service, you can establish the client connection in code using MCP's SDK.
Either way, the core task is to tell the AI application how to reach the MCP server (a local command or a network address), complete the handshake and authentication, and thereby inform the AI agent that new data sources/tools are available. Once the client connects successfully, it automatically obtains the list of functions and resource descriptions the server provides. For example, if the server declares a "file retrieval (search_files)" tool and a "file content" resource, the client registers these capabilities with the AI agent.
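With the official Python SDK, this client-side step can be sketched as follows; "mes_server.py" stands in for whatever command launches your connector:

```python
# Sketch: connect an MCP client to a server and discover its capabilities.
# "mes_server.py" is a hypothetical local connector script.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(command="python", args=["mes_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()                  # handshake and negotiation
            tools = await session.list_tools()          # discover declared tools
            resources = await session.list_resources()  # discover resources
            print([tool.name for tool in tools.tools])
            print([res.uri for res in resources.resources])

asyncio.run(main())
```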
It is worth mentioning that this registration does not require modifications to the AI model itself; the MCP protocol allows the expansion of AI's capability set without changing the main program code of the client. Newly added MCP servers will be automatically discovered and loaded. For SaaS developers, this means that AI assistant skills can be dynamically expanded through configuration, without frequently releasing new versions of the code.
Connecting business data and implementing plugin calls: Upon completing the client integration, the SaaS application possesses the ability to invoke the functions provided by this MCP server.
At this point, from the user's perspective the product appears unchanged, but behind the scenes the AI has "learned" a set of new tools. When a user makes a request, the model decides, based on the instruction and the list of available tools, whether to call an external tool to obtain an answer or perform an operation. For example, if a user asks in the customer service system, "Who produced the device with SN 20140711NCT?", the model, seeing that an MCP server offering full production traceability is available, sends a query for that SN through the MCP client (invoking a resource-reading tool) and then incorporates the result into its response.
The entire process is transparent to the end user: they simply receive an accurate answer about that SN without having to hunt for information across systems themselves. In other words, MCP hands the AI a ring of "keys" and lets it decide which key opens which door. In practice, developers can still guide the model through prompt engineering so it understands when to use new tools and how to choose sensibly among several. But overall, MCP dramatically lowers the threshold for letting AI connect to multiple systems at once, making the standardized interface feel as simple as calling local functions.
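In a self-built agent, the glue between the model's decision and the MCP call is typically a small dispatch step. The sketch below abstracts the model behind a hypothetical llm wrapper; only the session calls are real SDK methods:

```python
# Sketch of the dispatch step: when the model opts to use a tool, forward the
# call through the MCP session and hand the result back for the final answer.
# `session` is an initialized mcp.ClientSession; `llm` is a hypothetical
# wrapper that returns either a direct answer or a tool-call request.

async def answer(question: str, session, llm) -> str:
    tools = (await session.list_tools()).tools
    decision = llm.decide(question, tools)  # hypothetical: answer or pick a tool
    if decision.tool_name is None:
        return decision.text
    result = await session.call_tool(decision.tool_name, decision.arguments)
    # Feed the tool output back so the model can compose the final reply.
    return llm.compose(question, result.content)
```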
[Figure: LLM connects to business data and implements function calls: query interface]
[Figure: LLM function calls: create]
Testing and iterative optimization: Before putting the MCP integration into production, developers should thoroughly test how the AI assistant uses the newly integrated tools and make the necessary adjustments. In practice, the model's calling behavior can be tracked through logs and monitoring: does the model send requests to the MCP server correctly, are the parameters accurate, does the server return the expected results? For instance, when a user asks a question requiring cross-database retrieval, check whether the AI invoked the corresponding search tool and whether any inappropriate calls occurred. With these logs, the team can identify misuse or omissions in the model's calls and refine prompts or tool description documents accordingly, helping the AI better understand when to use which tool.
Additionally, it should be verified that data is correctly transmitted and that permission controls are in place—ensuring that AI receives content it is permitted to access. After several rounds of testing iterations, SaaS vendors can gradually refine the strategies for using MCP in various business scenarios.
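One lightweight way to obtain the call logs described above is to route every invocation through a logging shim, a sketch of which follows (adapt it to your own observability stack):

```python
# Sketch: log every MCP tool invocation for later auditing and prompt tuning.
import logging
import time

logger = logging.getLogger("mcp.audit")

async def logged_call_tool(session, name: str, arguments: dict):
    start = time.monotonic()
    try:
        result = await session.call_tool(name, arguments)
        logger.info("tool=%s args=%s ok in %.2fs",
                    name, arguments, time.monotonic() - start)
        return result
    except Exception:
        logger.exception("tool=%s args=%s failed", name, arguments)
        raise
```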
Examples of Use Cases
With MCP in place, SaaS products can deliver noticeably smarter experiences in many scenarios. One of the most direct applications today is seamlessly integrating enterprise knowledge and context into chat-style AI assistants. Below, we illustrate the practical value MCP brings, using a management information system such as the "New Core Cloud" MES as the example.
Collaborative Office Scenario
Imagine an enterprise collaboration platform (such as New Core Cloud) that integrates an MCP-enabled AI assistant. The platform gathers knowledge documents, work logs, project management data, and more. When employees ask the AI assistant questions, it can dynamically invoke multiple backend systems to retrieve information. For instance, when someone asks in a dialogue, "Please help me calculate the production cost of the CA809 camera," the AI assistant breaks the request into sub-tasks: it first fetches the product's BOM and production order through MCP (resource retrieval), then applies the preset "summary" prompt template, which frames the cost along three aspects:
Material: calculate the cost of each material in the BOM list;
Labor: compile processing costs from the production order;
Expense: provide a management cost estimate in line with industry standards.
Then, through another MCP server interface, it uses a messaging tool to send the summary to the user. The entire process is completed automatically by the AI assistant with minimal human intervention. Thanks to MCP's unified interface, the AI can collaborate across multiple internal tools in a single conversation while maintaining a consistent, unbroken context. Even multi-step operations across systems (data retrieval → analysis and calculation → executing actions) can be completed cohesively within the same conversational context under the MCP framework.
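Under the hood, such a multi-step request reduces to a sequence of MCP calls within one session. Here is a sketch with hypothetical tool names and result fields (get_bom, get_production_order, send_message) and an assumed flat overhead rate:

```python
# Sketch of the cost-summary workflow as sequential MCP calls.
# Tool names, result fields, and the overhead rate are hypothetical.
import json

OVERHEAD_RATE = 0.15  # assumed management-cost rate over direct cost

async def production_cost(session, product: str, user: str) -> float:
    bom_res = await session.call_tool("get_bom", {"product": product})
    order_res = await session.call_tool("get_production_order", {"product": product})
    # Assumes both tools return JSON in a single text content block.
    bom = json.loads(bom_res.content[0].text)
    order = json.loads(order_res.content[0].text)
    material = sum(item["qty"] * item["unit_cost"] for item in bom["items"])
    labor = order["processing_cost"]
    total = (material + labor) * (1 + OVERHEAD_RATE)
    await session.call_tool("send_message",
                            {"to": user, "text": f"{product} cost: {total:.2f}"})
    return total
```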
Customer Service Scenario
In the customer service field, MCP is also highly useful. Imagine an online customer service platform for a company that has introduced an MCP-supported AI assistant. When customers come to consult complex issues, the AI assistant can extract various internal data in real-time to provide professional answers. For instance:
A customer inquires: "What is the progress of the fault report I submitted last week?" The AI assistant will perform several tasks in parallel: it will look up the customer’s account and their report records through the CRM system's MCP connector, access the processing status of the corresponding fault order via the MES system's connector, and then synthesize this information to respond to the customer in natural language. Each of these backend queries is conducted through the MCP standard interface, and the AI customer service does not require complicated API invocation logic.
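The parallelism maps naturally onto concurrent MCP calls, for instance with asyncio.gather across the two connectors; the session objects and tool names below are hypothetical:

```python
# Sketch: fan out queries to the CRM and MES connectors concurrently.
import asyncio

async def ticket_status(crm_session, mes_session, customer_id: str):
    reports, status = await asyncio.gather(
        crm_session.call_tool("find_reports", {"customer_id": customer_id}),
        mes_session.call_tool("get_fault_order_status", {"customer_id": customer_id}),
    )
    # Both results are then handed to the model to phrase a natural-language reply.
    return reports, status
```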
When a customer requests an operation ("Please raise the priority of this issue"), the AI assistant can also call the MES system's "update priority" tool to fulfill the request (given explicit authorization). In the dialogue, the customer receives direct solutions or operation results, while the AI assistant completes cross-system data aggregation and command issuance in the background. Through MCP, a customer service AI can reach the enterprise knowledge base (to answer common questions), user information (to understand customer background), and business process systems (to execute operations), achieving genuinely personalized and automated service.
This capability goes far beyond what large language models can do simply through pre-training—only by connecting to real-time business data and systems can AI provide responses with business value. For customers, this means faster and more accurate problem resolution; for enterprises, this represents a leap in the level of intelligence in the customer service system, capable of reducing personnel costs while enhancing satisfaction.
[Figure: AI customer service built on business systems (3Chat)]
Technical Challenges and Practical Recommendations
Any new technology meets challenges on the way to production, and MCP is no exception. To help technical teams apply MCP effectively in SaaS products, the following draws on practical experience to analyze the main challenges (data security and permission control, system performance, and tool-usage effectiveness) and offers recommendations for each.
Data Security and Permission Control: Because MCP gives AI powerful capabilities to read internal enterprise data and execute operations, security must come first. Ensuring user authorization and awareness is a critical precondition: the AI may access a data source through MCP only after receiving explicit permission. Developers should enforce strict authentication and authorization throughout the MCP integration, for example validating the AI's access rights to each data source with API keys, OAuth tokens, or enterprise single sign-on.
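A minimal sketch of such a gate follows, assuming a placeholder token validator. In production the credential would normally ride on the transport or session rather than appear as a tool argument; it is an argument here only to keep the sketch self-contained:

```python
# Sketch: gate an MCP tool behind an explicit permission check.
# validate_token and the scope model are placeholders for your auth stack.
import os

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("secured-connector")

def validate_token(token: str, scope: str) -> bool:
    """Placeholder: verify the token and its scope against your identity provider."""
    return token == os.environ.get("EXPECTED_TOKEN") and scope == "orders:read"

@mcp.tool()
def read_order(order_id: str, access_token: str) -> str:
    """Return an order record only if the caller is authorized."""
    if not validate_token(access_token, "orders:read"):
        raise PermissionError("caller is not authorized for orders:read")
    return f"Order {order_id}: ..."
```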
System Performance and Architecture Optimization: The main performance and availability challenges lie in managing many connectors and distributing their deployment. As the number of MCP data sources connected to a SaaS product grows, multiple MCP servers must be maintained. These servers may run locally or in the cloud, each consuming resources and holding long-lived connections. As concurrent requests rise, keeping every connector highly available and responsive becomes a real test; if all connectors run on a single machine in the traditional manner, one crashed or stalled server process can knock out part of the AI's capabilities. It is therefore advisable to deploy MCP servers as containerized microservices and use container orchestration (such as Kubernetes) for scaling and failover, so each connector service stays independent and robust. Because early MCP implementations were oriented toward local and desktop use, take care to keep connections stateless, or to minimize state dependencies, so that services migrate and scale smoothly across instances in multi-tenant cloud environments.
Tool Utilization Effectiveness and Ecosystem Evolution: Ensuring that AI actually makes good use of the tools MCP offers takes some skill and patience. Although MCP makes adding tools easier, whether the model uses a new tool correctly depends on how well it understands the tool's description and on its inherent reasoning ability. Practical experience suggests that large models sometimes overlook available tools or hesitate over when to invoke them. Developers should therefore write careful descriptions for each MCP tool (function description, input/output examples, and so on) and use few-shot prompts in the dialogue context to suggest which tools address which problems; this is akin to teaching the model new skills, and a sketch of such a description appears at the end of this subsection. The log analysis mentioned in the testing phase is also crucial: keep adjusting descriptions and strategies based on the model's actual behavior and, if necessary, apply additional fine-tuning or reinforcement learning to help it use tools more effectively.
Secondly, the MCP specification and its implementations are still evolving rapidly, and new versions may introduce breaking changes. Development teams should follow the MCP community closely and update SDKs and server implementations promptly to pick up new features and stay compatible. At the product level, a version-adaptation layer can decouple MCP calls from application logic, so future protocol upgrades do not disturb the business itself.
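In practice, a "carefully written description" often just means an explicit docstring with usage guidance and an example, since the FastMCP helper surfaces the docstring as the tool description the model sees. A sketch:

```python
# Sketch: the docstring becomes the tool description exposed to the model,
# so spell out when to use the tool and include an input/output example.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("documented-connector")

@mcp.tool()
def search_files(query: str, limit: int = 5) -> str:
    """Search enterprise documents by keyword.

    Use this when the user asks about policies, specifications, or any stored
    document. Do NOT use it for live order status (use the MES tools instead).

    Example: search_files(query="2024 travel policy", limit=3) returns up to
    3 matching file titles with snippets.
    """
    return f"top {limit} matches for {query!r}: ..."
```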
MCP as a More Intelligent Path for SaaS Products
The Model Context Protocol, as an open standard, is injecting new vitality into SaaS products. It addresses the "last mile" of connecting AI with business data, keeping large models from becoming islands and letting intelligence genuinely embed itself in business processes. From technical architecture to development practice, we have seen the experience upgrades and capability expansion that MCP brings while preserving security and control.
For technical developers, MCP provides clear specifications and a rich toolchain, making it relatively easy to integrate into existing systems; for CIOs, introducing MCP into SaaS products means choosing an open, sustainable path to intelligence: quickly raising the AI capability of today's products while keeping interfaces open for future data sources and AI models.
It is foreseeable that, as the ecosystem matures and more vendors join, MCP will accelerate toward becoming an industry standard. The SaaS products that master and apply MCP first will gain the advantage in the next round of competition on intelligence, leading the industry toward interconnection and intelligent collaboration while creating greater value for users.