MCP Code Executor
Overview
What is the MCP Code Executor?
The MCP Code Executor is a dedicated server for executing Python code in a specified Python environment, such as a Conda environment. It acts as a bridge that lets large language models (LLMs) run Python scripts seamlessly while ensuring the required dependencies and environment are managed correctly. The tool is especially useful for developers and researchers who need to test and run code snippets in a controlled environment.
Features of the MCP Code Executor
- Environment management: runs code inside a configured Conda environment (standard and UV virtualenvs are also supported), so the dependencies the Python code needs are available.
- LLM integration: lets LLMs execute Python code, making it easy to bring AI capabilities into coding tasks.
- Public repository: the code is available in a public repository, enabling community contributions and collaboration.
- User-friendly: designed with usability in mind, so code can be executed without deep technical knowledge.
- Open source: the project is open source, and developers are encouraged to contribute and extend its functionality.
How to Use the MCP Code Executor
- Clone the repository: start by cloning the MCP Code Executor repository from GitHub.
git clone https://github.com/bazinga012/mcp_code_executor.git
- Install dependencies: change into the cloned directory, install the Node.js dependencies, and build the project.
cd mcp_code_executor
npm install
npm run build
- Run the server: register the built server in your MCP client's configuration (see the Details section below); the client launches it with Node.js from build/index.js.
- Execute code: send Python code for execution through the tools the server exposes. The server handles execution in the configured environment, such as a Conda environment (a sample execute_code call is shown after this list).
- Check the results: the output of the executed code is returned in the tool response.
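As a minimal sketch of the execute-and-check steps, an execute_code tool call (documented under Available Tools in the Details section) could look like the following; the code and filename values here are purely illustrative:
{
  "name": "execute_code",
  "arguments": {
    "code": "print('hello from the MCP Code Executor')",
    "filename": "hello_demo"
  }
}
The tool response then carries the printed output back to the LLM.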
Frequently Asked Questions
Which programming languages does the MCP Code Executor support?
Currently, the MCP Code Executor is designed specifically for Python. Future updates may add support for other languages.
Is the MCP Code Executor free to use?
Yes, the MCP Code Executor is open source and free to use. You can find the source code on GitHub.
How can I contribute to the MCP Code Executor project?
You can contribute by forking the repository, making your changes, and submitting a pull request. Community contributions are welcome and encouraged.
What are the system requirements for running the MCP Code Executor?
You need a system with Node.js installed, plus one of Conda, a standard Python virtualenv, or a UV virtualenv (see the Prerequisites section below).
Can I use the MCP Code Executor for production applications?
While the MCP Code Executor is primarily designed for testing and development, it can be adapted for production use with appropriate configuration and optimization.
Details
MCP Code Executor
The MCP Code Executor is an MCP server that allows LLMs to execute Python code within a specified Python environment. This enables LLMs to run code with access to libraries and dependencies defined in the environment. It also supports incremental code generation for handling large code blocks that may exceed token limits.
<a href="https://glama.ai/mcp/servers/45ix8xode3"><img width="380" height="200" src="https://glama.ai/mcp/servers/45ix8xode3/badge" alt="Code Executor MCP server" /></a>
Features
- Execute Python code from LLM prompts
- Support for incremental code generation to overcome token limitations
- Run code within a specified environment (Conda, virtualenv, or UV virtualenv)
- Install dependencies when needed
- Check if packages are already installed
- Dynamically configure the environment at runtime
- Configurable code storage directory
Prerequisites
- Node.js installed
- One of the following:
- Conda installed, with the desired Conda environment created
- Python virtualenv
- UV virtualenv
Setup
- Clone this repository:
git clone https://github.com/bazinga012/mcp_code_executor.git
- Navigate to the project directory:
cd mcp_code_executor
- Install the Node.js dependencies:
npm install
- Build the project:
npm run build
Configuration
To configure the MCP Code Executor server, add the following to your MCP servers configuration file:
Using Node.js
{
"mcpServers": {
"mcp-code-executor": {
"command": "node",
"args": [
"/path/to/mcp_code_executor/build/index.js"
],
"env": {
"CODE_STORAGE_DIR": "/path/to/code/storage",
"ENV_TYPE": "conda",
"CONDA_ENV_NAME": "your-conda-env"
}
}
}
}
Using Docker
{
"mcpServers": {
"mcp-code-executor": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"mcp-code-executor"
]
}
}
}
Note: The Dockerfile has been tested with the venv-uv environment type only. Other environment types may require additional configuration.
Environment Variables
Required Variables
- CODE_STORAGE_DIR: Directory where the generated code will be stored
Environment Type (choose one setup)
- For Conda:
  - ENV_TYPE: Set to conda
  - CONDA_ENV_NAME: Name of the Conda environment to use
- For Standard Virtualenv:
  - ENV_TYPE: Set to venv
  - VENV_PATH: Path to the virtualenv directory
- For UV Virtualenv:
  - ENV_TYPE: Set to venv-uv
  - UV_VENV_PATH: Path to the UV virtualenv directory
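For example, a configuration that targets a standard virtualenv instead of Conda could look like the sketch below; the paths are placeholders, and the Node.js launch command is the same as in the Conda example above:
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "node",
      "args": [
        "/path/to/mcp_code_executor/build/index.js"
      ],
      "env": {
        "CODE_STORAGE_DIR": "/path/to/code/storage",
        "ENV_TYPE": "venv",
        "VENV_PATH": "/path/to/your/venv"
      }
    }
  }
}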
Available Tools
The MCP Code Executor provides the following tools to LLMs:
1. execute_code
Executes Python code in the configured environment. Best for short code snippets.
{
"name": "execute_code",
"arguments": {
"code": "import numpy as np\nprint(np.random.rand(3,3))",
"filename": "matrix_gen"
}
}
2. install_dependencies
Installs Python packages in the environment.
{
"name": "install_dependencies",
"arguments": {
"packages": ["numpy", "pandas", "matplotlib"]
}
}
3. check_installed_packages
Checks if packages are already installed in the environment.
{
"name": "check_installed_packages",
"arguments": {
"packages": ["numpy", "pandas", "non_existent_package"]
}
}
4. configure_environment
Dynamically changes the environment configuration.
{
"name": "configure_environment",
"arguments": {
"type": "conda",
"conda_name": "new_env_name"
}
}
5. get_environment_config
Gets the current environment configuration.
{
"name": "get_environment_config",
"arguments": {}
}
6. initialize_code_file
Creates a new Python file with initial content. Use this as the first step for longer code that may exceed token limits.
{
"name": "initialize_code_file",
"arguments": {
"content": "def main():\n print('Hello, world!')\n\nif __name__ == '__main__':\n main()",
"filename": "my_script"
}
}
7. append_to_code_file
Appends content to an existing Python code file. Use this to add more code to a file created with initialize_code_file.
{
"name": "append_to_code_file",
"arguments": {
"file_path": "/path/to/code/storage/my_script_abc123.py",
"content": "\ndef another_function():\n print('This was appended to the file')\n"
}
}
8. execute_code_file
Executes an existing Python file. Use this as the final step after building up code with initialize_code_file and append_to_code_file.
{
"name": "execute_code_file",
"arguments": {
"file_path": "/path/to/code/storage/my_script_abc123.py"
}
}
9. read_code_file
Reads the content of an existing Python code file. Use this to verify the current state of a file before appending more content or executing it.
{
"name": "read_code_file",
"arguments": {
"file_path": "/path/to/code/storage/my_script_abc123.py"
}
}
Usage
Once configured, the MCP Code Executor will allow LLMs to execute Python code by generating a file in the specified CODE_STORAGE_DIR and running it within the configured environment.
LLMs can generate and execute code by referencing this MCP server in their prompts.
Handling Large Code Blocks
For larger code blocks that might exceed LLM token limits, use the incremental code generation approach:
- Initialize a file with the basic structure using initialize_code_file
- Add more code in subsequent calls using append_to_code_file
- Verify the file content if needed using read_code_file
- Execute the complete code using execute_code_file
This approach allows LLMs to write complex, multi-part code without running into token limitations.
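As an illustrative sketch, a two-call build followed by execution could look like the sequence below; the file_path values are placeholders in the style of the examples above, and in practice you would use the actual path of the file created in CODE_STORAGE_DIR:
{
  "name": "initialize_code_file",
  "arguments": {
    "content": "def greet(name):\n    return f'Hello, {name}!'\n",
    "filename": "greeting"
  }
}
{
  "name": "append_to_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/greeting_abc123.py",
    "content": "\nif __name__ == '__main__':\n    print(greet('world'))\n"
  }
}
{
  "name": "execute_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/greeting_abc123.py"
  }
}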
Backward Compatibility
This package maintains backward compatibility with earlier versions. Users of previous versions who only specified a Conda environment will continue to work without any changes to their configuration.
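As this implies, a legacy-style configuration that sets only CODE_STORAGE_DIR and CONDA_ENV_NAME, with no ENV_TYPE, should keep working and resolve to the named Conda environment; the sketch below is based on that compatibility statement rather than on the implementation itself:
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "node",
      "args": [
        "/path/to/mcp_code_executor/build/index.js"
      ],
      "env": {
        "CODE_STORAGE_DIR": "/path/to/code/storage",
        "CONDA_ENV_NAME": "your-conda-env"
      }
    }
  }
}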
Contributing
Contributions are welcome! Please open an issue or submit a pull request.
License
This project is licensed under the MIT License.
Server Configuration
{
"mcpServers": {
"mcp-code-executor": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"ghcr.io/metorial/mcp-container--bazinga012--mcp_code_executor--mcp-code-executor",
"node ./build/index.js"
],
"env": {
"CODE_STORAGE_DIR": "code-storage-dir",
"CONDA_ENV_NAME": "conda-env-name"
}
}
}
}