MCP Code Executor
Overview
What is the MCP Code Executor?
The MCP Code Executor is a dedicated server that executes Python code inside a specified Conda environment. It acts as a bridge that lets large language models (LLMs) run Python scripts while ensuring the required dependencies and environment are managed correctly. It is particularly useful for developers and researchers who need to test and run code snippets in a controlled environment.
Features of the MCP Code Executor
- Conda environment management: automatically sets up and manages Conda environments so that all dependencies required for Python code execution are satisfied.
- LLM integration: lets LLMs execute Python code, making it easier to apply AI capabilities to coding tasks.
- Public repository: the code is available in a public repository, enabling community contribution and collaboration.
- User-friendly interface: designed for usability, so users can execute code without deep technical knowledge.
- Open source: the project is open source, encouraging developers to contribute and extend its functionality.
How to Use the MCP Code Executor
1. Clone the repository: start by cloning the MCP Code Executor repository from GitHub.
   git clone https://github.com/bazinga012/mcp_code_executor.git
2. Install dependencies: navigate into the cloned directory and install the required dependencies with Conda.
   cd mcp_code_executor
   conda env create -f environment.yml
   conda activate mcp_env
3. Run the server: start the MCP Code Executor server to begin executing Python code.
   python server.py
4. Execute code: send Python code for execution through the provided API or interface. The server handles execution inside the specified Conda environment.
5. Check the results: retrieve the output of the executed code through the interface or the API response.
Frequently Asked Questions
Which programming languages does the MCP Code Executor support?
Currently, the MCP Code Executor is designed specifically for Python. Future updates may add support for other languages.
Is the MCP Code Executor free to use?
Yes, the MCP Code Executor is open source and free to use. The source code is available on GitHub.
How can I contribute to the MCP Code Executor project?
You can contribute by forking the repository, making your changes, and submitting a pull request. Community contributions are welcome and encouraged.
What are the system requirements for running the MCP Code Executor?
You need a system that supports Conda and Python. The specific requirements are listed in the environment.yml file in the repository.
Can I use the MCP Code Executor in production applications?
Although the MCP Code Executor is designed primarily for testing and development, it can be adapted for production use with appropriate configuration and optimization.
Details
MCP Code Executor
The MCP Code Executor is an MCP server that allows LLMs to execute Python code within a specified Python environment. This enables LLMs to run code with access to libraries and dependencies defined in the environment. It also supports incremental code generation for handling large code blocks that may exceed token limits.
<a href="https://glama.ai/mcp/servers/45ix8xode3"><img width="380" height="200" src="https://glama.ai/mcp/servers/45ix8xode3/badge" alt="Code Executor MCP server" /></a>
Features
- Execute Python code from LLM prompts
- Support for incremental code generation to overcome token limitations
- Run code within a specified environment (Conda, virtualenv, or UV virtualenv)
- Install dependencies when needed
- Check if packages are already installed
- Dynamically configure the environment at runtime
- Configurable code storage directory
Prerequisites
- Node.js installed
- One of the following:
- Conda installed with desired Conda environment created
- Python virtualenv
- UV virtualenv
Setup
1. Clone this repository:

```bash
git clone https://github.com/bazinga012/mcp_code_executor.git
```

2. Navigate to the project directory:

```bash
cd mcp_code_executor
```

3. Install the Node.js dependencies:

```bash
npm install
```

4. Build the project:

```bash
npm run build
```
Configuration
To configure the MCP Code Executor server, add the following to your MCP servers configuration file:
Using Node.js
```json
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "node",
      "args": [
        "/path/to/mcp_code_executor/build/index.js"
      ],
      "env": {
        "CODE_STORAGE_DIR": "/path/to/code/storage",
        "ENV_TYPE": "conda",
        "CONDA_ENV_NAME": "your-conda-env"
      }
    }
  }
}
```
Using Docker
```json
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "mcp-code-executor"
      ]
    }
  }
}
```
Note: The Dockerfile has been tested with the venv-uv environment type only. Other environment types may require additional configuration.
Environment Variables
Required Variables
- `CODE_STORAGE_DIR`: Directory where the generated code will be stored

Environment Type (choose one setup)
- For Conda:
  - `ENV_TYPE`: set to `conda`
  - `CONDA_ENV_NAME`: name of the Conda environment to use
- For Standard Virtualenv:
  - `ENV_TYPE`: set to `venv`
  - `VENV_PATH`: path to the virtualenv directory
- For UV Virtualenv:
  - `ENV_TYPE`: set to `venv-uv`
  - `UV_VENV_PATH`: path to the UV virtualenv directory
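As an illustration, a configuration using a standard virtualenv might combine these variables as follows (all paths shown are placeholders to replace with your own):

```json
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "node",
      "args": [
        "/path/to/mcp_code_executor/build/index.js"
      ],
      "env": {
        "CODE_STORAGE_DIR": "/path/to/code/storage",
        "ENV_TYPE": "venv",
        "VENV_PATH": "/path/to/your/venv"
      }
    }
  }
}
```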
Available Tools
The MCP Code Executor provides the following tools to LLMs:
1. execute_code
Executes Python code in the configured environment. Best for short code snippets.
```json
{
  "name": "execute_code",
  "arguments": {
    "code": "import numpy as np\nprint(np.random.rand(3,3))",
    "filename": "matrix_gen"
  }
}
```
2. install_dependencies
Installs Python packages in the environment.
```json
{
  "name": "install_dependencies",
  "arguments": {
    "packages": ["numpy", "pandas", "matplotlib"]
  }
}
```
3. check_installed_packages
Checks if packages are already installed in the environment.
```json
{
  "name": "check_installed_packages",
  "arguments": {
    "packages": ["numpy", "pandas", "non_existent_package"]
  }
}
```
4. configure_environment
Dynamically changes the environment configuration.
```json
{
  "name": "configure_environment",
  "arguments": {
    "type": "conda",
    "conda_name": "new_env_name"
  }
}
```
5. get_environment_config
Gets the current environment configuration.
```json
{
  "name": "get_environment_config",
  "arguments": {}
}
```
6. initialize_code_file
Creates a new Python file with initial content. Use this as the first step for longer code that may exceed token limits.
```json
{
  "name": "initialize_code_file",
  "arguments": {
    "content": "def main():\n    print('Hello, world!')\n\nif __name__ == '__main__':\n    main()",
    "filename": "my_script"
  }
}
```
7. append_to_code_file
Appends content to an existing Python code file. Use this to add more code to a file created with initialize_code_file.
```json
{
  "name": "append_to_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/my_script_abc123.py",
    "content": "\ndef another_function():\n    print('This was appended to the file')\n"
  }
}
```
8. execute_code_file
Executes an existing Python file. Use this as the final step after building up code with initialize_code_file and append_to_code_file.
```json
{
  "name": "execute_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/my_script_abc123.py"
  }
}
```
9. read_code_file
Reads the content of an existing Python code file. Use this to verify the current state of a file before appending more content or executing it.
```json
{
  "name": "read_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/my_script_abc123.py"
  }
}
```
Usage
Once configured, the MCP Code Executor allows LLMs to execute Python code by generating a file in the specified `CODE_STORAGE_DIR` and running it within the configured environment.
LLMs can generate and execute code by referencing this MCP server in their prompts.
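Conceptually, the execution flow is simple: write the snippet to a uniquely named file under the storage directory, then run it with the configured environment's interpreter and capture the output. The sketch below is an illustration of that idea only, not the server's actual implementation (which is written in Node.js); `execute_code` here is a hypothetical helper.

```python
import subprocess
import sys
import tempfile
import uuid
from pathlib import Path

def execute_code(code: str, storage_dir: str, python_bin: str = sys.executable) -> str:
    """Illustrative helper: write `code` to a uniquely named file in
    `storage_dir`, run it with the given interpreter, and return stdout."""
    path = Path(storage_dir) / f"snippet_{uuid.uuid4().hex[:8]}.py"
    path.write_text(code)
    result = subprocess.run(
        [python_bin, str(path)],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout

print(execute_code("print(2 + 2)", tempfile.gettempdir()))
```

In the real server, `python_bin` would be resolved from `ENV_TYPE` plus `CONDA_ENV_NAME`, `VENV_PATH`, or `UV_VENV_PATH`, so the snippet runs with that environment's installed packages.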
Handling Large Code Blocks
For larger code blocks that might exceed LLM token limits, use the incremental code generation approach:
1. Initialize a file with the basic structure using `initialize_code_file`
2. Add more code in subsequent calls using `append_to_code_file`
3. Verify the file content if needed using `read_code_file`
4. Execute the complete code using `execute_code_file`
This approach allows LLMs to write complex, multi-part code without running into token limitations.
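For example, a two-part script could be built and run with a sequence of tool calls like the following (the `file_path` values are placeholders; in practice you would use the generated path reported back by `initialize_code_file`):

```json
[
  {
    "name": "initialize_code_file",
    "arguments": {
      "filename": "report",
      "content": "def load_data():\n    return [1, 2, 3]\n"
    }
  },
  {
    "name": "append_to_code_file",
    "arguments": {
      "file_path": "/path/to/code/storage/report_abc123.py",
      "content": "\nprint(sum(load_data()))\n"
    }
  },
  {
    "name": "execute_code_file",
    "arguments": {
      "file_path": "/path/to/code/storage/report_abc123.py"
    }
  }
]
```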
Backward Compatibility
This package maintains backward compatibility with earlier versions. Users of previous versions who only specified a Conda environment will continue to work without any changes to their configuration.
Contributing
Contributions are welcome! Please open an issue or submit a pull request.
License
This project is licensed under the MIT License.
Server Configuration
```json
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "ghcr.io/metorial/mcp-container--bazinga012--mcp_code_executor--mcp-code-executor",
        "node ./build/index.js"
      ],
      "env": {
        "CODE_STORAGE_DIR": "code-storage-dir",
        "CONDA_ENV_NAME": "conda-env-name"
      }
    }
  }
}
```