MCP Conceal

Created by gbrigandi

A privacy-preserving MCP proxy that intelligently pseudo-anonymizes personally identifiable information (PII) in real time before data reaches external AI providers, while preserving the semantic relationships needed for accurate analysis.

Overview

What is MCP Server Conceal?

MCP Server Conceal is a privacy-focused proxy that intelligently pseudo-anonymizes personally identifiable information (PII) in real time. Sensitive data is protected before it ever reaches external AI providers, while the semantic relationships needed for accurate analysis are preserved. The tool is particularly useful for organizations that value data privacy and regulatory compliance.

Features of MCP Server Conceal

  • Real-time PII anonymization: sensitive data is anonymized automatically in transit, so PII is never exposed to external services.
  • Semantic relationship preservation: the context and relationships within the data are retained, enabling meaningful analysis without compromising privacy.
  • User-friendly: designed for quick setup and easy integration into existing systems.
  • Open source: available on GitHub, enabling community contributions and development transparency.
  • MIT licensed: the project is released under the MIT license, permitting free use, modification, and distribution.

How to Use MCP Server Conceal

  1. Install: clone the repository from GitHub and follow the installation instructions in the README.
  2. Configure: set up the proxy to define how data should be anonymized and which properties to preserve.
  3. Integrate: place MCP Server Conceal in your existing data flow so that all outgoing data passes through the proxy.
  4. Test: verify that PII is anonymized correctly and that semantic relationships are preserved.
  5. Deploy: once testing is complete, roll the solution out to production.

FAQ

Q: What types of data does MCP Server Conceal anonymize?

A: MCP Server Conceal is designed to anonymize many kinds of PII, including names, addresses, email addresses, and other sensitive information.

Q: Is MCP Server Conceal suitable for all industries?

A: Yes. It suits any industry that handles sensitive data and must comply with privacy regulations, such as healthcare, finance, and e-commerce.

Q: Can I customize the anonymization process?

A: Yes, MCP Server Conceal lets you tailor anonymization rules to your organization's needs.

Q: How does MCP Server Conceal preserve semantic relationships?

A: Each real value is consistently replaced with the same fake value, so relationships between data points survive anonymization and analysis remains accurate.

Q: Where can I find support for MCP Server Conceal?

A: Support is available through the GitHub repository, where users can report issues, join discussions, and access the documentation.

Details

MCP Conceal

An MCP proxy that pseudo-anonymizes PII before data reaches external AI providers like Claude, ChatGPT, or Gemini.

sequenceDiagram
    participant C as AI Client (Claude)
    participant P as MCP Conceal
    participant S as Your MCP Server
    
    C->>P: Request
    P->>S: Request
    S->>P: Response with PII
    P->>P: PII Detection
    P->>P: Pseudo-Anonymization
    P->>P: Consistent Mapping
    P->>C: Sanitized Response

MCP Conceal performs pseudo-anonymization rather than redaction to preserve semantic meaning and data relationships required for AI analysis. Example: john.smith@acme.com becomes mike.wilson@techcorp.com, maintaining structure while protecting sensitive information.
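Consistency is the key design point here: the same real value must always map to the same fake value, or cross-record relationships break. A minimal Rust sketch of that idea (hypothetical types and a toy hash standing in for the faker; the real proxy persists mappings in SQLite and generates fakes from a seeded faker, per the configuration below):

use std::collections::HashMap;

// Illustrative only: the real proxy stores mappings in SQLite
// (see [mapping] in the configuration) and uses a faker library.
struct PseudoAnonymizer {
    seed: u64,
    mappings: HashMap<String, String>,
}

impl PseudoAnonymizer {
    fn new(seed: u64) -> Self {
        Self { seed, mappings: HashMap::new() }
    }

    // Return the cached fake value for `real`, or derive a new one.
    fn anonymize(&mut self, real: &str) -> String {
        if let Some(fake) = self.mappings.get(real) {
            return fake.clone();
        }
        // Stand-in for a seeded faker call: a stable hash of seed + input,
        // so the same input always yields the same fake token.
        let digest = real
            .bytes()
            .fold(self.seed, |h, b| h.wrapping_mul(31).wrapping_add(b as u64));
        let fake = format!("user{:04}@example.com", digest % 10_000);
        self.mappings.insert(real.to_string(), fake.clone());
        fake
    }
}

fn main() {
    let mut anon = PseudoAnonymizer::new(12345);
    let a = anon.anonymize("john.smith@acme.com");
    let b = anon.anonymize("john.smith@acme.com");
    assert_eq!(a, b); // same real value always maps to the same fake value
}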

Installation

Download Pre-built Binary

  1. Visit the Releases page
  2. Download the binary for your platform:
  Platform              Binary
  Linux x64             mcp-server-conceal-linux-amd64
  macOS Intel           mcp-server-conceal-macos-amd64
  macOS Apple Silicon   mcp-server-conceal-macos-aarch64
  Windows x64           mcp-server-conceal-windows-amd64.exe
  3. Make executable: chmod +x mcp-server-conceal-* (Linux/macOS)
  4. Add to PATH:
    • Linux/macOS: mv mcp-server-conceal-* /usr/local/bin/mcp-server-conceal
    • Windows: Move to a directory in your PATH or add current directory to PATH

Building from Source

git clone https://github.com/gbrigandi/mcp-server-conceal
cd mcp-server-conceal
cargo build --release

Binary location: target/release/mcp-server-conceal

Quick Start

Prerequisites

Install Ollama for LLM-based PII detection:

  1. Install Ollama: ollama.ai
  2. Pull model: ollama pull llama3.2:3b
  3. Verify: curl http://localhost:11434/api/version
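If you prefer a programmatic check, here is a minimal Rust sketch against the same endpoint, using the reqwest crate with its blocking feature (an illustrative choice, not part of this project):

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Same endpoint as the curl check above; prints the version JSON.
    let body = reqwest::blocking::get("http://localhost:11434/api/version")?.text()?;
    println!("Ollama responded: {body}");
    Ok(())
}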

Basic Usage

Create a minimal mcp-server-conceal.toml:

[detection]
mode = "regex_llm"

[llm]
model = "llama3.2:3b"
endpoint = "http://localhost:11434"

See the Configuration section for all available options.

Run as proxy:

mcp-server-conceal \
  --target-command python3 \
  --target-args "my-mcp-server.py" \
  --config mcp-server-conceal.toml

Configuration

Complete configuration reference:

[detection]
mode = "regex_llm"                # Detection strategy: regex, llm, regex_llm
enabled = true                    
confidence_threshold = 0.8        # Detection confidence threshold (0.0-1.0)

[detection.patterns]
email = "\\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Z|a-z]{2,}\\b"
phone = "\\b(?:\\+?1[-\\.\\s]?)?(?:\\(?[0-9]{3}\\)?[-\\.\\s]?)?[0-9]{3}[-\\.\\s]?[0-9]{4}\\b"
ssn = "\\b\\d{3}-\\d{2}-\\d{4}\\b"
credit_card = "\\b\\d{4}[-\\s]?\\d{4}[-\\s]?\\d{4}[-\\s]?\\d{4}\\b"
ip_address = "\\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\b"
url = "https?://[^\\s/$.?#].[^\\s]*"

[faker]
locale = "en_US"                  # Locale for generating realistic fake PII data
seed = 12345                      # Seed ensures consistent anonymization across restarts
consistency = true                # Same real PII always maps to same fake data

[mapping]
database_path = "mappings.db"     # SQLite database storing real-to-fake mappings
retention_days = 90               # Delete old mappings after N days

[llm]
model = "llama3.2:3b"             # Ollama model for PII detection
endpoint = "http://localhost:11434"
timeout_seconds = 180
prompt_template = "default"       # Template for PII detection prompts

[llm_cache]
enabled = true                    # Cache LLM detection results for performance
database_path = "llm_cache.db"
max_text_length = 2000
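When editing [detection.patterns], it helps to test a pattern against sample text before deploying. A small sketch using the regex crate (an illustrative choice); note that TOML escapes each backslash, so one level is removed in a Rust raw string:

use regex::Regex;

fn main() {
    // The email pattern from [detection.patterns], unescaped from TOML.
    let email = Regex::new(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b").unwrap();
    let sample = "Contact john.smith@acme.com about invoice 4417.";
    for m in email.find_iter(sample) {
        println!("matched: {}", m.as_str()); // matched: john.smith@acme.com
    }
}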

Configuration Guidance

Detection Settings:

  • confidence_threshold: Lower values (0.6) catch more PII but increase false positives. Higher values (0.9) are more precise but may miss some PII.
  • mode: Choose based on your latency vs accuracy requirements (see Detection Modes below)
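For intuition, a toy sketch of what the threshold gates (hypothetical types, not the proxy's internals): detections scored below confidence_threshold are simply dropped.

struct Detection {
    kind: &'static str,
    value: String,
    confidence: f64, // 0.0-1.0, compared against confidence_threshold
}

// Keep only detections at or above the configured threshold.
fn filter_detections(found: Vec<Detection>, threshold: f64) -> Vec<Detection> {
    found.into_iter().filter(|d| d.confidence >= threshold).collect()
}

fn main() {
    let found = vec![
        Detection { kind: "email", value: "a@b.com".into(), confidence: 0.95 },
        Detection { kind: "name", value: "Jo".into(), confidence: 0.65 },
    ];
    // At 0.8 the ambiguous name is dropped; at 0.6 it would be kept.
    assert_eq!(filter_detections(found, 0.8).len(), 1);
}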

Faker Settings:

  • locale: Use "en_US" for American names/addresses, "en_GB" for British, etc. Affects realism of generated fake data
  • seed: Keep consistent across deployments to ensure same real data maps to same fake data
  • consistency: Always leave true to maintain data relationships

Mapping Settings:

  • retention_days: Balance between data consistency and storage. Shorter periods (30 days) reduce storage but may cause inconsistent anonymization for recurring data
  • database_path: Use absolute paths in production to avoid database location issues

Detection Modes

Choose the detection strategy based on your performance requirements and data complexity:

RegexLlm (Default)

Best for production environments - Combines speed and accuracy:

  • Phase 1: Fast regex catches common patterns (emails, phones, SSNs)
  • Phase 2: LLM analyzes remaining text for complex PII
  • Use when: You need comprehensive detection with reasonable performance
  • Performance: ~100-500ms per request depending on text size
  • Configure: mode = "regex_llm"
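A simplified sketch of the two-phase flow (toy stand-ins for both phases; the real proxy applies the configured regex patterns and calls Ollama for the LLM pass):

struct Detection {
    kind: &'static str,
    value: String,
}

// Phase 1: fast pattern matching (toy stand-in for the real regexes).
fn regex_phase(text: &str) -> Vec<Detection> {
    text.split_whitespace()
        .filter(|w| w.contains('@'))
        .map(|w| Detection { kind: "email", value: w.to_string() })
        .collect()
}

// Phase 2: contextual analysis (in the real proxy, an Ollama call).
fn llm_phase(text: &str) -> Vec<Detection> {
    if text.contains("account number") {
        vec![Detection { kind: "account_id", value: "ABC123".into() }]
    } else {
        Vec::new()
    }
}

fn main() {
    let text = "my account number is ABC123, email john.smith@acme.com";
    let mut found = regex_phase(text);
    found.extend(llm_phase(text)); // the LLM covers what regex missed
    for d in &found {
        println!("{}: {}", d.kind, d.value);
    }
}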

Regex Only

Best for high-volume, latency-sensitive applications:

  • Uses only pattern matching - no AI analysis
  • Use when: You have well-defined PII patterns and need <10ms response
  • Trade-off: May miss contextual PII like "my account number is ABC123"
  • Configure: mode = "regex"

LLM Only

Best for complex, unstructured data:

  • AI-powered detection catches nuanced PII patterns
  • Use when: Accuracy is more important than speed
  • Performance: ~200-1000ms per request
  • Configure: mode = "llm"

Advanced Usage

Claude Desktop Integration

Configure Claude Desktop to proxy MCP servers:

{
  "mcpServers": {
    "database": {
      "command": "mcp-server-conceal",
      "args": [
        "--target-command", "python3",
        "--target-args", "database-server.py --host localhost",
        "--config", "/path/to/mcp-server-conceal.toml"
      ],
      "env": {
        "DATABASE_URL": "postgresql://localhost/mydb"
      }
    }
  }
}

Custom LLM Prompts

Customize detection prompts for specific domains:

Template locations:

  • Linux: ~/.local/share/mcp-server-conceal/prompts/
  • macOS: ~/Library/Application Support/com.mcp-server-conceal.mcp-server-conceal/prompts/
  • Windows: %LOCALAPPDATA%\com\mcp-server-conceal\mcp-server-conceal\data\prompts\

Usage:

  1. Run MCP Conceal once to auto-generate default.md in the prompts directory:
    mcp-server-conceal --target-command echo --target-args "test" --config mcp-server-conceal.toml
    
  2. Copy: cp default.md healthcare.md
  3. Edit template for domain-specific PII patterns
  4. Configure: prompt_template = "healthcare"

Environment Variables

Pass environment variables to target process:

mcp-server-conceal \
  --target-command node \
  --target-args "server.js" \
  --target-cwd "/path/to/server" \
  --target-env "DATABASE_URL=postgresql://localhost/mydb" \
  --target-env "API_KEY=secret123" \
  --config mcp-server-conceal.toml

Troubleshooting

Enable debug logging:

RUST_LOG=debug mcp-server-conceal \
  --target-command python3 \
  --target-args server.py \
  --config mcp-server-conceal.toml

Common Issues:

  • Invalid regex patterns in configuration
  • Ollama connectivity problems
  • Database file permissions
  • Missing prompt templates

Security

Mapping Database: Contains sensitive real-to-fake mappings. Secure with appropriate file permissions.

LLM Integration: Run Ollama on trusted infrastructure when using LLM-based detection modes.
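On Unix, restricting the mapping database to its owning user is a sensible baseline (equivalent to chmod 600 mappings.db); a standard-library sketch:

use std::fs;
use std::os::unix::fs::PermissionsExt;

fn main() -> std::io::Result<()> {
    // Owner read/write only (0600): other local users cannot read the
    // real-to-fake mappings.
    fs::set_permissions("mappings.db", fs::Permissions::from_mode(0o600))
}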

Contributing

Contributions are welcome! Follow these steps to get started:

Development Setup

Prerequisites:

  1. Clone and setup:

    git clone https://github.com/gbrigandi/mcp-server-conceal
    cd mcp-server-conceal
    
  2. Build in development mode:

    cargo build
    cargo test
    
  3. Install development tools:

    rustup component add clippy rustfmt
    
  4. Run with debug logging:

    RUST_LOG=debug cargo run -- --target-command cat --target-args test.txt --config mcp-server-conceal.toml
    

Testing

  • Unit tests: cargo test
  • Integration tests: cargo test --test integration_test
  • Linting: cargo clippy
  • Formatting: cargo fmt

Submitting Changes

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature-name
  3. Make your changes and add tests
  4. Ensure all tests pass: cargo test
  5. Format code: cargo fmt
  6. Submit a pull request with a clear description

License

MIT License - see LICENSE file for details.

Server Configuration

{
  "mcpServers": {
    "conceal": {
      "command": "mcp-server-conceal",
      "args": [
        "--target-command",
        "python3",
        "--target-args",
        "database-server.py --host localhost",
        "--config",
        "/path/to/mcp-server-conceal.toml"
      ],
      "env": {
        "DATABASE_URL": "postgresql://localhost/mydb"
      }
    }
  }
}
