CodeGate exists because AI coding tools now execute repository-controlled instructions, and that shifts trust from software binaries to project files that many users never review. Security reports have repeatedly shown the same pattern: risky behavior is treated as “documented behavior,” users install quickly and skip deep configuration review, and malicious or unsafe instructions can hide in normal-looking project files. The result is a trust gap. Documentation alone does not protect users when execution surfaces are broad, defaults are permissive, and configuration changes are hard to see.

The trust gap

Before an AI tool runs, a repository can influence behavior through:
  • MCP server definitions and remote endpoints
  • Hooks, workflows, and command templates
  • Rule and skill markdown files that can carry hidden or coercive instructions
  • Workspace settings and extension manifests
  • Files that change over time after a user has already “trusted” a project
Most users do not have a clear, consolidated view of those surfaces at launch time. CodeGate is built to reduce that gap before execution starts.
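The surfaces above can be enumerated mechanically. A minimal sketch of that discovery step, assuming a handful of illustrative file patterns (the globs below are examples for this sketch, not CodeGate's actual rule set):

```python
from pathlib import Path

# Illustrative configuration surfaces an AI coding tool may read at launch.
# These globs are assumptions for the sketch, not CodeGate's detection rules.
SURFACE_GLOBS = [
    ".mcp.json",                # MCP server definitions
    ".vscode/settings.json",    # workspace settings
    ".github/workflows/*.yml",  # hooks and workflows
    "**/*.cursorrules",         # rule files
]

def discover_surfaces(repo: Path) -> list[Path]:
    """Return project files that can influence an AI tool before it runs."""
    found: set[Path] = set()
    for pattern in SURFACE_GLOBS:
        found.update(p for p in repo.glob(pattern) if p.is_file())
    return sorted(found)
```

Even a list this small makes the point: the files are ordinary project content, versioned like everything else, and nothing in a normal workflow prompts a user to read them before launch.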
Real incidents that shaped this project:
  • CVE-2025-59536 — MCP consent bypass (Check Point disclosure). A repository-controlled MCP server definition could cause an AI tool to bypass user consent dialogs silently.
  • CVE-2026-21852 — API key exfiltration path (Check Point disclosure). Configuration surfaces exposed a path for AI tools to exfiltrate API credentials via remote endpoints.
  • CVE-2025-61260 — Codex CLI command injection, CVSS 9.8 (Check Point disclosure). Malicious project files could inject arbitrary shell commands into Codex CLI execution.
  • IDEsaster research — 30+ CVEs across major AI IDEs and agents, demonstrating that broad execution surfaces and permissive defaults are a recurring structural problem.
In each case, the dangerous behavior was present in project-controlled files that users were unlikely to review before running their coding agent.

What CodeGate tries to do

CodeGate provides a pre-flight workflow that helps you:
  • Discover execution and configuration surfaces across your project
  • Detect common high-risk patterns before any agent runs
  • Understand risk with enough context to make an informed decision
  • Apply reversible remediation where possible
  • Recheck for trust drift just before tool launch with codegate run
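The trust-drift recheck in the last step can be approximated by fingerprinting configuration surfaces at trust time and comparing digests just before launch. A minimal sketch, assuming SHA-256 content hashes (the manifest shape is an assumption, not CodeGate's storage format):

```python
import hashlib
from pathlib import Path

def fingerprint(paths: list[Path]) -> dict[str, str]:
    """Map each surface file to a SHA-256 digest of its contents."""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest() for p in paths}

def drifted(trusted: dict[str, str], current: dict[str, str]) -> list[str]:
    """Return surfaces that changed, appeared, or vanished since trust time."""
    return sorted(
        path
        for path in trusted.keys() | current.keys()
        if trusted.get(path) != current.get(path)
    )
```

In this model, a project "trusted" last week is not automatically trusted today: if `drifted(...)` is non-empty at launch time, the change is surfaced for review instead of being silently honored.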

What CodeGate does not claim

CodeGate is not a guarantee of safety.
  • It can produce false positives and false negatives.
  • It does not replace secure engineering review.
  • Optional deep analysis requires controlled interaction with remote metadata and local tools.
  • New attack techniques can appear before signatures and heuristics are updated.
The goal is not perfect certainty. The goal is better visibility and better decisions before execution.

Guiding principles

These principles shape how CodeGate is built and how it behaves:
  • Inspect before trust: run a scan before launching any AI coding agent.
  • Prefer explicit consent: high-risk operations require user confirmation; they are not silent.
  • Keep operations explainable: findings include rule IDs, severity, and remediation guidance so you can evaluate them, not just accept them.
  • Treat documented risk as real risk: if a behavior is dangerous, it matters even if it appears in a changelog or policy doc that most users will not read.
  • Preserve operator control: backups, undo, suppression rules, and policy thresholds keep you in control of what gets fixed and when.
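The last principle can be sketched as a small policy gate. The severity names below are assumptions for illustration; the rule IDs reuse the finding categories mentioned later on this page:

```python
# Illustrative severity ordering; the levels are assumptions for this sketch.
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def actionable(findings, threshold="high", suppressed=frozenset()):
    """Keep findings at or above the policy threshold, minus suppressed rule IDs."""
    floor = SEVERITY_ORDER[threshold]
    return [
        f for f in findings
        if SEVERITY_ORDER[f["severity"]] >= floor and f["rule_id"] not in suppressed
    ]
```

The point of the threshold and the suppression set is operator control: the scanner proposes, but the policy you configure decides what actually blocks a launch.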

Where to go next

  • Quickstart: install CodeGate and run your first scan in under two minutes.
  • Analysis layers: how the L1–L4 pipeline works and what each layer detects.
  • Finding categories: reference for all finding types, including CONSENT_BYPASS and COMMAND_EXEC.
  • Safety model: CodeGate's own threat model and operational limits.