
Anti-Patterns to Avoid

Common mistakes in agent skill design: the god skill, leaky abstractions, over-parameterization, and other patterns that lead to unreliable agents.

Learning what to do is important. Learning what not to do can save you weeks of debugging. This article catalogs the most common mistakes in agent skill design: patterns that seem reasonable at first but lead to unreliable, unmaintainable, or confusing agent behavior.

Each anti-pattern includes a description of the problem, a concrete example, the consequences, and the fix.

The god skill

The problem: A single skill that tries to do everything. It accepts dozens of parameters, handles multiple unrelated use cases, and has a description so long that the agent can’t figure out when to use it.

What it looks like:

{
  name: "manage_project",
  description: "Create, read, update, delete, search, analyze, deploy, test, " +
    "lint, format, document, and monitor projects and their files, " +
    "dependencies, configurations, and environments.",
  parameters: {
    action: { type: "string", enum: ["create", "read", "update", "delete",
      "search", "analyze", "deploy", "test", "lint", "format", "doc", "monitor"] },
    target: { type: "string" },
    path: { type: "string", optional: true },
    config: { type: "object", optional: true },
    options: { type: "object", optional: true },
    filters: { type: "object", optional: true },
    output_format: { type: "string", optional: true },
    recursive: { type: "boolean", optional: true },
    dry_run: { type: "boolean", optional: true },
    verbose: { type: "boolean", optional: true },
    // ... 15 more parameters
  }
}

Why it fails:

  • The agent can’t reliably pick the right action value for a given task
  • The parameters are a grab bag where most are irrelevant for any given invocation
  • Error messages are generic because the skill doesn’t know which specific operation failed
  • Testing requires covering a combinatorial explosion of action-parameter pairs
  • A bug in the “deploy” path can break the “search” path because they share code

The fix: Split it into focused skills, each with a clear purpose and minimal parameters. Skill Composition shows how to combine focused skills into complex capabilities without losing the power of the monolithic approach.

// Instead of one god skill, build focused skills:
{ name: "search_files", parameters: { pattern: "string", path: "string" } }
{ name: "run_linter", parameters: { path: "string", fix: "boolean" } }
{ name: "deploy_project", parameters: { environment: "string", version: "string" } }
{ name: "run_tests", parameters: { path: "string", filter: "string" } }

Leaky abstractions

The problem: A skill that exposes internal implementation details in its interface, forcing the agent to understand the underlying system rather than working through a clean abstraction.

What it looks like:

# Leaky: requires knowledge of internal SQL schema
async def query_users(
    table: str = "auth_users_v2",
    join_clause: str = "",
    where_clause: str = "",
    select_columns: str = "*",
) -> list[dict]:
    """Query the user database using SQL fragments."""
    query = f"SELECT {select_columns} FROM {table} {join_clause} WHERE {where_clause}"
    return await db.execute(query)

The agent now needs to know the table is called auth_users_v2, how to write SQL join clauses, and what columns exist. The implementation is leaking straight through the interface.

What it should look like:

# Clean abstraction: domain-level interface
async def find_users(
    name: str | None = None,
    email: str | None = None,
    role: str | None = None,
    active: bool | None = None,
    limit: int = 50,
) -> list[dict]:
    """Find users matching the given criteria.

    Returns user records with id, name, email, role, and status fields.
    All filters are optional and combined with AND logic.
    """
    query = build_user_query(name=name, email=email, role=role, active=active)
    return await db.execute(query, limit=limit)

Why leaky abstractions fail:

  • The agent has to learn your internal data model, which eats context and introduces errors
  • Implementation changes (renaming a table, changing a schema) break the agent’s learned patterns
  • SQL injection and other security problems become the agent’s responsibility to avoid
  • The skill can’t be reused because it’s tied to one specific database schema

The fix: Design skill interfaces at the domain level, not the implementation level. Ask “what does the user want to accomplish?” rather than “what does the system need to execute?” The parameter schema section of Tool Use Patterns covers this idea in more detail.

Over-parameterization

The problem: A skill with too many parameters, most of which rarely get used. The agent has to decide which parameters to set on every invocation, which increases errors and wastes context on parameter descriptions.

What it looks like:

{
  name: "read_file",
  parameters: {
    path: { type: "string", description: "File path to read" },
    encoding: { type: "string", description: "File encoding", default: "utf-8" },
    start_line: { type: "number", description: "Starting line number" },
    end_line: { type: "number", description: "Ending line number" },
    max_lines: { type: "number", description: "Maximum lines to return" },
    include_line_numbers: { type: "boolean", description: "Add line numbers to output" },
    strip_comments: { type: "boolean", description: "Remove comment lines" },
    collapse_whitespace: { type: "boolean", description: "Collapse multiple blank lines" },
    language_hint: { type: "string", description: "Programming language for syntax detection" },
    follow_imports: { type: "boolean", description: "Also read imported files" },
    include_metadata: { type: "boolean", description: "Include file metadata in response" },
    format: { type: "string", enum: ["raw", "markdown", "json"], description: "Output format" },
  }
}

Twelve parameters for reading a file. The agent will sometimes set parameters it shouldn’t, forget ones it should, or construct invalid combinations.

The fix: Start with the minimum viable parameter set and create separate skills for specialized behavior. This is progressive disclosure: the simple interface handles the common case, and specialized skills cover the rest.

// Core skill: simple and focused
{
  name: "read_file",
  parameters: {
    path: { type: "string", description: "File path to read" },
    offset: { type: "number", description: "Start at this line number", optional: true },
    limit: { type: "number", description: "Maximum lines to return", optional: true },
  }
}

// Specialized skill for when you need metadata
{
  name: "file_info",
  parameters: {
    path: { type: "string", description: "File path to inspect" },
  }
}

A reasonable guideline: if a skill has more than 5 parameters, take a hard look at whether it’s doing too much. Most well-designed skills need 2-4 parameters.
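This guideline is easy to enforce mechanically. Here is a hedged sketch of a schema lint that flags over-parameterized skills; check_parameter_count and the 5-parameter threshold are illustrative, not a standard API.

```python
def check_parameter_count(skill: dict, max_params: int = 5) -> list[str]:
    """Return warnings for skills whose parameter count exceeds the guideline."""
    params = skill.get("parameters", {})
    warnings: list[str] = []
    if len(params) > max_params:
        warnings.append(
            f"{skill['name']}: {len(params)} parameters "
            f"(guideline is <= {max_params}); consider splitting into focused skills"
        )
    return warnings
```

Running a check like this in CI keeps parameter creep from accumulating one "harmless" option at a time.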

Ignoring error cases

The problem: Skills that only handle the happy path and return cryptic errors (or crash) when anything goes wrong.

What it looks like:

async def deploy(environment: str, version: str) -> dict:
    """Deploy the application."""
    image = f"registry.example.com/app:{version}"
    await docker_pull(image)
    await docker_stop("app-container")
    await docker_run(image, name="app-container")
    return {"status": "deployed"}

What happens when the image doesn’t exist? When the container won’t stop? When the port is already in use? When the registry is unreachable? This skill treats deployment as an atomic operation that always succeeds. In reality, each step can fail in different ways.

What it should look like:

async def deploy(environment: str, version: str) -> dict:
    """Deploy the application to the specified environment."""
    image = f"registry.example.com/app:{version}"

    # Step 1: Verify image exists
    try:
        await docker_pull(image)
    except ImageNotFoundError:
        return {
            "status": "failed",
            "error": f"Image {image} not found in registry",
            "suggestions": [
                f"Check that version '{version}' has been built and pushed",
                "Run the CI pipeline to build the image first",
            ],
        }
    except ConnectionError:
        return {
            "status": "failed",
            "error": "Cannot reach container registry",
            "suggestions": ["Check network connectivity", "Verify registry URL"],
            "retryable": True,
        }

    # Step 2: Stop existing container (may not exist)
    try:
        await docker_stop("app-container")
    except ContainerNotFoundError:
        pass  # No container running — that's fine

    # Step 3: Start new container
    try:
        await docker_run(image, name="app-container")
    except PortInUseError as e:
        return {
            "status": "failed",
            "error": f"Port {e.port} is already in use",
            "suggestions": [f"Stop the process using port {e.port}", "Use a different port"],
            "partial": {"image_pulled": True, "old_container_stopped": True},
        }

    return {
        "status": "deployed",
        "image": image,
        "environment": environment,
    }

Every step has error handling. Every error message tells the agent what went wrong and what to try next. Partial progress is reported so the agent knows what state the system is in. For a thorough treatment of these patterns, see Error Handling Patterns.
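The retryable flag in the responses above is only useful if the caller acts on it. Here is a minimal sketch of caller-side retry logic, assuming skills return the status/retryable shape shown above; the backoff values are arbitrary.

```python
import asyncio

async def call_with_retry(skill, max_attempts: int = 3, base_delay: float = 1.0, **kwargs) -> dict:
    """Call a skill, retrying only failures the skill marks as retryable."""
    for attempt in range(1, max_attempts + 1):
        result = await skill(**kwargs)
        if result.get("status") != "failed" or not result.get("retryable"):
            return result  # success, or a failure that retrying won't fix
        if attempt < max_attempts:
            await asyncio.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff
    return result
```

Non-retryable failures (a missing image, a port conflict) surface immediately, so the agent can act on the suggestions instead of burning attempts.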

Tightly coupled skill chains

The problem: Skills that assume they’ll always be called in a specific sequence, with each skill depending on side effects of the previous one rather than on explicit inputs.

What it looks like:

// Skill 1: writes to a global temp file
async function prepareData() {
  const data = await fetchData();
  await writeFile("/tmp/agent_data.json", JSON.stringify(data));
  return { status: "prepared" };
}

// Skill 2: reads from the same global temp file
async function analyzeData() {
  const raw = await readFile("/tmp/agent_data.json");
  const data = JSON.parse(raw);
  const analysis = performAnalysis(data);
  await writeFile("/tmp/agent_analysis.json", JSON.stringify(analysis));
  return { status: "analyzed" };
}

// Skill 3: reads from both global temp files
async function generateReport() {
  const data = JSON.parse(await readFile("/tmp/agent_data.json"));
  const analysis = JSON.parse(await readFile("/tmp/agent_analysis.json"));
  return createReport(data, analysis);
}

Why this fails:

  • If prepareData runs twice, the temp file is silently overwritten
  • If analyzeData runs without prepareData, it fails with a confusing “file not found” error
  • If two agent sessions run at the same time, they clobber each other’s temp files
  • The skills can’t be reused anywhere else because they depend on hardcoded file paths
  • Testing requires filesystem setup and teardown

The fix: Pass data explicitly through parameters. Each skill should be self-contained.

async function prepareData(): Promise<{ data: Record<string, unknown> }> {
  const data = await fetchData();
  return { data };
}

async function analyzeData(
  data: Record<string, unknown>,
): Promise<{ analysis: AnalysisResult }> {
  const analysis = performAnalysis(data);
  return { analysis };
}

async function generateReport(
  data: Record<string, unknown>,
  analysis: AnalysisResult,
): Promise<{ report: string }> {
  return { report: createReport(data, analysis) };
}

Now each skill is independent, testable, and composable. The orchestration layer (the agent or a workflow) passes data between them explicitly.
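For consistency with the earlier Python examples, the same decoupled pipeline can be sketched like this; fetch_data, perform_analysis, and create_report are trivial stand-ins for real implementations.

```python
def fetch_data() -> dict:
    """Stand-in for a real data source."""
    return {"records": [1, 2, 3]}

def perform_analysis(data: dict) -> dict:
    """Stand-in for real analysis; takes its input explicitly."""
    return {"count": len(data["records"])}

def create_report(data: dict, analysis: dict) -> str:
    """Stand-in for real report generation; takes both inputs explicitly."""
    return f"{analysis['count']} records analyzed"

def run_pipeline() -> str:
    """The orchestration layer owns all data flow between skills."""
    data = fetch_data()                 # skill 1: produce data
    analysis = perform_analysis(data)   # skill 2: consume data explicitly
    return create_report(data, analysis)  # skill 3: consume both explicitly
```

Each function can be tested in isolation with plain values, and two concurrent sessions cannot interfere because there is no shared state to clobber.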

Vague or misleading descriptions

The problem: Skill descriptions that are too vague for the agent to know when to use them, or that mislead the agent about what the skill actually does.

Examples of bad descriptions:

  • “Handles files”: What does “handle” mean? Read? Write? Delete? Search?
  • “Database operations”: Which operations? On which database?
  • “Helper utility”: For what? When should it be used?
  • “Process data”: What kind of data? What kind of processing?
  • “Smart search”: What makes it “smart”? How is it different from regular search?

The fix: Be specific about what the skill does, when to use it, and when not to use it. Tool Use Patterns covers this in depth, but the short version: include the action, the target, example use cases, and explicit guidance on when to prefer a different skill.

Good: "Search for files by name or extension using glob patterns.
Use when you need to find files (e.g., all .tsx components, config files).
Do NOT use for searching file contents — use grep_search instead."

Hidden side effects

The problem: Skills that do more than their name and description suggest. A “read” skill that also logs access. A “search” skill that caches results to disk. An “analyze” skill that sends telemetry.

Hidden side effects break the agent’s ability to reason about what has happened. If the agent calls read_file and doesn’t expect any state to change, but the skill quietly modifies a cache or writes a log, the agent’s model of the system drifts from reality.

The fix:

  • If a skill has side effects, say so. “Reads the file and records the access in the audit log.”
  • If a side effect is optional (like caching), make it a parameter. “Set cache: true to cache the result for future reads.”
  • Prefer pure skills (same input always produces same output, no external state changes) whenever possible. Side effects should be deliberate, not accidental.

Key takeaways

  1. Split god skills into focused skills. If a skill has more than one “and” in its description, it’s doing too much.

  2. Design interfaces at the domain level. Hide implementation details behind clean abstractions that match how users think about the task.

  3. Limit parameters to what’s needed. More than 5 is a warning sign. Use progressive disclosure and specialized skills instead.

  4. Handle every error path. The happy path isn’t enough. Every failure should produce an actionable error message.

  5. Pass data explicitly, not through shared state. Tightly coupled skill chains are fragile, untestable, and impossible to reuse.

  6. Write descriptions for the agent. Be specific about what, when, and when not. Vague descriptions lead to unreliable tool selection.

  7. Declare all side effects. If a skill changes state beyond its return value, say so in the description.

These anti-patterns aren’t theoretical. They’re the ones that show up most often in real agent skill codebases, and they’re behind most reliability issues. Avoiding them from the start is a lot easier than refactoring them out later.