Pipeline Tools API

Executes a multi-step file transformation pipeline with automatic backup, rollback, and risk assessment. In v4.0.0, pipelines are invoked via the batch_operations tool with a pipeline_json parameter.

New in v3.14.0 — Consolidated into batch_operations in v4.0.0

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| pipeline_json | string | Yes | JSON-encoded pipeline definition |
interface PipelineRequest {
  name: string;                // Pipeline name (required)
  dry_run?: boolean;           // Preview without changes (default: false)
  force?: boolean;             // Bypass risk warnings (default: false)
  stop_on_error?: boolean;     // Stop on first error (default: true)
  create_backup?: boolean;     // Auto-backup before destructive ops (default: true)
  verbose?: boolean;           // Return intermediate data: file contents, per-file counts (default: false)
  parallel?: boolean;          // Enable DAG-based parallel execution (v4.1.0, default: false)
  steps: PipelineStep[];       // Array of steps (1-20)
}

interface PipelineStep {
  id: string;                  // Unique step identifier (alphanumeric + - _)
  action: string;              // One of 12 supported actions
  input_from?: string;         // ID of a previous step to take files from
  input_from_all?: string[];   // IDs of multiple steps (v4.1.0, for aggregate/merge)
  condition?: StepCondition;   // Conditional execution (v4.1.0)
  params: Record<string, any>; // Action-specific parameters
}

interface StepCondition {
  type: string;                // Condition type (see table below)
  step_ref?: string;           // Referenced step ID
  value?: number;              // Comparison value (for count_gt/lt/eq)
  path?: string;               // File path (for file_exists/file_not_exists)
}
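Because pipeline_json is a JSON-encoded string, hand-escaping it is error-prone. A sketch of building the request as a typed object and serializing it, re-declaring only the fields this example uses:

```typescript
// Minimal re-declaration of the request shape above,
// limited to the fields this example needs.
interface PipelineStep {
  id: string;
  action: string;
  input_from?: string;
  params: Record<string, unknown>;
}

interface PipelineRequest {
  name: string;
  dry_run?: boolean;
  steps: PipelineStep[];
}

const request: PipelineRequest = {
  name: "fix-typo",
  dry_run: true, // preview first; rerun with false once the result looks right
  steps: [
    { id: "find", action: "search", params: { path: "./src", pattern: "recieve" } },
    {
      id: "fix",
      action: "edit",
      input_from: "find", // consume the files matched by the "find" step
      params: { old_text: "recieve", new_text: "receive" },
    },
  ],
};

// JSON.stringify yields the escaped string expected by pipeline_json.
const pipeline_json = JSON.stringify(request);
```

This produces the same payload as the hand-escaped fix-typo example later in this document, plus dry_run.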

search: Find files matching a pattern.

Parameters:

{
  path?: string;             // Directory to search (default: ".")
  pattern: string;           // Text or regex pattern (required)
  file_types?: string[];     // Filter by extensions (e.g., [".js", ".ts"])
  include_content?: boolean; // Include file content (default: false)
}

Output:

  • files_matched: string[] - List of matching file paths
  • Internal: Search match data for content access

read_ranges: Read file contents.

Parameters:

{
  files?: string[]; // Direct file paths (if no input_from)
  // OR use input_from to get files from a previous step
}

Output:

  • content: Record<string, string> - Map of path → content
  • files_matched: string[] - Files read

Requires: Either input_from or files parameter


edit: Simple find/replace operation.

Parameters:

{
  files?: string[]; // Direct file paths (if no input_from)
  old_text: string; // Text to find (required)
  new_text: string; // Replacement text (required)
}

Output:

  • edits_applied: number - Total replacements made
  • counts: Record<string, number> - Per-file replacement counts
  • risk_level: string - LOW | MEDIUM | HIGH | CRITICAL
  • files_matched: string[] - Files modified

Requires: Either input_from or files parameter


multi_edit: Multiple edits in one operation.

Parameters:

{
  files?: string[];   // Direct file paths (if no input_from)
  edits: Array<{      // Array of edits (required)
    old_text: string; // Text to find
    new_text: string; // Replacement text
  }>;
}

Output:

  • edits_applied: number - Total replacements made
  • counts: Record<string, number> - Per-file replacement counts
  • risk_level: string - Risk assessment
  • files_matched: string[] - Files modified

Requires: Either input_from or files parameter


count_occurrences: Count pattern occurrences in files.

Parameters:

{
  files?: string[]; // Direct file paths (if no input_from)
  pattern: string;  // Text pattern to count (required)
}

Output:

  • counts: Record<string, number> - Map of path → count
  • files_matched: string[] - Files analyzed

Requires: Either input_from or files parameter


regex_transform: Advanced regex-based transformations.

Parameters:

{
  files?: string[];      // Direct file paths (if no input_from)
  patterns: Array<{      // Array of patterns (required)
    pattern: string;     // Regex pattern
    replacement: string; // Replacement (supports $1, $2 capture groups)
  }>;
}

Output:

  • edits_applied: number - Total transformations made
  • counts: Record<string, number> - Per-file transformation counts
  • risk_level: string - Risk assessment
  • files_matched: string[] - Files transformed

Requires: Either input_from or files parameter
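The capture-group replacement semantics can be sketched as follows. This assumes JavaScript-style regular expressions; the server's actual regex dialect is not specified here:

```typescript
// Sketch of the replacement semantics: each pattern is applied globally to
// the content, with $1/$2 referring to capture groups.
type RegexPattern = { pattern: string; replacement: string };

function applyPatterns(content: string, patterns: RegexPattern[]): string {
  return patterns.reduce(
    (text, p) => text.replace(new RegExp(p.pattern, "g"), p.replacement),
    content,
  );
}

// Example: swap the argument order of a two-argument call.
const out = applyPatterns("swap(a, b);", [
  { pattern: "swap\\((\\w+), (\\w+)\\)", replacement: "swap($2, $1)" },
]);
// out === "swap(b, a);"
```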


copy: Copy files to destination directory.

Parameters:

{
  files?: string[];    // Direct file paths (if no input_from)
  destination: string; // Target directory (required)
}

Output:

  • files_matched: string[] - New file paths (in destination)

Requires: Either input_from or files parameter


rename: Rename or move files to destination directory.

Parameters:

{
  files?: string[];    // Direct file paths (if no input_from)
  destination: string; // Target directory (required)
}

Output:

  • files_matched: string[] - New file paths (in destination)

Requires: Either input_from or files parameter


delete: Soft-delete files (moves to trash/recycle bin).

Parameters:

{
  files?: string[]; // Direct file paths (if no input_from)
  // OR use input_from
}

Output:

  • files_matched: string[] - Files deleted

Requires: Either input_from or files parameter


aggregate: Combine content and file lists from multiple prior steps.

Parameters:

{
  separator?: string; // Content separator (default: "\n")
  // Uses input_from_all to reference multiple steps
}

Output:

  • files_matched: string[] - Combined file list (deduplicated)
  • aggregated_content: string - Merged content from all referenced steps

Requires: input_from_all with at least 2 step references


diff: Generate a unified diff between two files.

Parameters:

{
  file_a: string; // First file path (required)
  file_b: string; // Second file path (required)
}

Output:

  • aggregated_content: string - Unified diff output
  • counts: {"changes": number} - Number of changed sections

merge: Union or intersection of file lists from multiple steps.

Parameters:

{
  mode?: string; // "union" (default) or "intersection"
  // Uses input_from_all to reference multiple steps
}

Output:

  • files_matched: string[] - Resulting file list

Requires: input_from_all with at least 2 step references
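A sketch of the two merge modes, assuming straightforward set semantics over the referenced steps' files_matched lists:

```typescript
// union: every file seen in any referenced step (deduplicated);
// intersection: only files present in all referenced steps.
// Assumes at least one list, as the API requires two or more references.
function mergeFiles(
  lists: string[][],
  mode: "union" | "intersection" = "union",
): string[] {
  if (mode === "union") {
    return Array.from(new Set(lists.flat()));
  }
  return lists.reduce((acc, list) => acc.filter((f) => list.includes(f)));
}

const a = ["src/a.go", "src/b.go"];
const b = ["src/b.go", "src/c.go"];
// mergeFiles([a, b], "intersection") → ["src/b.go"]
```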


interface PipelineResult {
  name: string;                      // Pipeline name
  success: boolean;                  // Overall success status
  total_steps: number;               // Total steps in pipeline
  completed_steps: number;           // Steps completed before stop/finish
  results: StepResult[];             // Per-step results
  backup_id?: string;                // Backup ID if created
  files_affected: string[];          // All unique files affected
  total_edits: number;               // Sum of all edits
  overall_risk_level?: string;       // LOW | MEDIUM | HIGH | CRITICAL
  rollback_performed?: boolean;      // True if rollback was executed
  dry_run: boolean;                  // Whether this was a dry-run
  total_duration: number;            // Total execution time (ms)
}

interface StepResult {
  step_id: string;                   // Step identifier
  action: string;                    // Action type
  success: boolean;                  // Step success status
  skipped?: boolean;                 // True if condition was false (v4.1.0)
  skip_reason?: string;              // Why step was skipped (v4.1.0)
  files_matched?: string[];          // Files found/affected
  content?: Record<string, string>;  // File contents (read_ranges)
  aggregated_content?: string;       // Combined content (aggregate/diff/merge, v4.1.0)
  edits_applied?: number;            // Number of edits made
  counts?: Record<string, number>;   // Per-file counts
  error?: string;                    // Error message if failed
  duration: number;                  // Step execution time (ms)
  risk_level?: string;               // Risk assessment for destructive ops
}
OK: 4/4 steps | 12 files | 45 edits | medium risk

or on error:

FAIL: 2/4 steps | search failed: access denied
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 Pipeline: refactor-namespace
✅ Success: true
📊 Steps: 4/4 completed
⏱️ Duration: 1.2s
💾 Backup: 20250213_120530_abc123def

📝 Step Results:
1. find [search] ✅
   Duration: 234ms
   Files: 12 matched
   - src/file1.go
   - src/file2.go
   ... and 10 more
2. replace [edit] ✅
   Duration: 567ms
   Edits: 45 replacements
   ⚠️ Risk: MEDIUM
3. verify [count_occurrences] ✅
   Duration: 123ms
   Counts: 45 total occurrences
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📁 Files affected: 12
✏️ Total edits: 45
⚠️ Overall risk: MEDIUM
batch_operations({
  pipeline_json: "{\"name\":\"fix-typo\",\"steps\":[{\"id\":\"find\",\"action\":\"search\",\"params\":{\"path\":\"./src\",\"pattern\":\"recieve\"}},{\"id\":\"fix\",\"action\":\"edit\",\"input_from\":\"find\",\"params\":{\"old_text\":\"recieve\",\"new_text\":\"receive\"}}]}"
})

batch_operations({
  pipeline_json: "{\"name\":\"preview-changes\",\"dry_run\":true,\"steps\":[{\"id\":\"find\",\"action\":\"search\",\"params\":{\"pattern\":\"console.log\"}},{\"id\":\"remove\",\"action\":\"regex_transform\",\"input_from\":\"find\",\"params\":{\"patterns\":[{\"pattern\":\"\\\\s*console\\\\.log\\\\([^)]*\\\\);?\\\\n?\",\"replacement\":\"\"}]}}]}"
})

batch_operations({
  pipeline_json: "{\"name\":\"rename-class\",\"force\":false,\"steps\":[{\"id\":\"find-class\",\"action\":\"search\",\"params\":{\"pattern\":\"class OldName\"}},{\"id\":\"rename-class\",\"action\":\"edit\",\"input_from\":\"find-class\",\"params\":{\"old_text\":\"OldName\",\"new_text\":\"NewName\"}},{\"id\":\"find-imports\",\"action\":\"search\",\"params\":{\"pattern\":\"import.*OldName\"}},{\"id\":\"fix-imports\",\"action\":\"edit\",\"input_from\":\"find-imports\",\"params\":{\"old_text\":\"OldName\",\"new_text\":\"NewName\"}}]}"
})
  • Invalid JSON: "Invalid pipeline JSON: <error>"
  • Empty name: "pipeline name is required"
  • No steps: "at least one step is required"
  • Too many steps: "too many steps (max 20, got N)"
  • Duplicate step IDs: "duplicate step ID 'X' at indices M and N"
  • Invalid step ID: "invalid step ID 'X' (only alphanumeric, -, and _ allowed)"
  • Forward reference: "step 'X' has forward reference to step 'Y'"
  • Missing parameters: "<action> action requires '<param>' parameter"
  • File limit exceeded: "too many files affected (N > 100). Use force=true to bypass"
  • High risk blocked: "operation blocked due to HIGH risk. Use force=true to proceed"
  • Step failure: Step error appears in results[i].error
  • Rollback failure: "pipeline failed at step N, rollback failed"
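Several of these rules can be checked client-side before invoking the tool. A sketch (a hypothetical helper, not part of the API) that mirrors a few of the messages above:

```typescript
type Step = { id: string; input_from?: string };

// Reproduces the step-count, step-ID, duplicate-ID, and forward-reference
// checks. Note: an input_from naming a step that does not exist at all is
// reported here with the same forward-reference message, a simplification.
function validateSteps(steps: Step[]): string[] {
  const errors: string[] = [];
  if (steps.length === 0) errors.push("at least one step is required");
  if (steps.length > 20) errors.push(`too many steps (max 20, got ${steps.length})`);
  const seen = new Map<string, number>();
  steps.forEach((step, i) => {
    if (!/^[A-Za-z0-9_-]+$/.test(step.id)) {
      errors.push(`invalid step ID '${step.id}' (only alphanumeric, -, and _ allowed)`);
    }
    if (seen.has(step.id)) {
      errors.push(`duplicate step ID '${step.id}' at indices ${seen.get(step.id)} and ${i}`);
    } else {
      seen.set(step.id, i);
    }
    // input_from must point at an already-defined (earlier) step
    if (step.input_from !== undefined && !seen.has(step.input_from)) {
      errors.push(`step '${step.id}' has forward reference to step '${step.input_from}'`);
    }
  });
  return errors;
}
```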
| Limit | Value | Configurable |
| --- | --- | --- |
| Max steps per pipeline | 20 | No |
| Max files affected | 100 | Yes (via force) |
| Step ID length | 255 chars | No |
| Pipeline name length | 255 chars | No |
| Level | Condition | Action Required |
| --- | --- | --- |
| LOW | <30 files OR <100 edits | None |
| MEDIUM | 30-49 files OR 100-499 edits | Warning shown |
| HIGH | 50-79 files OR 500-999 edits | force: true required |
| CRITICAL | 80+ files OR 1000+ edits | force: true required |
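One plausible reading of this table is to rate each dimension separately and take the worse tier. Note that the worked example earlier reports MEDIUM risk for 12 files and 45 edits, so the server evidently weighs more than these two counts; the sketch below covers only what the table states and is an assumption:

```typescript
const LEVELS = ["LOW", "MEDIUM", "HIGH", "CRITICAL"] as const;

function fileTier(files: number): number {
  if (files >= 80) return 3;
  if (files >= 50) return 2;
  if (files >= 30) return 1;
  return 0;
}

function editTier(edits: number): number {
  if (edits >= 1000) return 3;
  if (edits >= 500) return 2;
  if (edits >= 100) return 1;
  return 0;
}

function riskLevel(files: number, edits: number): string {
  // Worse of the two dimensions wins (an assumption; see lead-in).
  return LEVELS[Math.max(fileTier(files), editTier(edits))];
}
```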
OK: 3/3 steps | 2 files | 0 edits

Minimal token usage. Ideal for edit workflows where you don’t need intermediate data.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 Pipeline: my-pipeline
✅ Success: true
📊 Steps: 3/3 completed
⏱️ Duration: 3.4ms

📝 Step Results:
1. find [search] ✅
   Files: 2 matched
   - src/engine.go
   - src/pipeline.go
2. count [count_occurrences] ✅
   Counts: 54 total occurrences
   src/engine.go: 37
   src/pipeline.go: 17
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Verbose includes: file contents from read_ranges (truncated at 50 lines), per-file counts, complete file lists, and per-step durations. Use when Claude needs to inspect or report results.

| Type | Description | Required Fields |
| --- | --- | --- |
| has_matches | Referenced step found files | step_ref |
| no_matches | Referenced step found no files | step_ref |
| count_gt | Sum of step counts > value | step_ref, value |
| count_lt | Sum of step counts < value | step_ref, value |
| count_eq | Sum of step counts = value | step_ref, value |
| file_exists | File exists on disk | path |
| file_not_exists | File does not exist | path |
| step_succeeded | Referenced step succeeded | step_ref |
| step_failed | Referenced step failed | step_ref |
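The step-referencing condition types can be sketched as a small evaluator over a prior step's result. File-system conditions (file_exists/file_not_exists) are omitted since they need disk access, and the evaluation logic here is an assumption derived from the table:

```typescript
type PriorResult = {
  success: boolean;
  files_matched?: string[];
  counts?: Record<string, number>;
};

function evalCondition(type: string, ref: PriorResult, value?: number): boolean {
  const files = ref.files_matched ?? [];
  // count_* conditions compare against the sum of the step's counts map
  const total = Object.values(ref.counts ?? {}).reduce((a, b) => a + b, 0);
  switch (type) {
    case "has_matches": return files.length > 0;
    case "no_matches": return files.length === 0;
    case "count_gt": return total > (value ?? 0);
    case "count_lt": return total < (value ?? 0);
    case "count_eq": return total === (value ?? 0);
    case "step_succeeded": return ref.success;
    case "step_failed": return !ref.success;
    default: throw new Error(`unknown condition type '${type}'`);
  }
}
```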

Use {{step_id.field}} in any string parameter to reference prior step results:

| Field | Type | Description |
| --- | --- | --- |
| count | number | Sum of all values in step's counts map |
| files_count | number | Length of files_matched |
| files | string | Comma-separated file paths |
| risk | string | Risk level (LOW/MEDIUM/HIGH/CRITICAL) |
| edits | number | edits_applied value |

Example: "pattern": "Found {{find.files_count}} files with {{count.count}} matches"
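A sketch of the expansion rule, assuming fields are derived as in the table above (the expandTemplates helper is illustrative, not part of the API):

```typescript
type StepResultLite = {
  files_matched?: string[];
  counts?: Record<string, number>;
  edits_applied?: number;
  risk_level?: string;
};

// Expand {{step_id.field}} references in a string parameter from a map of
// prior step results; unknown references are left untouched.
function expandTemplates(
  text: string,
  results: Record<string, StepResultLite>,
): string {
  return text.replace(/\{\{([\w-]+)\.(\w+)\}\}/g, (match, stepId, field) => {
    const r = results[stepId];
    if (!r) return match;
    switch (field) {
      case "count":
        return String(Object.values(r.counts ?? {}).reduce((a, b) => a + b, 0));
      case "files_count": return String((r.files_matched ?? []).length);
      case "files": return (r.files_matched ?? []).join(", ");
      case "risk": return r.risk_level ?? "";
      case "edits": return String(r.edits_applied ?? 0);
      default: return match;
    }
  });
}
```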

Set "parallel": true to enable DAG-based parallel scheduling:

  1. Dependencies are extracted from input_from, input_from_all, and condition.step_ref
  2. Steps are grouped into levels via topological sort (Kahn’s algorithm)
  3. Steps within a level execute concurrently using the worker pool
  4. Destructive actions (edit, multi_edit, regex_transform, delete, rename) are split into separate sub-levels for safety
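The leveling pass above can be sketched with a depth computation equivalent to the topological grouping (the destructive-action sub-level splitting is omitted):

```typescript
type DagStep = { id: string; deps: string[] };

// Group steps into levels: a step's level is one more than its deepest
// dependency, so each level depends only on earlier levels. Assumes an
// acyclic graph, which holds because forward references are rejected.
function levelize(steps: DagStep[]): string[][] {
  const byId = new Map(steps.map((s) => [s.id, s] as const));
  const level = new Map<string, number>();
  const depth = (id: string): number => {
    const cached = level.get(id);
    if (cached !== undefined) return cached;
    const s = byId.get(id)!;
    const d = s.deps.length === 0 ? 0 : 1 + Math.max(...s.deps.map(depth));
    level.set(id, d);
    return d;
  };
  steps.forEach((s) => depth(s.id));
  const out: string[][] = [];
  level.forEach((d, id) => {
    if (!out[d]) out[d] = [];
    out[d].push(id);
  });
  return out;
}

// Two independent searches can run concurrently; the edit that consumes
// both waits for them:
// levelize([{id:"a",deps:[]},{id:"b",deps:[]},{id:"c",deps:["a","b"]}])
//   → [["a","b"], ["c"]]
```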
  1. Use input_from instead of re-searching in each step
  2. Enable dry_run first to preview changes
  3. Keep pipelines under 10 steps for maintainability
  4. Use specific file types in search to reduce matches
  5. Batch similar operations using multi_edit instead of multiple edit steps
  • All file paths validated against allowed_paths configuration
  • Symlinks resolved before access checks
  • Backup created automatically before destructive operations
  • Rollback on failure prevents partial state
  • Dry-run mode for safe preview