Efficient Code Editing
Overview
This guide shows how to use MCP Filesystem Ultra tools efficiently to minimize token usage when editing code. Following these patterns can reduce token consumption by up to 98%.
The Problem: Token Waste
AI assistants often read entire files (even 500 KB+ files) and rewrite them completely, wasting tokens unnecessarily. Reading and rewriting a 5000-line file costs approximately 250,000 tokens.
Optimal Workflow for Small Changes
When you need to edit a specific function or section (fewer than 50 lines):
Step 1: Locate the code

search_files(file="engine.go", pattern="func ReadFile")
Returns: "Found at lines 45-67"

Step 2: Read only that section

read_file(file="engine.go", start_line=45, end_line=67)
Returns: 23 lines of code (instead of 3000+ lines)

Step 3: Analyze and plan changes

- Identify the exact lines that need to change
- Plan the replacement carefully

Step 4: Apply targeted edit

edit_file(file="engine.go", old_text="return nil", new_text="return content")

Token savings: roughly 99% (a 3000+ line file is reduced to a 23-line read)
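The four steps above can be sketched in plain Python. The helpers below only mimic what the search_files/read_file/edit_file MCP tools do; they are stand-ins for illustration, not the tools themselves.

```python
import re
import tempfile

def search_lines(path, pattern):
    """Return the 1-indexed line number of the first regex match."""
    with open(path) as f:
        for i, text in enumerate(f, start=1):
            if re.search(pattern, text):
                return i
    return None

def read_range(path, start_line, end_line):
    """Read only lines start_line..end_line (1-indexed, inclusive)."""
    with open(path) as f:
        return "".join(f.readlines()[start_line - 1:end_line])

def targeted_edit(path, old_text, new_text):
    """Replace the first occurrence of old_text in the file."""
    with open(path) as f:
        content = f.read()
    with open(path, "w") as f:
        f.write(content.replace(old_text, new_text, 1))

# Demo on a tiny stand-in for engine.go
with tempfile.NamedTemporaryFile("w", suffix=".go", delete=False) as f:
    f.write("package main\n\nfunc ReadFile() error {\n\treturn nil\n}\n")
    path = f.name

line = search_lines(path, r"func ReadFile")          # Step 1: locate
snippet = read_range(path, line, line + 2)           # Step 2: read only that section
targeted_edit(path, "return nil", "return content")  # Step 4: surgical edit
```

The point of the pattern is that only `snippet` (a handful of lines) ever enters the conversation, not the whole file.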
Optimal Workflow for Large Files
For files with more than 1000 lines, never read the whole file in a single read_file() call. Instead:
- Use search_files() to locate the exact lines
- Use read_file() with start_line/end_line to read only the necessary lines
- Edit with edit_file() using context from step 2
Example:
- File size: 5000 lines
- Old way: Read 5000 lines (125k tokens) = waste
- New way: Search (500 tokens) + Read 50 lines (1.2k tokens) + Edit (500 tokens) = 2.2k tokens
- Savings: 98%
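As a quick sanity check, the savings above can be reproduced in a few lines of Python. The ~25 tokens/line rate is inferred from the 125k-tokens/5000-lines figure; the document rounds the 50-line read (1250 tokens) to 1.2k.

```python
# Rough rate implied by the example: 125,000 tokens / 5,000 lines
TOKENS_PER_LINE = 125_000 / 5_000             # = 25.0

old_way = 5_000 * TOKENS_PER_LINE             # read whole file: 125,000 tokens
new_way = 500 + 50 * TOKENS_PER_LINE + 500    # search + 50-line read + edit

savings = 1 - new_way / old_way
print(f"{old_way:.0f} vs {new_way:.0f} tokens -> {savings:.0%} saved")
```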
Antipatterns to Avoid
| Antipattern | Problem | Better Way |
|---|---|---|
| read_file() on a large file | Reads the entire file (high token cost) | Use read_file() with start_line/end_line |
| Editing without context | Risk of a wrong replacement | Use search_files() first to verify the location |
| Multiple edits in one go | If one edit fails, all fail | Apply edits incrementally with validation |
| Rewriting the entire file | Massive token waste | Use edit_file() for surgical changes |
Tools Quick Reference
| Tool | Purpose | Use When |
|---|---|---|
| search_files | Find code location | You need to locate where code is |
| read_file with start_line/end_line | Read lines N-M | You know the line numbers (from a search) |
| read_file | Read an entire file | The file is small (fewer than 1000 lines) |
| edit_file | Replace text in a file | You have old_text and new_text |
| write_file | Create/overwrite an entire file | The file does not exist or needs a complete rewrite |
| search_files with count_only:true | Count matches without reading | You need to verify multiple occurrences |
| edit_file with mode:"search_replace" | Replace a specific match | You need to change only the first, second, or last occurrence |
Real Example: Refactoring a Function
Scenario: Change the function ProcessData() in a 2000-line file.
Bad approach (100k tokens wasted):

1. read_file("main.go") - 2000 lines (50k tokens)
2. Analyze and rewrite
3. write_file("main.go", entire_content) - 50k tokens

Total: 100k tokens wasted
Good approach (approximately 2.5k tokens):

1. search_files("main.go", "func ProcessData") - returns "lines 156-189"
2. read_file("main.go", start_line=156, end_line=189) - 34 lines (850 tokens)
3. Analyze: "Change lines 165 and 170"
4. edit_file("main.go", old_snippet, new_snippet)

Total: approximately 2.5k tokens (98% savings)
Context Validation
The edit_file() tool includes built-in safety:
- Before replacing text, it validates surrounding context
- If file changed since you read it, edit fails safely
- You get error: “Context mismatch - please re-read file”
- No accidental overwrites of modified content
This is why edit_file() is safer than write_file() for ongoing edits.
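A minimal sketch of the context-validation idea: before applying a replacement, confirm that the snippet read earlier still matches what is on disk. This mirrors the concept only; the real edit_file() internals may differ.

```python
import tempfile

def safe_edit(path, expected_snippet, old_text, new_text):
    """Replace old_text only if the previously read snippet is still present."""
    with open(path) as f:
        content = f.read()
    if expected_snippet not in content:
        # File changed since it was read: fail instead of clobbering it.
        raise RuntimeError("Context mismatch - please re-read file")
    return content.replace(old_text, new_text, 1)

# Demo
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("a = 1\nb = 2\n")
    path = f.name

snippet = "b = 2"                                     # what we "read" earlier
updated = safe_edit(path, snippet, "b = 2", "b = 3")  # context still valid
```

If another process had rewritten the file in the meantime, the snippet check would raise instead of silently overwriting the newer content.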
Coordinate Tracking
Search results include character-level positioning within matched lines:

{
  "file": "main.go",
  "line_number": 42,
  "line": "func main() {",
  "match_start": 5,
  "match_end": 9
}

Why Coordinates Matter
With coordinates, you can:
- Pinpoint exact edits instead of guessing positions
- Avoid editing the wrong occurrence (when there are multiple matches on the same line)
- Combine with read_file (start_line/end_line) for surgical changes
- Reduce token usage significantly
Example: Multiple Occurrences
Section titled “Example: Multiple Occurrences”Line 42: "test_value = test_helper()" match_start: 0 (first "test") match_start: 14 (second "test")
search_files returns BOTH matches with coordinates.Use coordinates to pick the CORRECT one.Coordinates + Edit Flow
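The two coordinates can be reproduced with Python's re.finditer, which yields the same 0-indexed offsets that search_files reports:

```python
import re

line = "test_value = test_helper()"
# Each match object carries the 0-indexed start/end of its occurrence.
matches = [(m.start(), m.end()) for m in re.finditer(r"test", line)]

# Pick the second occurrence and confirm the slice round-trips:
start, end = matches[1]
assert line[start:end] == "test"
```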
1. search_files("pattern") - returns match_start, match_end for each result
2. Verify coordinates: line[match_start:match_end] == "pattern"
3. read_file with start_line/end_line to get context - know exactly what surrounds the match
4. edit_file with confidence - know precisely which occurrence you are changing

Technical Details
Section titled “Technical Details”- 0-indexed: First character is position 0
- Per-line basis: Coordinates relative to matched line
- Always populated: included in all search_files results
- Backward compatible: existing code is unaffected
Pipeline Workflows (v3.14.0+)
For multi-file operations, the pipeline system eliminates sequential round-trips entirely.
When to Use Pipelines
| Scenario | Use Pipeline? | Why |
|---|---|---|
| Refactor across N files | ✅ Yes | 1 call vs N×3 calls |
| Bulk search + count | ✅ Yes | 1 call vs N+1 calls |
| Read single file | ❌ No | read_file with start_line/end_line is simpler |
| Edit single occurrence | ❌ No | edit_file is sufficient |
| Inspect code to answer user | ✅ Yes, with verbose | Get contents + counts in 1 call |
Pattern: Search → Edit → Verify
The most common pipeline replaces the manual search-edit-verify cycle:
{
  "name": "rename-function",
  "create_backup": true,
  "steps": [
    { "id": "find", "action": "search", "params": { "path": "src/", "pattern": "oldFunc" } },
    { "id": "edit", "action": "edit", "input_from": "find", "params": { "old_text": "oldFunc", "new_text": "newFunc" } },
    { "id": "verify", "action": "count_occurrences", "input_from": "find", "params": { "pattern": "newFunc" } }
  ]
}

Result: 1 call instead of 7+, with automatic backup and rollback.
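Conceptually, the search → edit → verify steps with backup and rollback work like the plain-Python sketch below. This is only an illustration of the behavior; the real pipeline runs inside the MCP server and is not implemented this way.

```python
import shutil
import tempfile
from pathlib import Path

def run_rename_pipeline(root, old, new):
    # Step "find": collect files containing `old`
    hits = [p for p in Path(root).rglob("*") if p.is_file() and old in p.read_text()]
    backups = {p: p.with_suffix(p.suffix + ".bak") for p in hits}
    try:
        for p, b in backups.items():
            shutil.copy(p, b)                              # create_backup: true
            p.write_text(p.read_text().replace(old, new))  # step "edit"
        # Step "verify": every edited file must now contain `new`
        assert all(new in p.read_text() for p in hits)
    except Exception:
        for p, b in backups.items():                       # rollback on any failure
            if b.exists():
                shutil.copy(b, p)
        raise
    finally:
        for b in backups.values():
            b.unlink(missing_ok=True)
    return hits

# Demo in a throwaway directory
root = tempfile.mkdtemp()
Path(root, "a.go").write_text("call oldFunc()\n")
edited = run_rename_pipeline(root, "oldFunc", "newFunc")
```

The key property is atomicity from the caller's perspective: either all files are edited and verified, or the backups are restored.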
Pattern: Inspect with Verbose
When you need to read and analyze file contents (not just edit), use verbose: true:
{
  "name": "inspect-config",
  "verbose": true,
  "steps": [
    { "id": "find", "action": "search", "params": { "path": "core/", "pattern": "MaxPipeline" } },
    { "id": "read", "action": "read_ranges", "input_from": "find" },
    { "id": "count", "action": "count_occurrences", "input_from": "find", "params": { "pattern": "func " } }
  ]
}

Returns full file contents (truncated at 50 lines) and per-file counts, all in 1 call.
Pattern: Dry-Run Before Commit
Preview changes without modifying files:
{
  "name": "preview-migration",
  "dry_run": true,
  "verbose": true,
  "steps": [
    { "id": "find", "action": "search", "params": { "path": ".", "pattern": "deprecated_api", "file_types": [".go"] } },
    { "id": "preview", "action": "edit", "input_from": "find", "params": { "old_text": "deprecated_api", "new_text": "new_api" } }
  ]
}

Shows which files would be affected and how many replacements would occur, without touching disk.
Summary
- Never read large files completely: use read_file with start_line/end_line
- Always search first: use search_files to find line numbers
- Edit surgically: use edit_file instead of write_file
- Use coordinates: for precise multi-occurrence handling
- Batch when possible: use multi_edit for multiple changes
- Pipeline for multi-file: use batch_operations with pipeline_json to chain operations in 1 call
Following these patterns typically saves 95-99% of tokens compared to naive approaches.