
Efficient Code Editing

This guide shows how to use MCP Filesystem Ultra tools efficiently to minimize token usage when editing code. Following these patterns can reduce token consumption by up to 98%.

AI assistants often read entire files (even 500 KB+) and rewrite them completely, wasting tokens. A 5000-line file read and rewritten this way costs approximately 250,000 tokens.

When you need to edit a specific function or section (less than 50 lines):

  1. search_files(file="engine.go", pattern="func ReadFile")
     Returns: "Found at lines 45-67"
  2. read_file(file="engine.go", start_line=45, end_line=67)
     Returns: 23 lines of code (instead of 3000+ lines)
  3. Identify the exact lines that need to change and plan the replacement carefully
  4. edit_file(file="engine.go", old_text="return nil", new_text="return content")

Token savings: ~99% (23 lines read instead of 3000+)
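The flow above can be sketched locally. Note that search_lines, read_range, and edit_once below are hypothetical stand-ins written for illustration, not the real MCP Filesystem Ultra tools, which run on the server side:

```python
import os
import tempfile

def search_lines(path, needle):
    """Return 1-indexed numbers of lines containing needle (stand-in for search_files)."""
    with open(path) as f:
        return [i for i, line in enumerate(f, 1) if needle in line]

def read_range(path, start, end):
    """Return only lines start..end, 1-indexed inclusive (stand-in for ranged read_file)."""
    with open(path) as f:
        return f.readlines()[start - 1:end]

def edit_once(path, old_text, new_text):
    """Replace old_text, refusing ambiguous edits (stand-in for edit_file)."""
    with open(path) as f:
        src = f.read()
    if src.count(old_text) != 1:
        raise ValueError("old_text must match exactly once")
    with open(path, "w") as f:
        f.write(src.replace(old_text, new_text))

# Demo on a throwaway file
fd, path = tempfile.mkstemp(suffix=".go")
os.close(fd)
with open(path, "w") as f:
    f.write("func ReadFile() {\n\treturn nil\n}\n")

hits = search_lines(path, "func ReadFile")        # [1]
snippet = read_range(path, hits[0], hits[0] + 2)  # only the 3 relevant lines
edit_once(path, "return nil", "return content")
with open(path) as f:
    result = f.read()
os.remove(path)
```

The point is that only the matched slice ever needs to be handled; the rest of the file is never loaded into the conversation.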

For files with more than 1000 lines, never use read_file(). Instead:

  1. Use search_files() to locate the exact lines
  2. Use read_file() with start_line/end_line to read ONLY the necessary lines
  3. Edit with edit_file() using context from step 2

Example:

  • File size: 5000 lines
  • Old way: Read 5000 lines (125k tokens) = waste
  • New way: Search (500 tokens) + Read 50 lines (1.2k tokens) + Edit (500 tokens) = 2.2k tokens
  • Savings: 98%
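The savings figure follows directly from the token counts in the example:

```python
# Arithmetic behind the 98% figure above (token counts taken from the example).
old_tokens = 125_000               # reading all 5000 lines
new_tokens = 500 + 1_200 + 500     # search + ranged read of 50 lines + edit
savings = 1 - new_tokens / old_tokens
print(f"{savings:.0%}")            # prints "98%"
```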
| Antipattern | Problem | Better Way |
| --- | --- | --- |
| read_file() on a large file | Reads the entire file (high token cost) | Use read_file() with start_line/end_line |
| Editing without context | Risk of a wrong replacement | Use search_files() first to verify the location |
| Multiple edits in one go | If one edit fails, all fail | Apply edits incrementally with validation |
| Rewriting the entire file | Massive token waste | Use edit_file() for surgical changes |
| Tool | Purpose | Use When |
| --- | --- | --- |
| search_files | Find code location | You need to locate where code is |
| read_file with start_line/end_line | Read lines N-M | You know the line numbers (from a search) |
| read_file | Read an entire file | The file is small (less than 1000 lines) |
| write_file | Create/overwrite an entire file | The file does not exist or needs a complete rewrite |
| edit_file | Replace text in a file | You have old_text and new_text |
| search_files with count_only: true | Count matches without reading | You need to verify multiple occurrences |
| edit_file with mode: "search_replace" | Replace a specific match | You need to change only the 1st, 2nd, or last occurrence |

Scenario: Change function ProcessData() in a 2000-line file

Old way:

  1. read_file("main.go") - 2000 lines (50k tokens)
  2. Analyze and rewrite
  3. write_file("main.go", entire_content) - 50k tokens
  4. Total: 100k tokens wasted

New way:

  1. search_files("main.go", "func ProcessData") - returns "lines 156-189"
  2. read_file("main.go", start_line=156, end_line=189) - 34 lines (850 tokens)
  3. Analyze: "Change lines 165 and 170"
  4. edit_file("main.go", old_snippet, new_snippet)
  5. Total: approximately 2.5k tokens (98% savings)

The edit_file() tool includes built-in safety:

  1. Before replacing text, it validates surrounding context
  2. If file changed since you read it, edit fails safely
  3. You get error: “Context mismatch - please re-read file”
  4. No accidental overwrites of modified content

This is why edit_file() is safer than write_file() for ongoing edits.
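The failure mode can be sketched as follows. This is a hypothetical stand-in for the check described above, not the tool's real code:

```python
# Sketch of the context check: the edit is refused when the file no longer
# contains the snippet exactly as it was read.

def safe_edit(src: str, old_text: str, new_text: str) -> str:
    if old_text not in src:
        raise RuntimeError("Context mismatch - please re-read file")
    return src.replace(old_text, new_text, 1)

original = "func f() {\n\treturn nil\n}\n"
edited = safe_edit(original, "return nil", "return content")

# If someone else changed the file after we read it, the edit fails safely:
drifted = original.replace("return nil", "return err")
try:
    safe_edit(drifted, "return nil", "return content")
except RuntimeError as e:
    message = str(e)   # "Context mismatch - please re-read file"
```

A blind write_file() would have silently clobbered the drifted content; the context check turns that into a recoverable error.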

Search results include character-level positioning within matched lines:

{
  "file": "main.go",
  "line_number": 42,
  "line": "func main() {",
  "match_start": 5,
  "match_end": 9
}

With coordinates, you can:

  • Pinpoint exact edits instead of guessing positions
  • Avoid editing wrong occurrences (when multiple on same line)
  • Combine with read_file (start_line/end_line) for surgical changes
  • Reduce token usage significantly
Line 42: "test_value = test_helper()"

  • match_start: 0 (first "test")
  • match_start: 13 (second "test")

search_files returns BOTH matches with coordinates; use the coordinates to pick the correct one.
  1. search_files("pattern") returns match_start and match_end for each result
  2. Verify the coordinates: line[match_start:match_end] == "pattern"
  3. read_file with start_line/end_line to get context: know exactly what surrounds the match
  4. edit_file with confidence: know precisely which occurrence you are changing
  • 0-indexed: the first character is position 0; match_end is exclusive
  • Per-line basis: coordinates are relative to the matched line, not the file
  • Always populated: every search_files result includes coordinates
  • Backward compatible: existing code that ignores coordinates is unaffected
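These semantics line up with what Python's re.finditer produces, which makes the coordinates easy to verify locally. A sketch, not the server's implementation:

```python
import re

# Reproduce the coordinate semantics described above with re.finditer:
# 0-indexed, per-line, end-exclusive, so line[start:end] == pattern.
line = "test_value = test_helper()"
coords = [(m.start(), m.end()) for m in re.finditer("test", line)]
# coords == [(0, 4), (13, 17)]
assert all(line[s:e] == "test" for s, e in coords)

# Pick the second occurrence unambiguously by its coordinates:
start, end = coords[1]
patched = line[:start] + "real" + line[end:]
# patched == "test_value = real_helper()"
```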

For multi-file operations, the pipeline system eliminates sequential round-trips entirely.

| Scenario | Use Pipeline? | Why |
| --- | --- | --- |
| Refactor across N files | ✅ Yes | 1 call vs N×3 calls |
| Bulk search + count | ✅ Yes | 1 call vs N+1 calls |
| Read a single file | ❌ No | read_file with start_line/end_line is simpler |
| Edit a single occurrence | ❌ No | edit_file is sufficient |
| Inspect code to answer the user | ✅ Yes (verbose) | Get contents + counts in 1 call |

The most common pipeline replaces the manual search-edit-verify cycle:

{
  "name": "rename-function",
  "create_backup": true,
  "steps": [
    { "id": "find", "action": "search", "params": { "path": "src/", "pattern": "oldFunc" } },
    { "id": "edit", "action": "edit", "input_from": "find", "params": { "old_text": "oldFunc", "new_text": "newFunc" } },
    { "id": "verify", "action": "count_occurrences", "input_from": "find", "params": { "pattern": "newFunc" } }
  ]
}

Result: 1 call instead of 7+, with automatic backup and rollback.
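A minimal sketch of how such a runner could chain steps via input_from. The step schema mirrors the JSON above, but the actions operate on an in-memory dict of files and are toy stand-ins for the real server actions:

```python
# Toy pipeline runner: each step's result is stored under its id, and
# input_from feeds a previous step's result into the next action.
files = {"a.go": "call oldFunc()\n", "b.go": "x := oldFunc\n"}

def search(_, params):
    return [name for name, src in files.items() if params["pattern"] in src]

def edit(matched, params):
    for name in matched:
        files[name] = files[name].replace(params["old_text"], params["new_text"])
    return matched

def count_occurrences(matched, params):
    return {name: files[name].count(params["pattern"]) for name in matched}

ACTIONS = {"search": search, "edit": edit, "count_occurrences": count_occurrences}

def run_pipeline(pipeline):
    results = {}
    for step in pipeline["steps"]:
        inp = results.get(step.get("input_from"))
        results[step["id"]] = ACTIONS[step["action"]](inp, step.get("params", {}))
    return results

report = run_pipeline({
    "name": "rename-function",
    "steps": [
        {"id": "find", "action": "search", "params": {"pattern": "oldFunc"}},
        {"id": "edit", "action": "edit", "input_from": "find",
         "params": {"old_text": "oldFunc", "new_text": "newFunc"}},
        {"id": "verify", "action": "count_occurrences", "input_from": "find",
         "params": {"pattern": "newFunc"}},
    ],
})
# report["verify"] == {"a.go": 1, "b.go": 1}
```

The search result flows into both the edit and the verification count, so the whole rename round-trips once instead of once per file per operation.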

When you need to read and analyze file contents (not just edit), use verbose: true:

{
  "name": "inspect-config",
  "verbose": true,
  "steps": [
    { "id": "find", "action": "search", "params": { "path": "core/", "pattern": "MaxPipeline" } },
    { "id": "read", "action": "read_ranges", "input_from": "find" },
    { "id": "count", "action": "count_occurrences", "input_from": "find", "params": { "pattern": "func " } }
  ]
}

Returns full file contents (truncated at 50 lines) and per-file counts — all in 1 call.

Preview changes without modifying files:

{
  "name": "preview-migration",
  "dry_run": true,
  "verbose": true,
  "steps": [
    { "id": "find", "action": "search", "params": { "path": ".", "pattern": "deprecated_api", "file_types": [".go"] } },
    { "id": "preview", "action": "edit", "input_from": "find", "params": { "old_text": "deprecated_api", "new_text": "new_api" } }
  ]
}

Shows which files would be affected and how many replacements, without touching disk.
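Conceptually, a dry run boils down to counting would-be replacements without writing anything. A toy sketch over in-memory files:

```python
# Toy dry-run: report affected files and replacement counts, touch nothing.
files = {"api.go": "deprecated_api()\ndeprecated_api()\n", "ok.go": "new_api()\n"}
preview = {name: src.count("deprecated_api")
           for name, src in files.items() if "deprecated_api" in src}
# preview == {"api.go": 2}; ok.go is untouched and unreported
```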

  1. Never read large files completely - use read_file with start_line/end_line
  2. Always search first - use search_files to find line numbers
  3. Edit surgically - use edit_file instead of write_file
  4. Use coordinates - for precise multi-occurrence handling
  5. Batch when possible - use multi_edit for multiple changes
  6. Pipeline for multi-file - use batch_operations with pipeline_json to chain operations in 1 call

Following these patterns typically saves 95-99% of tokens compared to naive approaches.