
Dashboard & Monitoring

MCP Filesystem Ultra includes an audit logging system and a separate web dashboard binary for monitoring operations, backups, normalizer activity, and error patterns.

The system has two parts:

  1. Audit logging — enabled via --log-dir on the MCP server, writes JSON Lines logs and metrics snapshots
  2. Dashboard binary — separate dashboard.exe that reads those log files and serves a web UI

There is no direct coupling between the MCP server and the dashboard. Communication is file-based only.


Enabling Audit Logging

Add --log-dir to the MCP server args:

{
  "mcpServers": {
    "filesystem-ultra": {
      "command": "C:\\path\\to\\filesystem-ultra-v4.exe",
      "args": [
        "--log-dir", "C:\\Logs\\MCP",
        "C:\\project",
        "C:\\Logs\\MCP"
      ]
    }
  }
}

Note: Include the log directory in allowed paths so the dashboard can read it.

When --log-dir is not set, audit logging is disabled with zero overhead — the auditWrap() middleware returns the handler unchanged.
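The zero-overhead behavior can be sketched as follows. The handler signature, the logDir variable, and the wrapper body are illustrative assumptions, not the server's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// handlerFunc stands in for the server's tool-handler signature; the
// real type in main.go may differ.
type handlerFunc func(args string) (string, error)

// logDir is empty when --log-dir was not passed on the command line.
var logDir string

// auditWrap illustrates the zero-overhead pattern: with no log
// directory configured it returns the handler unchanged, so tool calls
// pay no logging cost at all.
func auditWrap(tool string, h handlerFunc) handlerFunc {
	if logDir == "" {
		return h // audit disabled: no wrapper allocated, no timing taken
	}
	return func(args string) (string, error) {
		start := time.Now()
		out, err := h(args)
		// The real server appends a JSON line to operations.jsonl here.
		fmt.Printf("tool=%s duration_ms=%d status_ok=%v\n",
			tool, time.Since(start).Milliseconds(), err == nil)
		return out, err
	}
}

func main() {
	read := func(args string) (string, error) { return "contents", nil }
	wrapped := auditWrap("read_file", read)
	out, _ := wrapped(`C:\project\main.go`)
	fmt.Println(out) // → contents
}
```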

The log directory contains three files:

  • operations.jsonl (JSON Lines): one entry per tool call. Auto-rotates at 10 MB, keeps the last 3 files.
  • metrics.json (JSON): performance snapshot, updated every 30 seconds.
  • normalizer_stats.json (JSON): normalizer activity: per-tool counts, per-rule hits, recent normalizations.

Each line in operations.jsonl contains:

  • ts: timestamp (ISO 8601)
  • tool: tool name (e.g., edit_file, search_files)
  • path: primary file path
  • duration_ms: execution time in milliseconds
  • bytes_in / bytes_out: bytes received / returned
  • status: ok or error
  • error: error message (when status is error)
  • risk: risk level (LOW, MEDIUM, HIGH, CRITICAL)
  • file_size: size of the primary file
  • args: summarized parameters
  • sub_op: sub-operation detail (e.g., step:2/5:edit-step:edit for pipeline steps)
  • lines_changed: number of lines modified
  • matches: number of search/count matches
  • cache_hit: whether the operation hit the cache
  • norms: normalizations applied (rule ID, param, from/to)

All 16 tools are wrapped with auditWrap() in main.go.

metrics.json contains:

{
  "updated_at": "2026-03-16T10:30:00Z",
  "ops_total": 1234,
  "ops_per_sec": 2.5,
  "cache_hit_rate": 0.92,
  "memory_mb": 85.3,
  "reads": 450,
  "writes": 120,
  "lists": 200,
  "searches": 300,
  "edits": {
    "total": 164,
    "targeted": 140,
    "rewrites": 24,
    "avg_bytes": 350.5
  }
}

When a pipeline runs with --log-dir enabled, each completed step emits a separate audit entry with sub_op: "step:N/M:stepID:action". This enables per-step visibility in the dashboard.
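A sketch of splitting that sub_op value back into its parts; the function name is hypothetical, while the format string comes from the audit entries described above:

```go
package main

import (
	"fmt"
	"strings"
)

// parsePipelineSubOp splits a "step:N/M:stepID:action" value into its
// components, reporting ok=false for values that are not pipeline steps.
func parsePipelineSubOp(s string) (n, m, stepID, action string, ok bool) {
	parts := strings.SplitN(s, ":", 4) // at most 4 parts: keep action whole
	if len(parts) != 4 || parts[0] != "step" {
		return "", "", "", "", false
	}
	nm := strings.SplitN(parts[1], "/", 2) // "N/M" -> N, M
	if len(nm) != 2 {
		return "", "", "", "", false
	}
	return nm[0], nm[1], parts[2], parts[3], true
}

func main() {
	n, m, id, action, ok := parsePipelineSubOp("step:2/5:edit-step:edit")
	fmt.Println(n, m, id, action, ok) // → 2 5 edit-step edit true
}
```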


Running the Dashboard

# Build
go build -ldflags="-s -w" -trimpath -o dashboard.exe ./cmd/dashboard/
# Run
dashboard.exe --log-dir=C:\Logs\MCP --backup-dir=C:\Backups\MCP --port=9100

Or use the included run-dashboard.bat.

Flags:

  • --log-dir (required): same directory as the MCP server’s --log-dir
  • --backup-dir (required): same directory as the MCP server’s --backup-dir
  • --port (default: 9100): HTTP port for the web UI

Dashboard characteristics:

  • Single binary with embedded web assets (go:embed)
  • Reads operations.jsonl and metrics.json from --log-dir
  • Real-time updates via Server-Sent Events (SSE)
  • Backup cache with 30-second TTL (avoids repeated disk scans)
  • Unified backup format: both normal backups and batch backups normalized into a common structure

Live overview of server activity:

  • Operations per second
  • Cache hit rate
  • Memory usage
  • Operation breakdown (reads, writes, searches, edits)
  • Edit telemetry (targeted vs. rewrites, average bytes)

Searchable, filterable list of all tool invocations:

  • Timestamp, tool name, path, duration, status
  • Risk level indicators
  • Normalizations applied
  • Sub-operation detail for pipeline steps

Enterprise search/filter/recovery system:

  • Summary cards: Total backups, total size, latest backup, protected files
  • Search: Text filter (file name, backup ID, context), operation type dropdown, date presets (today, 24h, 7d, 30d, custom range)
  • Content search: Grep inside backup files with context snippets (2 lines before/after match, 10-second timeout)
  • Pagination: Server-side with configurable limit/offset

API endpoints:

  • GET /api/backups: all backups (cached, unified format)
  • GET /api/backups/search?q=&operation=&preset=&from=&to=&limit=&offset=: filtered + paginated
  • GET /api/backups/search-content?q=&max_results=: grep inside backup files
  • GET /api/backups/detail/{id}: single backup with file details
  • GET /api/backups/file/{id}/(unknown): serve backup file content

Aggregate statistics over the session:

  • Operations by tool
  • Top paths by access count
  • Error rates
  • Duration percentiles
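Duration percentiles can be computed with a simple nearest-rank formula; this is an illustrative calculation, not necessarily the dashboard's exact method:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// percentile returns the nearest-rank p-th percentile of the recorded
// durations (in milliseconds).
func percentile(durations []float64, p float64) float64 {
	if len(durations) == 0 {
		return 0
	}
	s := append([]float64(nil), durations...) // sort a copy, keep input intact
	sort.Float64s(s)
	rank := int(math.Ceil(p / 100 * float64(len(s)))) // 1-based rank
	if rank < 1 {
		rank = 1
	}
	return s[rank-1]
}

func main() {
	d := []float64{1, 2, 3, 4, 5, 6, 7, 8, 9, 100}
	fmt.Println(percentile(d, 50), percentile(d, 95)) // → 5 100
}
```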

Normalizer activity monitoring:

  • Total requests processed vs. normalized
  • Per-tool normalization counts
  • Per-rule hit counts (which rules fire most)
  • Recent normalizations with timestamps

Edit-specific telemetry:

  • Targeted edits vs. rewrites
  • Average bytes per edit
  • Risk level distribution
  • Edit patterns over time

Error monitoring and analysis:

  • Errors by tool
  • Error messages grouped by pattern
  • Recent errors with full context

Full observability setup:

{
  "mcpServers": {
    "filesystem-ultra": {
      "command": "C:\\path\\to\\filesystem-ultra-v4.exe",
      "args": [
        "--compact-mode",
        "--log-dir", "C:\\Logs\\MCP",
        "--log-level", "info",
        "--backup-dir", "C:\\Backups\\MCP",
        "--backup-max-age", "30",
        "--backup-max-count", "500",
        "C:\\project",
        "C:\\Backups\\MCP",
        "C:\\Logs\\MCP"
      ]
    }
  }
}

Then run the dashboard:

dashboard.exe --log-dir=C:\Logs\MCP --backup-dir=C:\Backups\MCP --port=9100

Access at http://localhost:9100.



Last updated: March 2026 · Version: 4.1.0