BNNR

Artifacts and Run Output

What you will find here

The on-disk outputs that a BNNR run produces, and that the report, dashboard, and export flows consume.

When to use this page

Use this page when tracking experiments, debugging missing run outputs, or integrating BNNR with external tooling.

Output directories

Output locations are configured by BNNRConfig:

  • checkpoint_dir
  • report_dir

Typical run structure:

report_dir/
  run_YYYYMMDD_HHMMSS/
    report.json
    events.jsonl
    run.log
    artifacts/
      xai/
      samples/
      candidate_previews/

Checkpoints are saved in checkpoint_dir.

report.json

Generated by Reporter (src/bnnr/reporting.py).

Common top-level keys:

  • config
  • best_path
  • best_metrics
  • selected_augmentations
  • total_time
  • checkpoints
  • iteration_summaries
  • analysis

For detection runs, best_metrics includes map_50 and map_50_95.
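Since report.json is plain JSON with the top-level keys listed above, external tooling can read it with the standard library. A minimal sketch (load_best_metrics is a hypothetical helper, not a BNNR API):

```python
import json
from pathlib import Path


def load_best_metrics(run_dir: str) -> dict:
    """Read report.json from a run directory and return its best_metrics dict.

    Returns an empty dict if the key is absent (e.g. an incomplete run).
    """
    report = json.loads((Path(run_dir) / "report.json").read_text())
    return report.get("best_metrics", {})
```

For a detection run you would then look up map_50 and map_50_95 in the returned dict.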

events.jsonl

Written when event_log_enabled=true. Used by replay and export in src/bnnr/events.py and src/bnnr/dashboard/backend.py.

Common event types emitted by current code:

  • run_started
  • dataset_profile
  • probe_set_initialized
  • pipeline_phase
  • epoch_end
  • branch_created
  • branch_evaluated
  • branch_selected
  • sample_snapshot
  • sample_prediction_snapshot
  • xai_snapshot
  • pipeline_complete
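Because events.jsonl holds one JSON object per line, a quick sanity check is to tally the event types in a run. The sketch below assumes each event object carries its type under a "type" key (an assumption; check the actual field name in src/bnnr/events.py); event_type_counts is a hypothetical helper:

```python
import json
from collections import Counter


def event_type_counts(events_path: str) -> Counter:
    """Tally event types in an events.jsonl file (one JSON object per line).

    Assumes the event type is stored under the "type" key; adjust if the
    schema differs. Blank lines are skipped.
    """
    counts: Counter = Counter()
    with open(events_path) as f:
        for line in f:
            line = line.strip()
            if line:
                counts[json.loads(line).get("type", "unknown")] += 1
    return counts
```

A run that completed normally should show exactly one run_started and one pipeline_complete.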

Dashboard export artifacts

python3 -m bnnr dashboard export --run-dir <run_dir> --out <out_dir> writes:

  • index.html
  • data/events.jsonl
  • data/state.json
  • optional data/report.json
  • copied artifacts/
  • manifest.json
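A post-export check can verify that the non-optional files from the list above actually landed in the output directory (data/report.json is excluded because it is optional). A minimal sketch; missing_export_files is a hypothetical helper, not a BNNR API:

```python
from pathlib import Path

# Required export outputs per the list above; data/report.json is optional.
EXPECTED = ["index.html", "data/events.jsonl", "data/state.json", "manifest.json"]


def missing_export_files(out_dir: str) -> list[str]:
    """Return the expected export files that are absent from out_dir."""
    out = Path(out_dir)
    return [rel for rel in EXPECTED if not (out / rel).exists()]
```

An empty return value means the export looks structurally complete.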

Operational checks

If replay or export appears empty, verify that:

  • the target run directory exists,
  • events.jsonl exists in it and is non-empty,
  • the run was produced with event logging enabled (the CLI keeps it enabled for the train command).
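The first two checks above are easy to automate. A minimal sketch (check_run_dir is a hypothetical helper, not a BNNR API):

```python
from pathlib import Path


def check_run_dir(run_dir: str) -> list[str]:
    """Apply the basic replay/export preconditions; return a list of problems.

    An empty list means the run directory exists and events.jsonl is present
    and non-empty.
    """
    run = Path(run_dir)
    if not run.is_dir():
        return [f"run directory not found: {run}"]
    problems = []
    events = run / "events.jsonl"
    if not events.is_file():
        problems.append("events.jsonl missing (was event logging enabled?)")
    elif events.stat().st_size == 0:
        problems.append("events.jsonl is empty")
    return problems
```

Running this before launching replay or export turns a silent empty dashboard into an actionable error message.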

For end-user dashboard operations (live/replay/mobile/QR), see the Dashboard Guide.