# Notebooks Guide (Production + Colab)
## Scope

This guide covers all user notebooks in `examples/` and how to run them in a way that is reproducible, useful, and dashboard-first.
## Notebook Catalog

| Notebook | Goal | Dashboard usage | Typical runtime |
|---|---|---|---|
| `examples/bnnr_augmentations_guide.ipynb` | Visualize all built-in augmentations and ICD/AICD behavior | Optional (focus is augmentation visuals) | Short |
| `examples/classification/bnnr_classification_demo.ipynb` | End-to-end STL-10 classification with XAI and branch selection | Required (live tracking) | Medium |
| `examples/multilabel/bnnr_multilabel_demo.ipynb` | Multi-label training flow (`task="multilabel"`) | Required (live tracking) | Medium |
| `examples/detection/bnnr_detection_demo.ipynb` | VOC detection flow with bbox-aware augmentations + detection XAI | Required (live tracking) | Long |
| `examples/bnnr_custom_data.ipynb` | Bring-your-own classification/detection data patterns | Recommended | Medium |
### Recommended order

1. `bnnr_augmentations_guide.ipynb`
2. `classification/bnnr_classification_demo.ipynb`
3. `multilabel/bnnr_multilabel_demo.ipynb`
4. `detection/bnnr_detection_demo.ipynb`
5. `bnnr_custom_data.ipynb`

This order gives the fastest path to understanding: augmentations -> core training loop -> task variants -> custom integration.
## Local setup

```bash
python3 -m venv /tmp/bnnr-nb-venv
source /tmp/bnnr-nb-venv/bin/activate
python -m pip install --upgrade pip
pip install "bnnr[dashboard]"
pip install jupyter nbconvert
```

Run:

```bash
jupyter lab
```

## Colab setup (recommended for first run)
- Open the notebook via its “Open in Colab” badge.
- Runtime -> Change runtime type -> GPU (recommended for detection).
- Run the installation cell first (`%pip install -q "bnnr[dashboard]" ...`).
- Run cells top-to-bottom without skipping.
## Dashboard-first workflow (desktop + mobile)

For the classification/multilabel/detection notebooks:

- Run the dashboard section before training.
- Confirm the local dashboard URL appears.
- On Colab, confirm the iframe is rendered.
- Optional mobile/public tracking from Colab:
  - set `NGROK_AUTHTOKEN`,
  - use the provided `pyngrok` URL,
  - open it on your phone.
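The optional mobile flow above can be sketched with `pyngrok`. This is a hedged illustration, not the notebook's exact cell: the function name, the port (8080), and the lazy-import pattern are assumptions layered on the steps listed above.

```python
import os

def start_public_tunnel(port: int = 8080):
    """Open a public ngrok tunnel to the local dashboard port.

    Returns the public URL, or None when NGROK_AUTHTOKEN is unset,
    so local runs without a token skip the tunnel cleanly.
    """
    token = os.environ.get("NGROK_AUTHTOKEN")
    if not token:
        return None  # no token configured: keep the dashboard local-only
    # Imported lazily so the cell still runs where pyngrok is absent.
    from pyngrok import ngrok
    ngrok.set_auth_token(token)
    return ngrok.connect(port, "http").public_url

url = start_public_tunnel(8080)
print(url or "NGROK_AUTHTOKEN not set; dashboard stays local-only")
```

The printed public URL is what you open on the phone; it forwards to the local dashboard port for as long as the Colab session lives.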
What to verify while training:

- the branch tree updates,
- KPI trends update,
- the samples/XAI section renders,
- `events.jsonl` grows in `report_dir`.
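The last check above can be automated with a small stdlib-only sketch. The `report_dir/events.jsonl` path follows this guide's description; the function name is hypothetical.

```python
import json
from pathlib import Path

def count_dashboard_events(report_dir: str) -> int:
    """Count well-formed events in report_dir/events.jsonl.

    Run this twice a few minutes apart: a growing count confirms the
    event stream behind the dashboard is still being written.
    """
    path = Path(report_dir) / "events.jsonl"
    if not path.exists():
        return 0
    count = 0
    with path.open() as fh:
        for line in fh:
            if line.strip():
                json.loads(line)  # raises ValueError on a corrupt line
                count += 1
    return count
```

Parsing each line (rather than counting raw lines) also catches a truncated final event early, before it breaks replay or export.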
## Validation checklist (per notebook)

After each run, verify:

- no tracebacks in output cells,
- `report.json` exists,
- `events.jsonl` exists,
- task metrics are present:
  - classification: `accuracy`, `f1_*`,
  - multilabel: `f1_samples`, `fbeta_*`,
  - detection: `map_50`, `map_50_95`.
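The metric checks above can be scripted. This sketch assumes metrics sit under a flat top-level `"metrics"` mapping in `report.json`; adjust the lookup if your report layout differs.

```python
import json
from pathlib import Path

# Expected metric keys per task, mirroring the checklist above.
# A trailing underscore marks a prefix family such as f1_*.
EXPECTED_METRICS = {
    "classification": ("accuracy", "f1_"),
    "multilabel": ("f1_samples", "fbeta_"),
    "detection": ("map_50", "map_50_95"),
}

def missing_metrics(report_path, task):
    """Return the expected metric names absent from report.json."""
    metrics = json.loads(Path(report_path).read_text()).get("metrics", {})
    missing = []
    for key in EXPECTED_METRICS[task]:
        if key.endswith("_"):  # prefix family, e.g. f1_macro, f1_micro
            if not any(name.startswith(key) for name in metrics):
                missing.append(key + "*")
        elif key not in metrics:
            missing.append(key)
    return missing
```

An empty return value means the run produced every metric the checklist expects for that task.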
Optional replay/export checks:

```bash
python3 -m bnnr dashboard serve --run-dir <run_dir_parent> --port 8080
python3 -m bnnr dashboard export --run-dir <run_dir> --out /tmp/exported_dashboard
```

## Quality notes from notebook audit
During B11 notebook hardening, the following were normalized:

- valid Colab links to `bnnr-team/bnnr`,
- modern CLI examples (`python3 -m bnnr ...`, no legacy `bnnr.cli`),
- kernelspec metadata set to `python3`,
- dashboard cells aligned for the local + Colab + optional mobile flow,
- artifact preview paths made compatible with the current output layout.
## Common pitfalls

- Missing `jupyter`/`nbconvert` locally.
- Expecting full detection runs to finish quickly on CPU.
- Confusing YOLO `data.yaml` with BNNR config YAML.
- Forgetting to keep `events.jsonl` (required for replay/export).
For concrete fixes, see Troubleshooting.