Documentation Hub

NeuroMIB Docs

Everything needed to install the package, prepare and validate a submission bundle, and run EvalAI-hosted benchmark evaluation.

Version: v1.0 | Last updated: 2026-03-30

Quick Navigation

Install and Quickstart

Set up the package and prepare artifacts for EvalAI-hosted evaluation.

Open Install Guide

Submission and Validation

Prepare required files and validate your bundle before scoring.

Open Submission Guide

Core CLI + EvalAI Workflow

# Generate public benchmark instances from a published config.
neuromib-generate --config neuromib/configs/public/family1_public.yaml --output data/public/family1

# Validate the submission bundle against the required files and schemas.
neuromib-validate-submission --submission-dir submissions/example

# Score the bundle locally against a public instance and its hidden counterpart.
neuromib-score --instance data/public/family1/instance_000.npz --hidden data/public/family1/instance_000_hidden.npz --submission-dir submissions/example

Run local validation and scoring as preflight checks before submitting to EvalAI.
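
Below is a minimal preflight sketch in Python, assuming the neuromib-* commands above are installed and on PATH; the paths mirror the example invocations and are illustrative only.

# Preflight sketch: run local validation, then local scoring, and stop
# at the first failure. Paths are the illustrative ones used above.
import subprocess
import sys

SUBMISSION_DIR = "submissions/example"
INSTANCE = "data/public/family1/instance_000.npz"
HIDDEN = "data/public/family1/instance_000_hidden.npz"

steps = [
    ["neuromib-validate-submission", "--submission-dir", SUBMISSION_DIR],
    ["neuromib-score", "--instance", INSTANCE, "--hidden", HIDDEN,
     "--submission-dir", SUBMISSION_DIR],
]

for cmd in steps:
    print("running:", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"preflight failed at {cmd[0]}")
print("preflight passed; bundle is ready to upload to EvalAI")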

Required Submission Bundle

  • metadata.json
  • latent_predictions.parquet
  • mechanism_prediction.json
  • support_predictions.parquet
  • intervention_predictions.parquet

All required files must be present and schema-compliant for leaderboard inclusion.
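
The check below is a minimal sketch that only tests file presence against the list above; neuromib-validate-submission remains the authoritative schema check.

# Presence-only bundle check against the required filenames listed above.
from pathlib import Path

REQUIRED = [
    "metadata.json",
    "latent_predictions.parquet",
    "mechanism_prediction.json",
    "support_predictions.parquet",
    "intervention_predictions.parquet",
]

def missing_files(submission_dir):
    # Return the required filenames not found in submission_dir.
    root = Path(submission_dir)
    return [name for name in REQUIRED if not (root / name).is_file()]

missing = missing_files("submissions/example")
print("missing:", missing if missing else "none")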

Artifact Expectations

  • metadata.json: method identity, track, and benchmark version metadata.
  • mechanism_prediction.json: primary mechanism label prediction.
  • *.parquet: long-form prediction tables with the required columns for the latent, support, and intervention tasks (see the sketch after this list).
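
For illustration, the sketch below writes a metadata file and one long-form prediction table. All keys and column names in it (method_name, track, instance_id, and so on) are hypothetical placeholders, not the official schema; take the exact fields from the Submission Guide.

# Illustrative artifact writers. Field and column names below are
# assumed placeholders, not confirmed by these docs.
import json
import pandas as pd

# metadata.json: method identity, track, and benchmark version
# (keys here are assumed for illustration).
metadata = {
    "method_name": "my-baseline",
    "track": "family1",
    "benchmark_version": "v1.0",
}
with open("submissions/example/metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)

# latent_predictions.parquet: one row per (instance, latent) prediction,
# in long form (column names assumed for illustration).
rows = [
    {"instance_id": "instance_000", "latent_index": 0, "prediction": 0.12},
    {"instance_id": "instance_000", "latent_index": 1, "prediction": -0.40},
]
pd.DataFrame(rows).to_parquet(
    "submissions/example/latent_predictions.parquet", index=False
)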

Troubleshooting

  • Validation fails: confirm that all required filenames are present and each parquet table has the required columns (see the column-inspection sketch below).
  • Scoring mismatch: verify instance and hidden files are from the same benchmark run.
  • Low intervention score: inspect rank consistency and output-change calibration.
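
When validation rejects a parquet table, printing its columns locally helps narrow the mismatch. A short sketch using pyarrow:

# Print the column names of every parquet table in the bundle so they
# can be compared against the schema in the Submission Guide.
from pathlib import Path
import pyarrow.parquet as pq

for path in sorted(Path("submissions/example").glob("*.parquet")):
    print(path.name, "->", pq.read_schema(path).names)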