# Install and Quickstart
Set up the package and prepare artifacts for EvalAI-hosted evaluation.
Everything needed to install, submit, validate, and run EvalAI-hosted benchmark evaluation:

- **Install Guide** — set up the package and prepare artifacts for evaluation.
- **Submission Guide** — prepare required files and validate your bundle before scoring.
- **Contributing Guide** — contribute bug fixes, docs improvements, or benchmark features.
Generate a public instance family, then validate and score a submission locally:

```bash
neuromib-generate --config neuromib/configs/public/family1_public.yaml --output data/public/family1
neuromib-validate-submission --submission-dir submissions/example
neuromib-score --instance data/public/family1/instance_000.npz --hidden data/public/family1/instance_000_hidden.npz --submission-dir submissions/example
```
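The instance files passed to `neuromib-score` are NumPy `.npz` archives. The benchmark's actual array names are not documented here, so the `observations` array below is a placeholder; this is just a sketch of how to peek inside an archive:

```python
import numpy as np

# Build a stand-in instance file. The array name "observations" is a
# placeholder, not the benchmark's real schema.
np.savez("instance_000.npz", observations=np.zeros((100, 8)))

# An .npz archive loads as a dict-like collection of named arrays;
# .files lists the stored array names.
with np.load("instance_000.npz") as inst:
    for name in inst.files:
        print(name, inst[name].shape)
```

Inspecting the public instances this way is a quick sanity check before wiring up a method against them.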
Use local validation/scoring as preflight checks before EvalAI submission.
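A minimal Python preflight sketch: before uploading, confirm the submission directory contains the five required files (the authoritative check is still `neuromib-validate-submission`, which also verifies schemas):

```python
from pathlib import Path

# File names from the submission requirements.
REQUIRED = [
    "metadata.json",
    "mechanism_prediction.json",
    "latent_predictions.parquet",
    "support_predictions.parquet",
    "intervention_predictions.parquet",
]

def missing_files(submission_dir: str) -> list[str]:
    """Return the required file names absent from submission_dir."""
    root = Path(submission_dir)
    return [name for name in REQUIRED if not (root / name).is_file()]

# Usage: an empty result means all required files are present.
print(missing_files("submissions/example"))
```

This only checks presence, not schema compliance, so treat it as a fast first gate ahead of the real validator.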
A submission bundle must contain five files:

- `metadata.json` — method identity, track, and benchmark version metadata.
- `mechanism_prediction.json` — primary mechanism label prediction.
- `latent_predictions.parquet`, `support_predictions.parquet`, `intervention_predictions.parquet` — long-form prediction tables with the required columns for the latent/support/intervention tasks.

All required files must be present and schema-compliant for leaderboard inclusion.
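A sketch of writing the two JSON files. The key names below (`method_name`, `track`, `benchmark_version`, `mechanism`) are illustrative guesses based on the descriptions above, not the validated schema; run `neuromib-validate-submission` against the real schema before submitting:

```python
import json
from pathlib import Path

out = Path("submissions/example")
out.mkdir(parents=True, exist_ok=True)

# metadata.json: method identity, track, and benchmark version.
# Key names are hypothetical; the validator enforces the real schema.
metadata = {
    "method_name": "my-baseline",
    "track": "public",
    "benchmark_version": "1.0",
}
(out / "metadata.json").write_text(json.dumps(metadata, indent=2))

# mechanism_prediction.json: the primary mechanism label
# (label value here is a placeholder).
(out / "mechanism_prediction.json").write_text(json.dumps({"mechanism": "A"}))
```

The three `.parquet` tables are long-form (one row per prediction) with task-specific required columns; in practice you would build them as dataframes and write them with e.g. `pandas.DataFrame.to_parquet`, which requires a parquet engine such as `pyarrow`.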