llm-eval-pipeline/
├── serve/
│   ├── serve.py        # Start the ollama inference server
│   └── client.py       # Sample prompt generation client
├── eval_runner/
│   ├── model.py        # Custom lm-eval wrapper for ollama
│   ...
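As a rough illustration of the `client.py` role above, the sketch below builds a request for ollama's default HTTP endpoint (`/api/generate` on `localhost:11434`) and reads back a non-streaming completion. The function names and the `llama3` model tag are illustrative assumptions, not the repository's actual code.

```python
import json
import urllib.request

# Assumption: ollama serving on its default local port.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str, stream: bool = False) -> dict:
    """Build the JSON body expected by ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

def generate(model: str, prompt: str) -> str:
    """Send one non-streaming generation request and return the response text."""
    body = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False, ollama returns a single JSON object
        # whose "response" field holds the generated text.
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("llama3", "Why is the sky blue?"))
```

Keeping payload construction separate from transport makes it easy to unit-test the client without a running server.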
submitted to Neurocomputing as an Open Source Project (OSP). The code provides a unified MONAI-based pipeline to train and evaluate a set of 3D architectures (CNNs and transformers) on multiple ...