vLLM

High-throughput LLM inference.

vLLM server configurations, tensor-parallelism and quantization settings for fast LLM serving.
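As a minimal sketch of the kind of settings involved, the command below launches an OpenAI-compatible vLLM server with tensor parallelism across two GPUs and AWQ weight quantization. The model name and GPU count are placeholders; adjust them to your hardware, and drop `--quantization awq` if the checkpoint is not AWQ-quantized.

```shell
# Serve a model with vLLM's OpenAI-compatible API server.
# --tensor-parallel-size shards the model across GPUs;
# --quantization selects the weight-quantization scheme.
# Model name and sizes here are illustrative placeholders.
vllm serve TheBloke/Mistral-7B-Instruct-v0.2-AWQ \
  --tensor-parallel-size 2 \
  --quantization awq \
  --max-model-len 8192 \
  --gpu-memory-utilization 0.90
```

Throughput-oriented deployments mainly tune `--tensor-parallel-size`, `--max-model-len`, and `--gpu-memory-utilization` against the available GPU memory.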

Configuration Recipes

No recipes yet for vLLM. Check back soon.