Run LLMs locally.
Ollama model and environment configs for local LLM inference.
No Ollama recipes yet. Check back soon.
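
In the meantime, here is a minimal sketch of what a recipe here might look like: chatting with a locally served model through the `ollama` Python client. The model name `llama3` and the prompt are placeholders, and the sketch assumes the Ollama server is running at its default address; adjust to whatever you have pulled locally.

```python
# Minimal sketch: one chat turn against a local Ollama server.
# Assumes `pip install ollama`, a running server (default OLLAMA_HOST is
# http://127.0.0.1:11434), and that the model below has been pulled, e.g.:
#   ollama pull llama3
import ollama

response = ollama.chat(
    model="llama3",  # assumption: swap in any model you have pulled locally
    messages=[{"role": "user", "content": "Why run LLM inference locally?"}],
)
print(response["message"]["content"])
```

Recipes added here would extend this pattern with model and environment configs (e.g. context length, keep-alive, and host settings) tuned for specific local setups.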