
Ollama v0.19
Developer Tools
Massive local model speedup on Apple Silicon with MLX
About
Ollama v0.19 rebuilds Apple Silicon inference on MLX, delivering significantly faster local performance for coding and agent tasks. The release also adds NVFP4 support along with smarter cache reuse, snapshots, and eviction for more responsive sessions.
Launched
April 2, 2026 · Week 4