How to force locally run Ollama AI models to use all your CPU or GPU cores

May 12, 2025

While experimenting with different Ollama-sourced AI models, I discovered that in some rather strange cases my CPU or GPU resources were not being used efficiently. Looking at how Ollama models are packaged, we see that they are essentially very similar to Docker images, in that each one specifies a full environment. For example, we can…
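As a minimal sketch of that Modelfile-based packaging, the commands below dump a pulled model's Modelfile and rebuild it with an explicit num_thread parameter, which caps how many CPU threads Ollama uses for inference. The model name llama3, the new tag llama3-8t, and the thread count 8 are assumptions for illustration, not values taken from the post:

    # Illustrative sketch; model name and thread count are assumptions.
    # Dump the Modelfile (the "image spec") of an already pulled model:
    ollama show --modelfile llama3 > Modelfile
    # Pin the CPU thread count via a runtime parameter:
    echo "PARAMETER num_thread 8" >> Modelfile
    # Rebuild the model under a new name and run it:
    ollama create llama3-8t -f Modelfile
    ollama run llama3-8t

A similar PARAMETER num_gpu line controls how many model layers are offloaded to the GPU.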