2025-10-20

Do Local AI Agents Dream of Electric Sheep? (Setup Addendum)

Notes
This article was translated by GPT-5.2-Codex. The original is here.

Recap

This article is a supplement to Do Local AI Agents Dream of Electric Sheep? (Environment Setup), covering what I couldn't fit into the previous post. Last time, I installed Ollama and LM Studio, then ran the mastra sample code against the API server provided by LM Studio.

This time I explain how to run the sample using Ollama instead of LM Studio.

Using Ollama

The official docs recommend ollama-ai-provider-v2 for Ollama.

However, ollama-ai-provider-v2 currently has a bug that keeps agents from using tools: as reported in the issue below, it fails to generate text after a tool call.

As noted in the issue, you can work around this by using @ai-sdk/openai-compatible instead.
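
For reference, the workaround needs no new abstraction: it reuses the same provider factory the LM Studio setup already relies on, imported from the package named in the issue:

```ts
// createOpenAICompatible builds an AI SDK provider for any server
// that speaks the OpenAI API, which Ollama does under /v1.
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
```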


Specifically, define an ollama variable (the name is arbitrary) to use in place of lmstudio, and point its baseURL at Ollama's default, localhost port 11434. If you changed the port, adjust accordingly.

src/mastra/agents/weather-agent.ts

```diff
diff --git a/src/mastra/agents/weather-agent.ts b/src/mastra/agents/weather-agent.ts
index eaedfcb..e22ecb9 100644
--- a/src/mastra/agents/weather-agent.ts
+++ b/src/mastra/agents/weather-agent.ts
@@ -10,6 +10,11 @@ const lmstudio = createOpenAICompatible({
   apiKey: "lm-studio",
 });
 
+const ollama = createOpenAICompatible({
+  name: "ollama",
+  baseURL: "http://localhost:11434/v1",
+});
+
 export const weatherAgent = new Agent({
   name: 'Weather Agent',
   instructions: `
@@ -26,7 +31,8 @@ export const weatherAgent = new Agent({
 
       Use the weatherTool to fetch current weather data.
   `,
-  model: lmstudio("openai/gpt-oss-20b"),
+  // model: lmstudio("openai/gpt-oss-20b"),
+  model: ollama("gpt-oss:20b"),
   tools: { weatherTool },
   memory: new Memory({
     storage: new LibSQLStore({
```
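
Before wiring this into the agent, you can sanity-check the baseURL by calling Ollama's OpenAI-compatible endpoint directly. A minimal sketch, assuming Ollama is running on the default port:

```ts
// List locally installed models through Ollama's OpenAI-compatible
// API, using the same base URL the agent configuration points at.
const res = await fetch("http://localhost:11434/v1/models");
const { data } = (await res.json()) as { data: { id: string }[] };
console.log(data.map((m) => m.id)); // should include "gpt-oss:20b"
```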

Then set model to the ollama provider and pass it the model name. Here I want the 20B gpt-oss model, so the name is gpt-oss:20b. You can check the names of installed models with ollama list.

```
$ ollama list
NAME                                ID              SIZE      MODIFIED
qwen3:8b                            500a1f067a9f    5.2 GB    6 days ago
llama4:scout                        bf31604e25c2    67 GB     6 days ago
llama4:latest                       bf31604e25c2    67 GB     6 days ago
gpt-oss:20b                         aa4295ac10c3    13 GB     12 days ago
embeddinggemma:latest               85462619ee72    621 MB    13 days ago
nomic-embed-text:latest             0a109f422b47    274 MB    4 weeks ago
qwen3:30b-a3b-instruct-2507-fp16    c699578934a3    61 GB     5 weeks ago
gpt-oss:120b                        f7f8e2f8f4e0    65 GB     6 weeks ago
gemma3:27b                          a418f5838eaf    17 GB     5 months ago
```
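
If gpt-oss:20b does not appear in your list, fetch it first with `ollama pull gpt-oss:20b`.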

Checking that it works

Start mastra and enter a prompt.
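
Assuming the project was scaffolded with create-mastra (an assumption about your setup), `npm run dev` runs `mastra dev` and serves the playground UI where you can chat with the agent.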

You should get a response, and the model shown on the right should now be ollama.

Summary

As a supplement to the previous article, I showed how to run the mastra sample with Ollama instead of LM Studio.

I recommended LM Studio before, but there isn't much difference in practice. If the models available on Ollama are sufficient, Ollama is probably fine.

About Amazon Associates

This article contains Amazon Associates links. As an Amazon Associate, SuzumiyaAoba earns from qualifying purchases.