From 4c1fcf3ec7913409618a08cc3b1d8a52189e31d1 Mon Sep 17 00:00:00 2001
From: Martin <49105846+Fosowl@users.noreply.github.com>
Date: Mon, 10 Mar 2025 14:22:50 +0100
Subject: [PATCH] Update README.md

---
 README.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/README.md b/README.md
index 5879256..f401b49 100644
--- a/README.md
+++ b/README.md
@@ -145,12 +145,15 @@
 The 32B model needs a GPU with 24GB+ VRAM. Deepseek R1 excels at reasoning and tool use for its size. We think it's a solid fit for our needs; other models work fine, but Deepseek is our primary pick.
 
 **Q: I get an error running `main.py`. What do I do?**
+
 Ensure Ollama is running (`ollama serve`), your `config.ini` matches your provider, and dependencies are installed. If none of these work, feel free to raise an issue.
 
 **Q: Can it really run 100% locally?**
+
 Yes. With the Ollama or Server providers, all speech-to-text, LLM, and text-to-speech models run locally. Non-local options (OpenAI, Deepseek API) are optional.
 
 **Q: How can it be older than Manus?**
+
 We started this as a fun side project to build a fully local, Jarvis-like AI. With the rise of Manus and OpenManus, we saw an opportunity to reprioritize some tasks and offer yet another alternative.
 
 **Q: How is it better than Manus or OpenManus?**