diff --git a/README.md b/README.md
index 5879256..f401b49 100644
--- a/README.md
+++ b/README.md
@@ -145,12 +145,15 @@
 The 32B model needs a GPU with 24GB+ VRAM. Deepseek R1 excels at reasoning and tool use for its size. We think it's a solid fit for our needs; other models work fine, but Deepseek R1 is our primary pick.
 
 **Q: I get an error running `main.py`. What do I do?**
+
 Ensure Ollama is running (`ollama serve`), that your `config.ini` matches your provider, and that dependencies are installed. If none of these works, feel free to raise an issue.
 
 **Q: Can it really run 100% locally?**
+
 Yes. With the Ollama or Server providers, all speech-to-text, LLM, and text-to-speech models run locally. Non-local options (OpenAI, Deepseek API) are optional.
 
 **Q: How can it be older than Manus?**
+
 We started this as a fun side project to make a fully local, Jarvis-like AI. With the rise of Manus and OpenManus, we saw an opportunity to reprioritize some tasks and build yet another alternative.
 
 **Q: How is it better than Manus or OpenManus?**
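
For the `config.ini` troubleshooting answer above, a minimal sketch of what a matching local-provider setup might look like is shown below. The section and key names (`is_local`, `provider_name`, `provider_model`, `provider_server_address`) are illustrative assumptions, not the project's confirmed schema; the sample config shipped in the repository is the authoritative reference.

```ini
; Hypothetical config.ini sketch for a fully local Ollama setup.
; Section and key names are assumptions for illustration only.
[MAIN]
; run fully locally (Ollama or Server provider)
is_local = True
; must match the provider you are actually running
provider_name = ollama
; the 32B model needs a GPU with 24GB+ VRAM
provider_model = deepseek-r1:32b
; Ollama's default listen address
provider_server_address = 127.0.0.1:11434
```

If `main.py` still fails with `ollama serve` running and a config that matches your provider, reinstalling the project's dependencies is the usual next check before opening an issue.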