mirror of https://github.com/tcsenpai/agenticSeek.git
synced 2025-06-06 11:05:26 +00:00
Docs: improve readme (commit 8b619fe71f, README.md)

# 🚀 agenticSeek: Local AI Assistant Powered by DeepSeek Agents

**A fully local AI assistant** using a swarm of DeepSeek agents, capable of:
✅ **Code execution** (Python, Bash)

✅ **Web browsing**

✅ **Speech-to-text & text-to-speech**

✅ **Self-correcting code execution**

> 🛠️ **Work in Progress** – Looking for contributors! 🚀

---

## 🌟 Why?

- **Privacy-first**: Runs 100% locally – **no data leaves your machine**
- 🗣️ **Voice-enabled**: Speak and interact naturally
- **Self-correcting**: Automatically fixes its own code
- **Multi-agent**: Uses a swarm of agents to answer complex questions
- **Web browsing (not implemented yet)**: Browse the web and search the internet
- **Knowledge base (not implemented yet)**: Use a knowledge base to answer questions
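The multi-agent idea above can be sketched in a few lines. This is purely illustrative, not the project's actual API: `swarm_answer` and the stub agents are hypothetical names, and a real swarm would dispatch to DeepSeek-backed workers rather than lambdas.

```python
# Hypothetical sketch: ask every agent, then let a simple majority
# vote pick the final answer. Stub agents stand in for LLM workers.
from collections import Counter
from typing import Callable, List

def swarm_answer(question: str, agents: List[Callable[[str], str]]) -> str:
    """Collect one answer per agent and return the most common one."""
    answers = [agent(question) for agent in agents]
    return Counter(answers).most_common(1)[0][0]

# Stub agents standing in for DeepSeek-backed workers:
agents = [lambda q: "4", lambda q: "4", lambda q: "5"]
print(swarm_answer("What is 2 + 2?", agents))  # majority vote wins
```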

---

## Installation

### 1️⃣ **Install Dependencies**

Make sure you have [Ollama](https://ollama.com/) installed, then run:

```sh
pip3 install -r requirements.txt
```

### 2️⃣ **Download Models**

Download the `deepseek-r1:7b` model from [DeepSeek](https://deepseek.com/models):

```sh
ollama pull deepseek-r1:7b
```

### 3️⃣ **Run the Assistant (Ollama)**

Start the Ollama server:

```sh
ollama serve
```

Change the `config.ini` file to set `provider_name` to `ollama` and `provider_model` to `deepseek-r1:7b`:

```ini
[MAIN]
is_local = True
provider_name = ollama
provider_model = deepseek-r1:7b
```
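The `[MAIN]` block is standard INI syntax, so it can be read with Python's stdlib `configparser`. A minimal sketch, assuming the keys shown above (the loader itself is illustrative, not the project's actual code):

```python
import configparser

# Parse the same [MAIN] block the README shows, inline for the example;
# a real app would use config.read("config.ini") instead.
config = configparser.ConfigParser()
config.read_string("""
[MAIN]
is_local = True
provider_name = ollama
provider_model = deepseek-r1:7b
""")

main = config["MAIN"]
is_local = main.getboolean("is_local")  # "True" -> bool True
provider = main["provider_name"]
model = main["provider_model"]
print(is_local, provider, model)
```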

Run the assistant:

```sh
python3 main.py
```

To have it speak its answers, add the `--speak` flag (`python3 main.py --speak`). Type or say "goodbye" to exit.

### 4️⃣ **Alternative: Run the Assistant (Own Server)**

On the machine that will run the model, execute the script `stream_llm.py`:

```sh
python3 stream_llm.py
```

Get the IP address of the machine that will run the model:

```sh
ip a | grep "inet " | grep -v 127.0.0.1 | awk '{print $2}' | cut -d/ -f1
```

Change the `config.ini` file to set `provider_name` to `server` and `provider_model` to `deepseek-r1:7b` (or a larger variant such as `deepseek-r1:14b`).
Set `provider_server_address` to the IP address of the machine that will run the model:

```ini
[MAIN]
is_local = False
provider_name = server
provider_model = deepseek-r1:14b
provider_server_address = x.x.x.x:5000
```
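The `provider_server_address` value combines an IP and a port. A small illustrative sketch (not project code; `parse_server_address` is a hypothetical helper) of validating such a value with the Python stdlib before using it:

```python
import ipaddress

def parse_server_address(value: str) -> tuple:
    """Split 'host:port' and verify the host is a valid IP address."""
    host, _, port = value.rpartition(":")
    ipaddress.ip_address(host)  # raises ValueError for a bad host
    return host, int(port)

print(parse_server_address("192.168.1.20:5000"))
```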

Run the assistant:

```sh
python3 main.py
```

## Current capabilities

- All running locally
- Reasoning with DeepSeek R1
- Code execution capabilities (Python, Golang, C)
- Shell control capabilities in bash
- Will try to fix its own code using feedback from the Python/Bash interpreter
- Fast text-to-speech using kokoro
- Speech-to-text using distil-whisper/distil-medium.en
- Web browsing (not implemented yet)
- Knowledge base RAG (not implemented yet)