AgenticSeek: Fully Local AI Assistant Powered by DeepSeek R1 Agents

A fully local AI assistant built on AI agents. The goal of the project is to create a truly Jarvis-like assistant using reasoning models such as DeepSeek R1.

🛠️ Work in progress. Looking for contributors! 🚀


Features:

  • Privacy-first: Runs 100% locally; no data leaves your machine.
  • Voice-enabled: Speak and interact naturally.
  • Filesystem interaction: Uses bash to interact with your filesystem.
  • Coding abilities: Codes in Python, C, Golang, with more languages coming soon.
  • Trial-and-error: Automatically fixes code or commands when execution fails.
  • Agent routing: Selects the best agent for the task.
  • Multi-agent: For complex tasks, divides and conquers with multiple agents.
  • Tools: Each agent has its own set of tools: basic search, flight API, file explorer, etc.
  • Web browsing (not implemented yet; high-priority task): Browse the web autonomously to carry out tasks.
  • Memory & recovery: Compresses the conversation over time to retain useful information and can recover a previous session.

Run locally

We recommend using at least deepseek-r1:14b, as smaller models have trouble using tools and quickly forget the conversation.

1 Install Dependencies

pip3 install -r requirements.txt

2 Download Models

Make sure you have Ollama installed.

Download the deepseek-r1:7b model with Ollama:

ollama pull deepseek-r1:7b
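
If your machine can handle it, pull the recommended 14b variant instead:

ollama pull deepseek-r1:14b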

3 Run the Assistant (Ollama)

Start the Ollama server:

ollama serve
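
By default, Ollama serves its API on 127.0.0.1:11434. If you want to confirm the server is up before going further, here is a quick check in Python (an illustrative snippet, not part of the project):

# Optional check that Ollama is up on its default port (11434).
# Illustrative only; not part of AgenticSeek.
import requests

print(requests.get("http://127.0.0.1:11434").text)  # prints "Ollama is running"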

Edit the config.ini file to set provider_name to ollama and provider_model to deepseek-r1:7b:

[MAIN]
is_local = True
provider_name = ollama
provider_model = deepseek-r1:7b

Run the assistant:

python3 main.py

Alternative: Run the LLM on your own server

1 Set up and start the server script

On your "server" that will run the AI model, get the ip address

ip a | grep "inet " | grep -v 127.0.0.1 | awk '{print $2}' | cut -d/ -f1

Clone the repository, then run the stream_llm.py script from the server/ directory:

python3 stream_llm.py
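
For reference, here is a minimal sketch of what such a relay server could look like. It assumes Flask and requests are installed and that Ollama runs on the server at its default port 11434; the /generate route and payload fields are illustrative, not the project's actual API:

# Minimal sketch of an LLM relay server; NOT the actual stream_llm.py.
# Assumes Flask and requests are installed and Ollama runs locally on
# its default port (11434). The /generate route is hypothetical.
import requests
from flask import Flask, Response, request

app = Flask(__name__)
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

@app.route("/generate", methods=["POST"])
def generate():
    payload = {
        "model": request.json.get("model", "deepseek-r1:14b"),
        "prompt": request.json["prompt"],
        "stream": True,
    }
    # Forward the prompt to the local Ollama instance and stream the
    # response chunks back to the caller as newline-delimited JSON.
    upstream = requests.post(OLLAMA_URL, json=payload, stream=True)
    return Response((line + b"\n" for line in upstream.iter_lines()),
                    mimetype="application/x-ndjson")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)  # port matches the config example below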

2 Run it

Now on your personal computer:

Clone the repository.

Edit the config.ini file to set provider_name to server and provider_model to your chosen model (e.g. deepseek-r1:14b). Set provider_server_address to the IP address of the machine that will run the model.

[MAIN]
is_local = False
provider_name = server
provider_model = deepseek-r1:14b
provider_server_address = x.x.x.x:5000
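
Before launching, you can verify that the server is reachable from your personal computer (an illustrative check, not part of the project; substitute your server's real address for x.x.x.x):

# Illustrative connectivity check; replace x.x.x.x with the server's IP.
import socket

with socket.create_connection(("x.x.x.x", 5000), timeout=5):
    print("server reachable on port 5000")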

Run the assistant:

python3 main.py

Providers

The currently supported providers are:

  • ollama -> Uses Ollama running on your computer. Ollama is a program for running large language models locally.
  • server -> A custom script that lets you run the LLM on another machine. It currently uses Ollama, but other backends will be supported soon.
  • openai -> Uses the ChatGPT API (not private).
  • deepseek -> Uses the DeepSeek API (not private).

To select a provider, edit config.ini:

is_local = False
provider_name = openai
provider_model = gpt-4o
provider_server_address = 127.0.0.1:5000
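
Note that the openai provider also needs your API key. The usual convention is the OPENAI_API_KEY environment variable, though check the project source for how the key is actually read; a sketch under that assumption:

# Assumption: the key is read from the environment, as is standard for
# the OpenAI client; the README does not confirm this.
import os

if not os.environ.get("OPENAI_API_KEY"):
    raise SystemExit("Set OPENAI_API_KEY before using the openai provider")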

is_local: should be True for any locally running LLM, otherwise False.

provider_name: the provider to use, by name; see the provider list above.

provider_model: the model the agent will use.

provider_server_address: the address of the machine running the model; it can be set to anything if you are not using the server provider.
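
All of these fields live under the [MAIN] section shown in the examples above and can be read with Python's standard configparser; a minimal sketch (the field names come from the examples, the parsing code itself is illustrative):

# Sketch: reading config.ini with the standard library.
# Field names match the examples above; the code itself is illustrative.
import configparser

config = configparser.ConfigParser()
config.read("config.ini")

main = config["MAIN"]
is_local = main.getboolean("is_local")                      # True / False
provider_name = main["provider_name"]                       # ollama / server / openai / deepseek
provider_model = main["provider_model"]                     # e.g. deepseek-r1:14b
provider_server_address = main["provider_server_address"]   # e.g. 127.0.0.1:5000

print(provider_name, provider_model, is_local, provider_server_address)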
