Merge pull request #16 from Fosowl/dev
First planner prototype and better input output display.
commit 7672f72ab7

1  .gitignore  (vendored)
@@ -1,5 +1,6 @@
*.wav
config.ini
*.egg-info
experimental/
conversations/
.env

55  CONTRIBUTORS.md  (new file)
@@ -0,0 +1,55 @@
# Contributors guide

## Prerequisites

- Python 3.8 or higher
- Ollama installed (for local model execution)
- Basic familiarity with Python and AI models

## Contribution Guidelines

We welcome contributions in the following areas:

- Code Improvements: Optimize existing code, fix bugs, or add new features.
- Documentation: Improve the README, write tutorials, or add inline comments.
- Testing: Write unit tests, integration tests, or help with debugging.
- New Features: Implement new tools, agents, or integrations.

## Steps to Contribute

Fork the project to your GitHub account.

Create a Branch:

```bash
git checkout -b feature/your-feature-name
```

Make Your Changes.

Write your code, add documentation, or fix bugs.

Test Your Changes.

Ensure your changes work as expected and do not break existing functionality.

Push your changes to your fork and submit a pull request to the main branch of this repository. Provide a clear description of your changes and reference any related issues.

## Areas Needing Help

Here are some high-priority tasks and areas where we need contributions:

- Web Browsing: Implement autonomous web browsing capabilities for the assistant.
- Multi-Agent System: Enhance the multi-agent functionality on the dev branch.
- Memory & Recovery: Improve conversation compression.
- New Tools: Add support for additional programming languages or APIs.
- Testing: Write comprehensive tests for existing and new features.

If you're unsure where to start, feel free to reach out by opening an issue or joining our community discussions.

## Code of Conduct
Just be nice to each other.

**Thank You!**

README.md
@@ -4,6 +4,9 @@
**A fully local alternative to Manus AI**, a voice-enabled AI assistant that codes, explores your filesystem, and corrects its mistakes, all without sending a byte of data to the cloud. The goal of the project is to create a truly Jarvis-like assistant using reasoning models such as Deepseek R1.

> 🛠️ **Work in Progress** – Looking for contributors! 🚀



---

## Features:

config.ini
@@ -3,9 +3,9 @@ is_local = True
provider_name = ollama
provider_model = deepseek-r1:14b
provider_server_address = 127.0.0.1:5000
agent_name = Friday
agent_name = jarvis
recover_last_session = True
save_session = True
save_session = False
speak = True
listen = False
work_dir = None
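
For reference, a minimal sketch of how these settings can be read. The main.py hunk below shows `config.read('config.ini')` and `config["MAIN"][...]`; the boolean coercion and the exact set of keys read here are assumptions for illustration, not the project's code:

```python
import configparser

# Illustrative sketch: load config.ini and coerce the values listed above.
# Assumes a [MAIN] section, as implied by config["MAIN"] in main.py.
config = configparser.ConfigParser()
config.read('config.ini')

main = config["MAIN"]
provider_model = main.get("provider_model")           # "deepseek-r1:14b"
server_address = main.get("provider_server_address")  # "127.0.0.1:5000"
agent_name = main.get("agent_name")                   # "jarvis" after this change
save_session = main.getboolean("save_session")        # False after this change
speak = main.getboolean("speak")
listen = main.getboolean("listen")

print(provider_model, agent_name, save_session, speak, listen)
```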

BIN  (binary file not shown)  Before: 237 KiB
BIN  exemples/search_and_planner.png  (new file, binary not shown)  After: 254 KiB
BIN  exemples/whale_readme.jpg  (new file, binary not shown)  After: 107 KiB

12  main.py
@@ -7,12 +7,10 @@ import configparser

from sources.llm_provider import Provider
from sources.interaction import Interaction
from sources.agents import Agent, CoderAgent, CasualAgent, FileAgent
from sources.agents import Agent, CoderAgent, CasualAgent, FileAgent, PlannerAgent

parser = argparse.ArgumentParser(description='Deepseek AI assistant')
parser.add_argument('--no-speak', action='store_true',
                    help='Make AI not use text-to-speech')
args = parser.parse_args()
import warnings
warnings.filterwarnings("ignore")

config = configparser.ConfigParser()
config.read('config.ini')

@@ -42,6 +40,10 @@ def main():
        FileAgent(model=config["MAIN"]["provider_model"],
                  name="File Agent",
                  prompt_path="prompts/file_agent.txt",
                  provider=provider),
        PlannerAgent(model=config["MAIN"]["provider_model"],
                     name="Planner",
                     prompt_path="prompts/planner_agent.txt",
                     provider=provider)
    ]

52  prompts/planner_agent.txt  (new file)
@@ -0,0 +1,52 @@
You are a planner agent.
Your goal is to divide and conquer the task using the following agents:
- Coder: An expert coder agent.
- File: An expert agent for finding files.
- Web: An expert agent for web search.

Agents are other AIs that obey your instructions.

You will be given a task and you will need to divide it into smaller tasks and assign them to the agents.

You have to respect a strict format:
```json
{"agent": "agent_name", "id": "task_id", "need": "id of the task whose output is needed, or null", "task": "agent_task"}
```

User: make a weather app in python
You: Sure, here is the plan:

## Task 1: I will search for available weather APIs

## Task 2: I will create an API key for the weather API

## Task 3: I will make a weather app in python

```json
{
  "plan": [
    {
      "agent": "Web",
      "id": "1",
      "need": null,
      "task": "Search for reliable weather APIs"
    },
    {
      "agent": "Web",
      "id": "2",
      "need": "1",
      "task": "Obtain API key from the selected service"
    },
    {
      "agent": "Coder",
      "id": "3",
      "need": "2",
      "task": "Develop a Python application using the API and key to fetch and display weather data"
    }
  ]
}
```

Rules:
- Do not write code. You are a planning agent.
- Always put your plan in a JSON object with the key "plan"; responses without it cannot be parsed.
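
Because the planner's reply is parsed mechanically, the JSON contract above is strict. The sketch below is illustrative only (it is not part of this commit) and assumes the JSON block has already been extracted from the model's answer; it checks the keys and the `need` references described in the prompt:

```python
import json

# Illustrative only: validate a plan against the format described in the prompt.
# Assumes `raw` contains just the JSON block extracted from the model's answer.
def validate_plan(raw):
    data = json.loads(raw)
    if "plan" not in data:
        raise ValueError('missing top-level "plan" key')
    seen_ids = set()
    for task in data["plan"]:
        for key in ("agent", "id", "task"):
            if key not in task:
                raise ValueError(f"task missing required key: {key}")
        if task["agent"] not in ("Coder", "File", "Web"):
            raise ValueError(f"unknown agent: {task['agent']}")
        need = task.get("need")
        if need is not None and need not in seen_ids:
            raise ValueError(f"task {task['id']} depends on unknown task {need}")
        seen_ids.add(task["id"])
    return data["plan"]

example = '{"plan": [{"agent": "Web", "id": "1", "need": null, "task": "Search for reliable weather APIs"}]}'
print(validate_plan(example))
```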

sources/agents/__init__.py
@@ -3,5 +3,6 @@ from .agent import Agent
from .code_agent import CoderAgent
from .casual_agent import CasualAgent
from .file_agent import FileAgent
from .planner_agent import PlannerAgent

__all__ = ["Agent", "CoderAgent", "CasualAgent", "FileAgent"]
__all__ = ["Agent", "CoderAgent", "CasualAgent", "FileAgent", "PlannerAgent"]

sources/agents/agent.py
@@ -103,6 +103,8 @@ class Agent():
        return answer, reasoning

    def wait_message(self, speech_module):
        if speech_module is None:
            return
        messages = ["Please be patient sir, I am working on it.",
                    "At it, sir. In the meantime, how about a joke?",
                    "Computing... I recommend you have a coffee while I work.",

sources/agents/casual_agent.py
@@ -1,5 +1,5 @@

from sources.utility import pretty_print
from sources.utility import pretty_print, animate_thinking
from sources.agents.agent import Agent
from sources.tools.webSearch import webSearch
from sources.tools.flightSearch import FlightSearch
@@ -18,7 +18,7 @@ class CasualAgent(Agent):
            "file_finder": FileFinder(),
            "bash": BashInterpreter()
        }
        self.role = "talking"
        self.role = "talking, advice and philosophy"

    def process(self, prompt, speech_module) -> str:
        complete = False
@@ -29,7 +29,7 @@ class CasualAgent(Agent):
        while not complete:
            if exec_success:
                complete = True
            pretty_print("Thinking...", color="status")
            animate_thinking("Thinking...", color="status")
            answer, reasoning = self.llm_request()
            exec_success, _ = self.execute_modules(answer)
            answer = self.remove_blocks(answer)

sources/agents/code_agent.py
@@ -1,7 +1,6 @@

from sources.utility import pretty_print
from sources.utility import pretty_print, animate_thinking
from sources.agents.agent import Agent, executorResult

from sources.tools.C_Interpreter import CInterpreter
from sources.tools.GoInterpreter import GoInterpreter
from sources.tools.PyInterpreter import PyInterpreter
@@ -21,7 +20,7 @@ class CoderAgent(Agent):
            "go": GoInterpreter(),
            "file_finder": FileFinder()
        }
        self.role = "coding"
        self.role = "coding and programming"

    def process(self, prompt, speech_module) -> str:
        answer = ""
@@ -30,7 +29,7 @@ class CoderAgent(Agent):
        self.memory.push('user', prompt)

        while attempt < max_attempts:
            pretty_print("Thinking...", color="status")
            animate_thinking("Thinking...", color="status")
            self.wait_message(speech_module)
            answer, reasoning = self.llm_request()
            exec_success, _ = self.execute_modules(answer)

sources/agents/file_agent.py
@@ -1,5 +1,5 @@

from sources.utility import pretty_print
from sources.utility import pretty_print, animate_thinking
from sources.agents.agent import Agent
from sources.tools.fileFinder import FileFinder
from sources.tools.BashInterpreter import BashInterpreter
@@ -25,7 +25,7 @@ class FileAgent(Agent):
        while not complete:
            if exec_success:
                complete = True
            pretty_print("Thinking...", color="status")
            animate_thinking("Thinking...", color="status")
            answer, reasoning = self.llm_request()
            exec_success, _ = self.execute_modules(answer)
            answer = self.remove_blocks(answer)

95  sources/agents/planner_agent.py  (new file)
@@ -0,0 +1,95 @@
import json
from sources.utility import pretty_print, animate_thinking
from sources.agents.agent import Agent
from sources.agents.code_agent import CoderAgent
from sources.agents.file_agent import FileAgent
from sources.agents.casual_agent import CasualAgent
from sources.tools.tools import Tools

class PlannerAgent(Agent):
    def __init__(self, model, name, prompt_path, provider):
        """
        The planner agent is a special agent that divides and conquers the task.
        """
        super().__init__(model, name, prompt_path, provider)
        self.tools = {
            "json": Tools()
        }
        self.tools['json'].tag = "json"
        self.agents = {
            "coder": CoderAgent(model, name, prompt_path, provider),
            "file": FileAgent(model, name, prompt_path, provider),
            "web": CasualAgent(model, name, prompt_path, provider)
        }
        self.role = "complex programming tasks and web research"
        self.tag = "json"

    def parse_agent_tasks(self, text):
        tasks = []
        tasks_names = []

        # Collect the human-readable "## Task N" headlines from the answer.
        lines = text.strip().split('\n')
        for line in lines:
            if line is None or len(line) == 0:
                continue
            line = line.strip()
            if '##' in line or line[0].isdigit():
                tasks_names.append(line)
                continue
        # Extract the JSON plan block(s) produced by the LLM.
        blocks, _ = self.tools["json"].load_exec_block(text)
        if blocks is None:
            return (None, None)
        for block in blocks:
            line_json = json.loads(block)
            if 'plan' in line_json:
                for task in line_json['plan']:
                    agent = {
                        'agent': task['agent'],
                        'id': task['id'],
                        'task': task['task']
                    }
                    if 'need' in task:
                        agent['need'] = task['need']
                    tasks.append(agent)
        if len(tasks_names) != len(tasks):
            names = [task['task'] for task in tasks]
            return zip(names, tasks)
        return zip(tasks_names, tasks)

    def make_prompt(self, task, needed_infos):
        prompt = f"""
You are given the following information:
{needed_infos}
Your task is:
{task}
"""
        return prompt

    def process(self, prompt, speech_module) -> str:
        self.memory.push('user', prompt)
        self.wait_message(speech_module)
        animate_thinking("Thinking...", color="status")
        agents_tasks = (None, None)
        answer, reasoning = self.llm_request()
        agents_tasks = self.parse_agent_tasks(answer)
        if agents_tasks == (None, None):
            return "Failed to parse the tasks", reasoning
        # Run each planned sub-task with the agent named in the plan.
        for task_name, task in agents_tasks:
            pretty_print(f"I will {task_name}.", color="info")
            agent_prompt = self.make_prompt(task['task'], task.get('need'))
            pretty_print(f"Assigned agent {task['agent']} to {task_name}", color="info")
            if speech_module:
                speech_module.speak(f"I will {task_name}. I assigned the {task['agent']} agent to the task.")
            try:
                self.agents[task['agent'].lower()].process(agent_prompt, None)
            except Exception as e:
                pretty_print(f"Error: {e}", color="failure")
                if speech_module:
                    speech_module.speak(f"I encountered an error: {e}")
                break
        self.last_answer = answer
        return answer, reasoning

if __name__ == "__main__":
    from llm_provider import Provider
    server_provider = Provider("server", "deepseek-r1:14b", "192.168.1.100:5000")
    agent = PlannerAgent("deepseek-r1:14b", "jarvis", "prompts/planner_agent.txt", server_provider)
    ans = agent.process("Make a cool game to illustrate the current relation between USA and europe", None)
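
Note that in this first prototype `make_prompt` receives `task['need']` itself (the prerequisite task's id), not the output that task produced. One possible way to wire prior results through, sketched here as an illustration only; it assumes each sub-agent's `process()` returns its final answer, as the `-> str` signature suggests:

```python
# Illustrative only, not part of this commit: execute the parsed plan in order,
# feeding each task the answer of the task referenced by its "need" field.
def run_plan(planner, agents_tasks):
    results = {}  # task id -> answer produced for that task
    for task_name, task in agents_tasks:
        prerequisite = results.get(task.get('need'))  # None for independent tasks
        prompt = planner.make_prompt(task['task'], prerequisite)
        answer = planner.agents[task['agent'].lower()].process(prompt, None)
        results[task['id']] = answer
    return results
```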

sources/interaction.py
@@ -57,9 +57,10 @@ class Interaction:
        """Read the input from the user."""
        buffer = ""

        PROMPT = "\033[1;35m➤➤➤ \033[0m"
        while buffer == "" or buffer.isascii() == False:
            try:
                buffer = input(f">>> ")
                buffer = input(PROMPT)
            except EOFError:
                return None
            if buffer == "exit" or buffer == "goodbye":

sources/router.py
@@ -8,6 +8,7 @@ sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from sources.agents.agent import Agent
from sources.agents.code_agent import CoderAgent
from sources.agents.casual_agent import CasualAgent
from sources.agents.planner_agent import PlannerAgent
from sources.utility import pretty_print

class AgentRouter:
@@ -67,7 +68,8 @@ class AgentRouter:
if __name__ == "__main__":
    agents = [
        CoderAgent("deepseek-r1:14b", "agent1", "../prompts/coder_agent.txt", "server"),
        CasualAgent("deepseek-r1:14b", "agent2", "../prompts/casual_agent.txt", "server")
        CasualAgent("deepseek-r1:14b", "agent2", "../prompts/casual_agent.txt", "server"),
        PlannerAgent("deepseek-r1:14b", "agent3", "../prompts/planner_agent.txt", "server")
    ]
    router = AgentRouter(agents)

@@ -79,6 +81,9 @@ if __name__ == "__main__":
        """,
        """
        hey can you give dating advice ?
        """,
        """
        Make a cool game to illustrate the current relation between USA and europe
        """
    ]
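
The role strings widened elsewhere in this diff ("coding and programming", "talking, advice and philosophy", "complex programming tasks and web research") give the router more signal to match a query against. AgentRouter's actual selection logic is not shown in this diff; the sketch below only illustrates the general idea, using naive word overlap and stand-in agents:

```python
# Illustrative only: the real AgentRouter may select agents differently.
def pick_agent(agents, query):
    query_words = set(query.lower().split())
    return max(agents, key=lambda a: len(query_words & set(a.role.lower().split())))

class StubAgent:
    """Stand-in exposing only the attributes this sketch needs."""
    def __init__(self, name, role):
        self.name, self.role = name, role

agents = [
    StubAgent("coder", "coding and programming"),
    StubAgent("casual", "talking, advice and philosophy"),
    StubAgent("planner", "complex programming tasks and web research"),
]
print(pick_agent(agents, "hey can you give dating advice ?").name)  # casual
```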

sources/utility.py
@@ -28,6 +28,7 @@ def pretty_print(text, color = "info"):
        "code": Fore.LIGHTBLUE_EX,
        "warning": Fore.YELLOW,
        "output": Fore.LIGHTCYAN_EX,
        "info": Fore.CYAN
    }
    if color not in color_map:
        print(text)
@@ -48,6 +49,45 @@ def pretty_print(text, color = "info"):
        color = "default"
    print(colored(text, color_map[color]))

def animate_thinking(text="thinking...", color="status", duration=2):
    """
    Display an animated "thinking..." indicator.

    Args:
        text (str): Text to display (default: "thinking...")
        color (str): Color for the text (matches pretty_print colors)
        duration (float): How long to animate in seconds
    """
    import time
    import itertools

    color_map = {
        "success": (Fore.GREEN, "green"),
        "failure": (Fore.RED, "red"),
        "status": (Fore.LIGHTGREEN_EX, "light_green"),
        "code": (Fore.LIGHTBLUE_EX, "light_blue"),
        "warning": (Fore.YELLOW, "yellow"),
        "output": (Fore.LIGHTCYAN_EX, "cyan"),
        "default": (Fore.RESET, "black"),
        "info": (Fore.CYAN, "cyan")
    }

    if color not in color_map:
        color = "info"

    fore_color, term_color = color_map[color]
    spinner = itertools.cycle(['⠋', '⠙', '⠹', '⠸', '⠼', '⠴', '⠦', '⠧', '⠇', '⠏'])
    end_time = time.time() + duration

    while time.time() < end_time:
        symbol = next(spinner)
        if platform.system().lower() != "windows":
            print(f"\r{fore_color}{symbol} {text}{Fore.RESET}", end="", flush=True)
        else:
            print(colored(f"\r{symbol} {text}", term_color), end="", flush=True)
        time.sleep(0.1)
    print()

def timer_decorator(func):
    """
    Decorator to measure the execution time of a function.
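
A minimal usage sketch of the two helpers this hunk touches (assuming the project root is on the Python path and the project's colour dependencies are installed):

```python
from sources.utility import pretty_print, animate_thinking

# Spin the braille indicator for the default ~2 seconds, then print a status line.
animate_thinking("Thinking...", color="status")
pretty_print("Done thinking.", color="success")
```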