Mirror of https://github.com/tcsenpai/agenticSeek.git (synced 2025-06-07 03:25:32 +00:00)

Commit 81b772df9a ("upd readme"), parent 7c4f283a05
@@ -24,7 +24,7 @@ COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY app.py .
COPY api.py .
COPY sources/ ./sources/
COPY prompts/ ./prompts/
COPY config.ini .
@@ -33,4 +33,4 @@ COPY config.ini .
EXPOSE 8000

# Run the application
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
CMD ["uvicorn", "api:api", "--host", "0.0.0.0", "--port", "8000"]
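For a quick check of the container change above, the exposed port can be probed from the host once the image is built and running with port 8000 published. A minimal sketch, assuming the `requests` package is available on the host and that the `/health` route from the new `api.py` (shown further down) is being served:

```python
# Hedged sketch: probe the containerized API on the port exposed in the Dockerfile.
# Assumes the container is running with port 8000 published to localhost.
import requests

resp = requests.get("http://localhost:8000/health", timeout=5)
print(resp.status_code, resp.json())  # expected: 200 {'status': 'healthy', 'version': '0.1.0'}
```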
README.md (26 lines changed)
@@ -86,35 +86,37 @@ python3 setup.py install

**We recommend using at the very least Deepseek 14B; smaller models will struggle with tasks, especially web browsing.**

### 1️⃣ **Download Models**

Make sure you have [Ollama](https://ollama.com/) installed.
### **Run the Assistant (Locally)**

Download the `deepseek-r1:14b` model from [DeepSeek](https://deepseek.com/models)
Start your local provider, for example with ollama:

```sh
ollama pull deepseek-r1:14b
```

### 2️⃣ **Run the Assistant (Ollama)**

Start the ollama server:
```sh
ollama serve
```
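Optionally, you can confirm the local provider is reachable before editing the config. A minimal sketch, assuming Ollama is listening on its default address (the same `127.0.0.1:11434` used for `provider_server_address` below) and that its `/api/tags` endpoint lists pulled models:

```python
# Hedged sanity check: is the Ollama server up, and was the model pulled?
# The /api/tags endpoint and its response shape are assumptions about Ollama's local API.
import requests

tags = requests.get("http://127.0.0.1:11434/api/tags", timeout=5).json()
print([m["name"] for m in tags.get("models", [])])  # should include "deepseek-r1:14b"
```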

Change the config.ini file to set the provider_name to `ollama` and provider_model to `deepseek-r1:14b`
See below for a list of supported local providers.

Change the config.ini file to set the provider_name to a supported provider and provider_model to `deepseek-r1:14b`

NOTE: `deepseek-r1:14b` is an example; use a bigger model if your hardware allows it.

```sh
[MAIN]
is_local = True
provider_name = ollama
provider_name = ollama # or lm-studio, openai, etc..
provider_model = deepseek-r1:14b
provider_server_address = 127.0.0.1:11434
```
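For reference, these `[MAIN]` keys are read with Python's standard `configparser`, which is how the new `api.py` added in this commit consumes them. A minimal sketch of that lookup; the printed values are only illustrative:

```python
# Minimal sketch: read the same [MAIN] keys that api.py pulls from config.ini.
import configparser

config = configparser.ConfigParser()
config.read("config.ini")

is_local = config.getboolean("MAIN", "is_local")
provider_name = config["MAIN"]["provider_name"]        # e.g. "ollama"
provider_model = config["MAIN"]["provider_model"]      # e.g. "deepseek-r1:14b"
server_address = config["MAIN"]["provider_server_address"]
print(is_local, provider_name, provider_model, server_address)
```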

**List of local providers**

| Provider  | Local? | Description                                                           |
|-----------|--------|-----------------------------------------------------------------------|
| ollama    | Yes    | Run LLMs locally with ease using ollama as an LLM provider            |
| lm-studio | Yes    | Run LLMs locally with LM Studio (set `provider_name` to `lm-studio`)  |
| openai    | No     | Use a compatible API                                                   |

Start all services:

```sh
@@ -10,7 +10,7 @@

**A local alternative to Manus AI**: a voice-enabled LLM assistant that can code, access the files on your computer, browse the web, and automatically correct its mistakes and reflect on them, all without ever sending any data to the cloud. Built on reasoning models such as DeepSeek R1, it runs entirely on your local hardware, keeping your data private.

[](https://fosowl.github.io/agenticSeek.html)  [](https://discord.gg/4Ub2D6Fj)
[](https://fosowl.github.io/agenticSeek.html)  [](https://discord.gg/4Ub2D6Fj) [](https://x.com/Martin993886460)

> 🛠️ **Still under development** – contributors of any kind are welcome to join us!

@@ -84,49 +84,52 @@ python3 setup.py install

**We recommend a model of at least Deepseek 14B parameters; smaller models struggle with assistant tasks and quickly lose track of the context.**

### 1️⃣ **Download Models**

Make sure [Ollama](https://ollama.com/) is installed.
### **Run the Assistant (Locally)**

Download a model at least as large as `deepseek-r1:14b` from [DeepSeek](https://deepseek.com/models).
Start your local provider, for example with ollama:

```sh
ollama pull deepseek-r1:14b
```

### 2️⃣ **Start the Framework (ollama)**

Start the Ollama server.
```sh
ollama serve
```

Edit the `config.ini` file, setting `provider_name` to `ollama` and `provider_model` to the model you just downloaded, e.g. `deepseek-r1:14b`.
See the list of supported local providers below.

Note: `deepseek-r1:14b` is only an example; use a larger model if your machine allows it.
Edit the `config.ini` file, setting `provider_name` to a supported provider and `provider_model` to `deepseek-r1:14b`.

Note: `deepseek-r1:14b` is only an example; use a larger model if your hardware allows it.

```sh
[MAIN]
is_local = True
provider_name = ollama
provider_name = ollama # or lm-studio, openai, etc.
provider_model = deepseek-r1:14b
provider_server_address = 127.0.0.1:11434
```

Start all services:
**List of local providers**

| Provider  | Local? | Description                                                           |
|-----------|--------|-----------------------------------------------------------------------|
| ollama    | Yes    | Run LLMs locally with ease using ollama as the LLM provider           |
| lm-studio | Yes    | Run LLMs locally with LM Studio (set `provider_name` to `lm-studio`)  |
| openai    | No     | Use a compatible API                                                   |

Start all services:

```sh
sudo ./start_services.sh # MacOS
start ./start_services.cmd # Window
start ./start_services.cmd # Windows
```

Run AgenticSeek:
Run the assistant:

```sh
python3 cli.py
```

*If you don't know how to get started, please refer to the **Usage** section*

*If you run into issues, please check the **Known issues** section first*
@@ -10,7 +10,7 @@

**A local alternative to Manus AI**: a voice-enabled LLM assistant that can code, access the files on your computer, browse the web, and automatically correct its mistakes and reflect on them, all without ever sending any data to the cloud. Built on reasoning models such as DeepSeek R1, it runs entirely on your local hardware, keeping your data private.

[](https://fosowl.github.io/agenticSeek.html)  [](https://discord.gg/4Ub2D6Fj)
[](https://fosowl.github.io/agenticSeek.html)  [](https://discord.gg/4Ub2D6Fj) [](https://x.com/Martin993886460)

> 🛠️ **Still under development** – contributors of any kind are welcome to join us!

@@ -80,49 +80,48 @@ pip3 install -r requirements.txt
python3 setup.py install
```

## Run AgenticSeek on Your Local Machine

**We recommend a model of at least Deepseek 14B parameters; smaller models struggle with assistant tasks and quickly lose track of the context.**

### 1️⃣ **Download Models**
### **Run the Assistant (Locally)**

Make sure [Ollama](https://ollama.com/) is installed.
Start your local provider, for example with ollama:

Download a model at least as large as `deepseek-r1:14b` from [DeepSeek](https://deepseek.com/models).

```sh
ollama pull deepseek-r1:14b
```

### 2️⃣ **Start the Framework (ollama)**

Start the Ollama server.
```sh
ollama serve
```

Edit the `config.ini` file, setting `provider_name` to `ollama` and `provider_model` to the model you just downloaded, e.g. `deepseek-r1:14b`.
See the list of supported local providers below.

Note: `deepseek-r1:14b` is only an example; use a larger model if your machine allows it.
Edit the `config.ini` file, setting `provider_name` to a supported provider and `provider_model` to `deepseek-r1:14b`.

Note: `deepseek-r1:14b` is only an example; use a larger model if your hardware allows it.

```sh
[MAIN]
is_local = True
provider_name = ollama
provider_name = ollama # or lm-studio, openai, etc.
provider_model = deepseek-r1:14b
provider_server_address = 127.0.0.1:11434
```

Start all services:
**List of local providers**

| Provider  | Local? | Description                                                           |
|-----------|--------|-----------------------------------------------------------------------|
| ollama    | Yes    | Run LLMs locally with ease using ollama as the LLM provider           |
| lm-studio | Yes    | Run LLMs locally with LM Studio (set `provider_name` to `lm-studio`)  |
| openai    | No     | Use a compatible API                                                   |

Start all services:

```sh
sudo ./start_services.sh # MacOS
start ./start_services.cmd # Window
start ./start_services.cmd # Windows
```

Run AgenticSeek:
Run the assistant:

```sh
python3 cli.py
README_FR.md (31 lines changed)
@@ -9,7 +9,7 @@

A **fully local** alternative to Manus AI: an AI assistant that codes, explores your file system, browses the web, and corrects its own mistakes, all without sending a single piece of data to the cloud. This autonomous agent runs entirely on your hardware, keeping your data private.

[](https://fosowl.github.io/agenticSeek.html)  [](https://discord.gg/4Ub2D6Fj)
[](https://fosowl.github.io/agenticSeek.html)  [](https://discord.gg/4Ub2D6Fj) [](https://x.com/Martin993886460)

> 🛠️ **Under active development** – we are actively looking for contributors!

@@ -81,34 +81,33 @@ pip3 install -r requirements.txt

## Run It on Your Machine

**We recommend using at least DeepSeek 14B; smaller models struggle with tool use and quickly forget the context.**

### 1️⃣ **Download the Model**

Make sure you have [Ollama](https://ollama.com/) installed.

Download `deepseek-r1:14b` from [DeepSeek](https://deepseek.com/models) (or another model depending on your hardware; see the FAQ section)

```sh
ollama pull deepseek-r1:14b
```

### 2️⃣ **Start ollama**
**We recommend using at minimum DeepSeek 14B; smaller models struggle with tool use and quickly forget the context.**

Start your local provider, for example with ollama:
```sh
ollama serve
```

Edit the config.ini file to set provider_name to ollama and provider_model to deepseek-r1:14b
See the **Provider** section for the list of available providers.

Edit the config.ini file to set provider_name to the name of a provider and provider_model to the LLM to use.

```sh
[MAIN]
is_local = True
provider_name = ollama
provider_name = ollama # or lm-studio, openai, etc.
provider_model = deepseek-r1:14b
provider_server_address = 127.0.0.1:11434
```

**List of local providers**

| Provider  | Local? | Description                                                             |
|-----------|--------|--------------------------------------------------------------------------|
| ollama    | Yes    | Run LLMs locally with ease using ollama as the LLM provider              |
| lm-studio | Yes    | Run an LLM locally with LM Studio (set `provider_name` to `lm-studio`)   |
| openai    | No     | Use a compatible API                                                      |

Start all services:

```sh
api.py (new executable file, 202 lines)
@@ -0,0 +1,202 @@
#!/usr/bin/env python3

import os, sys
import uvicorn
import aiofiles
import configparser
import asyncio
from typing import List
from fastapi import FastAPI
from fastapi.responses import JSONResponse
from fastapi.responses import FileResponse
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles

from sources.llm_provider import Provider
from sources.interaction import Interaction
from sources.agents import CasualAgent, CoderAgent, FileAgent, PlannerAgent, BrowserAgent
from sources.browser import Browser, create_driver
from sources.utility import pretty_print
from sources.logger import Logger
from sources.schemas import QueryRequest, QueryResponse

from concurrent.futures import ThreadPoolExecutor

from celery import Celery

app = FastAPI(title="AgenticSeek API", version="0.1.0")
celery_app = Celery("tasks", broker="redis://localhost:6379/0", backend="redis://localhost:6379/0")
celery_app.conf.update(task_track_started=True)
logger = Logger("backend.log")
config = configparser.ConfigParser()
config.read('config.ini')

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

if not os.path.exists(".screenshots"):
    os.makedirs(".screenshots")
app.mount("/screenshots", StaticFiles(directory=".screenshots"), name="screenshots")

# Single worker thread: agent requests are processed one at a time.
executor = ThreadPoolExecutor(max_workers=1)

def initialize_system():
    stealth_mode = config.getboolean('BROWSER', 'stealth_mode')
    personality_folder = "jarvis" if config.getboolean('MAIN', 'jarvis_personality') else "base"
    languages = config["MAIN"]["languages"].split(' ')

    provider = Provider(
        provider_name=config["MAIN"]["provider_name"],
        model=config["MAIN"]["provider_model"],
        server_address=config["MAIN"]["provider_server_address"],
        is_local=config.getboolean('MAIN', 'is_local')
    )
    logger.info(f"Provider initialized: {provider.provider_name} ({provider.model})")

    browser = Browser(
        create_driver(headless=config.getboolean('BROWSER', 'headless_browser'), stealth_mode=stealth_mode),
        anticaptcha_manual_install=stealth_mode
    )
    logger.info("Browser initialized")

    agents = [
        CasualAgent(
            name=config["MAIN"]["agent_name"],
            prompt_path=f"prompts/{personality_folder}/casual_agent.txt",
            provider=provider, verbose=False
        ),
        CoderAgent(
            name="coder",
            prompt_path=f"prompts/{personality_folder}/coder_agent.txt",
            provider=provider, verbose=False
        ),
        FileAgent(
            name="File Agent",
            prompt_path=f"prompts/{personality_folder}/file_agent.txt",
            provider=provider, verbose=False
        ),
        BrowserAgent(
            name="Browser",
            prompt_path=f"prompts/{personality_folder}/browser_agent.txt",
            provider=provider, verbose=False, browser=browser
        ),
        PlannerAgent(
            name="Planner",
            prompt_path=f"prompts/{personality_folder}/planner_agent.txt",
            provider=provider, verbose=False, browser=browser
        )
    ]
    logger.info("Agents initialized")

    interaction = Interaction(
        agents,
        tts_enabled=config.getboolean('MAIN', 'speak'),
        stt_enabled=config.getboolean('MAIN', 'listen'),
        recover_last_session=config.getboolean('MAIN', 'recover_last_session'),
        langs=languages
    )
    logger.info("Interaction initialized")
    return interaction

interaction = initialize_system()
# Simple flag used to reject concurrent /query requests with HTTP 429.
is_generating = False

@app.get("/screenshot")
async def get_screenshot():
    logger.info("Screenshot endpoint called")
    screenshot_path = ".screenshots/updated_screen.png"
    if os.path.exists(screenshot_path):
        return FileResponse(screenshot_path)
    logger.error("No screenshot available")
    return JSONResponse(
        status_code=404,
        content={"error": "No screenshot available"}
    )

@app.get("/health")
async def health_check():
    logger.info("Health check endpoint called")
    return {"status": "healthy", "version": "0.1.0"}

@app.get("/is_active")
async def is_active():
    logger.info("Is active endpoint called")
    return {"is_active": interaction.is_active}

# Blocking wrapper around interaction.think(); run in the executor so the event loop stays free.
def think_wrapper(interaction, query, tts_enabled):
    try:
        interaction.tts_enabled = tts_enabled
        interaction.last_query = query
        logger.info("Agents request is being processed")
        success = interaction.think()
        if not success:
            interaction.last_answer = "Error: No answer from agent"
            interaction.last_success = False
        else:
            interaction.last_success = True
        return success
    except Exception as e:
        logger.error(f"Error in think_wrapper: {str(e)}")
        interaction.last_answer = f"Error: {str(e)}"
        interaction.last_success = False
        raise e

@app.post("/query", response_model=QueryResponse)
async def process_query(request: QueryRequest):
    global is_generating
    logger.info(f"Processing query: {request.query}")
    query_resp = QueryResponse(
        done="false",
        answer="Waiting for agent...",
        agent_name="Waiting for agent...",
        success="false",
        blocks={}
    )
    if is_generating:
        logger.warning("Another query is being processed, please wait.")
        return JSONResponse(status_code=429, content=query_resp.jsonify())

    try:
        is_generating = True
        loop = asyncio.get_running_loop()
        success = await loop.run_in_executor(
            executor, think_wrapper, interaction, request.query, request.tts_enabled
        )
        is_generating = False

        if not success:
            query_resp.answer = interaction.last_answer
            return JSONResponse(status_code=400, content=query_resp.jsonify())

        if interaction.current_agent:
            blocks_json = {f'{i}': block.jsonify() for i, block in enumerate(interaction.current_agent.get_blocks_result())}
        else:
            logger.error("No current agent found")
            blocks_json = {}
            query_resp.answer = "Error: No current agent"
            return JSONResponse(status_code=400, content=query_resp.jsonify())

        logger.info(f"Answer: {interaction.last_answer}")
        logger.info(f"Blocks: {blocks_json}")
        query_resp.done = "true"
        query_resp.answer = interaction.last_answer
        query_resp.agent_name = interaction.current_agent.agent_name
        query_resp.success = str(interaction.last_success)
        query_resp.blocks = blocks_json
        logger.info("Query processed successfully")
        return JSONResponse(status_code=200, content=query_resp.jsonify())
    except Exception as e:
        logger.error(f"An error occurred: {str(e)}")
        sys.exit(1)
    finally:
        logger.info("Processing finished")
        if config.getboolean('MAIN', 'save_session'):
            interaction.save_session()

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
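To exercise the new backend end to end, a small client can call the routes defined above. A minimal sketch, assuming the server is running on localhost:8000 and that `QueryRequest` accepts the `query` and `tts_enabled` fields that `process_query` reads:

```python
# Hedged client sketch for the API above. The JSON field names mirror how
# process_query reads the request (request.query, request.tts_enabled); they are
# otherwise an assumption about the QueryRequest schema.
import requests

BASE = "http://localhost:8000"

print(requests.get(f"{BASE}/health", timeout=5).json())

resp = requests.post(
    f"{BASE}/query",
    json={"query": "What can you do?", "tts_enabled": False},
    timeout=600,  # agent runs can take a while
)
print(resp.status_code)
print(resp.json().get("answer"))
```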