mirror of https://github.com/tcsenpai/multi1.git (synced 2025-06-06 19:15:23 +00:00)

multimode support with launcher

parent e80dca6cee
commit 8b7158e16b

README.md (81 changed lines)
@@ -1,17 +1,23 @@
-# g1: Using Llama-3.1 70b on Groq to create o1-like reasoning chains
+# multi1: Using multiple AI providers to create o1-like reasoning chains

+## Features
+
+- [x] Using Llama-3.1 70b on Groq to create o1-like reasoning chains
+- [x] Using Ollama to create o1-like reasoning chains
+- [x] Using Perplexity to create o1-like reasoning chains
+
[Video Demo](https://github.com/user-attachments/assets/db2a221f-f8eb-48c3-b5a7-8399c6300243)

This is an early prototype of using prompting strategies to improve the LLM's reasoning capabilities through o1-like reasoning chains. This allows the LLM to "think" and solve logical problems that usually otherwise stump leading models. Unlike o1, all the reasoning tokens are shown, and the app uses an open source model.

-g1 is experimental and being open sourced to help inspire the open source community to develop new strategies to produce o1-like reasoning. This experiment helps show the power of prompting reasoning in visualized steps, not a comparison to or full replication of o1, which uses different techniques. OpenAI's o1 is instead trained with large-scale reinforcement learning to reason using Chain of Thought, achieving state-of-the-art performance on complex PhD-level problems.
+multi1 is experimental and being open sourced to help inspire the open source community to develop new strategies to produce o1-like reasoning. This experiment helps show the power of prompting reasoning in visualized steps, not a comparison to or full replication of o1, which uses different techniques. OpenAI's o1 is instead trained with large-scale reinforcement learning to reason using Chain of Thought, achieving state-of-the-art performance on complex PhD-level problems.

-g1 demonstrates the potential of prompting alone to overcome straightforward LLM logic issues like the Strawberry problem, allowing existing open source models to benefit from dynamic reasoning chains and an improved interface for exploring them.
+multi1 demonstrates the potential of prompting alone to overcome straightforward LLM logic issues like the Strawberry problem, allowing existing open source models to benefit from dynamic reasoning chains and an improved interface for exploring them.


### How it works

-g1 powered by Llama3.1-70b creates reasoning chains, in principle a dynamic Chain of Thought, that allows the LLM to "think" and solve some logical problems that usually otherwise stump leading models.
+multi1 powered by Llama3.1-70b creates reasoning chains, in principle a dynamic Chain of Thought, that allows the LLM to "think" and solve some logical problems that usually otherwise stump leading models.

At each step, the LLM can choose to continue to another reasoning step, or provide a final answer. Each step is titled and visible to the user. The system prompt also includes tips for the LLM. There is a full explanation under Prompt Breakdown, but a few examples are asking the model to “include exploration of alternative answers” and “use at least 3 methods to derive the answer”.

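(For illustration: each reasoning step in this scheme is a single JSON object with the 'title', 'content', and 'next_action' keys that the system prompt in ol1.py and p1.py below asks the model for. A hypothetical pair of successive model replies, with invented values, might look like this:

```json
{
    "title": "Counting the letters",
    "content": "Spelling the word out as s-t-r-a-w-b-e-r-r-y and marking each 'r' gives hits at positions 3, 8 and 9.",
    "next_action": "continue"
}

{
    "title": "Final Answer",
    "content": "There are 3 'r's in 'strawberry'.",
    "next_action": "final_answer"
}
```

The apps render each 'continue' step as an expandable section and the 'final_answer' step as the result.)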
@@ -21,7 +27,7 @@ The reasoning ability of the LLM is therefore improved through combining Chain-o
### Examples

> [!IMPORTANT]
-> g1 is not perfect, but it can perform significantly better than LLMs out-of-the-box. From initial testing, g1 accurately solves simple logic problems 60-80% of the time that usually stump LLMs. However, accuracy has yet to be formally evaluated. See examples below.
+> multi1 is not perfect, but it can perform significantly better than LLMs out-of-the-box. From initial testing, multi1 accurately solves simple logic problems 60-80% of the time that usually stump LLMs. However, accuracy has yet to be formally evaluated. See examples below.


##### How many Rs are in strawberry?
@@ -42,31 +48,59 @@ Result:

### Quickstart

-To use the Streamlit UI, follow these instructions:
+To use the launcher, follow these instructions:

-~~~
-python3 -m venv venv
-~~~
+1. Set up the environment:

-~~~
-source venv/bin/activate
-~~~
+```
+python3 -m venv venv
+source venv/bin/activate
+pip3 install -r requirements.txt
+```

-~~~
-pip3 install -r requirements.txt
-~~~
+2. Copy the example environment file:

-~~~
-export GROQ_API_KEY=gsk...
-~~~
+```
+cp example.env .env
+```

-~~~
-streamlit run app.py
-~~~
+3. Edit the .env file with your API keys / models preferences.
+
+4. Run the launcher:
+
+```
+python launcher.py
+```
+
+5. Use the arrow keys to navigate the menu, Enter to select an option, and 'q' to quit.
+
+The launcher allows you to:
+
+- Run the Ollama-based chat application (ol1.py)
+- Run the Perplexity-based chat application (p1.py)
+- Run the Groq-based chat application (g1.py)
+- Edit the .env file
+- Exit the launcher
+
+When running a chat application, you can press 'q' at any time to stop the application and return to the launcher.

---

-Alternatively, follow these additional instructions to use the Gradio UI:
+Alternatively, if you prefer to run the applications directly without the launcher:
+
+```
+streamlit run app.py
+```
+
+Where 'app.py' is the app you want to run and can be:
+
+- g1.py (Groq)
+- ol1.py (Ollama)
+- p1.py (Perplexity)
+
+---
+
+If you prefer to use the Gradio UI, follow these additional instructions (only works with Groq at the moment):

~~~
cd gradio
@@ -138,4 +172,5 @@ Finally, after the problem is added as a user message, an assistant message is l

### Credits

-This app was developed by [Benjamin Klieger](https://x.com/benjaminklieger).
+This app was originally developed by [Benjamin Klieger](https://x.com/benjaminklieger).
+Part of the code (Ollama and Perplexity support, launcher.py) was developed by [tcsenpai](https://github.com/tcsenpai).
example.env

@@ -1 +1,7 @@
GROQ_API_KEY=gsk...
+
+OLLAMA_URL=http://localhost:11434
+OLLAMA_MODEL=llama2
+
+PERPLEXITY_API_KEY=your_perplexity_api_key
+PERPLEXITY_MODEL=llama-3.1-sonar-small-128k-online
launcher.py (new file, 123 lines)

@@ -0,0 +1,123 @@
import blessed
import subprocess
import os
import sys
import time
from contextlib import contextmanager

term = blessed.Terminal()

MENU_ITEMS = [
    ("Ollama", "ol1.py", "Launch Ollama-based chat application"),
    ("Perplexity", "p1.py", "Launch Perplexity-based chat application"),
    ("Groq", "g1.py", "Launch Groq-based chat application"),
    ("Edit .env", "edit_env", "Edit environment variables"),
    ("Exit", None, "Exit the launcher")
]

@contextmanager
def fullscreen():
    with term.fullscreen(), term.cbreak(), term.hidden_cursor():
        yield

def draw_3d_box(y, x, height, width, color):
    shadow_color = term.color_rgb(50, 50, 50)

    # Draw shadow
    print(term.move(y+1, x+2) + shadow_color + '█' * (width-1) + term.normal)
    for i in range(height-1):
        print(term.move(y+2+i, x+width) + shadow_color + '█' + term.normal)

    # Draw main box
    print(term.move(y, x) + color + '╔' + '═' * (width - 2) + '╗' + term.normal)
    for i in range(height - 2):
        print(term.move(y + i + 1, x) + color + '║' + ' ' * (width - 2) + '║' + term.normal)
    print(term.move(y + height - 1, x) + color + '╚' + '═' * (width - 2) + '╝' + term.normal)

def draw_menu(current_option):
    menu_width = 50
    menu_height = len(MENU_ITEMS) * 3 + 5
    start_y = (term.height - menu_height) // 2
    start_x = (term.width - menu_width) // 2

    main_color = term.cornflower_blue
    draw_3d_box(start_y, start_x, menu_height, menu_width, main_color)

    title = '🚀 Launcher Menu 🚀'
    print(term.move(start_y + 1, start_x + (menu_width - len(title)) // 2) + term.bold + term.yellow(title))

    for i, (option, _, _) in enumerate(MENU_ITEMS):
        y = start_y + i * 3 + 4
        if i == current_option:
            item_color = term.black_on_yellow
            draw_3d_box(y-1, start_x+3, 3, menu_width-6, item_color)
            print(term.move(y, start_x + 5) + item_color + term.bold(f" {option:<{menu_width - 10}} ") + term.normal)
        else:
            item_color = term.white_on_blue
            draw_3d_box(y-1, start_x+3, 3, menu_width-6, item_color)
            print(term.move(y, start_x + 5) + item_color + term.bold(f" {option:<{menu_width - 10}} ") + term.normal)

    description = MENU_ITEMS[current_option][2]
    print(term.move(start_y + menu_height, start_x) + term.center(term.italic(description), menu_width))

def run_script(script):
    with fullscreen():
        print(term.clear + term.move_y(term.height // 2) + term.bold_green(term.center(f"Running {script}...")))
        time.sleep(1)

        process = subprocess.Popen(["streamlit", "run", script], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)

        with term.cbreak():
            print(term.clear)
            try:
                while True:
                    output = process.stdout.readline()
                    if output == '' and process.poll() is not None:
                        break
                    if output:
                        print(output.strip())
                    if term.inkey(timeout=0.1) == 'q':
                        process.terminate()
                        print(term.bold_red("\nScript terminated. Press any key to return to the launcher..."))
                        term.inkey()
                        return
            except KeyboardInterrupt:
                process.terminate()
                print(term.bold_red("\nScript terminated. Press any key to return to the launcher..."))
                term.inkey()
                return

        print(term.bold_green("\nScript finished. Press any key to return to the launcher..."))
        term.inkey()

def edit_env():
    os.system('clear')
    os.system("nano .env")

def main_menu():
    current_option = 0

    while True:
        with fullscreen():
            print(term.clear)
            draw_menu(current_option)

            key = term.inkey()

            if key.name == 'KEY_UP' and current_option > 0:
                current_option -= 1
            elif key.name == 'KEY_DOWN' and current_option < len(MENU_ITEMS) - 1:
                current_option += 1
            elif key.name == 'KEY_ENTER':
                selected_option = MENU_ITEMS[current_option][1]
                if selected_option is None:
                    return
                elif selected_option == "edit_env":
                    edit_env()
                else:
                    run_script(selected_option)
            elif key == 'q':
                return

if __name__ == "__main__":
    main_menu()
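Adding another provider to the launcher only takes a new entry in MENU_ITEMS pointing at a Streamlit script, since run_script() simply calls `streamlit run <script>`. A hypothetical sketch (the a1.py name is invented for illustration and is not part of this commit):

```python
# Hypothetical example: wiring an additional provider into the launcher menu.
# "a1.py" is an invented script name; any Streamlit app that follows the same
# pattern as ol1.py / p1.py / g1.py would work here.
MENU_ITEMS = [
    ("Ollama", "ol1.py", "Launch Ollama-based chat application"),
    ("Perplexity", "p1.py", "Launch Perplexity-based chat application"),
    ("Groq", "g1.py", "Launch Groq-based chat application"),
    ("Another provider", "a1.py", "Launch a hypothetical additional provider"),
    ("Edit .env", "edit_env", "Edit environment variables"),
    ("Exit", None, "Exit the launcher"),
]
```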
ol1.py (new file, 134 lines)

@@ -0,0 +1,134 @@
import streamlit as st
import json
import time
import requests  # Add this import for making HTTP requests to Ollama
from dotenv import load_dotenv
import os

# Load environment variables
load_dotenv()

# Get configuration from .env file
OLLAMA_URL = os.getenv('OLLAMA_URL', 'http://localhost:11434')
OLLAMA_MODEL = os.getenv('OLLAMA_MODEL', 'llama2')

def make_api_call(messages, max_tokens, is_final_answer=False):
    for attempt in range(3):
        try:
            response = requests.post(
                f"{OLLAMA_URL}/api/chat",
                json={
                    "model": OLLAMA_MODEL,
                    "messages": messages,
                    "stream": False,
                    "options": {
                        "num_predict": max_tokens,
                        "temperature": 0.2
                    }
                }
            )
            response.raise_for_status()
            return json.loads(response.json()["message"]["content"])
        except Exception as e:
            if attempt == 2:
                if is_final_answer:
                    return {"title": "Error", "content": f"Failed to generate final answer after 3 attempts. Error: {str(e)}"}
                else:
                    return {"title": "Error", "content": f"Failed to generate step after 3 attempts. Error: {str(e)}", "next_action": "final_answer"}
            time.sleep(1)  # Wait for 1 second before retrying

def generate_response(prompt):
    messages = [
        {"role": "system", "content": """You are an expert AI assistant that explains your reasoning step by step. For each step, provide a title that describes what you're doing in that step, along with the content. Decide if you need another step or if you're ready to give the final answer. Respond in JSON format with 'title', 'content', and 'next_action' (either 'continue' or 'final_answer') keys. USE AS MANY REASONING STEPS AS POSSIBLE. AT LEAST 3. BE AWARE OF YOUR LIMITATIONS AS AN LLM AND WHAT YOU CAN AND CANNOT DO. IN YOUR REASONING, INCLUDE EXPLORATION OF ALTERNATIVE ANSWERS. CONSIDER YOU MAY BE WRONG, AND IF YOU ARE WRONG IN YOUR REASONING, WHERE IT WOULD BE. FULLY TEST ALL OTHER POSSIBILITIES. YOU CAN BE WRONG. WHEN YOU SAY YOU ARE RE-EXAMINING, ACTUALLY RE-EXAMINE, AND USE ANOTHER APPROACH TO DO SO. DO NOT JUST SAY YOU ARE RE-EXAMINING. USE AT LEAST 3 METHODS TO DERIVE THE ANSWER. USE BEST PRACTICES.

Example of a valid JSON response:
```json
{
    "title": "Identifying Key Information",
    "content": "To begin solving this problem, we need to carefully examine the given information and identify the crucial elements that will guide our solution process. This involves...",
    "next_action": "continue"
}```
"""},
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": "Thank you! I will now think step by step following my instructions, starting at the beginning after decomposing the problem."}
    ]

    steps = []
    step_count = 1
    total_thinking_time = 0

    while True:
        start_time = time.time()
        step_data = make_api_call(messages, 300)
        end_time = time.time()
        thinking_time = end_time - start_time
        total_thinking_time += thinking_time

        steps.append((f"Step {step_count}: {step_data['title']}", step_data['content'], thinking_time))

        messages.append({"role": "assistant", "content": json.dumps(step_data)})

        if step_data['next_action'] == 'final_answer':
            break

        step_count += 1

        # Yield after each step for Streamlit to update
        yield steps, None  # We're not yielding the total time until the end

    # Generate final answer
    messages.append({"role": "user", "content": "Please provide the final answer based on your reasoning above."})

    start_time = time.time()
    final_data = make_api_call(messages, 200, is_final_answer=True)
    end_time = time.time()
    thinking_time = end_time - start_time
    total_thinking_time += thinking_time

    steps.append(("Final Answer", final_data['content'], thinking_time))

    yield steps, total_thinking_time

def main():
    st.set_page_config(page_title="ol1 prototype - Ollama version", page_icon="🧠", layout="wide")

    st.title("ol1: Using Ollama to create o1-like reasoning chains")

    st.markdown("""
    This is an early prototype of using prompting to create o1-like reasoning chains to improve output accuracy. It is not perfect and accuracy has yet to be formally evaluated. It is powered by Ollama so that the reasoning step is local!

    Forked from [bklieger-groq](https://github.com/bklieger-groq)
    Open source [repository here](https://github.com/tcsenpai/ol1-p1)
    """)

    st.markdown(f"**Current Configuration:**")
    st.markdown(f"- Ollama URL: `{OLLAMA_URL}`")
    st.markdown(f"- Ollama Model: `{OLLAMA_MODEL}`")

    # Text input for user query
    user_query = st.text_input("Enter your query:", placeholder="e.g., How many 'R's are in the word strawberry?")

    if user_query:
        st.write("Generating response...")

        # Create empty elements to hold the generated text and total time
        response_container = st.empty()
        time_container = st.empty()

        # Generate and display the response
        for steps, total_thinking_time in generate_response(user_query):
            with response_container.container():
                for i, (title, content, thinking_time) in enumerate(steps):
                    if title.startswith("Final Answer"):
                        st.markdown(f"### {title}")
                        st.markdown(content.replace('\n', '<br>'), unsafe_allow_html=True)
                    else:
                        with st.expander(title, expanded=True):
                            st.markdown(content.replace('\n', '<br>'), unsafe_allow_html=True)

            # Only show total time when it's available at the end
            if total_thinking_time is not None:
                time_container.markdown(f"**Total thinking time: {total_thinking_time:.2f} seconds**")

if __name__ == "__main__":
    main()
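As a quick way to check the endpoint and response shape that ol1.py depends on, a small standalone script along these lines can be useful. This is a sketch, not part of the commit; it assumes an Ollama server is reachable at OLLAMA_URL and that the model named in OLLAMA_MODEL has already been pulled, with both read from .env exactly as ol1.py does.

```python
# Minimal smoke test (sketch) for the Ollama /api/chat endpoint used by ol1.py.
# Assumptions: a local Ollama server is running and the configured model exists.
import json
import os

import requests
from dotenv import load_dotenv

load_dotenv()
OLLAMA_URL = os.getenv("OLLAMA_URL", "http://localhost:11434")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "llama2")

resp = requests.post(
    f"{OLLAMA_URL}/api/chat",
    json={
        "model": OLLAMA_MODEL,
        "messages": [{"role": "user", "content": "Reply with a JSON object containing a 'title' key."}],
        "stream": False,
    },
    timeout=60,
)
resp.raise_for_status()
content = resp.json()["message"]["content"]  # same field ol1.py reads
print(content)

# ol1.py expects this content to itself be JSON; mirror its parsing step.
try:
    print(json.loads(content))
except json.JSONDecodeError:
    print("Model reply was not valid JSON; ol1.py would fail and retry here.")
```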
p1.py (new file, 212 lines)

@@ -0,0 +1,212 @@
import streamlit as st
import json
import time
import requests  # Add this import for making HTTP requests to the Perplexity API
from dotenv import load_dotenv
import os

# Load environment variables
load_dotenv()

# Get configuration from .env file
PERPLEXITY_API_KEY = os.getenv("PERPLEXITY_API_KEY")
PERPLEXITY_MODEL = os.getenv("PERPLEXITY_MODEL", "llama-3.1-sonar-small-128k-online")

if not PERPLEXITY_API_KEY:
    raise ValueError("PERPLEXITY_API_KEY is not set in the .env file")


def make_api_call(messages, max_tokens, is_final_answer=False):
    for attempt in range(3):
        try:
            url = "https://api.perplexity.ai/chat/completions"

            payload = {"model": PERPLEXITY_MODEL, "messages": messages}
            headers = {
                "Authorization": f"Bearer {PERPLEXITY_API_KEY}",
                "Content-Type": "application/json",
            }

            print(f"payload: {payload}")

            response = requests.request("POST", url, json=payload, headers=headers)

            print(f"Response status code: {response.status_code}")
            print(f"Response content: {response.text}")

            response.raise_for_status()
            response_json = response.json()
            content = response_json["choices"][0]["message"]["content"]

            # Try to parse the content as JSON
            try:
                return json.loads(content)
            except json.JSONDecodeError:
                # If parsing fails, return the content as is
                return {
                    "title": "Raw Response",
                    "content": content,
                    "next_action": "final_answer" if is_final_answer else "continue"
                }

        except requests.exceptions.HTTPError as e:
            if response.status_code == 400:
                error_message = f"400 Bad Request: {response.text}"
                print(error_message)
                if attempt == 2:
                    return {
                        "title": "Error",
                        "content": error_message,
                        "next_action": "final_answer",
                    }
            else:
                # Handle other HTTP errors
                if attempt == 2:
                    error_message = f"HTTP error occurred: {str(e)}"
                    return {
                        "title": "Error",
                        "content": error_message,
                        "next_action": "final_answer",
                    }
        except json.JSONDecodeError:
            if attempt == 2:
                return {
                    "title": "Error",
                    "content": f"Failed to parse API response: {response.text}",
                    "next_action": "final_answer",
                }
        except requests.exceptions.RequestException as e:
            if attempt == 2:
                error_message = f"API request failed after 3 attempts. Error: {str(e)}"
                return {
                    "title": "Error",
                    "content": error_message,
                    "next_action": "final_answer",
                }
        time.sleep(1)  # Wait for 1 second before retrying


def generate_response(prompt):

    messages = [
        {
            "role": "system",
            "content": """You are an expert AI assistant that explains your reasoning step by step. For each step, provide a title that describes what you're doing in that step, along with the content. Decide if you need another step or if you're ready to give the final answer. Respond in JSON format with 'title', 'content', and 'next_action' (either 'continue' or 'final_answer') keys. USE AS MANY REASONING STEPS AS POSSIBLE. AT LEAST 3. BE AWARE OF YOUR LIMITATIONS AS AN LLM AND WHAT YOU CAN AND CANNOT DO. IN YOUR REASONING, INCLUDE EXPLORATION OF ALTERNATIVE ANSWERS. CONSIDER YOU MAY BE WRONG, AND IF YOU ARE WRONG IN YOUR REASONING, WHERE IT WOULD BE. FULLY TEST ALL OTHER POSSIBILITIES. YOU CAN BE WRONG. WHEN YOU SAY YOU ARE RE-EXAMINING, ACTUALLY RE-EXAMINE, AND USE ANOTHER APPROACH TO DO SO. DO NOT JUST SAY YOU ARE RE-EXAMINING. USE AT LEAST 3 METHODS TO DERIVE THE ANSWER. USE BEST PRACTICES.

Example of a valid JSON response:
```json
{
    "title": "Identifying Key Information",
    "content": "To begin solving this problem, we need to carefully examine the given information and identify the crucial elements that will guide our solution process. This involves...",
    "next_action": "continue"
}```
""",
        },
        {"role": "user", "content": prompt},
    ]

    steps = []
    step_count = 1
    total_thinking_time = 0

    while True:
        start_time = time.time()
        step_data = make_api_call(messages, 300)
        end_time = time.time()
        thinking_time = end_time - start_time
        total_thinking_time += thinking_time

        steps.append(
            (
                f"Step {step_count}: {step_data['title']}",
                step_data["content"],
                thinking_time,
            )
        )

        messages.append({"role": "assistant", "content": json.dumps(step_data)})

        if step_data["next_action"] == "final_answer":
            break

        step_count += 1

        # Add a user message to maintain alternation
        messages.append({"role": "user", "content": "Continue with the next step."})

        # Yield after each step for Streamlit to update
        yield steps, None  # We're not yielding the total time until the end

    # Generate final answer
    messages.append(
        {
            "role": "user",
            "content": "Please provide the final answer based on your reasoning above.",
        }
    )

    start_time = time.time()
    final_data = make_api_call(messages, 200, is_final_answer=True)
    end_time = time.time()
    thinking_time = end_time - start_time
    total_thinking_time += thinking_time

    steps.append(("Final Answer", final_data["content"], thinking_time))

    yield steps, total_thinking_time


def main():
    st.set_page_config(page_title="p1 prototype - Perplexity version", page_icon="🧠", layout="wide")

    st.title("p1: Using Perplexity AI to create o1-like reasoning chains")

    st.markdown(
        """
    This is an early prototype of using prompting to create o1-like reasoning chains to improve output accuracy. It is not perfect and accuracy has yet to be formally evaluated. It is powered by Perplexity AI API!

    Forked from [bklieger-groq](https://github.com/bklieger-groq)
    Open source [repository here](https://github.com/tcsenpai/ol1-p1)
    """
    )

    st.markdown(f"**Current Configuration:**")
    st.markdown(f"- Perplexity AI Model: `{PERPLEXITY_MODEL}`")

    # Text input for user query
    user_query = st.text_input(
        "Enter your query:",
        placeholder="e.g., How many 'R's are in the word strawberry?",
    )

    if user_query:
        st.write("Generating response...")

        # Create empty elements to hold the generated text and total time
        response_container = st.empty()
        time_container = st.empty()

        # Generate and display the response
        for steps, total_thinking_time in generate_response(user_query):
            with response_container.container():
                for i, (title, content, thinking_time) in enumerate(steps):
                    if title.startswith("Final Answer"):
                        st.markdown(f"### {title}")
                        st.markdown(
                            content.replace("\n", "<br>"), unsafe_allow_html=True
                        )
                    else:
                        with st.expander(title, expanded=True):
                            st.markdown(
                                content.replace("\n", "<br>"), unsafe_allow_html=True
                            )

            # Only show total time when it's available at the end
            if total_thinking_time is not None:
                time_container.markdown(
                    f"**Total thinking time: {total_thinking_time:.2f} seconds**"
                )


if __name__ == "__main__":
    main()
requirements.txt

@@ -1,2 +1,5 @@
streamlit
groq
+dotenv
+requests
+blessed