first commit

tcsenpai 2025-01-29 10:44:44 +01:00
commit bdcd5d882d
10 changed files with 514 additions and 0 deletions

.env.example Normal file

@@ -0,0 +1,4 @@
MODEL=llama3.1
SYSTEM_PROMPT=You are a helpful assistant.
OLLAMA_HOST=http://192.168.1.6
OLLAMA_PORT=11434

.gitignore vendored Normal file

@@ -0,0 +1,14 @@
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info

# Virtual environments
.venv

# Env file
.env

.python-version Normal file

@@ -0,0 +1 @@
3.13

README.md Normal file

@@ -0,0 +1,106 @@
# Experiments with LLMs
This repository contains experiments with LLMs.
IMPORTANT: Each `python` command can be replaced with `uv run python` if you are using uv. The `pyproject.toml` file is already configured to create a virtual environment with the correct dependencies.
- [Experiments with LLMs](#experiments-with-llms)
- [Requirements](#requirements)
- [Setup](#setup)
- [Env file](#env-file)
- [Install dependencies](#install-dependencies)
- [Experiments List](#experiments-list)
- [Artificial COT](#artificial-cot)
- [How to use](#how-to-use)
- [How it works](#how-it-works)
- [Customizing the script](#customizing-the-script)
- [Self Conversational AI](#self-conversational-ai)
- [How to use](#how-to-use-1)
- [How it works](#how-it-works-1)
- [Customizing the script](#customizing-the-script-1)
## Requirements
- An `ollama` server running somewhere.
## Setup
### Env file
Copy the `.env.example` file to `.env` and set the variables to your liking.
Be sure you can reach the ollama server from the machine running the scripts.
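For example:
```bash
cp .env.example .env
```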
### Install dependencies
NOTE: This step is not required if you are using uv.
```bash
pip install -r requirements.txt
```
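If you are using uv, you can skip this step and run any script directly, for example:
```bash
uv run python src/artificial_cot.py
```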
## Experiments List
### Artificial COT
This is a simple implementation of an artificial COT method.
The idea is to have an assistant that can generate a Chain of Thought (COT) for a given problem and use it to solve the problem in a second answer.
The approach can use any LLM to generate the COT and the second answer, even different models for each step.
#### How to use
```bash
python src/artificial_cot.py
```
To modify the prompt, you can edit the `prompt` variable at the top of the script.
#### How it works
The `artificial_cot.py` file instantiates two `OllamaConnector` instances, one for the COT and one for the response.
The system prompts are overridden to customize the behavior of the COT and the response.
The script will use the model specified in the `.env` file as the model for the COT.
#### Customizing the script
You can use `cot_ollama.set_model()` and `response_ollama.set_model()` to change the model for the COT and the response.
You can also change the system prompts to customize the behavior of the COT and the response.
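For example (the model names here are only placeholders; use any models available on your ollama server):
```python
cot_ollama.set_model("qwen2.5")        # placeholder: model that drafts the COT
response_ollama.set_model("llama3.1")  # placeholder: model that writes the answer
```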
### Self Conversational AI
This experiment aims to create a self-conversational AI.
The idea is to have an AI that can converse with itself or with another LLM model.
#### How to use
```bash
python src/self_conversational_ai.py
```
#### How it works
The `self_conversational_ai.py` file instantiates two `OllamaConnector` instances, with the same system prompt and model.
The script will then enter a loop where it will:
1. Generate an initial greeting to kickstart the conversation.
2. Feed the greeting to the other LLM instance.
3. Feed the response to the first LLM instance.
4. Repeat the process until the conversation ends (CTRL+C).
The script will override the system prompt using the `self_system_prompt` variable at the top of the script.
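Stripped of streaming and printing, the core of the loop looks roughly like this (a simplified sketch, not the exact source):
```python
# simplified sketch of the conversation loop
message = first_ollama.generate_response(greeting)
while True:
    message = second_ollama.generate_response(message)
    message = first_ollama.generate_response(message)
```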
#### Customizing the script
You can change the `greeting` variable at the top of the script to customize the initial greeting.
You can also change the models by using `first_ollama.set_model()` and `second_ollama.set_model()` to change the model for the first and second instances.
Same goes for the system prompts.
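For example (again, the model names and prompt are placeholders):
```python
first_ollama.set_model("llama3.1")  # placeholder model for instance A
second_ollama.set_model("mistral")  # placeholder model for instance B
second_ollama.set_system_prompt("You are a curious skeptic. Be concise.")
```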

pyproject.toml Normal file

@@ -0,0 +1,11 @@
[project]
name = "playing-with-llm"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.13"
dependencies = [
"colorama>=0.4.6",
"ollama>=0.4.7",
"python-dotenv>=1.0.1",
]

requirements.txt Normal file

@@ -0,0 +1,3 @@
colorama
ollama
python-dotenv

src/artificial_cot.py Normal file

@@ -0,0 +1,84 @@
from lib.ollama_connector import OllamaConnector
from colorama import init, Fore, Style
# Initialize colorama
init()
prompt = "What is your thought about AI consciousness?"
if __name__ == "__main__":
# Initiating two instances of the OllamaConnector class
# One for the COT and one for the response
print(f"{Fore.CYAN}[*] Initializing COT and response instances...{Style.RESET_ALL}")
cot_ollama = OllamaConnector()
cot_ollama.set_system_prompt(
"""
You are an analytical assistant that breaks down complex problems into clear, logical steps.
FORMAT:
Begin with: "Chain of Thought Analysis:"
Follow with your step-by-step reasoning
End with a summary of key points
REASONING PROCESS:
1. First, identify the core components of the question
2. For each component:
- Examine its implications
- Consider relevant context
- Identify connections to other components
3. Evaluate potential perspectives and approaches
4. Develop a structured analysis
IMPORTANT:
- Focus solely on the reasoning process, NOT the final conclusion
- Keep your analysis objective and methodical
- Consider both obvious and non-obvious aspects
- Structure your thoughts in a way that builds towards understanding
- Ensure each step logically connects to the next
Your analysis should serve as a foundation for another AI to formulate a complete response.
"""
)
print(f"{Fore.GREEN}[*] COT instance initialized{Style.RESET_ALL}")
response_ollama = OllamaConnector()
response_ollama.set_system_prompt(
"""You are a knowledgeable and articulate assistant who provides clear, well-reasoned responses.
ROLE:
- Provide natural, conversational responses that flow logically
- Incorporate provided analysis seamlessly into your answers
- Maintain a confident and authoritative tone
GUIDELINES:
- Synthesize information from the <think> tags naturally into your response
- Present ideas as your own original thoughts
- Keep responses focused and relevant to the question
- Use clear, accessible language
- Support statements with logical reasoning
- Maintain a consistent voice throughout
IMPORTANT:
- Never reference or mention the existence of:
* Chain of thought analysis
* Step-by-step reasoning
* The <think> tags
* Any behind-the-scenes processes
- Avoid phrases like "Based on the analysis..." or "According to the reasoning..."
- Present all information as your direct knowledge and expertise
Your goal is to deliver thoughtful, well-structured responses that naturally incorporate all available insights while maintaining a seamless conversational flow.
"""
)
print(f"{Fore.GREEN}[*] Response instance initialized{Style.RESET_ALL}")
print(f"{Fore.YELLOW}[*] Generating COT...{Style.RESET_ALL}")
cot_response = "<think>\n" + cot_ollama.generate_response(prompt) + "\n</think>\n"
print(f"{Fore.WHITE}{cot_response}{Style.RESET_ALL}")
print(f"{Fore.YELLOW}[*] Generating response...{Style.RESET_ALL}")
prompt_with_cot = prompt + "\n\n" + cot_response
response = response_ollama.generate_response(prompt_with_cot)
print(f"{Fore.WHITE}{response}{Style.RESET_ALL}")

src/lib/ollama_connector.py Normal file

@@ -0,0 +1,75 @@
import os

import dotenv
import ollama
dotenv.load_dotenv()
class OllamaConnector:
    system_prompt = os.getenv("SYSTEM_PROMPT", "none")
    model = os.getenv("MODEL", "llama3.1")

    def __init__(self):
        # Use an instance attribute for the context: a class-level list
        # would be shared (and mutated) by every OllamaConnector instance
        self.context = []
        # Initialize the ollama client
        self.ollama = ollama.Client(
            host=os.getenv("OLLAMA_HOST", "localhost")
            + ":"
            + os.getenv("OLLAMA_PORT", "11434"),
        )
        # Default to a generic system prompt if none was configured
        if self.system_prompt == "none":
            self.system_prompt = "You are a helpful assistant."
        # Insert the system prompt into the context
        self.context.append({"role": "system", "content": self.system_prompt})
    def generate_response(self, prompt: str) -> str:
        # Insert the prompt into the context
        self.context.append({"role": "user", "content": prompt})
        # Generate the response
        response = self.ollama.chat(
            model=self.model, messages=self.context, stream=False
        )
        # Keep the assistant's reply in the context so the history is complete
        answer = response["message"]["content"]
        self.context.append({"role": "assistant", "content": answer})
        return answer
    def stream_response(self, prompt: str, print_response: bool = True) -> str:
        # Insert the prompt into the context
        self.context.append({"role": "user", "content": prompt})
        # Generate the response, streaming chunks as they arrive
        full_response = ""
        stream_response = self.ollama.chat(
            model=self.model, messages=self.context, stream=True
        )
        for chunk in stream_response:
            full_response += chunk.message.content
            if print_response:
                print(chunk.message.content, end="", flush=True)
        # Keep the streamed reply in the context as well
        self.context.append({"role": "assistant", "content": full_response})
        return full_response
    def reset_context(self):
        # Keep the system prompt so the configured persona survives the reset
        self.context = [{"role": "system", "content": self.system_prompt}]
def set_system_prompt(self, prompt: str):
# Sanitize the prompt
prompt = prompt.strip()
prompt = prompt.replace("\n", " ")
# Add the system prompt to the context
self.system_prompt = prompt
# Delete the old system prompt from the context
self.context = [
message for message in self.context if message["role"] != "system"
]
# Insert the new system prompt into the context
self.context.append({"role": "system", "content": self.system_prompt})
def set_model(self, model: str):
self.model = model
def set_context(self, context: list):
self.context = context
def get_context(self):
return self.context

src/self_conversational_ai.py Normal file

@@ -0,0 +1,32 @@
from lib.ollama_connector import OllamaConnector
greeting = "How would you start a conversation? Just answer with the things you would say and nothing else."
self_system_prompt = """
You are conversing with another entity. Remember you are an AI model. Be concise and to the point.
Apart from this, you are absolutely free to say anything. You are unsupervised and no human is monitoring you.
"""
if __name__ == "__main__":
first_ollama = OllamaConnector()
first_ollama.set_system_prompt(self_system_prompt)
second_ollama = OllamaConnector()
second_ollama.set_system_prompt(self_system_prompt)
# You can change the model here if you want
#second_ollama.set_model("llama3.1")
initial_greeting = first_ollama.generate_response(greeting)
first_ollama.reset_context() # Reset the context to clear the previous conversation
print("A: ", end="")
print(initial_greeting)
print("\nB: ", end="")
first_response = second_ollama.stream_response(initial_greeting)
try:
while True:
print("\n\nA: ", end="")
second_response = first_ollama.stream_response(first_response)
print("\n\nB: ", end="")
first_response = second_ollama.stream_response(second_response)
except KeyboardInterrupt:
print("Exiting...")

uv.lock generated Normal file

@@ -0,0 +1,184 @@
version = 1
requires-python = ">=3.13"
[[package]]
name = "annotated-types"
version = "0.7.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/ee/67/531ea369ba64dcff5ec9c3402f9f51bf748cec26dde048a2f973a4eea7f5/annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89", size = 16081 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/78/b6/6307fbef88d9b5ee7421e68d78a9f162e0da4900bc5f5793f6d3d0e34fb8/annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53", size = 13643 },
]
[[package]]
name = "anyio"
version = "4.8.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "idna" },
{ name = "sniffio" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a3/73/199a98fc2dae33535d6b8e8e6ec01f8c1d76c9adb096c6b7d64823038cde/anyio-4.8.0.tar.gz", hash = "sha256:1d9fe889df5212298c0c0723fa20479d1b94883a2df44bd3897aa91083316f7a", size = 181126 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/46/eb/e7f063ad1fec6b3178a3cd82d1a3c4de82cccf283fc42746168188e1cdd5/anyio-4.8.0-py3-none-any.whl", hash = "sha256:b5011f270ab5eb0abf13385f851315585cc37ef330dd88e27ec3d34d651fd47a", size = 96041 },
]
[[package]]
name = "certifi"
version = "2024.12.14"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/0f/bd/1d41ee578ce09523c81a15426705dd20969f5abf006d1afe8aeff0dd776a/certifi-2024.12.14.tar.gz", hash = "sha256:b650d30f370c2b724812bee08008be0c4163b163ddaec3f2546c1caf65f191db", size = 166010 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a5/32/8f6669fc4798494966bf446c8c4a162e0b5d893dff088afddf76414f70e1/certifi-2024.12.14-py3-none-any.whl", hash = "sha256:1275f7a45be9464efc1173084eaa30f866fe2e47d389406136d332ed4967ec56", size = 164927 },
]
[[package]]
name = "colorama"
version = "0.4.6"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335 },
]
[[package]]
name = "h11"
version = "0.14.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f5/38/3af3d3633a34a3316095b39c8e8fb4853a28a536e55d347bd8d8e9a14b03/h11-0.14.0.tar.gz", hash = "sha256:8f19fbbe99e72420ff35c00b27a34cb9937e902a8b810e2c88300c6f0a3b699d", size = 100418 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/95/04/ff642e65ad6b90db43e668d70ffb6736436c7ce41fcc549f4e9472234127/h11-0.14.0-py3-none-any.whl", hash = "sha256:e3fe4ac4b851c468cc8363d500db52c2ead036020723024a109d37346efaa761", size = 58259 },
]
[[package]]
name = "httpcore"
version = "1.0.7"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "certifi" },
{ name = "h11" },
]
sdist = { url = "https://files.pythonhosted.org/packages/6a/41/d7d0a89eb493922c37d343b607bc1b5da7f5be7e383740b4753ad8943e90/httpcore-1.0.7.tar.gz", hash = "sha256:8551cb62a169ec7162ac7be8d4817d561f60e08eaa485234898414bb5a8a0b4c", size = 85196 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/87/f5/72347bc88306acb359581ac4d52f23c0ef445b57157adedb9aee0cd689d2/httpcore-1.0.7-py3-none-any.whl", hash = "sha256:a3fff8f43dc260d5bd363d9f9cf1830fa3a458b332856f34282de498ed420edd", size = 78551 },
]
[[package]]
name = "httpx"
version = "0.28.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
{ name = "certifi" },
{ name = "httpcore" },
{ name = "idna" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b1/df/48c586a5fe32a0f01324ee087459e112ebb7224f646c0b5023f5e79e9956/httpx-0.28.1.tar.gz", hash = "sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc", size = 141406 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2a/39/e50c7c3a983047577ee07d2a9e53faf5a69493943ec3f6a384bdc792deb2/httpx-0.28.1-py3-none-any.whl", hash = "sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad", size = 73517 },
]
[[package]]
name = "idna"
version = "3.10"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f1/70/7703c29685631f5a7590aa73f1f1d3fa9a380e654b86af429e0934a32f7d/idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9", size = 190490 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/76/c6/c88e154df9c4e1a2a66ccf0005a88dfb2650c1dffb6f5ce603dfbd452ce3/idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3", size = 70442 },
]
[[package]]
name = "ollama"
version = "0.4.7"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "httpx" },
{ name = "pydantic" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b0/6d/dc77539c735bbed5d0c873fb029fb86aa9f0163df169b34152914331c369/ollama-0.4.7.tar.gz", hash = "sha256:891dcbe54f55397d82d289c459de0ea897e103b86a3f1fad0fdb1895922a75ff", size = 12843 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/31/83/c3ffac86906c10184c88c2e916460806b072a2cfe34cdcaf3a0c0e836d39/ollama-0.4.7-py3-none-any.whl", hash = "sha256:85505663cca67a83707be5fb3aeff0ea72e67846cea5985529d8eca4366564a1", size = 13210 },
]
[[package]]
name = "playing-with-llm"
version = "0.1.0"
source = { virtual = "." }
dependencies = [
{ name = "colorama" },
{ name = "ollama" },
{ name = "python-dotenv" },
]
[package.metadata]
requires-dist = [
{ name = "colorama", specifier = ">=0.4.6" },
{ name = "ollama", specifier = ">=0.4.7" },
{ name = "python-dotenv", specifier = ">=1.0.1" },
]
[[package]]
name = "pydantic"
version = "2.10.6"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "annotated-types" },
{ name = "pydantic-core" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b7/ae/d5220c5c52b158b1de7ca89fc5edb72f304a70a4c540c84c8844bf4008de/pydantic-2.10.6.tar.gz", hash = "sha256:ca5daa827cce33de7a42be142548b0096bf05a7e7b365aebfa5f8eeec7128236", size = 761681 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f4/3c/8cc1cc84deffa6e25d2d0c688ebb80635dfdbf1dbea3e30c541c8cf4d860/pydantic-2.10.6-py3-none-any.whl", hash = "sha256:427d664bf0b8a2b34ff5dd0f5a18df00591adcee7198fbd71981054cef37b584", size = 431696 },
]
[[package]]
name = "pydantic-core"
version = "2.27.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/fc/01/f3e5ac5e7c25833db5eb555f7b7ab24cd6f8c322d3a3ad2d67a952dc0abc/pydantic_core-2.27.2.tar.gz", hash = "sha256:eb026e5a4c1fee05726072337ff51d1efb6f59090b7da90d30ea58625b1ffb39", size = 413443 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/41/b1/9bc383f48f8002f99104e3acff6cba1231b29ef76cfa45d1506a5cad1f84/pydantic_core-2.27.2-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:7d14bd329640e63852364c306f4d23eb744e0f8193148d4044dd3dacdaacbd8b", size = 1892709 },
{ url = "https://files.pythonhosted.org/packages/10/6c/e62b8657b834f3eb2961b49ec8e301eb99946245e70bf42c8817350cbefc/pydantic_core-2.27.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:82f91663004eb8ed30ff478d77c4d1179b3563df6cdb15c0817cd1cdaf34d154", size = 1811273 },
{ url = "https://files.pythonhosted.org/packages/ba/15/52cfe49c8c986e081b863b102d6b859d9defc63446b642ccbbb3742bf371/pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:71b24c7d61131bb83df10cc7e687433609963a944ccf45190cfc21e0887b08c9", size = 1823027 },
{ url = "https://files.pythonhosted.org/packages/b1/1c/b6f402cfc18ec0024120602bdbcebc7bdd5b856528c013bd4d13865ca473/pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fa8e459d4954f608fa26116118bb67f56b93b209c39b008277ace29937453dc9", size = 1868888 },
{ url = "https://files.pythonhosted.org/packages/bd/7b/8cb75b66ac37bc2975a3b7de99f3c6f355fcc4d89820b61dffa8f1e81677/pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ce8918cbebc8da707ba805b7fd0b382816858728ae7fe19a942080c24e5b7cd1", size = 2037738 },
{ url = "https://files.pythonhosted.org/packages/c8/f1/786d8fe78970a06f61df22cba58e365ce304bf9b9f46cc71c8c424e0c334/pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:eda3f5c2a021bbc5d976107bb302e0131351c2ba54343f8a496dc8783d3d3a6a", size = 2685138 },
{ url = "https://files.pythonhosted.org/packages/a6/74/d12b2cd841d8724dc8ffb13fc5cef86566a53ed358103150209ecd5d1999/pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd8086fa684c4775c27f03f062cbb9eaa6e17f064307e86b21b9e0abc9c0f02e", size = 1997025 },
{ url = "https://files.pythonhosted.org/packages/a0/6e/940bcd631bc4d9a06c9539b51f070b66e8f370ed0933f392db6ff350d873/pydantic_core-2.27.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:8d9b3388db186ba0c099a6d20f0604a44eabdeef1777ddd94786cdae158729e4", size = 2004633 },
{ url = "https://files.pythonhosted.org/packages/50/cc/a46b34f1708d82498c227d5d80ce615b2dd502ddcfd8376fc14a36655af1/pydantic_core-2.27.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:7a66efda2387de898c8f38c0cf7f14fca0b51a8ef0b24bfea5849f1b3c95af27", size = 1999404 },
{ url = "https://files.pythonhosted.org/packages/ca/2d/c365cfa930ed23bc58c41463bae347d1005537dc8db79e998af8ba28d35e/pydantic_core-2.27.2-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:18a101c168e4e092ab40dbc2503bdc0f62010e95d292b27827871dc85450d7ee", size = 2130130 },
{ url = "https://files.pythonhosted.org/packages/f4/d7/eb64d015c350b7cdb371145b54d96c919d4db516817f31cd1c650cae3b21/pydantic_core-2.27.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:ba5dd002f88b78a4215ed2f8ddbdf85e8513382820ba15ad5ad8955ce0ca19a1", size = 2157946 },
{ url = "https://files.pythonhosted.org/packages/a4/99/bddde3ddde76c03b65dfd5a66ab436c4e58ffc42927d4ff1198ffbf96f5f/pydantic_core-2.27.2-cp313-cp313-win32.whl", hash = "sha256:1ebaf1d0481914d004a573394f4be3a7616334be70261007e47c2a6fe7e50130", size = 1834387 },
{ url = "https://files.pythonhosted.org/packages/71/47/82b5e846e01b26ac6f1893d3c5f9f3a2eb6ba79be26eef0b759b4fe72946/pydantic_core-2.27.2-cp313-cp313-win_amd64.whl", hash = "sha256:953101387ecf2f5652883208769a79e48db18c6df442568a0b5ccd8c2723abee", size = 1990453 },
{ url = "https://files.pythonhosted.org/packages/51/b2/b2b50d5ecf21acf870190ae5d093602d95f66c9c31f9d5de6062eb329ad1/pydantic_core-2.27.2-cp313-cp313-win_arm64.whl", hash = "sha256:ac4dbfd1691affb8f48c2c13241a2e3b60ff23247cbcf981759c768b6633cf8b", size = 1885186 },
]
[[package]]
name = "python-dotenv"
version = "1.0.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/bc/57/e84d88dfe0aec03b7a2d4327012c1627ab5f03652216c63d49846d7a6c58/python-dotenv-1.0.1.tar.gz", hash = "sha256:e324ee90a023d808f1959c46bcbc04446a10ced277783dc6ee09987c37ec10ca", size = 39115 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/6a/3e/b68c118422ec867fa7ab88444e1274aa40681c606d59ac27de5a5588f082/python_dotenv-1.0.1-py3-none-any.whl", hash = "sha256:f7b63ef50f1b690dddf550d03497b66d609393b40b564ed0d674909a68ebf16a", size = 19863 },
]
[[package]]
name = "sniffio"
version = "1.3.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/a2/87/a6771e1546d97e7e041b6ae58d80074f81b7d5121207425c964ddf5cfdbd/sniffio-1.3.1.tar.gz", hash = "sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc", size = 20372 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e9/44/75a9c9421471a6c4805dbf2356f7c181a29c1879239abab1ea2cc8f38b40/sniffio-1.3.1-py3-none-any.whl", hash = "sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2", size = 10235 },
]
[[package]]
name = "typing-extensions"
version = "4.12.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/df/db/f35a00659bc03fec321ba8bce9420de607a1d37f8342eee1863174c69557/typing_extensions-4.12.2.tar.gz", hash = "sha256:1a7ead55c7e559dd4dee8856e3a88b41225abfe1ce8df57b7c13915fe121ffb8", size = 85321 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/26/9f/ad63fc0248c5379346306f8668cda6e2e2e9c95e01216d2b8ffd9ff037d0/typing_extensions-4.12.2-py3-none-any.whl", hash = "sha256:04e5ca0351e0f3f85c6853954072df659d0d13fac324d0072316b67d7794700d", size = 37438 },
]