tcsenpai/ollama
mirror of https://github.com/tcsenpai/ollama.git synced 2025-06-08 12:15:22 +00:00
ollama/llm/llama.cpp
Latest commit: d5ec730354 by Daniel Hiltgen, "Merge pull request #1779 from dhiltgen/refined_amd_gpu_list: Improve maintainability of Radeon card list" (2024-01-03 16:18:57 -08:00)
| File | Last commit message | Date |
| --- | --- | --- |
| gguf@328b83de23 | Bump llama.cpp to b1662 and set n_parallel=1 | 2023-12-19 09:05:46 -08:00 |
| CMakeLists.txt | Rename the ollama cmakefile | 2024-01-02 15:36:16 -08:00 |
| ext_server.cpp | Get rid of one-line llama.log | 2024-01-02 15:36:16 -08:00 |
| ext_server.h | Refactor how we augment llama.cpp | 2024-01-02 15:35:55 -08:00 |
| gen_common.sh | Rename the ollama cmakefile | 2024-01-02 15:36:16 -08:00 |
| gen_darwin.sh | Switch windows build to fully dynamic | 2024-01-02 15:36:16 -08:00 |
| gen_linux.sh | Merge pull request #1779 from dhiltgen/refined_amd_gpu_list | 2024-01-03 16:18:57 -08:00 |
| gen_windows.ps1 | Rename the ollama cmakefile | 2024-01-02 15:36:16 -08:00 |
| generate_darwin.go | Add cgo implementation for llama.cpp | 2023-12-19 09:05:46 -08:00 |
| generate_linux.go | Adapted rocm support to cgo based llama.cpp | 2023-12-19 09:05:46 -08:00 |
| generate_windows.go | Add cgo implementation for llama.cpp | 2023-12-19 09:05:46 -08:00 |