tcsenpai/ollama
mirror of https://github.com/tcsenpai/ollama.git synced 2025-06-10 13:07:08 +00:00
ollama/llm
Latest commit: Jeffrey Morgan c0285158a9 "tweak memory requirements error text" (2024-01-03 19:47:18 -05:00)
llama.cpp                    update cmake flags for amd64 macOS (#1780)                 2024-01-03 19:22:15 -05:00
dynamic_shim.c               Switch windows build to fully dynamic                      2024-01-02 15:36:16 -08:00
dynamic_shim.h               Refactor how we augment llama.cpp                          2024-01-02 15:35:55 -08:00
ext_server_common.go         fix: relay request opts to loaded llm prediction (#1761)   2024-01-03 12:01:42 -05:00
ext_server_default.go        fix: relay request opts to loaded llm prediction (#1761)   2024-01-03 12:01:42 -05:00
ext_server_windows.go        Switch windows build to fully dynamic                      2024-01-02 15:36:16 -08:00
ggml.go                      deprecate ggml                                             2023-12-19 09:05:46 -08:00
gguf.go                      remove per-model types                                     2023-12-11 09:40:21 -08:00
llama.go                     fix: relay request opts to loaded llm prediction (#1761)   2024-01-03 12:01:42 -05:00
llm.go                       tweak memory requirements error text                       2024-01-03 19:47:18 -05:00
shim_darwin.go               Switch windows build to fully dynamic                      2024-01-02 15:36:16 -08:00
shim_ext_server_linux.go     Switch windows build to fully dynamic                      2024-01-02 15:36:16 -08:00
shim_ext_server_windows.go   Switch windows build to fully dynamic                      2024-01-02 15:36:16 -08:00
shim_ext_server.go           Fix CPU only builds                                        2024-01-03 16:08:34 -08:00
utils.go                     partial decode ggml bin for more info                      2023-08-10 09:23:10 -07:00