mirror of https://github.com/tcsenpai/ollama.git
synced 2025-06-07 03:35:21 +00:00

add clip and parallel requests to the todo list

This commit is contained in:
parent 593d6836ab
commit e37651cca0
@@ -2,6 +2,8 @@
This package integrates llama.cpp as a Go package that's easy to build with tags for different CPU and GPU processors.
Supported:
- [x] CPU
- [x] avx, avx2
- [ ] avx512
@@ -10,6 +12,8 @@ This package integrates llama.cpp as a Go package that's easy to build with tags
- [x] Windows ROCm
- [ ] Linux CUDA
- [ ] Linux ROCm
- [ ] Clip
- [ ] Parallel Requests
Extra build steps are required for CUDA and ROCm on Windows since `nvcc` and `hipcc` both require using MSVC as the host compiler. For these, small DLLs are created: