mirror of
https://github.com/tcsenpai/ollama.git
synced 2025-06-08 12:15:22 +00:00
add clip and parallel requests to the todo list
This commit is contained in:
parent
593d6836ab
commit
e37651cca0
@@ -2,6 +2,8 @@
 This package integrates llama.cpp as a Go package that's easy to build with tags for different CPU and GPU processors.
 
 Supported:
 
 - [x] CPU
 - [x] avx, avx2
 - [ ] avx512
@@ -10,6 +12,8 @@ This package integrates llama.cpp as a Go package that's easy to build with tags
 - [x] Windows ROCm
 - [ ] Linux CUDA
 - [ ] Linux ROCm
+- [ ] Clip
+- [ ] Parallel Requests
 
 Extra build steps are required for CUDA and ROCm on Windows since `nvcc` and `hipcc` both require using msvc as the host compiler. For these small dlls are created: