jmorganca
db55b1b89d
better example module, add port
2024-07-29 15:38:51 -07:00
jmorganca
1124e24aff
wip
2024-07-29 15:38:51 -07:00
jmorganca
df44d119a3
add llava to runner
2024-07-29 15:38:51 -07:00
jmorganca
86955c3014
fix output in build_hipblas.sh
2024-07-29 15:38:51 -07:00
jmorganca
c05ba504ef
mods to build_hipblas.sh for linux
2024-07-29 15:38:51 -07:00
jmorganca
aaca2ce093
wip
2024-07-29 15:38:51 -07:00
jmorganca
921708003e
improve cuda and hipblas build scripts
2024-07-29 15:38:51 -07:00
jmorganca
323a3f1f3a
cuda linux
2024-07-29 15:38:51 -07:00
Jeffrey Morgan
07d6e589ca
Update README.md
2024-07-29 15:38:51 -07:00
Jeffrey Morgan
aa52dfcaaf
Update README.md
2024-07-29 15:38:51 -07:00
jmorganca
31e0de825e
disable log file
2024-07-29 15:38:51 -07:00
jmorganca
d65b4ea480
fix readme for llava
2024-07-29 15:38:51 -07:00
jmorganca
878eb9a19f
add llava
2024-07-29 15:38:51 -07:00
jmorganca
5818e3b210
llama: add clip dependencies
2024-07-29 15:38:51 -07:00
jmorganca
2a41ad5b1f
add clip and parallel requests to the todo list
2024-07-29 15:38:51 -07:00
jmorganca
cf1ec78071
fix cuda build
2024-07-29 15:38:51 -07:00
jmorganca
57d03929cd
fix build on windows
2024-07-29 15:38:51 -07:00
jmorganca
0a6b1adbd7
fix ggml-metal.m build constraints
2024-07-29 15:38:51 -07:00
jmorganca
ec60d79a67
fix ggml-metal.m
2024-07-29 15:38:51 -07:00
jmorganca
3d656588a7
avx2 should only add avx2
2024-07-29 15:38:51 -07:00
jmorganca
460d9857e2
fix sync script
2024-07-29 15:38:51 -07:00
jmorganca
a5548a81fc
fix ggml-metal.m
2024-07-29 15:38:51 -07:00
jmorganca
634f6a75d0
fix ggml-metal.m
2024-07-29 15:38:51 -07:00
jmorganca
3b5e5a6280
add license headers
2024-07-29 15:38:51 -07:00
jmorganca
853d96b1b1
pre-patch
2024-07-29 15:38:51 -07:00
jmorganca
4dd63c1fef
move runner package down
2024-07-29 15:38:51 -07:00
jmorganca
82214396b5
replace static build in llm
2024-07-29 15:38:51 -07:00
jmorganca
8ca4a9a70a
fix build
2024-07-29 15:35:09 -07:00
jmorganca
25fd8fd045
wip...
2024-07-29 15:35:09 -07:00
jmorganca
be2f37b5d4
rename server to runner
2024-07-29 15:35:09 -07:00
Jeffrey Morgan
9e28405c54
Update README.md
2024-07-29 15:35:09 -07:00
Jeffrey Morgan
9f3e950120
Update README.md
2024-07-29 15:35:09 -07:00
Jeffrey Morgan
951104045f
Update README.md
2024-07-29 15:35:09 -07:00
Jeffrey Morgan
597712006c
Update README.md
2024-07-29 15:35:09 -07:00
jmorganca
64e712b12b
Add missing hipcc flags
2024-07-29 15:35:09 -07:00
jmorganca
491ff41675
Initial llama Go module
2024-07-29 15:35:09 -07:00
jmorganca
075f2e88d9
add sync of llama.cpp
2024-07-29 15:35:09 -07:00
Michael Yang
fccf8d179f
partial decode ggml bin for more info
2023-08-10 09:23:10 -07:00
Bruce MacDonald
984c9c628c
fix embeddings invalid values
2023-08-09 16:50:53 -04:00
Bruce MacDonald
09d8bf6730
fix build errors
2023-08-09 10:45:57 -04:00
Bruce MacDonald
7a5f3616fd
embed text document in modelfile
2023-08-09 10:26:19 -04:00
Michael Yang
f2074ed4c0
Merge pull request #306 from jmorganca/default-keep-system
automatically set num_keep if num_keep < 0
2023-08-08 09:25:34 -07:00
Bruce MacDonald
a6f6d18f83
embed text document in modelfile
2023-08-08 11:27:17 -04:00
Jeffrey Morgan
5eb712f962
trim whitespace before checking stop conditions
Fixes #295
2023-08-08 00:29:19 -04:00
Michael Yang
4dc5b117dd
automatically set num_keep if num_keep < 0
num_keep defines how many tokens to keep in the context when truncating
inputs. If left at its default value of -1, the server calculates
num_keep to be the length of the system instructions
2023-08-07 16:19:12 -07:00
Michael Yang
b9f4d67554
configurable rope frequency parameters
2023-08-03 22:11:58 -07:00
Michael Yang
c5bcf32823
update llama.cpp
2023-08-03 11:50:24 -07:00
Michael Yang
0e79e52ddd
override ggml-metal if the file is different
2023-08-02 12:50:30 -07:00
Michael Yang
74a5f7e698
no gpu for 70B model
2023-08-01 17:12:50 -07:00
Michael Yang
7a1c3e62dc
update llama.cpp
2023-08-01 16:54:01 -07:00