Mirror of https://github.com/ggerganov/llama.cpp
ggml : add RPC backend

The RPC backend proxies all operations to a remote server which runs a regular backend (CPU, CUDA, Metal, etc.).

* set TCP_NODELAY
* add CI workflows
* Address review comments
* fix warning
* implement llama_max_devices() for RPC
* Address review comments
* Address review comments
* wrap sockfd into a struct
* implement get_alignment and get_max_size
* add get_device_memory
* fix warning
* win32 support
* add README
* readme : trim trailing whitespace
* Address review comments
* win32 fix
* Address review comments
* fix compile warnings on macos
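The "set TCP_NODELAY" item above refers to disabling Nagle's algorithm on the RPC socket, so small RPC messages are sent immediately rather than coalesced. A minimal POSIX sketch of that option (illustrative only, not the actual llama.cpp code; the helper name `set_no_delay` is hypothetical):

```cpp
// Sketch: disable Nagle's algorithm on a connected TCP socket.
// On Windows the same option exists via Winsock with a char* optval.
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

static bool set_no_delay(int sockfd) {
    int flag = 1;
    // TCP_NODELAY sends each write immediately instead of buffering
    // small segments, which lowers per-operation RPC latency.
    return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                      &flag, sizeof(flag)) == 0;
}
```

This trades slightly higher packet overhead for lower latency, which suits a request/response RPC protocol where each proxied operation is a small message.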
| File |
|---|
| base64.hpp |
| build-info.cpp.in |
| CMakeLists.txt |
| common.cpp |
| common.h |
| console.cpp |
| console.h |
| grammar-parser.cpp |
| grammar-parser.h |
| json-schema-to-grammar.cpp |
| json-schema-to-grammar.h |
| json.hpp |
| log.h |
| ngram-cache.cpp |
| ngram-cache.h |
| sampling.cpp |
| sampling.h |
| stb_image.h |
| train.cpp |
| train.h |