Mirror of https://github.com/ggerganov/llama.cpp (synced 2026-04-04 22:35:40 +02:00)
* split: support in llama_model_loader
* avoid copying the entire vector (Co-authored-by: slaren <slarengh@gmail.com>)
* split: move llama_tensor_offset to llama_model_loader
* llama_model_loader: address PR feedback:
  - use only one gguf_context for the metadata
  - store all ggml_context in a vector, like the files and mappings
  - store all weights in a vector along with the source tensor
  - rename ctx_gguf to meta
  - rename ctx_meta to contexts
* avoid copying the entire vector
* simplify this by making these tensors optional; make some layer-creation tensors optional (Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>)
* handle optional tensors (Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>)
* llama_model_loader: fail if the backend cannot allocate a buffer
* fix mmap buffer management
* llama_model_loader: map the file to a backend buffer only if the allocation succeeds
* llama_model_loader: only map tensors included in the context
* llama_model_loader: minor: use the same variable name for consistency, fix spacing in type casts
* llama_model_loader: fail if any backend buffer cannot be allocated
* spacing (Co-authored-by: slaren <slarengh@gmail.com>)
* fix loop over pointer (Co-authored-by: slaren <slarengh@gmail.com>)
* llama_model_loader: if the declared n_tensors does not match the number of tensors loaded from the splits, throw an exception instead of asserting
* llama_model_loader: ensure the mappings vector has the expected size
* llama_model_loader: use at() instead of operator[] when a lookup should never add to the map
* llama_model_loader: immediately add each backend buffer to the model buffers so it is freed if the next allocation fails; reserve the expected size
* llama_model_loader: ensure the model mappings have enough capacity before allocating the backend buffer
* llama_model_loader: fix map -> unordered map
* llama_split_prefix: use a clearer version: pass the destination's max length, not the split path length (Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>)
* llama : minor (ggml-ci)
* llama : introduce some typedef helpers
* docs: add model shards to hot topics
* llama_model_loader: put the mapping in a unique_ptr from the moment it is allocated (Co-authored-by: slaren <slarengh@gmail.com>)
* fix llama_split_prefix

Co-authored-by: slaren <slarengh@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
Contents of the `examples/` directory:

Directories:

- baby-llama
- batched
- batched-bench
- batched.swift
- beam-search
- benchmark
- convert-llama2c-to-ggml
- embedding
- export-lora
- finetune
- gguf
- gguf-split
- gritlm
- imatrix
- infill
- jeopardy
- llama-bench
- llama.android
- llama.swiftui
- llava
- lookahead
- lookup
- main
- main-cmake-pkg
- parallel
- passkey
- perplexity
- quantize
- quantize-stats
- save-load-state
- server
- simple
- speculative
- sycl
- tokenize
- train-text-from-scratch

Files:

- alpaca.sh
- base-translate.sh
- chat-13B.bat
- chat-13B.sh
- chat-persistent.sh
- chat-vicuna.sh
- chat.sh
- CMakeLists.txt
- gpt4all.sh
- json-schema-pydantic-example.py
- json-schema-to-grammar.py
- llama2-13b.sh
- llama2.sh
- llama.vim
- llm.vim
- make-ggml.py
- Miku.sh
- pydantic_models_to_grammar.py
- pydantic-models-to-grammar-examples.py
- reason-act.sh
- regex-to-grammar.py
- server-embd.py
- server-llama2-13B.sh
- ts-type-to-grammar.sh