Mirror of https://github.com/ggerganov/llama.cpp, synced 2026-03-09 16:49:20 +01:00
This commit updates the causal model card template and removes the `-fa` option, as it is no longer required (flash attention is auto-detected).
14 lines
190 B
Plaintext
---
base_model:
- {base_model}
---

# {model_name} GGUF

Recommended way to run this model:

```sh
llama-server -hf {namespace}/{model_name}-GGUF -c 0
```
Then, access http://localhost:8080
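Besides the web UI, llama-server also exposes an OpenAI-compatible HTTP API on the same port. As a sketch, a chat completion could be requested like this (the prompt text is just an illustrative placeholder):

```sh
# Send a chat request to the running llama-server instance.
# /v1/chat/completions is llama-server's OpenAI-compatible endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Hello!"}
        ]
      }'
```

The response is a JSON object in the OpenAI chat-completion format, so existing OpenAI client libraries can be pointed at this endpoint.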