ik_llama.cpp/gguf-py/gguf
Latest commit: 8ba7e2b40c by saood06, 2026-02-03 07:39:45 +02:00
Add support for Seed-OSS (#1218)
* it compiles
* Fix constants.py
File               Last commit                                                                           Date
__init__.py        Merge mainline llama.cpp (#3)                                                         2024-07-27 07:55:01 +02:00
constants.py       Add support for Seed-OSS (#1218)                                                      2026-02-03 07:39:45 +02:00
gguf_reader.py     Make gguf-py stuff work with numpy 2.0 (#991)                                         2025-11-20 10:20:55 +01:00
gguf_writer.py     Make gguf-py stuff work with numpy 2.0 (#991)                                         2025-11-20 10:20:55 +01:00
gguf.py            gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)             2023-11-11 08:04:50 +03:00
lazy.py            Merge mainline - Aug 12 2024 (#17)                                                    2024-08-12 15:14:32 +02:00
metadata.py        Merge mainline - Aug 12 2024 (#17)                                                    2024-08-12 15:14:32 +02:00
py.typed           convert : various script cleanups/fixes + merges and special token handling (#2842)   2023-08-30 11:25:50 +03:00
quants.py          convert_hf_to_gguf.py : conversion from hf weights to Q6_0 (#483)                     2025-06-03 09:30:30 +03:00
tensor_mapping.py  model : Port Minimax M2 from mainline (#907)                                          2025-11-06 18:09:24 +02:00
utility.py         Merge mainline llama.cpp (#3)                                                         2024-07-27 07:55:00 +02:00
vocab.py           Add support for GLM-4.5 models (#668)                                                 2025-08-07 07:55:00 +03:00
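For orientation, the reader and writer modules listed above (gguf_reader.py, gguf_writer.py) operate on the GGUF container format. A minimal, hedged sketch of the fixed-size GGUF file header, using only the standard library rather than the gguf-py API itself: per the GGUF specification the header is a 4-byte magic "GGUF", a little-endian uint32 version, a uint64 tensor count, and a uint64 metadata key/value count. The helper names below (`pack_header`, `parse_header`) are illustrative, not part of this package.

```python
import struct

GGUF_MAGIC = b"GGUF"  # 4-byte magic at offset 0 of every GGUF file

def pack_header(version: int, n_tensors: int, n_kv: int) -> bytes:
    """Pack the fixed-size GGUF header (magic, uint32, uint64, uint64, LE)."""
    return GGUF_MAGIC + struct.pack("<IQQ", version, n_tensors, n_kv)

def parse_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header, validating the magic bytes."""
    magic = data[:4]
    if magic != GGUF_MAGIC:
        raise ValueError(f"not a GGUF file: magic={magic!r}")
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    return {"version": version, "n_tensors": n_tensors, "n_kv": n_kv}

# Round-trip a hypothetical header: version 3, 291 tensors, 24 metadata KVs.
hdr = parse_header(pack_header(3, 291, 24))
print(hdr)  # {'version': 3, 'n_tensors': 291, 'n_kv': 24}
```

The real modules go much further (typed metadata key/value pairs, tensor info records, alignment and quantized tensor data), but every GGUF file begins with exactly this 24-byte header.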