latest (4.7GB)
1.5b (986MB)
7b (4.7GB)
1.5b-base (986MB)
1.5b-instruct (986MB)
7b-base (4.7GB)
7b-instruct (4.7GB)
1.5b-base-fp16 (3.1GB)
1.5b-base-q2_K (676MB)
1.5b-base-q3_K_S (761MB)
1.5b-base-q3_K_M (824MB)
1.5b-base-q3_K_L (880MB)
1.5b-base-q4_0 (935MB)
1.5b-base-q4_1 (1.0GB)
1.5b-base-q4_K_S (940MB)
1.5b-base-q4_K_M (986MB)
1.5b-base-q5_0 (1.1GB)
1.5b-base-q5_1 (1.2GB)
1.5b-base-q5_K_S (1.1GB)
1.5b-base-q5_K_M (1.1GB)
1.5b-base-q6_K (1.3GB)
1.5b-base-q8_0 (1.6GB)
1.5b-instruct-fp16 (3.1GB)
1.5b-instruct-q2_K (676MB)
1.5b-instruct-q3_K_S (761MB)
1.5b-instruct-q3_K_M (824MB)
1.5b-instruct-q3_K_L (880MB)
1.5b-instruct-q4_0 (935MB)
1.5b-instruct-q4_1 (1.0GB)
1.5b-instruct-q4_K_S (940MB)
1.5b-instruct-q4_K_M (986MB)
1.5b-instruct-q5_0 (1.1GB)
1.5b-instruct-q5_1 (1.2GB)
1.5b-instruct-q5_K_S (1.1GB)
1.5b-instruct-q5_K_M (1.1GB)
1.5b-instruct-q6_K (1.3GB)
1.5b-instruct-q8_0 (1.6GB)
7b-base-fp16 (15GB)
7b-base-q2_K (3.0GB)
7b-base-q3_K_S (3.5GB)
7b-base-q3_K_M (3.8GB)
7b-base-q3_K_L (4.1GB)
7b-base-q4_0 (4.4GB)
7b-base-q4_1 (4.9GB)
7b-base-q4_K_S (4.5GB)
7b-base-q4_K_M (4.7GB)
7b-base-q5_0 (5.3GB)
7b-base-q5_1 (5.8GB)
7b-base-q5_K_S (5.3GB)
7b-base-q5_K_M (5.4GB)
7b-base-q6_K (6.3GB)
7b-base-q8_0 (8.1GB)
7b-instruct-fp16 (15GB)
7b-instruct-q2_K (3.0GB)
7b-instruct-q3_K_S (3.5GB)
7b-instruct-q3_K_M (3.8GB)
7b-instruct-q3_K_L (4.1GB)
7b-instruct-q4_0 (4.4GB)
7b-instruct-q4_1 (4.9GB)
7b-instruct-q4_K_S (4.5GB)
7b-instruct-q4_K_M (4.7GB)
7b-instruct-q5_0 (5.3GB)
7b-instruct-q5_1 (5.8GB)
7b-instruct-q5_K_S (5.3GB)
7b-instruct-q5_K_M (5.4GB)
7b-instruct-q6_K (6.3GB)
7b-instruct-q8_0 (8.1GB)
qwen2.5-coder
The newest series of code-specific Qwen models features major improvements in code generation, code reasoning, and code fixing, leading to more effective coding solutions.
Category: Tiny
Downloads: 261.6K
Last Updated: 1 month ago
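Each entry above is an Ollama tag for this model; a specific variant is downloaded with the `model:tag` reference form. A minimal sketch of building such a command (an illustration only; actually running it assumes the Ollama CLI is installed):

```python
# Sketch: building an "ollama pull" command for one of the tags listed above.
# Assumes Ollama's "model:tag" reference convention; a bare model name
# resolves to the "latest" tag.
def pull_command(model, tag=None):
    """Return the shell command that downloads the given model tag."""
    ref = f"{model}:{tag}" if tag else model
    return f"ollama pull {ref}"

print(pull_command("qwen2.5-coder", "7b-instruct-q4_K_M"))
# → ollama pull qwen2.5-coder:7b-instruct-q4_K_M
print(pull_command("qwen2.5-coder"))
# → ollama pull qwen2.5-coder
```

The same reference works with `ollama run` to start an interactive session with that specific quantization.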
latest (13GB)
22b (13GB)
preview (13GB)
22b-preview-instruct-fp16 (44GB)
22b-preview-instruct-q2_K (8.2GB)
22b-preview-instruct-q3_K_S (9.6GB)
22b-preview-instruct-q3_K_M (11GB)
22b-preview-instruct-q3_K_L (12GB)
22b-preview-instruct-q4_0 (12GB)
22b-preview-instruct-q4_1 (14GB)
22b-preview-instruct-q4_K_S (13GB)
22b-preview-instruct-q4_K_M (13GB)
22b-preview-instruct-q5_0 (15GB)
22b-preview-instruct-q5_1 (17GB)
22b-preview-instruct-q5_K_S (15GB)
22b-preview-instruct-q5_K_M (16GB)
22b-preview-instruct-q6_K (18GB)
22b-preview-instruct-q8_0 (24GB)
solar-pro
Solar Pro Preview is an advanced large language model (LLM) featuring 22 billion parameters, optimized for deployment on a single GPU.
Category: Language
Downloads: 19K
Last Updated: 1 month ago
latest (4.7GB)
0.5b (398MB)
1.5b (986MB)
3b (1.9GB)
7b (4.7GB)
14b (9.0GB)
32b (20GB)
72b (47GB)
0.5b-base (398MB)
0.5b-instruct (398MB)
1.5b-instruct (986MB)
3b-instruct (1.9GB)
7b-instruct (4.7GB)
14b-instruct (9.0GB)
32b-instruct (20GB)
72b-instruct (47GB)
0.5b-base-q2_K (339MB)
0.5b-base-q3_K_S (338MB)
0.5b-base-q3_K_M (355MB)
0.5b-base-q3_K_L (369MB)
0.5b-base-q4_0 (352MB)
0.5b-base-q4_1 (375MB)
0.5b-base-q4_K_S (385MB)
0.5b-base-q4_K_M (398MB)
0.5b-base-q5_0 (397MB)
0.5b-base-q5_1 (419MB)
0.5b-base-q5_K_S (413MB)
0.5b-base-q8_0 (531MB)
0.5b-instruct-fp16 (994MB)
0.5b-instruct-q2_K (339MB)
0.5b-instruct-q3_K_S (338MB)
0.5b-instruct-q3_K_M (355MB)
0.5b-instruct-q3_K_L (369MB)
0.5b-instruct-q4_0 (352MB)
0.5b-instruct-q4_1 (375MB)
0.5b-instruct-q4_K_S (385MB)
0.5b-instruct-q4_K_M (398MB)
0.5b-instruct-q5_0 (397MB)
0.5b-instruct-q5_1 (419MB)
0.5b-instruct-q5_K_S (413MB)
0.5b-instruct-q5_K_M (420MB)
0.5b-instruct-q6_K (506MB)
0.5b-instruct-q8_0 (531MB)
1.5b-instruct-fp16 (3.1GB)
1.5b-instruct-q2_K (676MB)
1.5b-instruct-q3_K_S (761MB)
1.5b-instruct-q3_K_M (824MB)
1.5b-instruct-q3_K_L (880MB)
1.5b-instruct-q4_0 (935MB)
1.5b-instruct-q4_1 (1.0GB)
1.5b-instruct-q4_K_S (940MB)
1.5b-instruct-q4_K_M (986MB)
1.5b-instruct-q5_0 (1.1GB)
1.5b-instruct-q5_1 (1.2GB)
1.5b-instruct-q5_K_S (1.1GB)
1.5b-instruct-q5_K_M (1.1GB)
1.5b-instruct-q6_K (1.3GB)
1.5b-instruct-q8_0 (1.6GB)
3b-instruct-fp16 (6.2GB)
3b-instruct-q2_K (1.3GB)
3b-instruct-q3_K_S (1.5GB)
3b-instruct-q3_K_M (1.6GB)
3b-instruct-q3_K_L (1.7GB)
3b-instruct-q4_0 (1.8GB)
3b-instruct-q4_1 (2.0GB)
3b-instruct-q4_K_S (1.8GB)
3b-instruct-q4_K_M (1.9GB)
3b-instruct-q5_0 (2.2GB)
3b-instruct-q5_1 (2.3GB)
3b-instruct-q5_K_S (2.2GB)
3b-instruct-q5_K_M (2.2GB)
3b-instruct-q6_K (2.5GB)
3b-instruct-q8_0 (3.3GB)
7b-instruct-fp16 (15GB)
7b-instruct-q2_K (3.0GB)
7b-instruct-q3_K_S (3.5GB)
7b-instruct-q3_K_M (3.8GB)
7b-instruct-q3_K_L (4.1GB)
7b-instruct-q4_0 (4.4GB)
7b-instruct-q4_1 (4.9GB)
7b-instruct-q4_K_S (4.5GB)
7b-instruct-q4_K_M (4.7GB)
7b-instruct-q5_0 (5.3GB)
7b-instruct-q5_1 (5.8GB)
7b-instruct-q5_K_S (5.3GB)
7b-instruct-q5_K_M (5.4GB)
7b-instruct-q6_K (6.3GB)
7b-instruct-q8_0 (8.1GB)
14b-instruct-fp16 (30GB)
14b-instruct-q2_K (5.8GB)
14b-instruct-q3_K_S (6.7GB)
14b-instruct-q3_K_M (7.3GB)
14b-instruct-q3_K_L (7.9GB)
14b-instruct-q4_0 (8.5GB)
14b-instruct-q4_1 (9.4GB)
14b-instruct-q4_K_S (8.6GB)
14b-instruct-q4_K_M (9.0GB)
14b-instruct-q5_0 (10GB)
14b-instruct-q5_1 (11GB)
14b-instruct-q5_K_S (10GB)
14b-instruct-q5_K_M (11GB)
14b-instruct-q6_K (12GB)
14b-instruct-q8_0 (16GB)
32b-instruct-fp16 (66GB)
32b-instruct-q2_K (12GB)
32b-instruct-q3_K_S (14GB)
32b-instruct-q3_K_M (16GB)
32b-instruct-q3_K_L (17GB)
32b-instruct-q4_0 (19GB)
32b-instruct-q4_1 (21GB)
32b-instruct-q4_K_S (19GB)
32b-instruct-q4_K_M (20GB)
32b-instruct-q5_0 (23GB)
32b-instruct-q5_1 (25GB)
32b-instruct-q5_K_S (23GB)
32b-instruct-q5_K_M (23GB)
32b-instruct-q6_K (27GB)
32b-instruct-q8_0 (35GB)
72b-instruct-fp16 (145GB)
72b-instruct-q2_K (30GB)
72b-instruct-q3_K_S (34GB)
72b-instruct-q3_K_M (38GB)
72b-instruct-q3_K_L (40GB)
72b-instruct-q4_0 (41GB)
72b-instruct-q4_1 (46GB)
72b-instruct-q4_K_S (44GB)
72b-instruct-q4_K_M (47GB)
72b-instruct-q5_0 (50GB)
72b-instruct-q5_1 (55GB)
72b-instruct-q5_K_S (51GB)
72b-instruct-q5_K_M (54GB)
72b-instruct-q6_K (64GB)
72b-instruct-q8_0 (77GB)
qwen2.5
The Qwen2.5 models are pretrained on Alibaba's latest large-scale dataset of up to 18 trillion tokens. The models support context lengths of up to 128K tokens and offer multilingual support.
Category: Tiny
Downloads: 1.9M
Last Updated: 1 month ago
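Since every entry in these listings follows the `tag (size)` pattern, the download size can guide which quantization to pull. A minimal sketch (assuming this plain-text format; the helper names are illustrative) that picks the largest tag fitting a size budget:

```python
import re

# Approximate byte values for the MB/GB units used in the listings.
_UNITS = {"MB": 10**6, "GB": 10**9}

def parse_entry(line):
    """Parse a 'tag (size)' listing line into (tag, size_in_bytes)."""
    m = re.match(r"(\S+) \(([\d.]+)(MB|GB)\)", line.strip())
    if not m:
        raise ValueError(f"unrecognized entry: {line!r}")
    tag, num, unit = m.groups()
    return tag, int(float(num) * _UNITS[unit])

def best_fit(entries, budget_bytes):
    """Return the largest tag whose download size fits within the budget."""
    sized = [parse_entry(e) for e in entries]
    candidates = [(size, tag) for tag, size in sized if size <= budget_bytes]
    return max(candidates)[1] if candidates else None

listing = [
    "7b-instruct-q2_K (3.0GB)",
    "7b-instruct-q4_K_M (4.7GB)",
    "7b-instruct-q8_0 (8.1GB)",
]
print(best_fit(listing, 5 * 10**9))  # → 7b-instruct-q4_K_M
```

Download size is only a proxy: memory use at inference time also depends on context length and runtime overhead.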
latest (13GB)
22b (13GB)
22b-instruct-2409-fp16 (44GB)
22b-instruct-2409-q2_K (8.3GB)
22b-instruct-2409-q3_K_S (9.6GB)
22b-instruct-2409-q3_K_M (11GB)
22b-instruct-2409-q3_K_L (12GB)
22b-instruct-2409-q4_0 (13GB)
22b-instruct-2409-q4_1 (14GB)
22b-instruct-2409-q4_K_S (13GB)
22b-instruct-2409-q4_K_M (13GB)
22b-instruct-2409-q5_0 (15GB)
22b-instruct-2409-q5_1 (17GB)
22b-instruct-2409-q5_K_S (15GB)
22b-instruct-2409-q5_K_M (16GB)
22b-instruct-2409-q6_K (18GB)
22b-instruct-2409-q8_0 (24GB)
mistral-small
Mistral Small is a lightweight model designed for cost-effective use in tasks like translation and summarization.
Category: Language
Downloads: 38.1K
Last Updated: 1 month ago
latest (5.5GB)
8b (5.5GB)
8b-2.6-fp16 (16GB)
8b-2.6-q2_K (4.1GB)
8b-2.6-q3_K_S (4.5GB)
8b-2.6-q3_K_M (4.9GB)
8b-2.6-q3_K_L (5.1GB)
8b-2.6-q4_0 (5.5GB)
8b-2.6-q4_1 (5.9GB)
8b-2.6-q4_K_S (5.5GB)
8b-2.6-q4_K_M (5.7GB)
8b-2.6-q5_0 (6.4GB)
8b-2.6-q5_1 (6.8GB)
8b-2.6-q5_K_S (6.4GB)
8b-2.6-q5_K_M (6.5GB)
8b-2.6-q6_K (7.3GB)
8b-2.6-q8_0 (9.1GB)
minicpm-v
A series of multimodal LLMs (MLLMs) designed for vision-language understanding.
Category: Multimodal
Downloads: 35.5K
Last Updated: 2 months ago
latest (133GB)
236b (133GB)
236b-q4_0 (133GB)
236b-q4_1 (148GB)
236b-q5_0 (162GB)
236b-q5_1 (177GB)
236b-q8_0 (251GB)
deepseek-v2.5
An upgraded version of DeepSeek-V2 that integrates the general and coding abilities of both DeepSeek-V2-Chat and DeepSeek-Coder-V2-Instruct.
Category: Language
Downloads: 8,330
Last Updated: 2 months ago
latest (5.0GB)
1.5b (866MB)
9b (5.0GB)
1.5b-base (866MB)
1.5b-chat (866MB)
9b-base (5.0GB)
9b-chat (5.0GB)
1.5b-base-fp16 (3.0GB)
1.5b-base-q2_K (635MB)
1.5b-base-q3_K_S (723MB)
1.5b-base-q3_K_M (786MB)
1.5b-base-q3_K_L (826MB)
1.5b-base-q4_0 (866MB)
1.5b-base-q4_1 (950MB)
1.5b-base-q4_K_S (904MB)
1.5b-base-q4_K_M (964MB)
1.5b-base-q5_0 (1.0GB)
1.5b-base-q5_1 (1.1GB)
1.5b-base-q5_K_S (1.1GB)
1.5b-base-q5_K_M (1.1GB)
1.5b-base-q6_K (1.3GB)
1.5b-base-q8_0 (1.6GB)
1.5b-chat-fp16 (3.0GB)
1.5b-chat-q2_K (635MB)
1.5b-chat-q3_K_S (723MB)
1.5b-chat-q3_K_M (786MB)
1.5b-chat-q3_K_L (826MB)
1.5b-chat-q4_0 (866MB)
1.5b-chat-q4_1 (950MB)
1.5b-chat-q4_K_S (904MB)
1.5b-chat-q4_K_M (964MB)
1.5b-chat-q5_0 (1.0GB)
1.5b-chat-q5_1 (1.1GB)
1.5b-chat-q5_K_S (1.1GB)
1.5b-chat-q5_K_M (1.1GB)
1.5b-chat-q6_K (1.3GB)
1.5b-chat-q8_0 (1.6GB)
9b-base-fp16 (18GB)
9b-base-q2_K (3.4GB)
9b-base-q3_K_S (3.9GB)
9b-base-q3_K_M (4.3GB)
9b-base-q3_K_L (4.7GB)
9b-base-q4_0 (5.0GB)
9b-base-q4_1 (5.6GB)
9b-base-q4_K_S (5.1GB)
9b-base-q4_K_M (5.3GB)
9b-base-q5_0 (6.1GB)
9b-base-q5_1 (6.6GB)
9b-base-q5_K_S (6.1GB)
9b-base-q5_K_M (6.3GB)
9b-base-q6_K (7.2GB)
9b-base-q8_0 (9.4GB)
9b-chat-fp16 (18GB)
9b-chat-q2_K (3.4GB)
9b-chat-q3_K_S (3.9GB)
9b-chat-q3_K_M (4.3GB)
9b-chat-q3_K_L (4.7GB)
9b-chat-q4_0 (5.0GB)
9b-chat-q4_1 (5.6GB)
9b-chat-q4_K_S (5.1GB)
9b-chat-q4_K_M (5.3GB)
9b-chat-q5_0 (6.1GB)
9b-chat-q5_1 (6.6GB)
9b-chat-q5_K_S (6.1GB)
9b-chat-q5_K_M (6.3GB)
9b-chat-q6_K (7.2GB)
9b-chat-q8_0 (9.4GB)
yi-coder
Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion parameters.
Category: Tiny
Downloads: 54.2K
Last Updated: 2 months ago
latest (1.8GB)
135m (271MB)
360m (726MB)
1.7b (1.8GB)
1.7b-instruct-fp16 (3.4GB)
1.7b-instruct-q2_K (675MB)
1.7b-instruct-q3_K_L (933MB)
1.7b-instruct-q3_K_M (860MB)
1.7b-instruct-q3_K_S (777MB)
1.7b-instruct-q4_0 (991MB)
1.7b-instruct-q4_1 (1.1GB)
1.7b-instruct-q4_K_M (1.1GB)
1.7b-instruct-q4_K_S (999MB)
1.7b-instruct-q5_0 (1.2GB)
1.7b-instruct-q5_1 (1.3GB)
1.7b-instruct-q5_K_M (1.2GB)
1.7b-instruct-q5_K_S (1.2GB)
1.7b-instruct-q6_K (1.4GB)
1.7b-instruct-q8_0 (1.8GB)
135m-instruct-fp16 (271MB)
135m-instruct-q2_K (88MB)
135m-instruct-q3_K_L (98MB)
135m-instruct-q3_K_M (94MB)
135m-instruct-q3_K_S (88MB)
135m-instruct-q4_0 (92MB)
135m-instruct-q4_1 (98MB)
135m-instruct-q4_K_M (105MB)
135m-instruct-q4_K_S (102MB)
135m-instruct-q5_0 (105MB)
135m-instruct-q5_1 (112MB)
135m-instruct-q5_K_M (112MB)
135m-instruct-q5_K_S (110MB)
135m-instruct-q6_K (138MB)
135m-instruct-q8_0 (145MB)
360m-instruct-fp16 (726MB)
360m-instruct-q2_K (219MB)
360m-instruct-q3_K_L (246MB)
360m-instruct-q3_K_M (235MB)
360m-instruct-q3_K_S (219MB)
360m-instruct-q4_0 (229MB)
360m-instruct-q4_1 (249MB)
360m-instruct-q4_K_M (271MB)
360m-instruct-q4_K_S (260MB)
360m-instruct-q5_0 (268MB)
360m-instruct-q5_1 (288MB)
360m-instruct-q5_K_M (290MB)
360m-instruct-q5_K_S (283MB)
360m-instruct-q6_K (367MB)
360m-instruct-q8_0 (386MB)
smollm2
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters.
Category: Tiny
Downloads: 5,478
Last Updated: 2 weeks ago
latest (2.7GB)
2b (2.7GB)
8b (5.8GB)
2b-fp16 (5.1GB)
2b-q8_0 (2.7GB)
8b-fp16 (16GB)
8b-q5_K_M (5.8GB)
8b-q5_K_S (5.6GB)
8b-q6_K (6.7GB)
8b-q8_0 (8.7GB)
granite3-guardian
The IBM Granite Guardian 3.0 2B and 8B models are designed to detect risks in prompts and/or responses.
Category: Language
Downloads: 624
Last Updated: 2 weeks ago
latest (5.1GB)
8b (5.1GB)
32b (20GB)
32b-fp16 (65GB)
32b-q2_K (13GB)
32b-q3_K_L (18GB)
32b-q3_K_M (16GB)
32b-q3_K_S (15GB)
32b-q4_0 (19GB)
32b-q4_1 (21GB)
32b-q4_K_M (20GB)
32b-q4_K_S (19GB)
32b-q5_0 (22GB)
32b-q5_1 (24GB)
32b-q5_K_M (23GB)
32b-q5_K_S (22GB)
32b-q6_K (27GB)
32b-q8_0 (34GB)
8b-fp16 (16GB)
8b-q2_K (3.4GB)
8b-q3_K_L (4.5GB)
8b-q3_K_M (4.2GB)
8b-q3_K_S (3.9GB)
8b-q4_0 (4.8GB)
8b-q4_1 (5.2GB)
8b-q4_K_M (5.1GB)
8b-q4_K_S (4.8GB)
8b-q5_0 (5.7GB)
8b-q5_1 (6.1GB)
8b-q5_K_M (5.8GB)
8b-q5_K_S (5.7GB)
8b-q6_K (6.6GB)
8b-q8_0 (8.5GB)
aya-expanse
Cohere For AI's language models trained to perform well across 23 different languages.
Category: Language
Downloads: 8,349
Last Updated: 3 weeks ago
latest (2.1GB)
1b (822MB)
3b (2.1GB)
1b-instruct-fp16 (2.7GB)
1b-instruct-q2_K (512MB)
1b-instruct-q3_K_L (711MB)
1b-instruct-q3_K_M (659MB)
1b-instruct-q3_K_S (598MB)
1b-instruct-q4_0 (768MB)
1b-instruct-q4_1 (849MB)
1b-instruct-q4_K_M (822MB)
1b-instruct-q4_K_S (775MB)
1b-instruct-q5_0 (929MB)
1b-instruct-q5_1 (1.0GB)
1b-instruct-q5_K_M (956MB)
1b-instruct-q5_K_S (929MB)
1b-instruct-q6_K (1.1GB)
1b-instruct-q8_0 (1.4GB)
3b-instruct-fp16 (6.8GB)
3b-instruct-q2_K (1.3GB)
3b-instruct-q3_K_L (1.8GB)
3b-instruct-q3_K_M (1.6GB)
3b-instruct-q3_K_S (1.5GB)
3b-instruct-q4_0 (1.9GB)
3b-instruct-q4_1 (2.1GB)
3b-instruct-q4_K_M (2.1GB)
3b-instruct-q4_K_S (1.9GB)
3b-instruct-q5_0 (2.3GB)
3b-instruct-q5_1 (2.5GB)
3b-instruct-q5_K_M (2.4GB)
3b-instruct-q5_K_S (2.3GB)
3b-instruct-q6_K (2.8GB)
3b-instruct-q8_0 (3.6GB)
granite3-moe
The IBM Granite 1B and 3B models are the first mixture-of-experts (MoE) Granite models from IBM, designed for low-latency use.
Category: Tiny
Downloads: 9,752
Last Updated: 4 weeks ago
latest (1.6GB)
2b (1.6GB)
8b (4.9GB)
2b-instruct-fp16 (5.3GB)
2b-instruct-q2_K (1.0GB)
2b-instruct-q3_K_L (1.4GB)
2b-instruct-q3_K_M (1.3GB)
2b-instruct-q3_K_S (1.2GB)
2b-instruct-q4_0 (1.5GB)
2b-instruct-q4_1 (1.7GB)
2b-instruct-q4_K_M (1.6GB)
2b-instruct-q4_K_S (1.5GB)
2b-instruct-q5_0 (1.8GB)
2b-instruct-q5_1 (2.0GB)
2b-instruct-q5_K_M (1.9GB)
2b-instruct-q5_K_S (1.8GB)
2b-instruct-q6_K (2.2GB)
2b-instruct-q8_0 (2.8GB)
8b-instruct-fp16 (16GB)
8b-instruct-q2_K (3.1GB)
8b-instruct-q3_K_L (4.3GB)
8b-instruct-q3_K_M (4.0GB)
8b-instruct-q3_K_S (3.6GB)
8b-instruct-q4_0 (4.7GB)
8b-instruct-q4_1 (5.1GB)
8b-instruct-q4_K_M (4.9GB)
8b-instruct-q4_K_S (4.7GB)
8b-instruct-q5_0 (5.6GB)
8b-instruct-q5_1 (6.1GB)
8b-instruct-q5_K_M (5.8GB)
8b-instruct-q5_K_S (5.6GB)
8b-instruct-q6_K (6.7GB)
8b-instruct-q8_0 (8.7GB)
granite3-dense
The IBM Granite 2B and 8B models are designed to support tool-based use cases and retrieval-augmented generation (RAG), streamlining code generation, translation, and bug fixing.
Category: Language
Downloads: 14.7K
Last Updated: 4 weeks ago
latest (43GB)
70b (43GB)
70b-instruct-fp16 (141GB)
70b-instruct-q2_K (26GB)
70b-instruct-q3_K_L (37GB)
70b-instruct-q3_K_M (34GB)
70b-instruct-q3_K_S (31GB)
70b-instruct-q4_0 (40GB)
70b-instruct-q4_1 (44GB)
70b-instruct-q4_K_M (43GB)
70b-instruct-q4_K_S (40GB)
70b-instruct-q5_0 (49GB)
70b-instruct-q5_1 (53GB)
70b-instruct-q5_K_M (50GB)
70b-instruct-q5_K_S (49GB)
70b-instruct-q6_K (58GB)
70b-instruct-q8_0 (75GB)
nemotron
Llama-3.1-Nemotron-70B-Instruct is a large language model customized by NVIDIA to improve the helpfulness of LLM generated responses to user queries.
Category: Language
Downloads: 26K
Last Updated: 4 weeks ago
latest (5.8GB)
2b (1.7GB)
9b (5.8GB)
27b (17GB)
27b-fp16 (54GB)
27b-q2_K (10GB)
27b-q3_K_L (15GB)
27b-q3_K_M (13GB)
27b-q3_K_S (12GB)
27b-q4_0 (16GB)
27b-q4_1 (17GB)
27b-q4_K_M (17GB)
27b-q4_K_S (16GB)
27b-q5_0 (19GB)
27b-q5_1 (21GB)
27b-q5_K_M (19GB)
27b-q5_K_S (19GB)
27b-q6_K (22GB)
27b-q8_0 (29GB)
2b-fp16 (5.2GB)
2b-q2_K (1.2GB)
2b-q3_K_L (1.6GB)
2b-q3_K_M (1.5GB)
2b-q3_K_S (1.4GB)
2b-q4_0 (1.6GB)
2b-q4_1 (1.8GB)
2b-q4_K_M (1.7GB)
2b-q4_K_S (1.6GB)
2b-q5_0 (1.9GB)
2b-q5_1 (2.0GB)
2b-q5_K_M (1.9GB)
2b-q5_K_S (1.9GB)
2b-q6_K (2.2GB)
2b-q8_0 (2.8GB)
9b-fp16 (18GB)
9b-q2_K (3.8GB)
9b-q3_K_L (5.1GB)
9b-q3_K_M (4.8GB)
9b-q3_K_S (4.3GB)
9b-q4_0 (5.4GB)
9b-q4_1 (6.0GB)
9b-q4_K_M (5.8GB)
9b-q4_K_S (5.5GB)
9b-q5_0 (6.5GB)
9b-q5_1 (7.0GB)
9b-q5_K_M (6.6GB)
9b-q5_K_S (6.5GB)
9b-q6_K (7.6GB)
9b-q8_0 (9.8GB)
shieldgemma
ShieldGemma is a set of instruction-tuned models for evaluating the safety of text prompt inputs and text output responses against a set of defined safety policies.
Category: Language
Downloads: 7,019
Last Updated: 1 month ago
latest (4.9GB)
1b (1.6GB)
8b (4.9GB)
1b-fp16 (3.0GB)
1b-q2_K (667MB)
1b-q3_K_L (845MB)
1b-q3_K_M (804MB)
1b-q3_K_S (755MB)
1b-q4_0 (919MB)
1b-q4_1 (996MB)
1b-q4_K_M (955MB)
1b-q4_K_S (923MB)
1b-q5_0 (1.1GB)
1b-q5_1 (1.2GB)
1b-q5_K_M (1.1GB)
1b-q5_K_S (1.1GB)
1b-q6_K (1.2GB)
1b-q8_0 (1.6GB)
8b-fp16 (16GB)
8b-q2_K (3.2GB)
8b-q3_K_L (4.3GB)
8b-q3_K_M (4.0GB)
8b-q3_K_S (3.7GB)
8b-q4_0 (4.7GB)
8b-q4_1 (5.1GB)
8b-q4_K_M (4.9GB)
8b-q4_K_S (4.7GB)
8b-q5_0 (5.6GB)
8b-q5_1 (6.1GB)
8b-q5_K_M (5.7GB)
8b-q5_K_S (5.6GB)
8b-q6_K (6.6GB)
8b-q8_0 (8.5GB)
llama-guard3
Llama Guard 3 is a series of models fine-tuned for content safety classification of LLM inputs and responses.
Category: Tiny
Downloads: 6,318
Last Updated: 1 month ago
latest (2.0GB)
1b (1.3GB)
3b (2.0GB)
1b-instruct-fp16 (2.5GB)
1b-instruct-q2_K (581MB)
1b-instruct-q3_K_L (733MB)
1b-instruct-q3_K_M (691MB)
1b-instruct-q3_K_S (642MB)
1b-instruct-q4_0 (771MB)
1b-instruct-q4_1 (832MB)
1b-instruct-q4_K_M (808MB)
1b-instruct-q4_K_S (776MB)
1b-instruct-q5_0 (893MB)
1b-instruct-q5_1 (953MB)
1b-instruct-q5_K_M (912MB)
1b-instruct-q5_K_S (893MB)
1b-instruct-q6_K (1.0GB)
1b-instruct-q8_0 (1.3GB)
1b-text-fp16 (2.5GB)
1b-text-q2_K (581MB)
1b-text-q3_K_L (733MB)
1b-text-q3_K_M (691MB)
1b-text-q3_K_S (642MB)
1b-text-q4_0 (771MB)
1b-text-q4_1 (832MB)
1b-text-q4_K_M (808MB)
1b-text-q4_K_S (776MB)
1b-text-q5_0 (893MB)
1b-text-q5_1 (953MB)
1b-text-q5_K_M (912MB)
1b-text-q5_K_S (893MB)
1b-text-q6_K (1.0GB)
1b-text-q8_0 (1.3GB)
3b-instruct-fp16 (6.4GB)
3b-instruct-q2_K (1.4GB)
3b-instruct-q3_K_L (1.8GB)
3b-instruct-q3_K_M (1.7GB)
3b-instruct-q3_K_S (1.5GB)
3b-instruct-q4_0 (1.9GB)
3b-instruct-q4_1 (2.1GB)
3b-instruct-q4_K_M (2.0GB)
3b-instruct-q4_K_S (1.9GB)
3b-instruct-q5_0 (2.3GB)
3b-instruct-q5_1 (2.4GB)
3b-instruct-q5_K_M (2.3GB)
3b-instruct-q5_K_S (2.3GB)
3b-instruct-q6_K (2.6GB)
3b-instruct-q8_0 (3.4GB)
3b-text-fp16 (6.4GB)
3b-text-q2_K (1.4GB)
3b-text-q3_K_L (1.8GB)
3b-text-q3_K_M (1.7GB)
3b-text-q3_K_S (1.5GB)
3b-text-q4_0 (1.9GB)
3b-text-q4_1 (2.1GB)
3b-text-q4_K_M (2.0GB)
3b-text-q4_K_S (1.9GB)
3b-text-q5_0 (2.3GB)
3b-text-q5_1 (2.4GB)
3b-text-q5_K_M (2.3GB)
3b-text-q5_K_S (2.3GB)
3b-text-q6_K (2.6GB)
3b-text-q8_0 (3.4GB)
llama3.2
Meta's Llama 3.2 goes small with 1B and 3B models.
Category: Tiny
Downloads: 2.3M
Last Updated: 1 month ago
latest (2.2GB)
3.8b (2.2GB)
3.8b-mini-instruct-q2_K (1.4GB)
3.8b-mini-instruct-q4_0 (2.2GB)
3.8b-mini-instruct-q4_1 (2.4GB)
3.8b-mini-instruct-q5_0 (2.6GB)
3.8b-mini-instruct-q5_1 (2.9GB)
3.8b-mini-instruct-q8_0 (4.1GB)
phi3.5
A lightweight AI model of 3.8 billion parameters that surpasses the performance of similarly sized and larger models.
Category: Language
Downloads: 159.1K
Last Updated: 2 months ago
latest (991MB)
135m (92MB)
360m (229MB)
1.7b (991MB)
135m-base-v0.2-fp16 (271MB)
135m-base-v0.2-q2_K (88MB)
135m-base-v0.2-q3_K_S (88MB)
135m-base-v0.2-q3_K_M (94MB)
135m-base-v0.2-q3_K_L (98MB)
135m-base-v0.2-q4_0 (92MB)
135m-base-v0.2-q4_1 (98MB)
135m-base-v0.2-q4_K_S (102MB)
135m-base-v0.2-q4_K_M (105MB)
135m-base-v0.2-q5_0 (105MB)
135m-base-v0.2-q5_1 (112MB)
135m-base-v0.2-q5_K_S (110MB)
135m-base-v0.2-q5_K_M (112MB)
135m-base-v0.2-q6_K (138MB)
135m-base-v0.2-q8_0 (145MB)
135m-instruct-v0.2-fp16 (271MB)
135m-instruct-v0.2-q2_K (88MB)
135m-instruct-v0.2-q3_K_S (88MB)
135m-instruct-v0.2-q3_K_M (94MB)
135m-instruct-v0.2-q3_K_L (98MB)
135m-instruct-v0.2-q4_0 (92MB)
135m-instruct-v0.2-q4_1 (98MB)
135m-instruct-v0.2-q4_K_S (102MB)
135m-instruct-v0.2-q4_K_M (105MB)
135m-instruct-v0.2-q5_0 (105MB)
135m-instruct-v0.2-q5_1 (112MB)
135m-instruct-v0.2-q5_K_S (110MB)
135m-instruct-v0.2-q5_K_M (112MB)
135m-instruct-v0.2-q6_K (138MB)
135m-instruct-v0.2-q8_0 (145MB)
360m-base-v0.2-fp16 (726MB)
360m-base-v0.2-q2_K (219MB)
360m-base-v0.2-q3_K_S (219MB)
360m-base-v0.2-q3_K_M (235MB)
360m-base-v0.2-q3_K_L (246MB)
360m-base-v0.2-q4_0 (229MB)
360m-base-v0.2-q4_1 (249MB)
360m-base-v0.2-q4_K_S (260MB)
360m-base-v0.2-q4_K_M (271MB)
360m-base-v0.2-q5_0 (268MB)
360m-base-v0.2-q5_1 (288MB)
360m-base-v0.2-q5_K_S (283MB)
360m-base-v0.2-q5_K_M (290MB)
360m-base-v0.2-q6_K (367MB)
360m-base-v0.2-q8_0 (386MB)
360m-instruct-v0.2-fp16 (726MB)
360m-instruct-v0.2-q2_K (219MB)
360m-instruct-v0.2-q3_K_S (219MB)
360m-instruct-v0.2-q3_K_M (235MB)
360m-instruct-v0.2-q3_K_L (246MB)
360m-instruct-v0.2-q4_0 (229MB)
360m-instruct-v0.2-q4_1 (249MB)
360m-instruct-v0.2-q4_K_S (260MB)
360m-instruct-v0.2-q4_K_M (271MB)
360m-instruct-v0.2-q5_0 (268MB)
360m-instruct-v0.2-q5_1 (288MB)
360m-instruct-v0.2-q5_K_S (283MB)
360m-instruct-v0.2-q5_K_M (290MB)
360m-instruct-v0.2-q6_K (367MB)
360m-instruct-v0.2-q8_0 (386MB)
1.7b-base-v0.2-fp16 (3.4GB)
1.7b-base-v0.2-q2_K (675MB)
1.7b-base-v0.2-q3_K_S (777MB)
1.7b-base-v0.2-q3_K_M (860MB)
1.7b-base-v0.2-q3_K_L (933MB)
1.7b-base-v0.2-q4_0 (991MB)
1.7b-base-v0.2-q4_1 (1.1GB)
1.7b-base-v0.2-q4_K_S (999MB)
1.7b-base-v0.2-q4_K_M (1.1GB)
1.7b-base-v0.2-q5_0 (1.2GB)
1.7b-base-v0.2-q5_1 (1.3GB)
1.7b-base-v0.2-q5_K_S (1.2GB)
1.7b-base-v0.2-q5_K_M (1.2GB)
1.7b-base-v0.2-q6_K (1.4GB)
1.7b-base-v0.2-q8_0 (1.8GB)
1.7b-instruct-v0.2-fp16 (3.4GB)
1.7b-instruct-v0.2-q2_K (675MB)
1.7b-instruct-v0.2-q3_K_S (777MB)
1.7b-instruct-v0.2-q3_K_M (860MB)
1.7b-instruct-v0.2-q3_K_L (933MB)
1.7b-instruct-v0.2-q4_0 (991MB)
1.7b-instruct-v0.2-q4_1 (1.1GB)
1.7b-instruct-v0.2-q4_K_S (999MB)
1.7b-instruct-v0.2-q4_K_M (1.1GB)
1.7b-instruct-v0.2-q5_0 (1.2GB)
1.7b-instruct-v0.2-q5_1 (1.3GB)
1.7b-instruct-v0.2-q5_K_S (1.2GB)
1.7b-instruct-v0.2-q5_K_M (1.2GB)
1.7b-instruct-v0.2-q6_K (1.4GB)
1.7b-instruct-v0.2-q8_0 (1.8GB)
smollm
Category: Tiny
Downloads: 77.6K
Last Updated: 2 months ago
latest (7.1GB)
12b (7.1GB)
12b-instruct-2407-fp16 (25GB)
12b-instruct-2407-q2_K (4.8GB)
12b-instruct-2407-q3_K_L (6.6GB)
12b-instruct-2407-q3_K_M (6.1GB)
12b-instruct-2407-q3_K_S (5.5GB)
12b-instruct-2407-q4_0 (7.1GB)
12b-instruct-2407-q4_1 (7.8GB)
12b-instruct-2407-q4_K_M (7.5GB)
12b-instruct-2407-q4_K_S (7.1GB)
12b-instruct-2407-q5_0 (8.5GB)
12b-instruct-2407-q5_1 (9.2GB)
12b-instruct-2407-q5_K_M (8.7GB)
12b-instruct-2407-q5_K_S (8.5GB)
12b-instruct-2407-q6_K (10GB)
12b-instruct-2407-q8_0 (13GB)
mistral-nemo
A cutting-edge 12B model featuring a 128k context length, developed by Mistral AI in partnership with NVIDIA.
Category: Language
Downloads: 468.4K
Last Updated: 3 months ago
latest (4.7GB)
405b (231GB)
70b (40GB)
8b (4.7GB)
70b-instruct-fp16 (141GB)
70b-instruct-q2_K (26GB)
70b-instruct-q3_K_L (37GB)
70b-instruct-q3_K_M (34GB)
70b-instruct-q3_K_S (31GB)
70b-instruct-q4_0 (40GB)
70b-instruct-q4_1 (44GB)
70b-instruct-q4_K_M (43GB)
70b-instruct-q4_K_S (40GB)
70b-instruct-q5_0 (49GB)
70b-instruct-q5_1 (53GB)
70b-instruct-q5_K_M (50GB)
70b-instruct-q5_K_S (49GB)
70b-instruct-q6_K (58GB)
70b-instruct-q8_0 (75GB)
8b-instruct-fp16 (16GB)
8b-instruct-q2_K (3.2GB)
8b-instruct-q3_K_L (4.3GB)
8b-instruct-q3_K_M (4.0GB)
8b-instruct-q3_K_S (3.7GB)
8b-instruct-q4_0 (4.7GB)
8b-instruct-q4_1 (5.1GB)
8b-instruct-q4_K_M (4.9GB)
8b-instruct-q4_K_S (4.7GB)
8b-instruct-q5_0 (5.6GB)
8b-instruct-q5_1 (6.1GB)
8b-instruct-q5_K_M (5.7GB)
8b-instruct-q5_K_S (5.6GB)
8b-instruct-q6_K (6.6GB)
8b-instruct-q8_0 (8.5GB)
llama3.1
Meta has released Llama 3.1, a cutting-edge model offered in parameter sizes of 8B, 70B, and 405B.
Category: Language
Downloads: 8.2M
Last Updated: 2 months ago
latest (69GB)
123b (69GB)
123b-instruct-2407-fp16 (245GB)
123b-instruct-2407-q2_K (45GB)
123b-instruct-2407-q3_K_L (65GB)
123b-instruct-2407-q3_K_M (59GB)
123b-instruct-2407-q3_K_S (53GB)
123b-instruct-2407-q4_0 (69GB)
123b-instruct-2407-q4_1 (77GB)
123b-instruct-2407-q4_K_M (73GB)
123b-instruct-2407-q4_K_S (70GB)
123b-instruct-2407-q5_0 (84GB)
123b-instruct-2407-q5_1 (92GB)
123b-instruct-2407-q5_K_M (86GB)
123b-instruct-2407-q5_K_S (84GB)
123b-instruct-2407-q6_K (101GB)
123b-instruct-2407-q8_0 (130GB)
mistral-large
Mistral Large 2 is the latest flagship model from Mistral, offering significantly improved performance in code generation, mathematics, and reasoning compared to previous models, along with a 128k context window and support for numerous languages.
Category: Language
Downloads: 97.9K
Last Updated: 3 months ago
latest (4.4GB)
1.5b (935MB)
7b (4.4GB)
72b (41GB)
1.5b-instruct (935MB)
7b-instruct (4.4GB)
72b-instruct (41GB)
1.5b-instruct-fp16 (3.1GB)
1.5b-instruct-q2_K (676MB)
1.5b-instruct-q3_K_S (761MB)
1.5b-instruct-q3_K_M (824MB)
1.5b-instruct-q3_K_L (880MB)
1.5b-instruct-q4_0 (935MB)
1.5b-instruct-q4_1 (1.0GB)
1.5b-instruct-q4_K_S (940MB)
1.5b-instruct-q4_K_M (986MB)
1.5b-instruct-q5_0 (1.1GB)
1.5b-instruct-q5_1 (1.2GB)
1.5b-instruct-q5_K_S (1.1GB)
1.5b-instruct-q5_K_M (1.1GB)
1.5b-instruct-q6_K (1.3GB)
1.5b-instruct-q8_0 (1.6GB)
7b-instruct-fp16 (15GB)
7b-instruct-q2_K (3.0GB)
7b-instruct-q3_K_S (3.5GB)
7b-instruct-q3_K_M (3.8GB)
7b-instruct-q3_K_L (4.1GB)
7b-instruct-q4_0 (4.4GB)
7b-instruct-q4_1 (4.9GB)
7b-instruct-q4_K_S (4.5GB)
7b-instruct-q4_K_M (4.7GB)
7b-instruct-q5_0 (5.3GB)
7b-instruct-q5_1 (5.8GB)
7b-instruct-q5_K_S (5.3GB)
7b-instruct-q5_K_M (5.4GB)
7b-instruct-q6_K (6.3GB)
7b-instruct-q8_0 (8.1GB)
72b-instruct-fp16 (145GB)
72b-instruct-q2_K (30GB)
72b-instruct-q3_K_S (34GB)
72b-instruct-q3_K_M (38GB)
72b-instruct-q3_K_L (40GB)
72b-instruct-q4_0 (41GB)
72b-instruct-q4_1 (46GB)
72b-instruct-q4_K_S (44GB)
72b-instruct-q4_K_M (47GB)
72b-instruct-q5_0 (50GB)
72b-instruct-q5_1 (55GB)
72b-instruct-q5_K_S (51GB)
72b-instruct-q5_K_M (54GB)
72b-instruct-q6_K (64GB)
72b-instruct-q8_0 (77GB)
qwen2-math
Qwen2 Math is a collection of specialized math language models built on the Qwen2 LLMs. These models greatly surpass the mathematical performance of open-source models and even some closed-source models, such as GPT-4o.
Category: Tiny
Downloads: 97.3K
Last Updated: 2 months ago
latest (4.7GB)
8b (4.7GB)
70b (40GB)
405b (229GB)
8b-llama3.1-fp16 (16GB)
8b-llama3.1-q2_K (3.2GB)
8b-llama3.1-q3_K_S (3.7GB)
8b-llama3.1-q3_K_M (4.0GB)
8b-llama3.1-q3_K_L (4.3GB)
8b-llama3.1-q4_0 (4.7GB)
8b-llama3.1-q4_1 (5.1GB)
8b-llama3.1-q4_K_S (4.7GB)
8b-llama3.1-q4_K_M (4.9GB)
8b-llama3.1-q5_0 (5.6GB)
8b-llama3.1-q5_1 (6.1GB)
8b-llama3.1-q5_K_S (5.6GB)
8b-llama3.1-q5_K_M (5.7GB)
8b-llama3.1-q6_K (6.6GB)
8b-llama3.1-q8_0 (8.5GB)
70b-llama3.1-fp16 (141GB)
70b-llama3.1-q2_K (26GB)
70b-llama3.1-q3_K_S (31GB)
70b-llama3.1-q3_K_M (34GB)
70b-llama3.1-q3_K_L (37GB)
70b-llama3.1-q4_0 (40GB)
70b-llama3.1-q4_1 (44GB)
70b-llama3.1-q4_K_S (40GB)
70b-llama3.1-q4_K_M (43GB)
70b-llama3.1-q5_0 (49GB)
70b-llama3.1-q5_1 (53GB)
70b-llama3.1-q5_K_S (49GB)
70b-llama3.1-q5_K_M (50GB)
70b-llama3.1-q6_K (58GB)
70b-llama3.1-q8_0 (75GB)
405b-llama3.1-fp16 (812GB)
405b-llama3.1-q2_K (149GB)
405b-llama3.1-q3_K_S (175GB)
405b-llama3.1-q3_K_M (195GB)
405b-llama3.1-q3_K_L (213GB)
405b-llama3.1-q4_0 (229GB)
405b-llama3.1-q4_1 (254GB)
405b-llama3.1-q4_K_S (231GB)
405b-llama3.1-q4_K_M (243GB)
405b-llama3.1-q5_0 (279GB)
405b-llama3.1-q5_1 (305GB)
405b-llama3.1-q5_K_S (279GB)
405b-llama3.1-q5_K_M (287GB)
405b-llama3.1-q6_K (333GB)
405b-llama3.1-q8_0 (431GB)
hermes3
Hermes 3 is the newest iteration of Nous Research's flagship series of LLMs. It continues the legacy of the Hermes series.
Category: Language
Downloads: 53.1K
Last Updated: 2 months ago
latest (5.4GB)
27b (16GB)
9b (5.4GB)
2b (1.6GB)
27b-instruct-fp16 (54GB)
27b-instruct-q2_K (10GB)
27b-instruct-q3_K_L (15GB)
27b-instruct-q3_K_M (13GB)
27b-instruct-q3_K_S (12GB)
27b-instruct-q4_0 (16GB)
27b-instruct-q4_1 (17GB)
27b-instruct-q4_K_M (17GB)
27b-instruct-q4_K_S (16GB)
27b-instruct-q5_0 (19GB)
27b-instruct-q5_1 (21GB)
27b-instruct-q5_K_M (19GB)
27b-instruct-q5_K_S (19GB)
27b-instruct-q6_K (22GB)
27b-instruct-q8_0 (29GB)
27b-text-fp16 (54GB)
27b-text-q2_K (10GB)
27b-text-q3_K_L (15GB)
27b-text-q3_K_M (13GB)
27b-text-q3_K_S (12GB)
27b-text-q4_0 (16GB)
27b-text-q4_1 (17GB)
27b-text-q4_K_M (17GB)
27b-text-q4_K_S (16GB)
27b-text-q5_0 (19GB)
27b-text-q5_1 (21GB)
27b-text-q5_K_M (19GB)
27b-text-q5_K_S (19GB)
27b-text-q6_K (22GB)
27b-text-q8_0 (29GB)
9b-instruct-fp16 (18GB)
9b-instruct-q2_K (3.8GB)
9b-instruct-q3_K_L (5.1GB)
9b-instruct-q3_K_M (4.8GB)
9b-instruct-q3_K_S (4.3GB)
9b-instruct-q4_0 (5.4GB)
9b-instruct-q4_1 (6.0GB)
9b-instruct-q4_K_M (5.8GB)
9b-instruct-q4_K_S (5.5GB)
9b-instruct-q5_0 (6.5GB)
9b-instruct-q5_1 (7.0GB)
9b-instruct-q5_K_M (6.6GB)
9b-instruct-q5_K_S (6.5GB)
9b-instruct-q6_K (7.6GB)
9b-instruct-q8_0 (9.8GB)
9b-text-fp16 (18GB)
9b-text-q2_K (3.8GB)
9b-text-q3_K_L (5.1GB)
9b-text-q3_K_M (4.8GB)
9b-text-q3_K_S (4.3GB)
9b-text-q4_0 (5.4GB)
9b-text-q4_1 (6.0GB)
9b-text-q4_K_M (5.8GB)
9b-text-q4_K_S (5.5GB)
9b-text-q5_0 (6.5GB)
9b-text-q5_1 (7.0GB)
9b-text-q5_K_M (6.6GB)
9b-text-q5_K_S (6.5GB)
9b-text-q6_K (7.6GB)
9b-text-q8_0 (9.8GB)
2b-instruct-fp16 (5.2GB)
2b-instruct-q2_K (1.2GB)
2b-instruct-q3_K_L (1.6GB)
2b-instruct-q3_K_M (1.5GB)
2b-instruct-q3_K_S (1.4GB)
2b-instruct-q4_0 (1.6GB)
2b-instruct-q4_1 (1.8GB)
2b-instruct-q4_K_M (1.7GB)
2b-instruct-q4_K_S (1.6GB)
2b-instruct-q5_0 (1.9GB)
2b-instruct-q5_1 (2.0GB)
2b-instruct-q5_K_M (1.9GB)
2b-instruct-q5_K_S (1.9GB)
2b-instruct-q6_K (2.2GB)
2b-instruct-q8_0 (2.8GB)
2b-text-fp16 (5.2GB)
2b-text-q2_K (1.2GB)
2b-text-q3_K_L (1.6GB)
2b-text-q3_K_M (1.5GB)
2b-text-q3_K_S (1.4GB)
2b-text-q4_0 (1.6GB)
2b-text-q4_1 (1.8GB)
2b-text-q4_K_M (1.7GB)
2b-text-q4_K_S (1.6GB)
2b-text-q5_0 (1.9GB)
2b-text-q5_1 (2.0GB)
2b-text-q5_K_M (1.9GB)
2b-text-q5_K_S (1.9GB)
2b-text-q6_K (2.2GB)
2b-text-q8_0 (2.8GB)
gemma2
Google Gemma 2 is now offered in three sizes: 2B, 9B and 27B.
Category: Language
Downloads: 1.8M
Last Updated: 3 months ago
latest (4.7GB)
70b (40GB)
8b (4.7GB)
instruct (4.7GB)
text (4.7GB)
70b-instruct (40GB)
70b-instruct-fp16 (141GB)
70b-instruct-q2_K (26GB)
70b-instruct-q3_K_L (37GB)
70b-instruct-q3_K_M (34GB)
70b-instruct-q3_K_S (31GB)
70b-instruct-q4_0 (40GB)
70b-instruct-q4_1 (44GB)
70b-instruct-q4_K_M (43GB)
70b-instruct-q4_K_S (40GB)
70b-instruct-q5_0 (49GB)
70b-instruct-q5_1 (53GB)
70b-instruct-q5_K_M (50GB)
70b-instruct-q5_K_S (49GB)
70b-instruct-q6_K (58GB)
70b-instruct-q8_0 (75GB)
70b-text (40GB)
70b-text-fp16 (141GB)
70b-text-q2_K (26GB)
70b-text-q3_K_L (37GB)
70b-text-q3_K_M (34GB)
70b-text-q3_K_S (31GB)
70b-text-q4_0 (40GB)
70b-text-q4_1 (44GB)
70b-text-q4_K_M (43GB)
70b-text-q4_K_S (40GB)
70b-text-q5_0 (49GB)
70b-text-q5_1 (53GB)
70b-text-q5_K_M (50GB)
70b-text-q5_K_S (49GB)
70b-text-q6_K (58GB)
70b-text-q8_0 (75GB)
8b-instruct-fp16 (16GB)
8b-instruct-q2_K (3.2GB)
8b-instruct-q3_K_L (4.3GB)
8b-instruct-q3_K_M (4.0GB)
8b-instruct-q3_K_S (3.7GB)
8b-instruct-q4_0 (4.7GB)
8b-instruct-q4_1 (5.1GB)
8b-instruct-q4_K_M (4.9GB)
8b-instruct-q4_K_S (4.7GB)
8b-instruct-q5_0 (5.6GB)
8b-instruct-q5_1 (6.1GB)
8b-instruct-q5_K_M (5.7GB)
8b-instruct-q5_K_S (5.6GB)
8b-instruct-q6_K (6.6GB)
8b-instruct-q8_0 (8.5GB)
8b-text (4.7GB)
8b-text-fp16 (16GB)
8b-text-q2_K (3.2GB)
8b-text-q3_K_L (4.3GB)
8b-text-q3_K_M (4.0GB)
8b-text-q3_K_S (3.7GB)
8b-text-q4_0 (4.7GB)
8b-text-q4_1 (5.1GB)
8b-text-q4_K_M (4.9GB)
8b-text-q4_K_S (4.7GB)
8b-text-q5_0 (5.6GB)
8b-text-q5_1 (6.1GB)
8b-text-q5_K_M (5.7GB)
8b-text-q5_K_S (5.6GB)
8b-text-q6_K (6.6GB)
8b-text-q8_0 (8.5GB)
llama3
Meta Llama 3 is the most advanced openly accessible language model available so far. It stands out as the leading choice among open-source LLMs.
Category: Language
Downloads: 6.6M
Last Updated: 5 months ago
latest (4.4GB)
72b (41GB)
7b (4.4GB)
1.5b (935MB)
0.5b (352MB)
72b-instruct (41GB)
72b-instruct-fp16 (145GB)
72b-instruct-q2_K (30GB)
72b-instruct-q3_K_L (40GB)
72b-instruct-q3_K_M (38GB)
72b-instruct-q3_K_S (34GB)
72b-instruct-q4_0 (41GB)
72b-instruct-q4_1 (46GB)
72b-instruct-q4_K_M (47GB)
72b-instruct-q4_K_S (44GB)
72b-instruct-q5_0 (50GB)
72b-instruct-q5_1 (55GB)
72b-instruct-q5_K_M (54GB)
72b-instruct-q5_K_S (51GB)
72b-instruct-q6_K (64GB)
72b-instruct-q8_0 (77GB)
72b-text (41GB)
72b-text-fp16 (145GB)
72b-text-q2_K (30GB)
72b-text-q3_K_L (40GB)
72b-text-q3_K_M (38GB)
72b-text-q3_K_S (34GB)
72b-text-q4_0 (41GB)
72b-text-q4_1 (46GB)
72b-text-q4_K_M (47GB)
72b-text-q4_K_S (44GB)
72b-text-q5_0 (50GB)
72b-text-q5_1 (55GB)
72b-text-q5_K_M (54GB)
72b-text-q5_K_S (51GB)
72b-text-q6_K (64GB)
72b-text-q8_0 (77GB)
7b-instruct (4.4GB)
7b-instruct-fp16 (15GB)
7b-instruct-q2_K (3.0GB)
7b-instruct-q3_K_L (4.1GB)
7b-instruct-q3_K_M (3.8GB)
7b-instruct-q3_K_S (3.5GB)
7b-instruct-q4_0 (4.4GB)
7b-instruct-q4_1 (4.9GB)
7b-instruct-q4_K_M (4.7GB)
7b-instruct-q4_K_S (4.5GB)
7b-instruct-q5_0 (5.3GB)
7b-instruct-q5_1 (5.8GB)
7b-instruct-q5_K_M (5.4GB)
7b-instruct-q5_K_S (5.3GB)
7b-instruct-q6_K (6.3GB)
7b-instruct-q8_0 (8.1GB)
7b-text (4.4GB)
7b-text-q2_K (3.0GB)
7b-text-q3_K_L (4.1GB)
7b-text-q3_K_M (3.8GB)
7b-text-q3_K_S (3.5GB)
7b-text-q4_0 (4.4GB)
7b-text-q4_1 (4.9GB)
7b-text-q4_K_M (4.7GB)
7b-text-q4_K_S (4.5GB)
7b-text-q5_0 (5.3GB)
7b-text-q5_1 (5.8GB)
7b-text-q8_0 (8.1GB)
1.5b-instruct (935MB)
1.5b-instruct-fp16 (3.1GB)
1.5b-instruct-q2_K (676MB)
1.5b-instruct-q3_K_L (880MB)
1.5b-instruct-q3_K_M (824MB)
1.5b-instruct-q3_K_S (761MB)
1.5b-instruct-q4_0 (935MB)
1.5b-instruct-q4_1 (1.0GB)
1.5b-instruct-q4_K_M (986MB)
1.5b-instruct-q4_K_S (940MB)
1.5b-instruct-q5_0 (1.1GB)
1.5b-instruct-q5_1 (1.2GB)
1.5b-instruct-q5_K_M (1.1GB)
1.5b-instruct-q5_K_S (1.1GB)
1.5b-instruct-q6_K (1.3GB)
1.5b-instruct-q8_0 (1.6GB)
0.5b-instruct (352MB)
0.5b-instruct-fp16 (994MB)
0.5b-instruct-q2_K (339MB)
0.5b-instruct-q3_K_L (369MB)
0.5b-instruct-q3_K_M (355MB)
0.5b-instruct-q3_K_S (338MB)
0.5b-instruct-q4_0 (352MB)
0.5b-instruct-q4_1 (375MB)
0.5b-instruct-q4_K_M (398MB)
0.5b-instruct-q4_K_S (385MB)
0.5b-instruct-q5_0 (397MB)
0.5b-instruct-q5_1 (419MB)
0.5b-instruct-q5_K_M (420MB)
0.5b-instruct-q5_K_S (413MB)
0.5b-instruct-q6_K (506MB)
0.5b-instruct-q8_0 (531MB)
qwen2
Qwen2 is a newly launched series of extensive language models developed by the Alibaba Group.
Category: Tiny
Downloads: 3.9M
Last Updated: 2 months ago Read more about: Qwen2
latest (8.9GB)
236b (133GB)
16b (8.9GB)
lite (8.9GB)
236b-instruct-q4_k_m (142GB)
236b-instruct-fp16 (472GB)
236b-instruct-q2_K (86GB)
236b-instruct-q3_K_L (122GB)
236b-instruct-q3_K_M (113GB)
236b-instruct-q3_K_S (102GB)
236b-instruct-q4_0 (133GB)
236b-instruct-q4_1 (148GB)
236b-instruct-q4_K_S (134GB)
236b-instruct-q5_0 (162GB)
236b-instruct-q5_1 (177GB)
236b-instruct-q5_K_M (167GB)
236b-instruct-q5_K_S (162GB)
236b-instruct-q6_K (194GB)
236b-instruct-q8_0 (251GB)
16b-lite-base-fp16 (31GB)
16b-lite-base-q2_K (6.4GB)
16b-lite-base-q3_K_L (8.5GB)
16b-lite-base-q3_K_M (8.1GB)
16b-lite-base-q3_K_S (7.5GB)
16b-lite-base-q4_0 (8.9GB)
16b-lite-base-q4_1 (9.9GB)
16b-lite-base-q4_K_M (10GB)
16b-lite-base-q4_K_S (9.5GB)
16b-lite-base-q5_0 (11GB)
16b-lite-base-q5_1 (12GB)
16b-lite-base-q5_K_M (12GB)
16b-lite-base-q5_K_S (11GB)
16b-lite-base-q6_K (14GB)
16b-lite-base-q8_0 (17GB)
16b-lite-instruct-fp16 (31GB)
16b-lite-instruct-q2_K (6.4GB)
16b-lite-instruct-q3_K_L (8.5GB)
16b-lite-instruct-q3_K_M (8.1GB)
16b-lite-instruct-q3_K_S (7.5GB)
16b-lite-instruct-q4_0 (8.9GB)
16b-lite-instruct-q4_1 (9.9GB)
16b-lite-instruct-q4_K_M (10GB)
16b-lite-instruct-q4_K_S (9.5GB)
16b-lite-instruct-q5_0 (11GB)
16b-lite-instruct-q5_1 (12GB)
16b-lite-instruct-q5_K_M (12GB)
16b-lite-instruct-q5_K_S (11GB)
16b-lite-instruct-q6_K (14GB)
16b-lite-instruct-q8_0 (17GB)
deepseek-coder-v2
An open-source Mixture-of-Experts code language model that delivers performance comparable to GPT-4 Turbo on code-specific tasks.
Category: Coding
Downloads: 389.1K
Last Updated: 2 months ago Read more about: DeepSeek-Coder-v2
latest (2.2GB)
14b (7.9GB)
3.8b (2.2GB)
instruct (2.2GB)
medium (7.9GB)
mini (2.2GB)
14b-instruct (7.9GB)
14b-medium-4k-instruct-f16 (28GB)
14b-medium-128k-instruct-f16 (28GB)
14b-medium-128k-instruct-q2_K (5.1GB)
14b-medium-128k-instruct-q3_K_L (7.5GB)
14b-medium-128k-instruct-q3_K_M (6.9GB)
14b-medium-128k-instruct-q3_K_S (6.1GB)
14b-medium-128k-instruct-q4_0 (7.9GB)
14b-medium-128k-instruct-q4_1 (8.8GB)
14b-medium-128k-instruct-q4_K_M (8.6GB)
14b-medium-128k-instruct-q4_K_S (8.0GB)
14b-medium-128k-instruct-q5_0 (9.6GB)
14b-medium-128k-instruct-q5_1 (10GB)
14b-medium-128k-instruct-q5_K_M (10GB)
14b-medium-128k-instruct-q5_K_S (9.6GB)
14b-medium-128k-instruct-q6_K (11GB)
14b-medium-4k-instruct-q2_K (5.1GB)
14b-medium-4k-instruct-q3_K_L (7.5GB)
14b-medium-4k-instruct-q3_K_M (6.9GB)
14b-medium-4k-instruct-q3_K_S (6.1GB)
14b-medium-4k-instruct-q4_0 (7.9GB)
14b-medium-4k-instruct-q4_1 (8.8GB)
14b-medium-4k-instruct-q4_K_M (8.6GB)
14b-medium-4k-instruct-q4_K_S (8.0GB)
14b-medium-4k-instruct-q5_0 (9.6GB)
14b-medium-4k-instruct-q5_1 (10GB)
14b-medium-4k-instruct-q5_K_M (10GB)
14b-medium-4k-instruct-q5_K_S (9.6GB)
14b-medium-4k-instruct-q6_K (11GB)
14b-medium-4k-instruct-q8_0 (15GB)
3.8b-instruct (2.2GB)
3.8b-mini-128k-instruct-f16 (7.6GB)
3.8b-mini-4k-instruct-f16 (7.6GB)
3.8b-mini-128k-instruct-fp16 (7.6GB)
3.8b-mini-128k-instruct-q2_K (1.4GB)
3.8b-mini-128k-instruct-q3_K_L (2.1GB)
3.8b-mini-128k-instruct-q3_K_M (2.0GB)
3.8b-mini-128k-instruct-q3_K_S (1.7GB)
3.8b-mini-128k-instruct-q4_0 (2.2GB)
3.8b-mini-128k-instruct-q4_1 (2.4GB)
3.8b-mini-128k-instruct-q4_K_M (2.4GB)
3.8b-mini-128k-instruct-q4_K_S (2.2GB)
3.8b-mini-128k-instruct-q5_0 (2.6GB)
3.8b-mini-128k-instruct-q5_1 (2.9GB)
3.8b-mini-128k-instruct-q5_K_M (2.8GB)
3.8b-mini-128k-instruct-q5_K_S (2.6GB)
3.8b-mini-128k-instruct-q6_K (3.1GB)
3.8b-mini-128k-instruct-q8_0 (4.1GB)
3.8b-mini-4k-instruct-fp16 (7.6GB)
3.8b-mini-4k-instruct-q2_K (1.4GB)
3.8b-mini-4k-instruct-q3_K_L (2.1GB)
3.8b-mini-4k-instruct-q3_K_M (2.0GB)
3.8b-mini-4k-instruct-q3_K_S (1.7GB)
3.8b-mini-4k-instruct-q4_0 (2.2GB)
3.8b-mini-4k-instruct-q4_1 (2.4GB)
3.8b-mini-4k-instruct-q4_K_M (2.4GB)
3.8b-mini-4k-instruct-q4_K_S (2.2GB)
3.8b-mini-4k-instruct-q5_0 (2.6GB)
3.8b-mini-4k-instruct-q5_1 (2.9GB)
3.8b-mini-4k-instruct-q5_K_M (2.8GB)
3.8b-mini-4k-instruct-q5_K_S (2.6GB)
3.8b-mini-4k-instruct-q6_K (3.1GB)
3.8b-mini-4k-instruct-q8_0 (4.1GB)
3.8b-mini-instruct-4k-fp16 (7.6GB)
medium-128k (7.9GB)
mini-128k (2.2GB)
mini-4k (2.4GB)
14b-medium-128k-instruct-fp16 (28GB)
14b-medium-128k-instruct-q8_0 (15GB)
14b-medium-4k-instruct-fp16 (28GB)
medium-4k (7.9GB)
phi3
Phi-3 is a family of advanced, lightweight open models developed by Microsoft, available in 3.8B (Mini) and 14B (Medium) sizes.
Category: Language
Downloads: 2.6M
Last Updated: 3 months ago Read more about: Phi-3
latest (4.8GB)
35b (20GB)
8b (4.8GB)
35b-23 (20GB)
35b-23-f16 (70GB)
35b-23-q2_K (14GB)
35b-23-q3_K_L (19GB)
35b-23-q3_K_M (18GB)
35b-23-q3_K_S (16GB)
35b-23-q4_0 (20GB)
35b-23-q4_1 (22GB)
35b-23-q4_K_M (22GB)
35b-23-q4_K_S (20GB)
35b-23-q5_0 (24GB)
35b-23-q5_1 (26GB)
35b-23-q5_K_M (25GB)
35b-23-q5_K_S (24GB)
35b-23-q6_K (29GB)
35b-23-q8_0 (37GB)
8b-23 (4.8GB)
8b-23-f16 (16GB)
8b-23-q2_K (3.4GB)
8b-23-q3_K_L (4.5GB)
8b-23-q3_K_M (4.2GB)
8b-23-q3_K_S (3.9GB)
8b-23-q4_0 (4.8GB)
8b-23-q4_1 (5.2GB)
8b-23-q4_K_M (5.1GB)
8b-23-q4_K_S (4.8GB)
8b-23-q5_0 (5.7GB)
8b-23-q5_1 (6.1GB)
8b-23-q5_K_M (5.8GB)
8b-23-q5_K_S (5.7GB)
8b-23-q6_K (6.6GB)
8b-23-q8_0 (8.5GB)
aya
Aya 23 is a state-of-the-art family of multilingual models from Cohere, supporting 23 languages.
Category: Language
Downloads: 112K
Last Updated: 5 months ago Read more about: Aya 23
latest (4.1GB)
7b (4.1GB)
instruct (4.1GB)
text (4.1GB)
v0.1 (4.1GB)
v0.2 (4.1GB)
v0.3 (4.1GB)
7b-instruct (4.1GB)
7b-instruct-v0.2-fp16 (14GB)
7b-instruct-v0.2-q2_K (3.1GB)
7b-instruct-v0.2-q3_K_L (3.8GB)
7b-instruct-v0.2-q3_K_M (3.5GB)
7b-instruct-v0.2-q3_K_S (3.2GB)
7b-instruct-v0.2-q4_0 (4.1GB)
7b-instruct-v0.2-q4_1 (4.6GB)
7b-instruct-v0.2-q4_K_M (4.4GB)
7b-instruct-v0.2-q4_K_S (4.1GB)
7b-instruct-v0.2-q5_0 (5.0GB)
7b-instruct-v0.2-q5_1 (5.4GB)
7b-instruct-v0.2-q5_K_M (5.1GB)
7b-instruct-v0.2-q5_K_S (5.0GB)
7b-instruct-v0.2-q6_K (5.9GB)
7b-instruct-v0.2-q8_0 (7.7GB)
7b-instruct-v0.3-fp16 (14GB)
7b-instruct-v0.3-q2_K (2.7GB)
7b-instruct-v0.3-q3_K_L (3.8GB)
7b-instruct-v0.3-q3_K_M (3.5GB)
7b-instruct-v0.3-q3_K_S (3.2GB)
7b-instruct-v0.3-q4_0 (4.1GB)
7b-instruct-v0.3-q4_1 (4.6GB)
7b-instruct-v0.3-q4_K_M (4.4GB)
7b-instruct-q3_K_L (3.8GB)
7b-instruct-v0.3-q8_0 (7.7GB)
7b-instruct-v0.3-q5_0 (5.0GB)
7b-instruct-v0.3-q6_K (5.9GB)
7b-instruct-v0.3-q5_K_M (5.1GB)
7b-instruct-v0.3-q5_1 (5.4GB)
7b-instruct-fp16 (14GB)
7b-instruct-q2_K (3.1GB)
7b-instruct-v0.3-q4_K_S (4.1GB)
7b-instruct-v0.3-q5_K_S (5.0GB)
7b-instruct-q6_K (5.9GB)
7b-instruct-q4_0 (4.1GB)
7b-instruct-q5_1 (5.4GB)
7b-instruct-q4_1 (4.6GB)
7b-instruct-q4_K_M (4.4GB)
7b-instruct-q3_K_M (3.5GB)
7b-instruct-q5_0 (5.0GB)
7b-instruct-q3_K_S (3.2GB)
7b-instruct-q5_K_M (5.1GB)
7b-instruct-q5_K_S (5.0GB)
7b-instruct-q4_K_S (4.1GB)
7b-instruct-q8_0 (7.7GB)
7b-text (4.1GB)
7b-text-fp16 (14GB)
7b-text-q2_K (3.1GB)
7b-text-q3_K_L (3.8GB)
7b-text-q3_K_M (3.5GB)
7b-text-q3_K_S (3.2GB)
7b-text-q4_0 (4.1GB)
7b-text-q4_1 (4.6GB)
7b-text-q4_K_M (4.4GB)
7b-text-q4_K_S (4.1GB)
7b-text-q5_0 (5.0GB)
7b-text-v0.2-q5_K_M (5.1GB)
7b-text-q6_K (5.9GB)
7b-text-v0.2-q5_1 (5.4GB)
7b-text-q8_0 (7.7GB)
7b-text-v0.2-q4_1 (4.6GB)
7b-text-v0.2-q4_K_M (4.4GB)
7b-text-v0.2-q4_0 (4.1GB)
7b-text-v0.2-fp16 (14GB)
7b-text-v0.2-q3_K_L (3.8GB)
7b-text-v0.2-q3_K_M (3.5GB)
7b-text-v0.2-q3_K_S (3.2GB)
7b-text-q5_K_M (5.1GB)
7b-text-v0.2-q2_K (2.7GB)
7b-text-v0.2-q4_K_S (4.1GB)
7b-text-q5_K_S (5.0GB)
7b-text-q5_1 (5.4GB)
7b-text-v0.2-q5_0 (5.0GB)
7b-text-v0.2-q5_K_S (5.0GB)
7b-text-v0.2-q6_K (5.9GB)
7b-text-v0.2-q8_0 (7.7GB)
mistral
The 7B model released by Mistral AI, updated to version 0.3.
Category: Language
Downloads: 4.8M
Last Updated: 3 months ago Read more about: Mistral 7B
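Each model above ships in many quantizations, trading file size against quality. As a hypothetical helper (the sizes are transcribed from the 7b-instruct listing above; note that the download size understates runtime memory, which also needs room for the KV cache and activations), one could pick the largest quantization that fits a given budget:

```python
# Sizes (GB) transcribed from the mistral 7b-instruct tags listed above.
MISTRAL_7B_INSTRUCT = {
    "q2_K": 3.1, "q3_K_M": 3.5, "q4_0": 4.1, "q4_K_M": 4.4,
    "q5_K_M": 5.1, "q6_K": 5.9, "q8_0": 7.7, "fp16": 14.0,
}

def best_fit(sizes: dict, budget_gb: float):
    """Return the tag of the largest quantization whose file fits the budget,
    or None if even the smallest one is too big. Hypothetical helper."""
    fitting = [(gb, tag) for tag, gb in sizes.items() if gb <= budget_gb]
    return max(fitting)[1] if fitting else None
```

With a 6 GB budget this selects `q6_K` (5.9 GB), while a 3 GB budget fits nothing from this model's listing.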
latest (26GB)
8x22b (80GB)
8x7b (26GB)
instruct (26GB)
text (26GB)
v0.1 (80GB)
8x22b-instruct (80GB)
8x22b-instruct-v0.1-fp16 (281GB)
8x22b-instruct-v0.1-q2_K (52GB)
8x22b-instruct-v0.1-q3_K_L (73GB)
8x22b-instruct-v0.1-q3_K_M (68GB)
8x22b-instruct-v0.1-q3_K_S (62GB)
8x22b-instruct-v0.1-q4_0 (80GB)
8x22b-instruct-v0.1-q4_1 (88GB)
8x22b-instruct-v0.1-q4_K_M (86GB)
8x22b-instruct-v0.1-q4_K_S (80GB)
8x22b-instruct-v0.1-q5_0 (97GB)
8x22b-instruct-v0.1-q5_1 (106GB)
8x22b-instruct-v0.1-q5_K_M (100GB)
8x22b-instruct-v0.1-q5_K_S (97GB)
8x22b-instruct-v0.1-q6_K (116GB)
8x22b-instruct-v0.1-q8_0 (149GB)
8x7b-instruct-v0.1-fp16 (93GB)
8x7b-instruct-v0.1-q2_K (16GB)
8x7b-instruct-v0.1-q3_K_L (20GB)
8x7b-instruct-v0.1-q3_K_M (20GB)
8x7b-instruct-v0.1-q3_K_S (20GB)
8x7b-instruct-v0.1-q4_0 (26GB)
8x7b-instruct-v0.1-q4_1 (29GB)
8x7b-instruct-v0.1-q4_K_M (26GB)
8x7b-instruct-v0.1-q4_K_S (26GB)
8x7b-instruct-v0.1-q5_0 (32GB)
8x7b-instruct-v0.1-q5_1 (35GB)
8x7b-instruct-v0.1-q5_K_M (32GB)
8x7b-instruct-v0.1-q5_K_S (32GB)
8x7b-instruct-v0.1-q6_K (38GB)
8x7b-instruct-v0.1-q8_0 (50GB)
8x22b-text (80GB)
8x22b-text-v0.1-fp16 (281GB)
8x22b-text-v0.1-q2_K (52GB)
8x22b-text-v0.1-q3_K_L (73GB)
8x22b-text-v0.1-q3_K_M (68GB)
8x22b-text-v0.1-q3_K_S (61GB)
8x22b-text-v0.1-q4_0 (80GB)
8x22b-text-v0.1-q4_1 (88GB)
8x22b-text-v0.1-q4_K_M (86GB)
8x22b-text-v0.1-q4_K_S (80GB)
8x22b-text-v0.1-q5_0 (97GB)
8x22b-text-v0.1-q5_1 (106GB)
8x22b-text-v0.1-q5_K_M (100GB)
8x22b-text-v0.1-q5_K_S (97GB)
8x22b-text-v0.1-q6_K (116GB)
8x22b-text-v0.1-q8_0 (149GB)
8x7b-text-v0.1-fp16 (93GB)
8x7b-text-v0.1-q2_K (16GB)
8x7b-text-v0.1-q3_K_L (20GB)
8x7b-text-v0.1-q3_K_M (20GB)
8x7b-text-v0.1-q3_K_S (20GB)
8x7b-text-v0.1-q4_0 (26GB)
8x7b-text-v0.1-q4_1 (29GB)
8x7b-text-v0.1-q4_K_M (26GB)
8x7b-text-v0.1-q4_K_S (26GB)
8x7b-text-v0.1-q5_0 (32GB)
8x7b-text-v0.1-q5_1 (35GB)
8x7b-text-v0.1-q5_K_M (32GB)
8x7b-text-v0.1-q5_K_S (32GB)
8x7b-text-v0.1-q6_K (38GB)
8x7b-text-v0.1-q8_0 (50GB)
v0.1-instruct (80GB)
mixtral
Mistral AI has released a set of Mixture of Experts (MoE) models featuring open weights, available in parameter sizes of 8x7b and 8x22b.
Category: Language
Downloads: 470K
Last Updated: 3 months ago Read more about: Mistral 7B and 22B
latest (5.0GB)
7b (5.0GB)
2b (1.6GB)
code (1.6GB)
instruct (5.0GB)
7b-code (5.0GB)
7b-code-fp16 (17GB)
7b-code-q2_K (3.5GB)
7b-code-q3_K_L (4.7GB)
7b-code-q3_K_M (4.4GB)
7b-code-q3_K_S (4.0GB)
7b-code-q4_0 (5.0GB)
7b-code-q4_1 (5.5GB)
7b-code-q4_K_M (5.3GB)
7b-code-q4_K_S (5.0GB)
7b-code-q5_0 (6.0GB)
7b-code-q5_1 (6.5GB)
7b-code-q5_K_M (6.1GB)
7b-code-q5_K_S (6.0GB)
7b-code-q6_K (7.0GB)
7b-code-q8_0 (9.1GB)
7b-instruct (5.0GB)
7b-instruct-fp16 (17GB)
7b-instruct-q2_K (3.5GB)
7b-instruct-q3_K_L (4.7GB)
7b-instruct-q3_K_M (4.4GB)
7b-instruct-q3_K_S (4.0GB)
7b-instruct-q4_0 (5.0GB)
7b-instruct-q4_1 (5.5GB)
7b-instruct-q4_K_M (5.3GB)
7b-instruct-q4_K_S (5.0GB)
7b-instruct-q5_0 (6.0GB)
7b-instruct-q5_1 (6.5GB)
7b-instruct-q5_K_M (6.1GB)
7b-instruct-q5_K_S (6.0GB)
7b-instruct-q6_K (7.0GB)
7b-instruct-q8_0 (9.1GB)
7b-instruct-v1.1-q5_K_M (6.1GB)
7b-instruct-v1.1-q4_K_M (5.3GB)
7b-instruct-v1.1-q4_K_S (5.0GB)
7b-instruct-v1.1-q3_K_L (4.7GB)
7b-instruct-v1.1-q2_K (3.5GB)
7b-instruct-v1.1-q3_K_S (4.0GB)
7b-instruct-v1.1-q5_0 (6.0GB)
7b-instruct-v1.1-q4_0 (5.0GB)
7b-instruct-v1.1-q5_1 (6.5GB)
7b-instruct-v1.1-q4_1 (5.5GB)
7b-instruct-v1.1-fp16 (17GB)
7b-instruct-v1.1-q3_K_M (4.4GB)
7b-instruct-v1.1-q5_K_S (6.0GB)
7b-instruct-v1.1-q6_K (7.0GB)
7b-instruct-v1.1-q8_0 (9.1GB)
7b-v1.1 (5.0GB)
2b-code (1.6GB)
2b-code-fp16 (5.0GB)
2b-code-q2_K (1.2GB)
2b-code-q3_K_L (1.5GB)
2b-code-q3_K_M (1.4GB)
2b-code-q3_K_S (1.3GB)
2b-code-q4_0 (1.6GB)
2b-code-q4_1 (1.7GB)
2b-code-q4_K_M (1.6GB)
2b-code-q4_K_S (1.6GB)
2b-code-q5_0 (1.8GB)
2b-code-q5_1 (1.9GB)
2b-code-q5_K_M (1.8GB)
2b-code-v1.1-q4_1 (1.7GB)
2b-code-v1.1-q3_K_S (1.3GB)
2b-code-q5_K_S (1.8GB)
2b-code-v1.1-q4_0 (1.6GB)
2b-code-v1.1-q3_K_L (1.5GB)
2b-code-v1.1-q2_K (1.2GB)
2b-code-q8_0 (2.7GB)
2b-code-v1.1-fp16 (5.0GB)
2b-code-v1.1-q3_K_M (1.4GB)
2b-code-q6_K (2.1GB)
2b-code-v1.1-q4_K_M (1.6GB)
2b-code-v1.1-q4_K_S (1.6GB)
2b-code-v1.1-q5_0 (1.8GB)
2b-code-v1.1-q5_1 (1.9GB)
2b-code-v1.1-q5_K_M (1.8GB)
2b-code-v1.1-q5_K_S (1.8GB)
2b-code-v1.1-q6_K (2.1GB)
2b-code-v1.1-q8_0 (2.7GB)
2b-v1.1 (1.6GB)
codegemma
CodeGemma is a family of robust, lightweight models for a range of coding tasks, including code completion, code generation, natural-language understanding, mathematical reasoning, and instruction following.
Category: Coding
Downloads: 348.8K
Last Updated: 3 months ago Read more about: CodeGemma
latest (20GB)
35b (20GB)
v0.1 (20GB)
35b-v0.1-fp16 (70GB)
35b-v0.1-q2_K (14GB)
35b-v0.1-q3_K_L (19GB)
35b-v0.1-q3_K_M (18GB)
35b-v0.1-q3_K_S (16GB)
35b-v0.1-q4_0 (20GB)
35b-v0.1-q4_1 (22GB)
35b-v0.1-q4_K_M (22GB)
35b-v0.1-q4_K_S (20GB)
35b-v0.1-q5_1 (26GB)
35b-v0.1-q5_K_M (25GB)
35b-v0.1-q5_K_S (24GB)
35b-v0.1-q6_K (29GB)
35b-v0.1-q8_0 (37GB)
command-r
Command R is a large language model optimized for conversational interaction and long-context tasks.
Category: Language
Downloads: 236.7K
Last Updated: 2 months ago Read more about: Command-R
latest (59GB)
104b (59GB)
104b-fp16 (208GB)
104b-q2_K (39GB)
104b-q4_0 (59GB)
104b-q8_0 (110GB)
command-r-plus
Command R+ is a robust and scalable large language model designed specifically for optimal performance in retrieval augmented generation (RAG) and real-world enterprise applications.
Category: Specialized
Downloads: 102.8K
Last Updated: 2 months ago Read more about: Command R+
latest (4.7GB)
34b (20GB)
13b (8.0GB)
7b (4.7GB)
v1.6 (4.7GB)
34b-v1.6 (20GB)
34b-v1.6-fp16 (69GB)
34b-v1.6-q2_K (14GB)
34b-v1.6-q3_K_L (19GB)
34b-v1.6-q3_K_M (17GB)
34b-v1.6-q3_K_S (16GB)
34b-v1.6-q4_0 (20GB)
34b-v1.6-q4_1 (22GB)
34b-v1.6-q4_K_M (21GB)
34b-v1.6-q4_K_S (20GB)
34b-v1.6-q5_0 (24GB)
34b-v1.6-q5_1 (27GB)
34b-v1.6-q5_K_M (25GB)
34b-v1.6-q5_K_S (24GB)
34b-v1.6-q6_K (29GB)
34b-v1.6-q8_0 (37GB)
13b-v1.5-fp16 (27GB)
13b-v1.5-q2_K (6.1GB)
13b-v1.5-q3_K_L (7.6GB)
13b-v1.5-q3_K_M (7.0GB)
13b-v1.5-q3_K_S (6.3GB)
13b-v1.5-q4_0 (8.0GB)
13b-v1.5-q4_1 (8.8GB)
13b-v1.5-q4_K_M (8.5GB)
13b-v1.5-q4_K_S (8.1GB)
13b-v1.5-q5_0 (9.6GB)
13b-v1.5-q5_1 (10GB)
13b-v1.5-q5_K_M (9.9GB)
13b-v1.5-q5_K_S (9.6GB)
13b-v1.5-q6_K (11GB)
13b-v1.5-q8_0 (14GB)
13b-v1.6 (8.0GB)
13b-v1.6-vicuna-fp16 (27GB)
13b-v1.6-vicuna-q2_K (5.5GB)
13b-v1.6-vicuna-q3_K_L (7.6GB)
13b-v1.6-vicuna-q3_K_M (7.0GB)
13b-v1.6-vicuna-q3_K_S (6.3GB)
13b-v1.6-vicuna-q4_0 (8.0GB)
13b-v1.6-vicuna-q4_1 (8.8GB)
13b-v1.6-vicuna-q4_K_M (8.5GB)
13b-v1.6-vicuna-q4_K_S (8.1GB)
13b-v1.6-vicuna-q5_0 (9.6GB)
13b-v1.6-vicuna-q5_1 (10GB)
13b-v1.6-vicuna-q5_K_M (9.9GB)
13b-v1.6-vicuna-q5_K_S (9.6GB)
13b-v1.6-vicuna-q6_K (11GB)
13b-v1.6-vicuna-q8_0 (14GB)
7b-v1.5-fp16 (14GB)
7b-v1.5-q2_K (3.5GB)
7b-v1.5-q3_K_L (4.2GB)
7b-v1.5-q3_K_M (3.9GB)
7b-v1.5-q3_K_S (3.6GB)
7b-v1.5-q4_0 (4.5GB)
7b-v1.5-q4_1 (4.9GB)
7b-v1.5-q4_K_M (4.7GB)
7b-v1.5-q4_K_S (4.5GB)
7b-v1.5-q5_0 (5.3GB)
7b-v1.5-q5_1 (5.7GB)
7b-v1.5-q5_K_M (5.4GB)
7b-v1.5-q5_K_S (5.3GB)
7b-v1.5-q6_K (6.2GB)
7b-v1.5-q8_0 (7.8GB)
7b-v1.6 (4.7GB)
7b-v1.6-mistral-fp16 (15GB)
7b-v1.6-mistral-q2_K (3.3GB)
7b-v1.6-mistral-q3_K_L (4.4GB)
7b-v1.6-mistral-q3_K_M (4.1GB)
7b-v1.6-mistral-q3_K_S (3.8GB)
7b-v1.6-mistral-q4_0 (4.7GB)
7b-v1.6-mistral-q4_1 (5.2GB)
7b-v1.6-mistral-q4_K_M (5.0GB)
7b-v1.6-mistral-q4_K_S (4.8GB)
7b-v1.6-mistral-q5_0 (5.6GB)
7b-v1.6-mistral-q5_1 (6.1GB)
7b-v1.6-mistral-q5_K_M (5.8GB)
7b-v1.6-mistral-q5_K_S (5.6GB)
7b-v1.6-mistral-q6_K (6.6GB)
7b-v1.6-mistral-q8_0 (8.3GB)
7b-v1.6-vicuna-fp16 (14GB)
7b-v1.6-vicuna-q2_K (3.2GB)
7b-v1.6-vicuna-q3_K_L (4.2GB)
7b-v1.6-vicuna-q3_K_M (3.9GB)
7b-v1.6-vicuna-q3_K_S (3.6GB)
7b-v1.6-vicuna-q4_0 (4.5GB)
7b-v1.6-vicuna-q4_1 (4.9GB)
7b-v1.6-vicuna-q4_K_M (4.7GB)
7b-v1.6-vicuna-q4_K_S (4.5GB)
7b-v1.6-vicuna-q5_0 (5.3GB)
7b-v1.6-vicuna-q5_1 (5.7GB)
7b-v1.6-vicuna-q5_K_M (5.4GB)
7b-v1.6-vicuna-q5_K_S (5.3GB)
7b-v1.6-vicuna-q6_K (6.2GB)
7b-v1.6-vicuna-q8_0 (7.8GB)
llava
LLaVA is a novel large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding. Updated to version 1.6.
Category: Multimodal
Downloads: 1.7M
Last Updated: 9 months ago Read more about: LLaVA
latest (5.0GB)
7b (5.0GB)
2b (1.7GB)
instruct (5.0GB)
text (5.2GB)
v1.1 (5.0GB)
7b-instruct (5.0GB)
7b-instruct-v1.1-fp16 (17GB)
7b-instruct-v1.1-q2_K (3.5GB)
7b-instruct-v1.1-q3_K_L (4.7GB)
7b-instruct-v1.1-q3_K_M (4.4GB)
7b-instruct-v1.1-q3_K_S (4.0GB)
7b-instruct-v1.1-q4_0 (5.0GB)
7b-instruct-v1.1-q4_1 (5.5GB)
7b-instruct-v1.1-q4_K_M (5.3GB)
7b-instruct-v1.1-q4_K_S (5.0GB)
7b-instruct-v1.1-q5_0 (6.0GB)
7b-instruct-v1.1-q5_1 (6.5GB)
7b-instruct-q4_K_S (5.2GB)
7b-instruct-q3_K_S (4.2GB)
7b-instruct-q3_K_M (4.6GB)
7b-instruct-q2_K (3.7GB)
7b-instruct-q4_K_M (5.5GB)
7b-instruct-v1.1-q5_K_S (6.0GB)
7b-instruct-q4_1 (5.7GB)
7b-instruct-v1.1-q6_K (7.0GB)
7b-instruct-q4_0 (5.2GB)
7b-instruct-v1.1-q8_0 (9.1GB)
7b-instruct-fp16 (17GB)
7b-instruct-v1.1-q5_K_M (6.1GB)
7b-instruct-q3_K_L (4.9GB)
7b-instruct-q5_0 (6.2GB)
7b-instruct-q5_1 (6.7GB)
7b-instruct-q5_K_M (6.3GB)
7b-instruct-q5_K_S (6.2GB)
7b-instruct-q6_K (7.2GB)
7b-instruct-q8_0 (9.1GB)
7b-text (5.2GB)
7b-text-fp16 (16GB)
7b-text-q2_K (3.7GB)
7b-text-q3_K_L (4.9GB)
7b-text-q3_K_M (4.6GB)
7b-text-q3_K_S (4.2GB)
7b-text-q4_0 (5.2GB)
7b-text-q4_1 (5.7GB)
7b-text-q4_K_M (5.5GB)
7b-text-q4_K_S (5.2GB)
7b-text-q5_0 (6.2GB)
7b-text-q5_1 (6.7GB)
7b-text-q5_K_M (6.3GB)
7b-text-q5_K_S (6.2GB)
7b-text-q6_K (7.2GB)
7b-text-q8_0 (9.1GB)
7b-v1.1 (5.0GB)
2b-instruct (1.6GB)
2b-instruct-v1.1-fp16 (5.0GB)
2b-instruct-v1.1-q2_K (1.2GB)
2b-instruct-v1.1-q3_K_L (1.5GB)
2b-instruct-v1.1-q3_K_M (1.4GB)
2b-instruct-v1.1-q3_K_S (1.3GB)
2b-instruct-v1.1-q4_0 (1.6GB)
2b-instruct-v1.1-q4_1 (1.7GB)
2b-instruct-v1.1-q4_K_M (1.6GB)
2b-instruct-v1.1-q4_K_S (1.6GB)
2b-instruct-q4_0 (1.7GB)
2b-instruct-v1.1-q6_K (2.1GB)
2b-instruct-q2_K (1.3GB)
2b-instruct-v1.1-q8_0 (2.7GB)
2b-instruct-q3_K_L (1.6GB)
2b-instruct-v1.1-q5_1 (1.9GB)
2b-instruct-q3_K_M (1.5GB)
2b-instruct-v1.1-q5_0 (1.8GB)
2b-instruct-fp16 (4.5GB)
2b-instruct-v1.1-q5_K_S (1.8GB)
2b-instruct-v1.1-q5_K_M (1.8GB)
2b-instruct-q3_K_S (1.4GB)
2b-instruct-q4_1 (1.8GB)
2b-instruct-q4_K_M (1.8GB)
2b-instruct-q4_K_S (1.7GB)
2b-instruct-q5_0 (1.9GB)
2b-instruct-q5_1 (2.1GB)
2b-instruct-q5_K_M (2.0GB)
2b-instruct-q5_K_S (1.9GB)
2b-instruct-q6_K (2.2GB)
2b-instruct-q8_0 (2.7GB)
2b-text (1.7GB)
2b-text-fp16 (4.5GB)
2b-text-q2_K (1.3GB)
2b-text-q3_K_L (1.6GB)
2b-text-q3_K_M (1.5GB)
2b-text-q3_K_S (1.4GB)
2b-text-q4_0 (1.7GB)
2b-text-q4_1 (1.8GB)
2b-text-q4_K_M (1.8GB)
2b-text-q4_K_S (1.7GB)
2b-text-q5_0 (1.9GB)
2b-text-q5_1 (2.1GB)
2b-text-q5_K_M (2.0GB)
2b-text-q5_K_S (1.9GB)
2b-text-q6_K (2.2GB)
2b-text-q8_0 (2.7GB)
2b-v1.1 (1.6GB)
gemma
Gemma is a cutting-edge family of lightweight open models developed by Google DeepMind. It has recently been updated to version 1.1.
Category: Language
Downloads: 4.2M
Last Updated: 6 months ago Read more about: Gemma
latest (2.3GB)
110b (63GB)
72b (41GB)
32b (18GB)
14b (8.2GB)
7b (4.5GB)
4b (2.3GB)
1.8b (1.1GB)
0.5b (395MB)
110b-chat (63GB)
110b-chat-v1.5-fp16 (222GB)
110b-chat-v1.5-q2_K (41GB)
110b-chat-v1.5-q3_K_L (58GB)
110b-chat-v1.5-q3_K_M (54GB)
110b-chat-v1.5-q3_K_S (48GB)
110b-chat-v1.5-q4_0 (63GB)
110b-chat-v1.5-q4_1 (70GB)
110b-chat-v1.5-q4_K_M (67GB)
110b-chat-v1.5-q4_K_S (63GB)
110b-chat-v1.5-q5_0 (77GB)
110b-chat-v1.5-q5_1 (84GB)
110b-chat-v1.5-q5_K_M (79GB)
110b-chat-v1.5-q5_K_S (77GB)
110b-chat-v1.5-q6_K (91GB)
110b-chat-v1.5-q8_0 (118GB)
110b-text-v1.5-fp16 (222GB)
110b-text-v1.5-q2_K (41GB)
110b-text-v1.5-q3_K_L (58GB)
110b-text-v1.5-q3_K_M (54GB)
110b-text-v1.5-q3_K_S (48GB)
110b-text-v1.5-q4_0 (63GB)
110b-text-v1.5-q4_1 (70GB)
110b-text-v1.5-q4_K_M (67GB)
110b-text-v1.5-q4_K_S (63GB)
110b-text-v1.5-q5_0 (77GB)
110b-text-v1.5-q5_1 (84GB)
110b-text-v1.5-q5_K_M (79GB)
110b-text-v1.5-q5_K_S (77GB)
110b-text-v1.5-q6_K (91GB)
110b-text-v1.5-q8_0 (118GB)
72b-chat (41GB)
72b-chat-v1.5-fp16 (145GB)
72b-chat-v1.5-q2_K (28GB)
72b-chat-v1.5-q3_K_L (38GB)
72b-chat-v1.5-q3_K_M (36GB)
72b-chat-v1.5-q3_K_S (33GB)
72b-chat-v1.5-q4_0 (41GB)
72b-chat-v1.5-q4_1 (45GB)
72b-chat-v1.5-q4_K_M (44GB)
72b-chat-v1.5-q4_K_S (42GB)
72b-chat-v1.5-q5_0 (50GB)
72b-chat-v1.5-q5_1 (54GB)
72b-chat-v1.5-q5_K_M (51GB)
72b-chat-v1.5-q5_K_S (50GB)
72b-chat-v1.5-q6_K (59GB)
72b-chat-v1.5-q8_0 (77GB)
72b-chat-q4_K_S (41GB)
72b-chat-q4_0 (41GB)
72b-chat-q3_K_S (32GB)
72b-chat-q3_K_M (37GB)
72b-chat-q3_K_L (39GB)
72b-chat-q2_K (27GB)
72b-chat-q4_1 (45GB)
72b-chat-fp16 (145GB)
72b-chat-q4_K_M (45GB)
72b-chat-q5_0 (50GB)
72b-chat-q5_1 (54GB)
72b-chat-q5_K_M (53GB)
72b-chat-q5_K_S (50GB)
72b-chat-q6_K (59GB)
72b-chat-q8_0 (77GB)
72b-text (63GB)
72b-text-fp16 (145GB)
72b-text-q2_K (27GB)
72b-text-q3_K_L (39GB)
72b-text-v1.5-q4_K_S (42GB)
72b-text-v1.5-fp16 (145GB)
72b-text-q4_0 (41GB)
72b-text-v1.5-q3_K_M (36GB)
72b-text-q5_K_M (53GB)
72b-text-v1.5-q3_K_L (38GB)
72b-text-q5_0 (50GB)
72b-text-q8_0 (77GB)
72b-text-q4_1 (45GB)
72b-text-v1.5-q3_K_S (33GB)
72b-text-q6_K (59GB)
72b-text-q3_K_M (37GB)
72b-text-v1.5-q4_1 (45GB)
72b-text-q3_K_S (32GB)
72b-text-q4_K_S (41GB)
72b-text-q4_K_M (45GB)
72b-text-q5_K_S (50GB)
72b-text-v1.5-q2_K (28GB)
72b-text-q5_1 (54GB)
72b-text-v1.5-q4_K_M (44GB)
72b-text-v1.5-q4_0 (41GB)
72b-text-v1.5-q5_0 (50GB)
72b-text-v1.5-q5_1 (54GB)
72b-text-v1.5-q5_K_M (51GB)
72b-text-v1.5-q5_K_S (50GB)
72b-text-v1.5-q6_K (59GB)
72b-text-v1.5-q8_0 (77GB)
32b-chat (18GB)
32b-chat-v1.5-fp16 (65GB)
32b-chat-v1.5-q2_K (12GB)
32b-chat-v1.5-q3_K_L (17GB)
32b-chat-v1.5-q3_K_M (16GB)
32b-chat-v1.5-q3_K_S (14GB)
32b-chat-v1.5-q4_0 (18GB)
32b-chat-v1.5-q4_1 (20GB)
32b-chat-v1.5-q4_K_M (20GB)
32b-chat-v1.5-q4_K_S (19GB)
32b-chat-v1.5-q5_0 (22GB)
32b-chat-v1.5-q5_1 (24GB)
32b-chat-v1.5-q5_K_M (23GB)
32b-chat-v1.5-q5_K_S (22GB)
32b-chat-v1.5-q6_K (27GB)
32b-chat-v1.5-q8_0 (35GB)
32b-text (18GB)
32b-text-v1.5-q2_K (12GB)
32b-text-v1.5-q3_K_L (17GB)
32b-text-v1.5-q3_K_M (16GB)
32b-text-v1.5-q3_K_S (14GB)
32b-text-v1.5-q4_0 (18GB)
32b-text-v1.5-q4_1 (20GB)
32b-text-v1.5-q4_K_S (19GB)
32b-text-v1.5-q5_0 (22GB)
32b-text-v1.5-q5_1 (24GB)
32b-text-v1.5-q8_0 (35GB)
14b-chat (8.2GB)
14b-chat-fp16 (28GB)
14b-chat-q2_K (6.0GB)
14b-chat-q3_K_L (8.0GB)
14b-chat-v1.5-q5_0 (9.9GB)
14b-chat-q8_0 (15GB)
14b-chat-q4_K_S (8.6GB)
14b-chat-v1.5-q4_K_M (9.2GB)
14b-chat-q4_0 (8.2GB)
14b-chat-v1.5-q4_1 (9.0GB)
14b-chat-v1.5-q3_K_L (7.8GB)
14b-chat-q6_K (12GB)
14b-chat-v1.5-q3_K_M (7.4GB)
14b-chat-q4_1 (9.0GB)
14b-chat-q3_K_M (7.7GB)
14b-chat-q5_0 (9.9GB)
14b-chat-v1.5-fp16 (28GB)
14b-chat-v1.5-q4_K_S (8.6GB)
14b-chat-v1.5-q4_0 (8.2GB)
14b-chat-v1.5-q3_K_S (6.9GB)
14b-chat-q5_1 (11GB)
14b-chat-q4_K_M (9.4GB)
14b-chat-q5_K_M (11GB)
14b-chat-q3_K_S (6.9GB)
14b-chat-v1.5-q2_K (6.1GB)
14b-chat-q5_K_S (10GB)
14b-chat-v1.5-q5_1 (11GB)
14b-chat-v1.5-q5_K_M (11GB)
14b-chat-v1.5-q5_K_S (10GB)
14b-chat-v1.5-q6_K (12GB)
14b-chat-v1.5-q8_0 (15GB)
14b-text (8.2GB)
14b-text-fp16 (28GB)
14b-text-q2_K (6.0GB)
14b-text-q3_K_L (8.0GB)
14b-text-q3_K_M (7.7GB)
14b-text-q3_K_S (6.9GB)
14b-text-q4_0 (8.2GB)
14b-text-q4_1 (9.0GB)
14b-text-q4_K_M (9.4GB)
14b-text-q4_K_S (8.6GB)
14b-text-q5_0 (9.9GB)
14b-text-q5_1 (11GB)
14b-text-q5_K_M (11GB)
14b-text-q5_K_S (10GB)
14b-text-q6_K (12GB)
14b-text-q8_0 (15GB)
14b-text-v1.5-q3_K_M (7.4GB)
14b-text-v1.5-q2_K (6.1GB)
14b-text-v1.5-fp16 (28GB)
14b-text-v1.5-q3_K_L (7.8GB)
14b-text-v1.5-q3_K_S (6.9GB)
14b-text-v1.5-q4_0 (8.2GB)
14b-text-v1.5-q4_1 (9.0GB)
14b-text-v1.5-q4_K_M (9.2GB)
14b-text-v1.5-q4_K_S (8.6GB)
14b-text-v1.5-q5_0 (9.9GB)
14b-text-v1.5-q5_1 (11GB)
14b-text-v1.5-q5_K_M (11GB)
14b-text-v1.5-q5_K_S (10GB)
14b-text-v1.5-q6_K (12GB)
14b-text-v1.5-q8_0 (15GB)
7b-chat (4.5GB)
7b-chat-v1.5-fp16 (15GB)
7b-chat-v1.5-q2_K (3.1GB)
7b-chat-v1.5-q3_K_L (4.2GB)
7b-chat-v1.5-q3_K_M (3.9GB)
7b-chat-v1.5-q3_K_S (3.6GB)
7b-chat-v1.5-q4_0 (4.5GB)
7b-chat-v1.5-q4_1 (5.0GB)
7b-chat-v1.5-q4_K_M (4.8GB)
7b-chat-v1.5-q4_K_S (4.5GB)
7b-chat-v1.5-q5_0 (5.4GB)
7b-chat-v1.5-q5_1 (5.8GB)
7b-chat-v1.5-q5_K_M (5.5GB)
7b-chat-v1.5-q5_K_S (5.4GB)
7b-chat-v1.5-q6_K (6.3GB)
7b-chat-v1.5-q8_0 (8.2GB)
7b-chat-q5_K_S (5.4GB)
7b-chat-q4_1 (5.0GB)
7b-chat-q4_0 (4.5GB)
7b-chat-q4_K_M (4.9GB)
7b-chat-q2_K (3.0GB)
7b-chat-q3_K_M (4.1GB)
7b-chat-q5_1 (5.8GB)
7b-chat-fp16 (15GB)
7b-chat-q3_K_S (3.6GB)
7b-chat-q3_K_L (4.3GB)
7b-chat-q4_K_S (4.5GB)
7b-chat-q5_K_M (5.7GB)
7b-chat-q5_0 (5.4GB)
7b-chat-q6_K (6.3GB)
7b-chat-q8_0 (8.2GB)
7b-text (4.5GB)
7b-text-v1.5-fp16 (15GB)
7b-text-v1.5-q2_K (3.1GB)
7b-text-v1.5-q3_K_L (4.2GB)
7b-text-v1.5-q3_K_M (3.9GB)
7b-text-v1.5-q3_K_S (3.6GB)
7b-text-v1.5-q4_0 (4.5GB)
7b-text-v1.5-q4_1 (5.0GB)
7b-text-v1.5-q4_K_M (4.8GB)
7b-text-v1.5-q4_K_S (4.5GB)
7b-text-v1.5-q5_0 (5.4GB)
7b-text-v1.5-q5_1 (5.8GB)
7b-text-v1.5-q5_K_M (5.5GB)
7b-text-v1.5-q5_K_S (5.4GB)
7b-text-v1.5-q6_K (6.3GB)
7b-text-v1.5-q8_0 (8.2GB)
4b-chat (2.3GB)
4b-chat-v1.5-fp16 (7.9GB)
4b-chat-v1.5-q2_K (1.6GB)
4b-chat-v1.5-q3_K_L (2.2GB)
4b-chat-v1.5-q3_K_M (2.0GB)
4b-chat-v1.5-q3_K_S (1.9GB)
4b-chat-v1.5-q4_0 (2.3GB)
4b-chat-v1.5-q4_1 (2.6GB)
4b-chat-v1.5-q4_K_M (2.5GB)
4b-chat-v1.5-q4_K_S (2.3GB)
4b-chat-v1.5-q5_0 (2.8GB)
4b-chat-v1.5-q5_1 (3.0GB)
4b-chat-v1.5-q5_K_M (2.8GB)
4b-chat-v1.5-q5_K_S (2.8GB)
4b-chat-v1.5-q6_K (3.2GB)
4b-chat-v1.5-q8_0 (4.2GB)
4b-text (2.3GB)
7b-fp16 (15GB)
7b-q2_K (3.0GB)
7b-q3_K_L (4.3GB)
7b-q3_K_M (4.1GB)
7b-q3_K_S (3.6GB)
7b-q4_0 (4.5GB)
7b-q4_1 (5.0GB)
7b-q4_K_M (4.9GB)
7b-q4_K_S (4.5GB)
7b-q5_0 (5.4GB)
7b-q5_1 (5.8GB)
7b-q5_K_M (5.7GB)
7b-q5_K_S (5.4GB)
7b-q6_K (6.3GB)
7b-q8_0 (8.2GB)
4b-text-v1.5-fp16 (7.9GB)
4b-text-v1.5-q2_K (1.6GB)
4b-text-v1.5-q3_K_L (2.2GB)
4b-text-v1.5-q3_K_M (2.0GB)
4b-text-v1.5-q3_K_S (1.9GB)
4b-text-v1.5-q4_0 (2.3GB)
4b-text-v1.5-q4_1 (2.6GB)
4b-text-v1.5-q4_K_M (2.5GB)
4b-text-v1.5-q4_K_S (2.3GB)
4b-text-v1.5-q5_0 (2.8GB)
4b-text-v1.5-q5_1 (3.0GB)
4b-text-v1.5-q5_K_M (2.8GB)
4b-text-v1.5-q5_K_S (2.8GB)
4b-text-v1.5-q6_K (3.2GB)
4b-text-v1.5-q8_0 (4.2GB)
1.8b-chat (1.1GB)
1.8b-chat-v1.5-fp16 (3.7GB)
1.8b-chat-v1.5-q2_K (863MB)
1.8b-chat-v1.5-q3_K_L (1.1GB)
1.8b-chat-v1.5-q3_K_M (1.0GB)
1.8b-chat-v1.5-q3_K_S (970MB)
1.8b-chat-v1.5-q4_0 (1.1GB)
1.8b-chat-v1.5-q4_1 (1.2GB)
1.8b-chat-v1.5-q4_K_M (1.2GB)
1.8b-chat-v1.5-q4_K_S (1.2GB)
1.8b-chat-v1.5-q5_0 (1.3GB)
1.8b-chat-v1.5-q5_1 (1.4GB)
1.8b-chat-v1.5-q5_K_M (1.4GB)
1.8b-chat-v1.5-q5_K_S (1.3GB)
1.8b-chat-v1.5-q6_K (1.6GB)
1.8b-chat-q5_1 (1.4GB)
1.8b-chat-q4_0 (1.1GB)
1.8b-chat-q2_K (853MB)
1.8b-chat-q3_K_S (970MB)
1.8b-chat-q3_K_M (1.0GB)
1.8b-chat-q3_K_L (1.1GB)
1.8b-chat-q5_0 (1.3GB)
1.8b-chat-q4_1 (1.2GB)
1.8b-chat-q4_K_M (1.2GB)
1.8b-chat-q4_K_S (1.2GB)
1.8b-chat-v1.5-q8_0 (2.0GB)
1.8b-chat-fp16 (3.7GB)
1.8b-chat-q5_K_M (1.4GB)
1.8b-chat-q5_K_S (1.3GB)
1.8b-chat-q6_K (1.6GB)
1.8b-chat-q8_0 (2.0GB)
1.8b-text (1.1GB)
1.8b-text-fp16 (3.7GB)
1.8b-text-q2_K (853MB)
1.8b-text-q3_K_L (1.1GB)
1.8b-text-q3_K_M (1.0GB)
1.8b-text-q3_K_S (970MB)
1.8b-text-q4_0 (1.1GB)
1.8b-text-q4_1 (1.2GB)
1.8b-text-q4_K_M (1.2GB)
1.8b-text-q4_K_S (1.2GB)
1.8b-text-q5_0 (1.3GB)
1.8b-text-v1.5-q5_0 (1.3GB)
1.8b-text-v1.5-q4_K_S (1.2GB)
1.8b-text-v1.5-q4_1 (1.2GB)
1.8b-text-q8_0 (2.0GB)
1.8b-text-q6_K (1.6GB)
1.8b-text-q5_K_M (1.4GB)
1.8b-text-v1.5-q4_K_M (1.2GB)
1.8b-text-v1.5-q3_K_S (970MB)
1.8b-text-v1.5-q2_K (863MB)
1.8b-text-v1.5-q3_K_M (1.0GB)
1.8b-text-v1.5-q4_0 (1.1GB)
1.8b-text-q5_1 (1.4GB)
1.8b-text-v1.5-fp16 (3.7GB)
1.8b-text-v1.5-q3_K_L (1.1GB)
1.8b-text-q5_K_S (1.3GB)
1.8b-text-v1.5-q5_1 (1.4GB)
1.8b-text-v1.5-q5_K_M (1.4GB)
1.8b-text-v1.5-q5_K_S (1.3GB)
1.8b-text-v1.5-q6_K (1.6GB)
1.8b-text-v1.5-q8_0 (2.0GB)
0.5b-chat (395MB)
0.5b-chat-v1.5-fp16 (1.2GB)
0.5b-chat-v1.5-q2_K (298MB)
0.5b-chat-v1.5-q3_K_L (364MB)
0.5b-chat-v1.5-q3_K_M (350MB)
0.5b-chat-v1.5-q3_K_S (333MB)
0.5b-chat-v1.5-q4_0 (395MB)
0.5b-chat-v1.5-q4_1 (424MB)
0.5b-chat-v1.5-q4_K_M (407MB)
0.5b-chat-v1.5-q4_K_S (397MB)
0.5b-chat-v1.5-q5_0 (453MB)
0.5b-chat-v1.5-q5_1 (482MB)
0.5b-chat-v1.5-q5_K_M (459MB)
0.5b-chat-v1.5-q5_K_S (453MB)
0.5b-chat-v1.5-q6_K (515MB)
0.5b-chat-v1.5-q8_0 (665MB)
0.5b-text (395MB)
0.5b-text-v1.5-fp16 (1.2GB)
0.5b-text-v1.5-q2_K (298MB)
0.5b-text-v1.5-q3_K_L (364MB)
0.5b-text-v1.5-q3_K_M (350MB)
0.5b-text-v1.5-q3_K_S (333MB)
0.5b-text-v1.5-q4_0 (395MB)
0.5b-text-v1.5-q4_1 (424MB)
0.5b-text-v1.5-q4_K_M (407MB)
0.5b-text-v1.5-q4_K_S (397MB)
0.5b-text-v1.5-q5_0 (453MB)
0.5b-text-v1.5-q5_1 (482MB)
0.5b-text-v1.5-q5_K_M (459MB)
0.5b-text-v1.5-q5_K_S (453MB)
0.5b-text-v1.5-q6_K (515MB)
0.5b-text-v1.5-q8_0 (665MB)
qwen
Qwen 1.5 is a range of large language models developed by Alibaba Cloud, featuring parameter sizes from 0.5 billion to 110 billion.
Category: Tiny
Downloads: 4.1M
Last Updated: 6 months ago Read more about: Qwen 1.5
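The listed file sizes roughly track the quantization level: a file's size divided by the parameter count gives the effective bits stored per weight. The sketch below makes that arithmetic explicit; the constants are deliberately rough (1 GB taken as 10^9 bytes), and real GGUF files add metadata and keep some tensors at higher precision, so the figures are approximate.

```python
# Illustrative sketch: effective bits per weight implied by a listed file
# size. Approximate only; real files carry metadata and mixed precision.
def bits_per_weight(size_gb: float, params_billion: float) -> float:
    return size_gb * 1e9 * 8 / (params_billion * 1e9)
```

For instance, an 8B model whose file is 4.0 GB works out to exactly 4 bits per weight, which is why the q4 files in these listings land near half the size of the q8 ones.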
latest (3.8GB)
70b (39GB)
13b (7.4GB)
7b (3.8GB)
chat (3.8GB)
text (3.8GB)
70b-chat (39GB)
70b-chat-fp16 (138GB)
70b-chat-q2_K (29GB)
70b-chat-q3_K_L (36GB)
70b-chat-q3_K_M (33GB)
70b-chat-q3_K_S (30GB)
70b-chat-q4_0 (39GB)
70b-chat-q4_1 (43GB)
70b-chat-q4_K_M (41GB)
70b-chat-q4_K_S (39GB)
70b-chat-q5_0 (47GB)
70b-chat-q5_1 (52GB)
70b-chat-q5_K_M (49GB)
70b-chat-q5_K_S (47GB)
70b-chat-q6_K (57GB)
70b-chat-q8_0 (73GB)
70b-text (39GB)
70b-text-fp16 (138GB)
70b-text-q2_K (29GB)
70b-text-q3_K_L (36GB)
70b-text-q3_K_M (33GB)
70b-text-q3_K_S (30GB)
70b-text-q4_0 (39GB)
70b-text-q4_1 (43GB)
70b-text-q4_K_M (41GB)
70b-text-q4_K_S (39GB)
70b-text-q5_0 (47GB)
70b-text-q5_1 (52GB)
70b-text-q5_K_M (49GB)
70b-text-q5_K_S (47GB)
70b-text-q6_K (57GB)
70b-text-q8_0 (73GB)
13b-chat (7.4GB)
13b-chat-fp16 (26GB)
13b-chat-q2_K (5.4GB)
13b-chat-q3_K_L (6.9GB)
13b-chat-q3_K_M (6.3GB)
13b-chat-q3_K_S (5.7GB)
13b-chat-q4_0 (7.4GB)
13b-chat-q4_1 (8.2GB)
13b-chat-q4_K_M (7.9GB)
13b-chat-q4_K_S (7.4GB)
13b-chat-q5_0 (9.0GB)
13b-chat-q5_1 (9.8GB)
13b-chat-q5_K_M (9.2GB)
13b-chat-q5_K_S (9.0GB)
13b-chat-q6_K (11GB)
13b-chat-q8_0 (14GB)
13b-text (7.4GB)
13b-text-fp16 (26GB)
13b-text-q2_K (5.4GB)
13b-text-q3_K_L (6.9GB)
13b-text-q3_K_M (6.3GB)
13b-text-q3_K_S (5.7GB)
13b-text-q4_0 (7.4GB)
13b-text-q4_1 (8.2GB)
13b-text-q4_K_M (7.9GB)
13b-text-q4_K_S (7.4GB)
13b-text-q5_0 (9.0GB)
13b-text-q5_1 (9.8GB)
13b-text-q5_K_M (9.2GB)
13b-text-q5_K_S (9.0GB)
13b-text-q6_K (11GB)
13b-text-q8_0 (14GB)
7b-chat (3.8GB)
7b-chat-fp16 (13GB)
7b-chat-q2_K (2.8GB)
7b-chat-q3_K_L (3.6GB)
7b-chat-q3_K_M (3.3GB)
7b-chat-q3_K_S (2.9GB)
7b-chat-q4_0 (3.8GB)
7b-chat-q4_1 (4.2GB)
7b-chat-q4_K_M (4.1GB)
7b-chat-q4_K_S (3.9GB)
7b-chat-q5_0 (4.7GB)
7b-chat-q5_1 (5.1GB)
7b-chat-q5_K_M (4.8GB)
7b-chat-q5_K_S (4.7GB)
7b-chat-q6_K (5.5GB)
7b-chat-q8_0 (7.2GB)
7b-text (3.8GB)
7b-text-fp16 (13GB)
7b-text-q2_K (2.8GB)
7b-text-q3_K_L (3.6GB)
7b-text-q3_K_M (3.3GB)
7b-text-q3_K_S (2.9GB)
7b-text-q4_0 (3.8GB)
7b-text-q4_1 (4.2GB)
7b-text-q4_K_M (4.1GB)
7b-text-q4_K_S (3.9GB)
7b-text-q5_0 (4.7GB)
7b-text-q5_1 (5.1GB)
7b-text-q5_K_M (4.8GB)
7b-text-q5_K_S (4.7GB)
7b-text-q6_K (5.5GB)
7b-text-q8_0 (7.2GB)
llama2
Llama 2 is a collection of foundation language models ranging from 7 billion to 70 billion parameters.
Category: Language
Downloads: 2.3M
Last Updated: 10 months ago
Read more about: Llama 2
latest (3.8GB)
70b (39GB)
34b (19GB)
13b (7.4GB)
7b (3.8GB)
code (3.8GB)
instruct (3.8GB)
python (3.8GB)
70b-code (39GB)
70b-code-fp16 (138GB)
70b-code-q2_K (25GB)
70b-code-q3_K_L (36GB)
70b-code-q3_K_M (33GB)
70b-code-q3_K_S (30GB)
70b-code-q4_0 (39GB)
70b-code-q4_1 (43GB)
70b-code-q4_K_M (41GB)
70b-code-q4_K_S (39GB)
70b-code-q5_0 (47GB)
70b-code-q5_1 (52GB)
70b-code-q5_K_M (49GB)
70b-code-q5_K_S (47GB)
70b-code-q6_K (57GB)
70b-code-q8_0 (73GB)
70b-instruct (39GB)
70b-instruct-fp16 (138GB)
70b-instruct-q2_K (25GB)
70b-instruct-q3_K_L (36GB)
70b-instruct-q3_K_M (33GB)
70b-instruct-q3_K_S (30GB)
70b-instruct-q4_0 (39GB)
70b-instruct-q4_1 (43GB)
70b-instruct-q4_K_M (41GB)
70b-instruct-q4_K_S (39GB)
70b-instruct-q5_0 (47GB)
70b-instruct-q5_1 (52GB)
70b-instruct-q5_K_M (49GB)
70b-instruct-q5_K_S (47GB)
70b-instruct-q6_K (57GB)
70b-instruct-q8_0 (73GB)
70b-python (39GB)
70b-python-fp16 (138GB)
70b-python-q2_K (25GB)
70b-python-q3_K_L (36GB)
70b-python-q3_K_M (33GB)
70b-python-q3_K_S (30GB)
70b-python-q4_0 (39GB)
70b-python-q4_1 (43GB)
70b-python-q4_K_M (41GB)
70b-python-q4_K_S (39GB)
70b-python-q5_0 (47GB)
70b-python-q5_1 (52GB)
70b-python-q5_K_M (49GB)
70b-python-q5_K_S (47GB)
70b-python-q6_K (57GB)
70b-python-q8_0 (73GB)
34b-code (19GB)
34b-code-q2_K (14GB)
34b-code-q3_K_L (18GB)
34b-code-q3_K_M (16GB)
34b-code-q3_K_S (15GB)
34b-code-q4_0 (19GB)
34b-code-q4_1 (21GB)
34b-code-q4_K_M (20GB)
34b-code-q4_K_S (19GB)
34b-code-q5_0 (23GB)
34b-code-q5_1 (25GB)
34b-code-q5_K_M (24GB)
34b-code-q5_K_S (23GB)
34b-code-q6_K (28GB)
34b-code-q8_0 (36GB)
34b-instruct (19GB)
34b-instruct-fp16 (67GB)
34b-instruct-q2_K (14GB)
34b-instruct-q3_K_L (18GB)
34b-instruct-q3_K_M (16GB)
34b-instruct-q3_K_S (15GB)
34b-instruct-q4_0 (19GB)
34b-instruct-q4_1 (21GB)
34b-instruct-q4_K_M (20GB)
34b-instruct-q4_K_S (19GB)
34b-instruct-q5_0 (23GB)
34b-instruct-q5_1 (25GB)
34b-instruct-q5_K_M (24GB)
34b-instruct-q5_K_S (23GB)
34b-instruct-q6_K (28GB)
34b-instruct-q8_0 (36GB)
34b-python (19GB)
34b-python-fp16 (67GB)
34b-python-q2_K (14GB)
34b-python-q3_K_L (18GB)
34b-python-q3_K_M (16GB)
34b-python-q3_K_S (15GB)
34b-python-q4_0 (19GB)
34b-python-q4_1 (21GB)
34b-python-q4_K_M (20GB)
34b-python-q4_K_S (19GB)
34b-python-q5_0 (23GB)
34b-python-q5_1 (25GB)
34b-python-q5_K_M (24GB)
34b-python-q5_K_S (23GB)
34b-python-q6_K (28GB)
34b-python-q8_0 (36GB)
13b-code (7.4GB)
13b-code-fp16 (26GB)
13b-code-q2_K (5.4GB)
13b-code-q3_K_L (6.9GB)
13b-code-q3_K_M (6.3GB)
13b-code-q3_K_S (5.7GB)
13b-code-q4_0 (7.4GB)
13b-code-q4_1 (8.2GB)
13b-code-q4_K_M (7.9GB)
13b-code-q4_K_S (7.4GB)
13b-code-q5_0 (9.0GB)
13b-code-q5_1 (9.8GB)
13b-code-q5_K_M (9.2GB)
13b-code-q5_K_S (9.0GB)
13b-code-q6_K (11GB)
13b-code-q8_0 (14GB)
13b-instruct (7.4GB)
13b-instruct-fp16 (26GB)
13b-instruct-q2_K (5.4GB)
13b-instruct-q3_K_L (6.9GB)
13b-instruct-q3_K_M (6.3GB)
13b-instruct-q3_K_S (5.7GB)
13b-instruct-q4_0 (7.4GB)
13b-instruct-q4_1 (8.2GB)
13b-instruct-q4_K_M (7.9GB)
13b-instruct-q4_K_S (7.4GB)
13b-instruct-q5_0 (9.0GB)
13b-instruct-q5_1 (9.8GB)
13b-instruct-q5_K_M (9.2GB)
13b-instruct-q5_K_S (9.0GB)
13b-instruct-q6_K (11GB)
13b-instruct-q8_0 (14GB)
13b-python (7.4GB)
13b-python-fp16 (26GB)
13b-python-q2_K (5.4GB)
13b-python-q3_K_L (6.9GB)
13b-python-q3_K_M (6.3GB)
13b-python-q3_K_S (5.7GB)
13b-python-q4_0 (7.4GB)
13b-python-q4_1 (8.2GB)
13b-python-q4_K_M (7.9GB)
13b-python-q4_K_S (7.4GB)
13b-python-q5_0 (9.0GB)
13b-python-q5_1 (9.8GB)
13b-python-q5_K_M (9.2GB)
13b-python-q5_K_S (9.0GB)
13b-python-q6_K (11GB)
13b-python-q8_0 (14GB)
7b-code (3.8GB)
7b-code-fp16 (13GB)
7b-code-q2_K (2.8GB)
7b-code-q3_K_L (3.6GB)
7b-code-q3_K_M (3.3GB)
7b-code-q3_K_S (2.9GB)
7b-code-q4_0 (3.8GB)
7b-code-q4_1 (4.2GB)
7b-code-q4_K_M (4.1GB)
7b-code-q4_K_S (3.9GB)
7b-code-q5_0 (4.7GB)
7b-code-q5_1 (5.1GB)
7b-code-q5_K_M (4.8GB)
7b-code-q5_K_S (4.7GB)
7b-code-q6_K (5.5GB)
7b-code-q8_0 (7.2GB)
7b-instruct (3.8GB)
7b-instruct-fp16 (13GB)
7b-instruct-q2_K (2.8GB)
7b-instruct-q3_K_L (3.6GB)
7b-instruct-q3_K_M (3.3GB)
7b-instruct-q3_K_S (2.9GB)
7b-instruct-q4_0 (3.8GB)
7b-instruct-q4_1 (4.2GB)
7b-instruct-q4_K_M (4.1GB)
7b-instruct-q4_K_S (3.9GB)
7b-instruct-q5_0 (4.7GB)
7b-instruct-q5_1 (5.1GB)
7b-instruct-q5_K_M (4.8GB)
7b-instruct-q5_K_S (4.7GB)
7b-instruct-q6_K (5.5GB)
7b-instruct-q8_0 (7.2GB)
7b-python (3.8GB)
7b-python-fp16 (13GB)
7b-python-q2_K (2.8GB)
7b-python-q3_K_L (3.6GB)
7b-python-q3_K_M (3.3GB)
7b-python-q3_K_S (2.9GB)
7b-python-q4_0 (3.8GB)
7b-python-q4_1 (4.2GB)
7b-python-q4_K_M (4.1GB)
7b-python-q4_K_S (3.9GB)
7b-python-q5_0 (4.7GB)
7b-python-q5_1 (5.1GB)
7b-python-q5_K_M (4.8GB)
7b-python-q5_K_S (4.7GB)
7b-python-q6_K (5.5GB)
7b-python-q8_0 (7.2GB)
codellama
A powerful language model that can generate and discuss code from text prompts, using natural language to streamline coding tasks.
Category: Coding
Downloads: 1.5M
Last Updated: 3 months ago
Read more about: Code Llama
latest (26GB)
8x22b (80GB)
8x7b (26GB)
v2.5 (26GB)
v2.6 (26GB)
v2.6.1 (26GB)
v2.7 (26GB)
8x7b-v2.5 (26GB)
8x7b-v2.5-fp16 (93GB)
8x7b-v2.5-q2_K (16GB)
8x7b-v2.5-q3_K_L (20GB)
8x7b-v2.5-q3_K_M (20GB)
8x7b-v2.5-q3_K_S (20GB)
8x7b-v2.5-q4_0 (26GB)
8x7b-v2.5-q4_1 (29GB)
8x7b-v2.5-q4_K_M (26GB)
8x7b-v2.5-q4_K_S (26GB)
8x7b-v2.5-q5_0 (32GB)
8x7b-v2.5-q5_1 (35GB)
8x7b-v2.5-q5_K_M (32GB)
8x7b-v2.5-q5_K_S (32GB)
8x7b-v2.5-q6_K (38GB)
8x7b-v2.5-q8_0 (50GB)
8x7b-v2.6 (26GB)
8x7b-v2.6-fp16 (93GB)
8x7b-v2.6-q2_K (16GB)
8x7b-v2.6-q3_K_L (20GB)
8x7b-v2.6-q3_K_M (20GB)
8x7b-v2.6-q3_K_S (20GB)
8x7b-v2.6-q4_0 (26GB)
8x7b-v2.6-q4_1 (29GB)
8x7b-v2.6-q4_K_M (26GB)
8x7b-v2.6-q4_K_S (26GB)
8x7b-v2.6-q5_0 (32GB)
8x7b-v2.6-q5_1 (35GB)
8x7b-v2.6-q5_K_M (32GB)
8x7b-v2.6-q5_K_S (32GB)
8x7b-v2.6-q6_K (38GB)
8x7b-v2.6-q8_0 (50GB)
8x7b-v2.6.1 (26GB)
8x7b-v2.6.1-fp16 (93GB)
8x7b-v2.6.1-q2_K (16GB)
8x7b-v2.6.1-q3_K_L (20GB)
8x7b-v2.6.1-q3_K_M (20GB)
8x7b-v2.6.1-q3_K_S (20GB)
8x7b-v2.6.1-q4_0 (26GB)
8x7b-v2.6.1-q4_1 (29GB)
8x7b-v2.6.1-q4_K_M (26GB)
8x7b-v2.6.1-q4_K_S (26GB)
8x7b-v2.6.1-q5_0 (32GB)
8x7b-v2.6.1-q5_1 (35GB)
8x7b-v2.6.1-q5_K_M (32GB)
8x7b-v2.6.1-q5_K_S (32GB)
8x7b-v2.6.1-q6_K (38GB)
8x7b-v2.6.1-q8_0 (50GB)
8x7b-v2.7 (26GB)
8x7b-v2.7-fp16 (93GB)
8x7b-v2.7-q2_K (16GB)
8x7b-v2.7-q3_K_L (20GB)
8x7b-v2.7-q3_K_M (20GB)
8x7b-v2.7-q3_K_S (20GB)
8x7b-v2.7-q4_0 (26GB)
8x7b-v2.7-q4_1 (29GB)
8x7b-v2.7-q4_K_M (26GB)
8x7b-v2.7-q4_K_S (26GB)
8x7b-v2.7-q5_0 (32GB)
8x7b-v2.7-q5_1 (35GB)
8x7b-v2.7-q5_K_M (32GB)
8x7b-v2.7-q5_K_S (32GB)
8x7b-v2.7-q6_K (38GB)
8x7b-v2.7-q8_0 (50GB)
8x22b-v2.9 (80GB)
8x22b-v2.9-fp16 (281GB)
8x22b-v2.9-q2_K (52GB)
8x22b-v2.9-q3_K_L (73GB)
8x22b-v2.9-q3_K_M (68GB)
8x22b-v2.9-q3_K_S (61GB)
8x22b-v2.9-q4_0 (80GB)
8x22b-v2.9-q4_1 (88GB)
8x22b-v2.9-q4_K_M (86GB)
8x22b-v2.9-q4_K_S (80GB)
8x22b-v2.9-q5_0 (97GB)
8x22b-v2.9-q5_1 (106GB)
8x22b-v2.9-q5_K_M (100GB)
8x22b-v2.9-q5_K_S (97GB)
8x22b-v2.9-q6_K (116GB)
8x22b-v2.9-q8_0 (149GB)
dolphin-mixtral
Eric Hartford's uncensored Dolphin fine-tunes of the 8x7b and 8x22b Mixtral mixture-of-experts models, particularly effective for coding tasks.
Category: Uncensored
Downloads: 428.5K
Last Updated: 6 months ago
Read more about: Dolphin Mixtral
latest (3.8GB)
70b (39GB)
7b (3.8GB)
70b-chat (39GB)
70b-chat-q2_K (29GB)
70b-chat-q3_K_L (36GB)
70b-chat-q3_K_M (33GB)
70b-chat-q3_K_S (30GB)
70b-chat-q4_0 (39GB)
70b-chat-q4_1 (43GB)
70b-chat-q4_K_M (41GB)
70b-chat-q4_K_S (39GB)
70b-chat-q5_0 (47GB)
70b-chat-q5_1 (52GB)
70b-chat-q5_K_M (49GB)
70b-chat-q5_K_S (47GB)
70b-chat-q6_K (57GB)
70b-chat-q8_0 (73GB)
7b-chat (3.8GB)
7b-chat-fp16 (13GB)
7b-chat-q2_K (2.8GB)
7b-chat-q3_K_L (3.6GB)
7b-chat-q3_K_M (3.3GB)
7b-chat-q3_K_S (2.9GB)
7b-chat-q4_0 (3.8GB)
7b-chat-q4_1 (4.2GB)
7b-chat-q4_K_M (4.1GB)
7b-chat-q4_K_S (3.9GB)
7b-chat-q5_0 (4.7GB)
7b-chat-q5_1 (5.1GB)
7b-chat-q5_K_M (4.8GB)
7b-chat-q5_K_S (4.7GB)
7b-chat-q6_K (5.5GB)
7b-chat-q8_0 (7.2GB)
llama2-uncensored
The Uncensored Llama 2 model was developed by George Sung and Jarrad Hope.
Category: Uncensored
Downloads: 349.4K
Last Updated: 1 year ago
Read more about: Llama 2 Uncensored
latest (776MB)
33b (19GB)
6.7b (3.8GB)
1.3b (776MB)
base (776MB)
instruct (776MB)
33b-base (19GB)
33b-base-fp16 (67GB)
33b-base-q2_K (14GB)
33b-base-q3_K_L (18GB)
33b-base-q3_K_M (16GB)
33b-base-q3_K_S (14GB)
33b-base-q4_0 (19GB)
33b-base-q4_1 (21GB)
33b-base-q4_K_M (20GB)
33b-base-q4_K_S (19GB)
33b-base-q5_0 (23GB)
33b-base-q5_1 (25GB)
33b-base-q5_K_M (24GB)
33b-base-q5_K_S (23GB)
33b-base-q6_K (27GB)
33b-base-q8_0 (35GB)
33b-instruct (19GB)
33b-instruct-fp16 (67GB)
33b-instruct-q2_K (14GB)
33b-instruct-q3_K_L (18GB)
33b-instruct-q3_K_M (16GB)
33b-instruct-q3_K_S (14GB)
33b-instruct-q4_0 (19GB)
33b-instruct-q4_1 (21GB)
33b-instruct-q4_K_M (20GB)
33b-instruct-q4_K_S (19GB)
33b-instruct-q5_0 (23GB)
33b-instruct-q5_1 (25GB)
33b-instruct-q5_K_M (24GB)
33b-instruct-q5_K_S (23GB)
33b-instruct-q6_K (27GB)
33b-instruct-q8_0 (35GB)
6.7b-base (3.8GB)
6.7b-base-fp16 (13GB)
6.7b-base-q2_K (2.8GB)
6.7b-base-q3_K_L (3.6GB)
6.7b-base-q3_K_M (3.3GB)
6.7b-base-q3_K_S (3.0GB)
6.7b-base-q4_0 (3.8GB)
6.7b-base-q4_1 (4.2GB)
6.7b-base-q4_K_M (4.1GB)
6.7b-base-q4_K_S (3.9GB)
6.7b-base-q5_0 (4.7GB)
6.7b-base-q5_1 (5.1GB)
6.7b-base-q5_K_M (4.8GB)
6.7b-base-q5_K_S (4.7GB)
6.7b-base-q6_K (5.5GB)
6.7b-base-q8_0 (7.2GB)
6.7b-instruct (3.8GB)
6.7b-instruct-fp16 (13GB)
6.7b-instruct-q2_K (2.8GB)
6.7b-instruct-q3_K_L (3.6GB)
6.7b-instruct-q3_K_M (3.3GB)
6.7b-instruct-q3_K_S (3.0GB)
6.7b-instruct-q4_0 (3.8GB)
6.7b-instruct-q4_1 (4.2GB)
6.7b-instruct-q4_K_M (4.1GB)
6.7b-instruct-q4_K_S (3.9GB)
6.7b-instruct-q5_0 (4.7GB)
6.7b-instruct-q5_1 (5.1GB)
6.7b-instruct-q5_K_M (4.8GB)
6.7b-instruct-q5_K_S (4.7GB)
6.7b-instruct-q6_K (5.5GB)
6.7b-instruct-q8_0 (7.2GB)
1.3b-base (776MB)
1.3b-base-fp16 (2.7GB)
1.3b-base-q2_K (632MB)
1.3b-base-q3_K_L (745MB)
1.3b-base-q3_K_M (705MB)
1.3b-base-q3_K_S (659MB)
1.3b-base-q4_0 (776MB)
1.3b-base-q4_1 (856MB)
1.3b-base-q4_K_M (874MB)
1.3b-base-q4_K_S (815MB)
1.3b-base-q5_0 (936MB)
1.3b-base-q5_1 (1.0GB)
1.3b-base-q5_K_M (1.0GB)
1.3b-base-q5_K_S (953MB)
1.3b-base-q6_K (1.2GB)
1.3b-base-q8_0 (1.4GB)
1.3b-instruct (776MB)
1.3b-instruct-fp16 (2.7GB)
1.3b-instruct-q2_K (632MB)
1.3b-instruct-q3_K_L (745MB)
1.3b-instruct-q3_K_M (705MB)
1.3b-instruct-q3_K_S (659MB)
1.3b-instruct-q4_0 (776MB)
1.3b-instruct-q4_1 (856MB)
1.3b-instruct-q4_K_M (874MB)
1.3b-instruct-q4_K_S (815MB)
1.3b-instruct-q5_0 (936MB)
1.3b-instruct-q5_1 (1.0GB)
1.3b-instruct-q5_K_M (1.0GB)
1.3b-instruct-q5_K_S (953MB)
1.3b-instruct-q6_K (1.2GB)
1.3b-instruct-q8_0 (1.4GB)
deepseek-coder
DeepSeek Coder is a capable coding model trained on two trillion tokens of code and natural language, which lets it perform well across a range of coding tasks.
Category: Tiny, Coding
Downloads: 361.2K
Last Updated: 10 months ago
Read more about: DeepSeek-Coder
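The file sizes in these lists track parameter count times bits per weight. As a back-of-the-envelope check (the bit widths below are approximations; real GGUF files add metadata and mix tensor precisions, so treat results as ballpark figures):

```python
# Rough download-size estimate: parameters x bits-per-weight / 8.
# Assumed approximate bit widths per quantization type, not exact GGUF accounting.
BITS_PER_WEIGHT = {
    "q2_K": 2.6, "q4_0": 4.5, "q4_K_M": 4.8,
    "q5_K_M": 5.7, "q6_K": 6.6, "q8_0": 8.5, "fp16": 16.0,
}

def est_gb(params_billion, quant):
    bits = params_billion * 1e9 * BITS_PER_WEIGHT[quant]
    return bits / 8 / 1e9  # decimal gigabytes

print(round(est_gb(6.7, "q4_0"), 1))  # ~3.8, close to the 6.7b-base-q4_0 entry above
print(round(est_gb(6.7, "fp16"), 1))  # ~13.4, close to the listed 13GB
```

The same arithmetic explains why q8_0 files run roughly twice the size of q4_0 files throughout the catalog.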
latest (1.6GB)
2.7b (1.6GB)
chat (1.6GB)
2.7b-chat-v2-fp16 (5.6GB)
2.7b-chat-v2-q2_K (1.2GB)
2.7b-chat-v2-q3_K_L (1.6GB)
2.7b-chat-v2-q3_K_M (1.5GB)
2.7b-chat-v2-q3_K_S (1.3GB)
2.7b-chat-v2-q4_0 (1.6GB)
2.7b-chat-v2-q4_1 (1.8GB)
2.7b-chat-v2-q4_K_M (1.8GB)
2.7b-chat-v2-q4_K_S (1.6GB)
2.7b-chat-v2-q5_0 (1.9GB)
2.7b-chat-v2-q5_1 (2.1GB)
2.7b-chat-v2-q5_K_M (2.1GB)
2.7b-chat-v2-q5_K_S (1.9GB)
2.7b-chat-v2-q6_K (2.3GB)
2.7b-chat-v2-q8_0 (3.0GB)
phi
Phi-2 is a 2.7 billion parameter language model developed by Microsoft Research, showcasing exceptional reasoning and language comprehension skills.
Category: Language
Downloads: 373.9K
Last Updated: 10 months ago
Read more about: Phi-2
latest (4.1GB)
7b (4.1GB)
v2 (4.1GB)
v2.1 (4.1GB)
v2.2 (4.1GB)
v2.2.1 (4.1GB)
v2.6 (4.1GB)
v2.8 (4.1GB)
7b-v2 (4.1GB)
7b-v2-fp16 (14GB)
7b-v2-q2_K (3.1GB)
7b-v2-q3_K_L (3.8GB)
7b-v2-q3_K_M (3.5GB)
7b-v2-q3_K_S (3.2GB)
7b-v2-q4_0 (4.1GB)
7b-v2-q4_1 (4.6GB)
7b-v2-q4_K_M (4.4GB)
7b-v2-q4_K_S (4.1GB)
7b-v2-q5_0 (5.0GB)
7b-v2-q5_1 (5.4GB)
7b-v2-q5_K_M (5.1GB)
7b-v2-q5_K_S (5.0GB)
7b-v2-q6_K (5.9GB)
7b-v2-q8_0 (7.7GB)
7b-v2.1 (4.1GB)
7b-v2.1-fp16 (14GB)
7b-v2.1-q2_K (3.1GB)
7b-v2.1-q3_K_L (3.8GB)
7b-v2.1-q3_K_M (3.5GB)
7b-v2.1-q3_K_S (3.2GB)
7b-v2.1-q4_0 (4.1GB)
7b-v2.1-q4_1 (4.6GB)
7b-v2.1-q4_K_M (4.4GB)
7b-v2.1-q4_K_S (4.1GB)
7b-v2.1-q5_0 (5.0GB)
7b-v2.1-q5_1 (5.4GB)
7b-v2.1-q5_K_M (5.1GB)
7b-v2.1-q5_K_S (5.0GB)
7b-v2.1-q6_K (5.9GB)
7b-v2.1-q8_0 (7.7GB)
7b-v2.2 (4.1GB)
7b-v2.2-fp16 (14GB)
7b-v2.2-q2_K (3.1GB)
7b-v2.2-q3_K_L (3.8GB)
7b-v2.2-q3_K_M (3.5GB)
7b-v2.2-q3_K_S (3.2GB)
7b-v2.2-q4_0 (4.1GB)
7b-v2.2-q4_1 (4.6GB)
7b-v2.2-q4_K_M (4.4GB)
7b-v2.2-q4_K_S (4.1GB)
7b-v2.2-q5_0 (5.0GB)
7b-v2.2-q5_1 (5.4GB)
7b-v2.2-q5_K_M (5.1GB)
7b-v2.2-q5_K_S (5.0GB)
7b-v2.2-q6_K (5.9GB)
7b-v2.2-q8_0 (7.7GB)
7b-v2.2.1 (4.1GB)
7b-v2.2.1-fp16 (14GB)
7b-v2.2.1-q2_K (3.1GB)
7b-v2.2.1-q3_K_L (3.8GB)
7b-v2.2.1-q3_K_M (3.5GB)
7b-v2.2.1-q3_K_S (3.2GB)
7b-v2.2.1-q4_0 (4.1GB)
7b-v2.2.1-q4_1 (4.6GB)
7b-v2.2.1-q4_K_M (4.4GB)
7b-v2.2.1-q4_K_S (4.1GB)
7b-v2.2.1-q5_0 (5.0GB)
7b-v2.2.1-q5_1 (5.4GB)
7b-v2.2.1-q5_K_M (5.1GB)
7b-v2.2.1-q5_K_S (5.0GB)
7b-v2.2.1-q6_K (5.9GB)
7b-v2.2.1-q8_0 (7.7GB)
7b-v2.6 (4.1GB)
7b-v2.6-fp16 (14GB)
7b-v2.6-q2_K (3.1GB)
7b-v2.6-q3_K_L (3.8GB)
7b-v2.6-q3_K_M (3.5GB)
7b-v2.6-q3_K_S (3.2GB)
7b-v2.6-q4_0 (4.1GB)
7b-v2.6-q4_1 (4.6GB)
7b-v2.6-q4_K_M (4.4GB)
7b-v2.6-q4_K_S (4.1GB)
7b-v2.6-q5_0 (5.0GB)
7b-v2.6-q5_1 (5.4GB)
7b-v2.6-q5_K_M (5.1GB)
7b-v2.6-q5_K_S (5.0GB)
7b-v2.6-q6_K (5.9GB)
7b-v2.6-q8_0 (7.7GB)
7b-v2.6-dpo-laser (4.1GB)
7b-v2.6-dpo-laser-fp16 (14GB)
7b-v2.6-dpo-laser-q2_K (3.1GB)
7b-v2.6-dpo-laser-q3_K_L (3.8GB)
7b-v2.6-dpo-laser-q3_K_M (3.5GB)
7b-v2.6-dpo-laser-q3_K_S (3.2GB)
7b-v2.6-dpo-laser-q4_0 (4.1GB)
7b-v2.6-dpo-laser-q4_1 (4.6GB)
7b-v2.6-dpo-laser-q4_K_M (4.4GB)
7b-v2.6-dpo-laser-q4_K_S (4.1GB)
7b-v2.6-dpo-laser-q5_0 (5.0GB)
7b-v2.6-dpo-laser-q5_1 (5.4GB)
7b-v2.6-dpo-laser-q5_K_M (5.1GB)
7b-v2.6-dpo-laser-q5_K_S (5.0GB)
7b-v2.6-dpo-laser-q6_K (5.9GB)
7b-v2.6-dpo-laser-q8_0 (7.7GB)
7b-v2.8 (4.1GB)
7b-v2.8-fp16 (14GB)
7b-v2.8-q2_K (2.7GB)
7b-v2.8-q3_K_L (3.8GB)
7b-v2.8-q3_K_M (3.5GB)
7b-v2.8-q3_K_S (3.2GB)
7b-v2.8-q4_0 (4.1GB)
7b-v2.8-q4_1 (4.6GB)
7b-v2.8-q4_K_M (4.4GB)
7b-v2.8-q4_K_S (4.1GB)
7b-v2.8-q5_0 (5.0GB)
7b-v2.8-q5_1 (5.4GB)
7b-v2.8-q5_K_M (5.1GB)
7b-v2.8-q5_K_S (5.0GB)
7b-v2.8-q6_K (5.9GB)
7b-v2.8-q8_0 (7.7GB)
dolphin-mistral
The uncensored Dolphin model, built on Mistral and updated to version 2.8, performs exceptionally well at coding tasks.
Category: Uncensored
Downloads: 257.2K
Last Updated: 7 months ago
Read more about: Dolphin Mistral
latest (2.0GB)
70b (39GB)
13b (7.4GB)
7b (3.8GB)
3b (2.0GB)
70b-v3 (39GB)
70b-v3-fp16 (138GB)
70b-v3-q2_K (29GB)
70b-v3-q3_K_L (36GB)
70b-v3-q3_K_M (33GB)
70b-v3-q3_K_S (30GB)
70b-v3-q4_0 (39GB)
70b-v3-q4_1 (43GB)
70b-v3-q4_K_M (41GB)
70b-v3-q4_K_S (39GB)
70b-v3-q5_0 (47GB)
70b-v3-q5_1 (52GB)
70b-v3-q5_K_M (49GB)
70b-v3-q5_K_S (47GB)
70b-v3-q6_K (57GB)
70b-v3-q8_0 (73GB)
13b-v2-fp16 (26GB)
13b-v2-q2_K (5.4GB)
13b-v2-q3_K_L (6.9GB)
13b-v2-q3_K_M (6.3GB)
13b-v2-q3_K_S (5.7GB)
13b-v2-q4_0 (7.4GB)
13b-v2-q4_1 (8.2GB)
13b-v2-q4_K_M (7.9GB)
13b-v2-q4_K_S (7.4GB)
13b-v2-q5_0 (9.0GB)
13b-v2-q5_1 (9.8GB)
13b-v2-q5_K_M (9.2GB)
13b-v2-q5_K_S (9.0GB)
13b-v2-q6_K (11GB)
13b-v2-q8_0 (14GB)
13b-v3 (7.4GB)
13b-v3-fp16 (26GB)
13b-v3-q2_K (5.4GB)
13b-v3-q3_K_L (6.9GB)
13b-v3-q3_K_M (6.3GB)
13b-v3-q3_K_S (5.7GB)
13b-v3-q4_0 (7.4GB)
13b-v3-q4_1 (8.2GB)
13b-v3-q4_K_M (7.9GB)
13b-v3-q4_K_S (7.4GB)
13b-v3-q5_0 (9.0GB)
13b-v3-q5_1 (9.8GB)
13b-v3-q5_K_M (9.2GB)
13b-v3-q5_K_S (9.0GB)
13b-v3-q6_K (11GB)
13b-v3-q8_0 (14GB)
7b-v3 (3.8GB)
13b-fp16 (26GB)
13b-q2_K (5.4GB)
13b-q3_K_L (6.9GB)
13b-q3_K_M (6.3GB)
13b-q3_K_S (5.7GB)
13b-q4_0 (7.4GB)
13b-q4_1 (8.2GB)
13b-q4_K_M (7.9GB)
13b-q4_K_S (7.4GB)
13b-q5_0 (9.0GB)
13b-q5_1 (9.8GB)
13b-q5_K_M (9.2GB)
13b-q5_K_S (9.0GB)
13b-q6_K (11GB)
13b-q8_0 (14GB)
7b-v2-fp16 (13GB)
7b-v2-q2_K (2.8GB)
7b-v2-q3_K_L (3.6GB)
7b-v2-q3_K_M (3.3GB)
7b-v2-q3_K_S (2.9GB)
7b-v2-q4_0 (3.8GB)
7b-v2-q4_1 (4.2GB)
7b-v2-q4_K_M (4.1GB)
7b-v2-q4_K_S (3.9GB)
7b-v2-q5_0 (4.7GB)
7b-v2-q5_1 (5.1GB)
7b-v2-q5_K_M (4.8GB)
7b-v2-q5_K_S (4.7GB)
7b-v2-q6_K (5.5GB)
7b-v2-q8_0 (7.2GB)
7b-v3-fp16 (13GB)
7b-v3-q2_K (2.8GB)
7b-v3-q3_K_L (3.6GB)
7b-v3-q3_K_M (3.3GB)
7b-v3-q3_K_S (2.9GB)
7b-v3-q4_0 (3.8GB)
7b-v3-q4_1 (4.2GB)
7b-v3-q4_K_M (4.1GB)
7b-v3-q4_K_S (3.9GB)
7b-v3-q5_0 (4.7GB)
7b-v3-q5_1 (5.1GB)
7b-v3-q5_K_M (4.8GB)
7b-v3-q5_K_S (4.7GB)
7b-v3-q6_K (5.5GB)
7b-v3-q8_0 (7.2GB)
7b-fp16 (13GB)
7b-q2_K (2.8GB)
7b-q3_K_L (3.6GB)
7b-q3_K_M (3.3GB)
7b-q3_K_S (2.9GB)
7b-q4_0 (3.8GB)
7b-q4_1 (4.2GB)
7b-q4_K_M (4.1GB)
7b-q4_K_S (3.9GB)
7b-q5_0 (4.7GB)
7b-q5_1 (5.1GB)
7b-q5_K_M (4.8GB)
7b-q5_K_S (4.7GB)
7b-q6_K (5.5GB)
7b-q8_0 (7.2GB)
3b-fp16 (6.9GB)
3b-q4_0 (2.0GB)
3b-q4_1 (2.2GB)
3b-q5_0 (2.4GB)
3b-q5_1 (2.6GB)
3b-q8_0 (3.6GB)
orca-mini
A versatile model with a parameter range of 3 billion to 70 billion, designed for use on entry-level hardware.
Category: Language
Downloads: 227.5K
Last Updated: 1 year ago
Read more about: Orca Mini
latest (2.7GB)
4b (2.7GB)
4b-instruct-fp16 (8.4GB)
4b-instruct-q2_K (1.9GB)
4b-instruct-q3_K_S (2.1GB)
4b-instruct-q3_K_M (2.3GB)
4b-instruct-q3_K_L (2.5GB)
4b-instruct-q4_0 (2.6GB)
4b-instruct-q4_1 (2.8GB)
4b-instruct-q4_K_S (2.6GB)
4b-instruct-q4_K_M (2.7GB)
4b-instruct-q5_0 (3.0GB)
4b-instruct-q5_1 (3.2GB)
4b-instruct-q5_K_S (3.0GB)
4b-instruct-q5_K_M (3.1GB)
4b-instruct-q6_K (3.4GB)
4b-instruct-q8_0 (4.5GB)
nemotron-mini
A small language model from NVIDIA, optimized for commercial use in roleplay, retrieval-augmented generation (RAG) question answering, and function calling.
Category: Language
Downloads: 31.8K
Last Updated: 1 month ago
latest (4.7GB)
70b (40GB)
8b (4.7GB)
256k (4.7GB)
v2.9 (4.7GB)
70b-v2.9 (40GB)
70b-v2.9-fp16 (141GB)
70b-v2.9-q2_K (26GB)
70b-v2.9-q3_K_L (37GB)
70b-v2.9-q3_K_M (34GB)
70b-v2.9-q3_K_S (31GB)
70b-v2.9-q4_0 (40GB)
70b-v2.9-q4_1 (44GB)
70b-v2.9-q4_K_M (43GB)
70b-v2.9-q4_K_S (40GB)
70b-v2.9-q5_0 (49GB)
70b-v2.9-q5_1 (53GB)
70b-v2.9-q5_K_M (50GB)
70b-v2.9-q5_K_S (49GB)
70b-v2.9-q6_K (58GB)
70b-v2.9-q8_0 (75GB)
8b-256k (4.7GB)
8b-256k-v2.9 (4.7GB)
8b-256k-v2.9-fp16 (16GB)
8b-256k-v2.9-q2_K (3.2GB)
8b-256k-v2.9-q3_K_L (4.3GB)
8b-256k-v2.9-q3_K_M (4.0GB)
8b-256k-v2.9-q3_K_S (3.7GB)
8b-256k-v2.9-q4_0 (4.7GB)
8b-256k-v2.9-q4_1 (5.1GB)
8b-256k-v2.9-q4_K_M (4.9GB)
8b-256k-v2.9-q4_K_S (4.7GB)
8b-256k-v2.9-q5_0 (5.6GB)
8b-256k-v2.9-q5_1 (6.1GB)
8b-256k-v2.9-q5_K_M (5.7GB)
8b-256k-v2.9-q5_K_S (5.6GB)
8b-256k-v2.9-q6_K (6.6GB)
8b-256k-v2.9-q8_0 (8.5GB)
8b-v2.9 (4.7GB)
8b-v2.9-fp16 (16GB)
8b-v2.9-q2_K (3.2GB)
8b-v2.9-q3_K_L (4.3GB)
8b-v2.9-q3_K_M (4.0GB)
8b-v2.9-q3_K_S (3.7GB)
8b-v2.9-q4_0 (4.7GB)
8b-v2.9-q4_1 (5.1GB)
8b-v2.9-q4_K_M (4.9GB)
8b-v2.9-q4_K_S (4.7GB)
8b-v2.9-q5_0 (5.6GB)
8b-v2.9-q5_1 (6.1GB)
8b-v2.9-q5_K_M (5.7GB)
8b-v2.9-q5_K_S (5.6GB)
8b-v2.9-q6_K (6.6GB)
8b-v2.9-q8_0 (8.5GB)
dolphin-llama3
Dolphin 2.9, developed by Eric Hartford and built on Llama 3, comes in 8B and 70B sizes and offers a range of instruction-following, conversational, and coding capabilities.
Category: Language
Downloads: 232.9K
Last Updated: 5 months ago
Read more about: Dolphin Llama 3
latest (4.7GB)
7b (4.7GB)
7b-fp16 (15GB)
7b-q2_K (3.0GB)
7b-q3_K_S (3.5GB)
7b-q3_K_M (3.8GB)
7b-q3_K_L (4.1GB)
7b-q4_0 (4.5GB)
7b-q4_1 (4.9GB)
7b-q4_K_S (4.5GB)
7b-q4_K_M (4.7GB)
7b-q5_0 (5.4GB)
7b-q5_1 (5.8GB)
7b-q5_K_S (5.4GB)
7b-q5_K_M (5.5GB)
7b-q6_K (6.4GB)
7b-q8_0 (8.2GB)
bespoke-minicheck
A state-of-the-art fact-checking model developed by Bespoke Labs.
Category: Language
Downloads: 8,554
Last Updated: 1 month ago
latest (4.1GB)
7b (4.1GB)
7b-fp16 (14GB)
7b-q2_K (3.1GB)
7b-q3_K_L (3.8GB)
7b-q3_K_M (3.5GB)
7b-q3_K_S (3.2GB)
7b-q4_0 (4.1GB)
7b-q4_1 (4.6GB)
7b-q4_K_M (4.4GB)
7b-q4_K_S (4.1GB)
7b-q5_0 (5.0GB)
7b-q5_1 (5.4GB)
7b-q5_K_M (5.1GB)
7b-q5_K_S (5.0GB)
7b-q6_K (5.9GB)
7b-q8_0 (7.7GB)
mistral-openorca
Mistral OpenOrca is a 7 billion parameter model that has been fine-tuned from the Mistral 7B model with the OpenOrca dataset.
Category: Language
Downloads: 159.2K
Last Updated: 1 year ago
Read more about: Mistral OpenOrca
latest (1.7GB)
15b (9.1GB)
7b (4.0GB)
3b (1.7GB)
instruct (9.1GB)
15b-instruct (9.1GB)
15b-instruct-v0.1-fp16 (32GB)
15b-instruct-v0.1-q2_K (6.2GB)
15b-instruct-v0.1-q3_K_L (9.0GB)
15b-instruct-v0.1-q3_K_M (8.0GB)
15b-instruct-v0.1-q3_K_S (7.0GB)
15b-instruct-v0.1-q4_0 (9.1GB)
15b-instruct-v0.1-q4_1 (10GB)
15b-instruct-v0.1-q4_K_M (9.9GB)
15b-instruct-v0.1-q4_K_S (9.2GB)
15b-instruct-v0.1-q5_0 (11GB)
15b-instruct-v0.1-q5_1 (12GB)
15b-instruct-v0.1-q5_K_M (11GB)
15b-instruct-v0.1-q5_K_S (11GB)
15b-instruct-v0.1-q6_K (13GB)
15b-instruct-v0.1-q8_0 (17GB)
15b-instruct-q4_0 (9.1GB)
15b-fp16 (32GB)
15b-q2_K (6.2GB)
15b-q3_K_L (9.0GB)
15b-q3_K_M (8.1GB)
15b-q3_K_S (7.0GB)
15b-q4_0 (9.1GB)
15b-q4_1 (10GB)
15b-q4_K_M (9.9GB)
15b-q4_K_S (9.3GB)
15b-q5_0 (11GB)
15b-q5_1 (12GB)
15b-q5_K_M (11GB)
15b-q5_K_S (11GB)
15b-q6_K (13GB)
15b-q8_0 (17GB)
7b-fp16 (14GB)
7b-q2_K (2.7GB)
7b-q3_K_L (4.0GB)
7b-q3_K_M (3.6GB)
7b-q3_K_S (3.1GB)
7b-q4_0 (4.0GB)
7b-q4_1 (4.5GB)
7b-q4_K_M (4.4GB)
7b-q4_K_S (4.1GB)
7b-q5_0 (4.9GB)
7b-q5_1 (5.4GB)
7b-q5_K_M (5.1GB)
7b-q5_K_S (4.9GB)
7b-q6_K (5.9GB)
7b-q8_0 (7.6GB)
3b-fp16 (6.1GB)
3b-q2_K (1.1GB)
3b-q3_K_L (1.7GB)
3b-q3_K_M (1.5GB)
3b-q3_K_S (1.3GB)
3b-q4_0 (1.7GB)
3b-q4_1 (1.9GB)
3b-q4_K_M (1.8GB)
3b-q4_K_S (1.7GB)
3b-q5_0 (2.1GB)
3b-q5_1 (2.3GB)
3b-q5_K_M (2.2GB)
3b-q5_K_S (2.1GB)
3b-q6_K (2.5GB)
3b-q8_0 (3.2GB)
starcoder2
StarCoder2 represents the next evolution of openly trained LLMs for code, available in three sizes: 3B, 7B, and 15B parameters.
Category: Coding
Downloads: 412.9K
Last Updated: 2 months ago
Read more about: StarCoder2
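With three sizes and a dozen quantizations per size, the practical question is usually which tag fits in memory. A sketch that picks the largest file fitting a RAM budget, using a few sizes from the StarCoder2 list above (the `pick` helper and the 25% headroom factor are illustrative assumptions, not a rule from any tool):

```python
# Sizes in GB, taken from the StarCoder2 tag list above.
VARIANTS = {
    "15b-q4_0": 9.1, "15b-q8_0": 17.0,
    "7b-q4_0": 4.0, "7b-q8_0": 7.6,
    "3b-q4_0": 1.7, "3b-q8_0": 3.2,
}

def pick(budget_gb, headroom=1.25):
    """Return the largest variant whose file size, padded by `headroom`
    for KV cache and runtime overhead, fits the memory budget."""
    fitting = {tag: gb for tag, gb in VARIANTS.items() if gb * headroom <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick(8.0))   # 7b-q4_0 (7b-q8_0 at 7.6GB x 1.25 exceeds 8GB)
print(pick(16.0))  # 15b-q4_0
```

The headroom factor matters: a file that barely fits in RAM still needs extra space for the context cache, which is why the 8GB budget drops down to the q4_0 variant.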
latest (935MB)
0.5b (352MB)
1.5b (935MB)
0.5b-fp16 (994MB)
0.5b-q2_K (339MB)
0.5b-q3_K_S (338MB)
0.5b-q3_K_M (355MB)
0.5b-q3_K_L (369MB)
0.5b-q4_0 (352MB)
0.5b-q4_1 (375MB)
0.5b-q4_K_S (385MB)
0.5b-q4_K_M (398MB)
0.5b-q5_0 (397MB)
0.5b-q5_1 (419MB)
0.5b-q5_K_S (413MB)
0.5b-q5_K_M (420MB)
0.5b-q6_K (506MB)
0.5b-q8_0 (531MB)
1.5b-fp16 (3.1GB)
1.5b-q2_K (676MB)
1.5b-q3_K_S (761MB)
1.5b-q3_K_M (824MB)
1.5b-q3_K_L (880MB)
1.5b-q4_0 (935MB)
1.5b-q4_1 (1.0GB)
1.5b-q4_K_S (940MB)
1.5b-q4_K_M (986MB)
1.5b-q5_0 (1.1GB)
1.5b-q5_1 (1.2GB)
1.5b-q5_K_S (1.1GB)
1.5b-q5_K_M (1.1GB)
1.5b-q6_K (1.3GB)
1.5b-q8_0 (1.6GB)
reader-lm
A series of models that convert HTML content to Markdown, useful for content-conversion pipelines.
Category: Tiny
Downloads: 16.4K
Last Updated: 2 months ago
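To make the HTML-to-Markdown task concrete, here is a tiny hand-rolled sketch of the kind of transformation reader-lm performs. This is a few regex substitutions for illustration only; the model handles the long tail of real-world HTML that rules like these cannot:

```python
import re

def html_to_md(html):
    """Toy HTML->Markdown conversion covering headings, bold, and links.
    Illustrates the reader-lm task; it is NOT how the model works."""
    html = re.sub(r"<h1>(.*?)</h1>", r"# \1\n", html)
    html = re.sub(r"<b>(.*?)</b>", r"**\1**", html)
    html = re.sub(r'<a href="(.*?)">(.*?)</a>', r"[\2](\1)", html)
    return re.sub(r"</?p>", "", html).strip()

print(html_to_md('<h1>Hi</h1><p>See <a href="https://example.com">docs</a>.</p>'))
# # Hi
# See [docs](https://example.com).
```

A model-based converter earns its keep on malformed markup, nested structure, and boilerplate removal, where rule-based approaches break down.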
latest (4.1GB)
141b (80GB)
7b (4.1GB)
141b-v0.1 (80GB)
141b-v0.1-fp16 (281GB)
141b-v0.1-q2_K (52GB)
141b-v0.1-q4_0 (80GB)
141b-v0.1-q8_0 (149GB)
7b-alpha (4.1GB)
7b-alpha-fp16 (14GB)
7b-alpha-q2_K (3.1GB)
7b-alpha-q3_K_L (3.8GB)
7b-alpha-q3_K_M (3.5GB)
7b-alpha-q3_K_S (3.2GB)
7b-alpha-q4_0 (4.1GB)
7b-alpha-q4_1 (4.6GB)
7b-alpha-q4_K_M (4.4GB)
7b-alpha-q4_K_S (4.1GB)
7b-alpha-q5_0 (5.0GB)
7b-alpha-q5_1 (5.4GB)
7b-alpha-q5_K_M (5.1GB)
7b-alpha-q5_K_S (5.0GB)
7b-alpha-q6_K (5.9GB)
7b-alpha-q8_0 (7.7GB)
7b-beta (4.1GB)
7b-beta-fp16 (14GB)
7b-beta-q2_K (3.1GB)
7b-beta-q3_K_L (3.8GB)
7b-beta-q3_K_M (3.5GB)
7b-beta-q3_K_S (3.2GB)
7b-beta-q4_0 (4.1GB)
7b-beta-q4_1 (4.6GB)
7b-beta-q4_K_M (4.4GB)
7b-beta-q4_K_S (4.1GB)
7b-beta-q5_0 (5.0GB)
7b-beta-q5_1 (5.4GB)
7b-beta-q5_K_M (5.1GB)
7b-beta-q5_K_S (5.0GB)
7b-beta-q6_K (5.9GB)
7b-beta-q8_0 (7.7GB)
zephyr
Zephyr comprises fine-tuned versions of the Mistral and Mixtral models, trained to serve as helpful assistants.
Category: Language
Downloads: 221.3K
Last Updated: 6 months ago
Read more about: Zephyr
latest (3.5GB)
34b (19GB)
9b (5.0GB)
6b (3.5GB)
v1.5 (3.5GB)
34b-chat (19GB)
34b-chat-fp16 (69GB)
34b-chat-q2_K (15GB)
34b-chat-q3_K_L (18GB)
34b-chat-q3_K_M (17GB)
34b-chat-q3_K_S (15GB)
34b-chat-q4_0 (19GB)
34b-chat-q4_1 (22GB)
34b-chat-q4_K_M (21GB)
34b-chat-q4_K_S (20GB)
34b-chat-q5_0 (24GB)
34b-chat-q5_1 (26GB)
34b-chat-q5_K_M (24GB)
34b-chat-q5_K_S (24GB)
34b-chat-q6_K (28GB)
34b-chat-q8_0 (37GB)
34b-chat-v1.5-fp16 (69GB)
34b-chat-v1.5-q2_K (13GB)
34b-chat-v1.5-q3_K_L (18GB)
34b-chat-v1.5-q3_K_M (17GB)
34b-chat-v1.5-q3_K_S (15GB)
34b-chat-v1.5-q4_0 (19GB)
34b-chat-v1.5-q4_1 (22GB)
34b-chat-v1.5-q4_K_M (21GB)
34b-chat-v1.5-q4_K_S (20GB)
34b-chat-v1.5-q5_0 (24GB)
34b-chat-v1.5-q5_1 (26GB)
34b-chat-v1.5-q5_K_M (24GB)
34b-chat-v1.5-q5_K_S (24GB)
34b-chat-v1.5-q6_K (28GB)
34b-chat-v1.5-q8_0 (37GB)
34b-v1.5 (19GB)
34b-v1.5-fp16 (69GB)
34b-v1.5-q2_K (13GB)
34b-v1.5-q3_K_L (18GB)
34b-v1.5-q3_K_M (17GB)
34b-v1.5-q3_K_S (15GB)
34b-v1.5-q4_0 (19GB)
34b-v1.5-q4_1 (22GB)
34b-v1.5-q4_K_M (21GB)
34b-v1.5-q4_K_S (20GB)
34b-v1.5-q5_0 (24GB)
34b-v1.5-q5_1 (26GB)
34b-v1.5-q5_K_M (24GB)
34b-v1.5-q5_K_S (24GB)
34b-v1.5-q6_K (28GB)
34b-v1.5-q8_0 (37GB)
34b-q2_K (15GB)
34b-q3_K_L (18GB)
34b-q3_K_M (17GB)
34b-q3_K_S (15GB)
34b-q4_0 (19GB)
34b-q4_1 (22GB)
34b-q4_K_M (21GB)
34b-q4_K_S (20GB)
34b-q5_0 (24GB)
34b-q5_1 (26GB)
34b-q5_K_S (24GB)
34b-q6_K (28GB)
9b-chat (5.0GB)
9b-chat-v1.5-fp16 (18GB)
9b-chat-v1.5-q2_K (3.4GB)
9b-chat-v1.5-q3_K_L (4.7GB)
9b-chat-v1.5-q3_K_M (4.3GB)
9b-chat-v1.5-q3_K_S (3.9GB)
9b-chat-v1.5-q4_0 (5.0GB)
9b-chat-v1.5-q4_1 (5.6GB)
9b-chat-v1.5-q4_K_M (5.3GB)
9b-chat-v1.5-q4_K_S (5.1GB)
9b-chat-v1.5-q5_0 (6.1GB)
9b-chat-v1.5-q5_1 (6.6GB)
9b-chat-v1.5-q5_K_M (6.3GB)
9b-chat-v1.5-q5_K_S (6.1GB)
9b-chat-v1.5-q6_K (7.2GB)
9b-chat-v1.5-q8_0 (9.4GB)
9b-v1.5 (5.0GB)
9b-v1.5-fp16 (18GB)
9b-v1.5-q2_K (3.4GB)
9b-v1.5-q3_K_L (4.7GB)
9b-v1.5-q3_K_M (4.3GB)
9b-v1.5-q3_K_S (3.9GB)
9b-v1.5-q4_0 (5.0GB)
9b-v1.5-q4_1 (5.6GB)
9b-v1.5-q4_K_M (5.3GB)
9b-v1.5-q4_K_S (5.1GB)
9b-v1.5-q5_0 (6.1GB)
9b-v1.5-q5_1 (6.6GB)
9b-v1.5-q5_K_M (6.3GB)
9b-v1.5-q5_K_S (6.1GB)
9b-v1.5-q6_K (7.2GB)
9b-v1.5-q8_0 (9.4GB)
6b-200k (3.5GB)
6b-200k-fp16 (12GB)
6b-200k-q2_K (2.6GB)
6b-200k-q3_K_L (3.2GB)
6b-200k-q3_K_M (3.0GB)
6b-200k-q3_K_S (2.7GB)
6b-200k-q4_0 (3.5GB)
6b-200k-q4_1 (3.8GB)
6b-200k-q4_K_M (3.7GB)
6b-200k-q4_K_S (3.5GB)
6b-200k-q5_0 (4.2GB)
6b-200k-q5_1 (4.6GB)
6b-200k-q5_K_M (4.3GB)
6b-200k-q5_K_S (4.2GB)
6b-200k-q6_K (5.0GB)
6b-200k-q8_0 (6.4GB)
6b-chat (3.5GB)
6b-chat-fp16 (12GB)
6b-chat-q2_K (2.6GB)
6b-chat-q3_K_L (3.2GB)
6b-chat-q3_K_M (3.0GB)
6b-chat-q3_K_S (2.7GB)
6b-chat-q4_0 (3.5GB)
6b-chat-q4_1 (3.8GB)
6b-chat-q4_K_M (3.7GB)
6b-chat-q4_K_S (3.5GB)
6b-chat-q5_0 (4.2GB)
6b-chat-q5_1 (4.6GB)
6b-chat-q5_K_M (4.3GB)
6b-chat-q5_K_S (4.2GB)
6b-chat-q6_K (5.0GB)
6b-chat-q8_0 (6.4GB)
6b-chat-v1.5-fp16 (12GB)
6b-chat-v1.5-q2_K (2.3GB)
6b-chat-v1.5-q3_K_L (3.2GB)
6b-chat-v1.5-q3_K_M (3.0GB)
6b-chat-v1.5-q3_K_S (2.7GB)
6b-chat-v1.5-q4_0 (3.5GB)
6b-chat-v1.5-q4_1 (3.8GB)
6b-chat-v1.5-q4_K_M (3.7GB)
6b-chat-v1.5-q4_K_S (3.5GB)
6b-chat-v1.5-q5_0 (4.2GB)
6b-chat-v1.5-q5_1 (4.6GB)
6b-chat-v1.5-q5_K_M (4.3GB)
6b-chat-v1.5-q5_K_S (4.2GB)
6b-chat-v1.5-q6_K (5.0GB)
6b-chat-v1.5-q8_0 (6.4GB)
6b-v1.5 (3.5GB)
6b-v1.5-fp16 (12GB)
6b-v1.5-q2_K (2.3GB)
6b-v1.5-q3_K_L (3.2GB)
6b-v1.5-q3_K_M (3.0GB)
6b-v1.5-q3_K_S (2.7GB)
6b-v1.5-q4_0 (3.5GB)
6b-v1.5-q4_1 (3.8GB)
6b-v1.5-q4_K_M (3.7GB)
6b-v1.5-q4_K_S (3.5GB)
6b-v1.5-q5_0 (4.2GB)
6b-v1.5-q5_1 (4.6GB)
6b-v1.5-q5_K_M (4.3GB)
6b-v1.5-q5_K_S (4.2GB)
6b-v1.5-q6_K (5.0GB)
6b-v1.5-q8_0 (6.4GB)
6b-fp16 (12GB)
6b-q2_K (2.6GB)
6b-q3_K_L (3.2GB)
6b-q3_K_M (3.0GB)
6b-q3_K_S (2.7GB)
6b-q4_0 (3.5GB)
6b-q4_1 (3.8GB)
6b-q4_K_M (3.7GB)
6b-q4_K_S (3.5GB)
6b-q5_0 (4.2GB)
6b-q5_1 (4.6GB)
6b-q5_K_M (4.3GB)
6b-q5_K_S (4.2GB)
6b-q6_K (5.0GB)
6b-q8_0 (6.4GB)
yi
Yi 1.5 is a high-performing bilingual language model that excels at understanding and generating both English and Chinese text.
Category: Language
Downloads: 235K
Last Updated: 5 months ago
Read more about: Yi 1.5
latest (3.8GB)
13b (7.4GB)
7b (3.8GB)
13b-chat (7.4GB)
13b-chat-fp16 (26GB)
13b-chat-q2_K (5.4GB)
13b-chat-q3_K_L (6.9GB)
13b-chat-q3_K_M (6.3GB)
13b-chat-q3_K_S (5.7GB)
13b-chat-q4_0 (7.4GB)
13b-chat-q4_1 (8.2GB)
13b-chat-q4_K_M (7.9GB)
13b-chat-q4_K_S (7.4GB)
13b-chat-q5_0 (9.0GB)
13b-chat-q5_1 (9.8GB)
13b-chat-q5_K_M (9.2GB)
13b-chat-q5_K_S (9.0GB)
13b-chat-q6_K (11GB)
13b-chat-q8_0 (14GB)
7b-chat (3.8GB)
7b-chat-fp16 (13GB)
7b-chat-q2_K (2.8GB)
7b-chat-q3_K_L (3.6GB)
7b-chat-q3_K_M (3.3GB)
7b-chat-q3_K_S (2.9GB)
7b-chat-q4_0 (3.8GB)
7b-chat-q4_1 (4.2GB)
7b-chat-q4_K_M (4.1GB)
7b-chat-q4_K_S (3.9GB)
7b-chat-q5_0 (4.7GB)
7b-chat-q5_1 (5.1GB)
7b-chat-q5_K_M (4.8GB)
7b-chat-q5_K_S (4.7GB)
7b-chat-q6_K (5.5GB)
7b-chat-q8_0 (7.2GB)
llama2-chinese
A Llama 2-based model fine-tuned to improve conversational performance in Chinese dialogue.
Category: Language
Downloads: 135.5K
Last Updated: 1 year ago
Read more about: Llama 2 Chinese
latest (40GB)
70b (40GB)
70b-fp16 (141GB)
70b-q2_K (26GB)
70b-q3_K_S (31GB)
70b-q3_K_M (34GB)
70b-q3_K_L (37GB)
70b-q4_0 (40GB)
70b-q4_1 (44GB)
70b-q4_K_S (40GB)
70b-q4_K_M (43GB)
70b-q5_0 (49GB)
70b-q5_1 (53GB)
70b-q5_K_S (49GB)
70b-q5_K_M (50GB)
70b-q6_K (58GB)
70b-q8_0 (75GB)
reflection
A high-performing model trained with a new technique called Reflection-tuning, which teaches an LLM to detect mistakes in its reasoning and correct course.
Category: Language
Downloads: 95.3K
Last Updated: 2 months ago
latest (5.5GB)
8b (5.5GB)
8b-v1.1-fp16 (17GB)
8b-v1.1-q4_0 (5.5GB)
llava-llama3
A LLaVA model fine-tuned from Llama 3 Instruct, with improved scores across multiple benchmarks.
Category: Multimodal
Downloads: 205.6K
Last Updated: 6 months ago
Read more about: LLaVA Llama 3
latest (3.8GB)
33b (18GB)
13b (7.4GB)
7b (3.8GB)
13b-16k (7.4GB)
33b-fp16 (65GB)
33b-q2_K (14GB)
33b-q3_K_L (17GB)
33b-q3_K_M (16GB)
33b-q3_K_S (14GB)
33b-q4_0 (18GB)
33b-q4_1 (20GB)
33b-q4_K_M (20GB)
33b-q4_K_S (18GB)
33b-q5_0 (22GB)
33b-q5_1 (24GB)
33b-q5_K_M (23GB)
33b-q5_K_S (22GB)
33b-q6_K (27GB)
33b-q8_0 (35GB)
13b-v1.5-fp16 (26GB)
13b-v1.5-q2_K (5.4GB)
13b-v1.5-q3_K_L (6.9GB)
13b-v1.5-q3_K_M (6.3GB)
13b-v1.5-q3_K_S (5.7GB)
13b-v1.5-q4_0 (7.4GB)
13b-v1.5-q4_1 (8.2GB)
13b-v1.5-q4_K_M (7.9GB)
13b-v1.5-q4_K_S (7.4GB)
13b-v1.5-q5_0 (9.0GB)
13b-v1.5-q5_1 (9.8GB)
13b-v1.5-q5_K_M (9.2GB)
13b-v1.5-q5_K_S (9.0GB)
13b-v1.5-q6_K (11GB)
13b-v1.5-q8_0 (14GB)
13b-v1.5-16k-fp16 (26GB)
13b-v1.5-16k-q2_K (5.4GB)
13b-v1.5-16k-q3_K_L (6.9GB)
13b-v1.5-16k-q3_K_M (6.3GB)
13b-v1.5-16k-q3_K_S (5.7GB)
13b-v1.5-16k-q4_0 (7.4GB)
13b-v1.5-16k-q4_1 (8.2GB)
13b-v1.5-16k-q4_K_M (7.9GB)
13b-v1.5-16k-q4_K_S (7.4GB)
13b-v1.5-16k-q5_0 (9.0GB)
13b-v1.5-16k-q5_1 (9.8GB)
13b-v1.5-16k-q5_K_M (9.2GB)
13b-v1.5-16k-q5_K_S (9.0GB)
13b-v1.5-16k-q6_K (11GB)
13b-v1.5-16k-q8_0 (14GB)
7b-16k (3.8GB)
13b-fp16 (26GB)
13b-q2_K (5.4GB)
13b-q3_K_L (6.9GB)
13b-q3_K_M (6.3GB)
13b-q3_K_S (5.7GB)
13b-q4_0 (7.4GB)
13b-q4_1 (8.2GB)
13b-q4_K_M (7.9GB)
13b-q4_K_S (7.4GB)
13b-q5_0 (9.0GB)
13b-q5_1 (9.8GB)
13b-q5_K_M (9.2GB)
13b-q5_K_S (9.0GB)
13b-q6_K (11GB)
13b-q8_0 (14GB)
7b-v1.5-fp16 (13GB)
7b-v1.5-q2_K (2.8GB)
7b-v1.5-q3_K_L (3.6GB)
7b-v1.5-q3_K_M (3.3GB)
7b-v1.5-q3_K_S (2.9GB)
7b-v1.5-q4_0 (3.8GB)
7b-v1.5-q4_1 (4.2GB)
7b-v1.5-q4_K_M (4.1GB)
7b-v1.5-q4_K_S (3.9GB)
7b-v1.5-q5_0 (4.7GB)
7b-v1.5-q5_1 (5.1GB)
7b-v1.5-q5_K_M (4.8GB)
7b-v1.5-q5_K_S (4.7GB)
7b-v1.5-16k-q4_K_M (4.1GB)
7b-v1.5-16k-q4_0 (3.8GB)
7b-v1.5-16k-q2_K (2.8GB)
7b-v1.5-16k-q3_K_M (3.3GB)
7b-v1.5-q6_K (5.5GB)
7b-v1.5-16k-q3_K_L (3.6GB)
7b-v1.5-16k-q4_1 (4.2GB)
7b-v1.5-q8_0 (7.2GB)
7b-v1.5-16k-fp16 (13GB)
7b-v1.5-16k-q3_K_S (2.9GB)
7b-v1.5-16k-q4_K_S (3.9GB)
7b-v1.5-16k-q5_0 (4.7GB)
7b-v1.5-16k-q5_1 (5.1GB)
7b-v1.5-16k-q5_K_M (4.8GB)
7b-v1.5-16k-q5_K_S (4.7GB)
7b-v1.5-16k-q6_K (5.5GB)
7b-v1.5-16k-q8_0 (7.2GB)
7b-fp16 (13GB)
7b-q2_K (2.8GB)
7b-q3_K_L (3.6GB)
7b-q3_K_M (3.3GB)
7b-q3_K_S (2.9GB)
7b-q4_0 (3.8GB)
7b-q4_1 (4.2GB)
7b-q4_K_M (4.1GB)
7b-q4_K_S (3.9GB)
7b-q5_0 (4.7GB)
7b-q5_1 (5.1GB)
7b-q5_K_M (4.8GB)
7b-q5_K_S (4.7GB)
7b-q6_K (5.5GB)
7b-q8_0 (7.2GB)
vicuna
A general-purpose chat model built on Llama and Llama 2, offering context sizes ranging from 2K to 16K.
Category: Language
Downloads: 153.7K
Last Updated: 1 year ago Read more about: Vicuna
latest (6.1GB)
34b (19GB)
10.7b (6.1GB)
34b-yi-fp16 (69GB)
34b-yi-q2_K (15GB)
34b-yi-q3_K_L (18GB)
34b-yi-q3_K_M (17GB)
34b-yi-q3_K_S (15GB)
34b-yi-q4_0 (19GB)
34b-yi-q4_1 (22GB)
34b-yi-q4_K_M (21GB)
34b-yi-q4_K_S (20GB)
34b-yi-q5_0 (24GB)
34b-yi-q5_1 (26GB)
34b-yi-q5_K_M (24GB)
34b-yi-q5_K_S (24GB)
34b-yi-q6_K (28GB)
34b-yi-q8_0 (37GB)
10.7b-solar-fp16 (21GB)
10.7b-solar-q2_K (4.5GB)
10.7b-solar-q3_K_L (5.7GB)
10.7b-solar-q3_K_M (5.2GB)
10.7b-solar-q3_K_S (4.7GB)
10.7b-solar-q4_0 (6.1GB)
10.7b-solar-q4_1 (6.7GB)
10.7b-solar-q4_K_M (6.5GB)
10.7b-solar-q4_K_S (6.1GB)
10.7b-solar-q5_0 (7.4GB)
10.7b-solar-q5_1 (8.1GB)
10.7b-solar-q5_K_M (7.6GB)
10.7b-solar-q5_K_S (7.4GB)
10.7b-solar-q6_K (8.8GB)
10.7b-solar-q8_0 (11GB)
nous-hermes2
A family of strong models from Nous Research that excels at scientific discussion and coding tasks.
Category: Language
Downloads: 113.6K
Last Updated: 10 months ago Read more about: Nous Hermes 2
latest (3.8GB)
30b (18GB)
13b (7.4GB)
7b (3.8GB)
30b-fp16 (65GB)
30b-q2_K (14GB)
30b-q3_K_L (17GB)
30b-q3_K_M (16GB)
30b-q3_K_S (14GB)
30b-q4_0 (18GB)
30b-q4_1 (20GB)
30b-q4_K_M (20GB)
30b-q4_K_S (18GB)
30b-q5_0 (22GB)
30b-q5_1 (24GB)
30b-q5_K_M (23GB)
30b-q5_K_S (22GB)
30b-q6_K (27GB)
30b-q8_0 (35GB)
13b-fp16 (26GB)
13b-q2_K (5.4GB)
13b-q3_K_L (6.9GB)
13b-q3_K_M (6.3GB)
13b-q3_K_S (5.7GB)
13b-q4_0 (7.4GB)
13b-q4_1 (8.2GB)
13b-q4_K_M (7.9GB)
13b-q4_K_S (7.4GB)
13b-q5_0 (9.0GB)
13b-q5_1 (9.8GB)
13b-q5_K_M (9.2GB)
13b-q5_K_S (9.0GB)
13b-q6_K (11GB)
13b-q8_0 (14GB)
7b-fp16 (13GB)
7b-q2_K (2.8GB)
7b-q3_K_L (3.6GB)
7b-q3_K_M (3.3GB)
7b-q3_K_S (2.9GB)
7b-q4_0 (3.8GB)
7b-q4_1 (4.2GB)
7b-q4_K_M (4.1GB)
7b-q4_K_S (3.9GB)
7b-q5_0 (4.7GB)
7b-q5_1 (5.1GB)
7b-q5_K_M (4.8GB)
7b-q5_K_S (4.7GB)
7b-q6_K (5.5GB)
7b-q8_0 (7.2GB)
wizard-vicuna-uncensored
Wizard Vicuna Uncensored is a 7B, 13B, and 30B parameter model based on Llama 2 by Eric Hartford, an uncensored version of the original model.
latest (638MB)
1.1b (638MB)
chat (638MB)
v0.6 (638MB)
v1 (638MB)
1.1b-chat (638MB)
1.1b-chat-v0.6-fp16 (2.2GB)
1.1b-chat-v0.6-q2_K (483MB)
1.1b-chat-v0.6-q3_K_L (593MB)
1.1b-chat-v0.6-q3_K_M (551MB)
1.1b-chat-v0.6-q3_K_S (500MB)
1.1b-chat-v0.6-q4_0 (638MB)
1.1b-chat-v0.6-q4_1 (702MB)
1.1b-chat-v0.6-q4_K_M (669MB)
1.1b-chat-v0.6-q4_K_S (644MB)
1.1b-chat-v0.6-q5_0 (767MB)
1.1b-chat-v0.6-q5_1 (832MB)
1.1b-chat-v0.6-q5_K_M (783MB)
1.1b-chat-v0.6-q5_K_S (767MB)
1.1b-chat-v0.6-q6_K (904MB)
1.1b-chat-v0.6-q8_0 (1.2GB)
1.1b-chat-v1-fp16 (2.2GB)
1.1b-chat-v1-q2_K (483MB)
1.1b-chat-v1-q3_K_L (593MB)
1.1b-chat-v1-q3_K_M (551MB)
1.1b-chat-v1-q3_K_S (500MB)
1.1b-chat-v1-q4_0 (638MB)
1.1b-chat-v1-q4_1 (702MB)
1.1b-chat-v1-q4_K_M (669MB)
1.1b-chat-v1-q4_K_S (644MB)
1.1b-chat-v1-q5_0 (767MB)
1.1b-chat-v1-q5_1 (832MB)
1.1b-chat-v1-q5_K_M (783MB)
1.1b-chat-v1-q5_K_S (767MB)
1.1b-chat-v1-q6_K (904MB)
1.1b-chat-v1-q8_0 (1.2GB)
tinyllama
TinyLlama is a compact small language model (SLM) with 1.1B parameters, designed for local on-premise inference on consumer-grade hardware.
Category: Tiny
Downloads: 248.7K
Last Updated: 10 months ago Read more about: TinyLlama
latest (13GB)
22b (13GB)
v0.1 (13GB)
22b-v0.1-f16 (44GB)
22b-v0.1-q2_K (8.3GB)
22b-v0.1-q3_K_L (12GB)
22b-v0.1-q3_K_M (11GB)
22b-v0.1-q3_K_S (9.6GB)
22b-v0.1-q4_0 (13GB)
22b-v0.1-q4_1 (14GB)
22b-v0.1-q4_K_M (13GB)
22b-v0.1-q4_K_S (13GB)
22b-v0.1-q5_0 (15GB)
22b-v0.1-q5_1 (17GB)
22b-v0.1-q5_K_M (16GB)
22b-v0.1-q5_K_S (15GB)
22b-v0.1-q6_K (18GB)
22b-v0.1-q8_0 (24GB)
codestral
Codestral is the inaugural code model from Mistral AI, specifically developed for tasks involving code generation.
Category: Coding
Downloads: 158.5K
Last Updated: 2 months ago Read more about: Codestral
latest (1.8GB)
15b (9.0GB)
7b (4.3GB)
3b (1.8GB)
1b (726MB)
15b-base (9.0GB)
15b-base-fp16 (32GB)
15b-base-q2_K (6.7GB)
15b-base-q3_K_L (9.1GB)
15b-base-q3_K_M (8.2GB)
15b-base-q3_K_S (6.9GB)
15b-base-q4_0 (9.0GB)
15b-base-q4_1 (10.0GB)
15b-base-q4_K_M (10.0GB)
15b-base-q4_K_S (9.1GB)
15b-base-q5_0 (11GB)
15b-base-q5_1 (12GB)
15b-base-q5_K_M (12GB)
15b-base-q5_K_S (11GB)
15b-base-q6_K (13GB)
15b-base-q8_0 (17GB)
15b-plus (9.0GB)
15b-plus-fp16 (32GB)
15b-plus-q2_K (6.7GB)
15b-plus-q3_K_L (9.1GB)
15b-plus-q3_K_M (8.2GB)
15b-plus-q3_K_S (6.9GB)
15b-plus-q4_0 (9.0GB)
15b-plus-q4_1 (10.0GB)
15b-plus-q4_K_M (10.0GB)
15b-plus-q4_K_S (9.1GB)
15b-plus-q5_0 (11GB)
15b-plus-q5_1 (12GB)
15b-plus-q5_K_M (12GB)
15b-plus-q5_K_S (11GB)
15b-plus-q6_K (13GB)
15b-plus-q8_0 (17GB)
7b-base (4.3GB)
15b-fp16 (32GB)
15b-q2_K (6.7GB)
15b-q3_K_L (9.1GB)
15b-q3_K_M (8.2GB)
15b-q3_K_S (6.9GB)
15b-q4_0 (9.0GB)
15b-q4_1 (10.0GB)
15b-q4_K_M (10.0GB)
15b-q4_K_S (9.1GB)
15b-q5_0 (11GB)
15b-q5_1 (12GB)
15b-q5_K_M (12GB)
15b-q5_K_S (11GB)
15b-q6_K (13GB)
15b-q8_0 (17GB)
7b-base-fp16 (15GB)
7b-base-q2_K (3.2GB)
7b-base-q3_K_L (4.3GB)
7b-base-q3_K_M (3.9GB)
7b-base-q3_K_S (3.3GB)
7b-base-q4_0 (4.3GB)
7b-base-q4_1 (4.8GB)
7b-base-q4_K_M (4.8GB)
7b-base-q4_K_S (4.3GB)
7b-base-q5_0 (5.2GB)
7b-base-q5_1 (5.7GB)
7b-base-q5_K_M (5.5GB)
7b-base-q5_K_S (5.2GB)
7b-base-q6_K (6.2GB)
7b-base-q8_0 (8.0GB)
3b-base (1.8GB)
3b-base-fp16 (6.4GB)
3b-base-q2_K (1.4GB)
3b-base-q3_K_L (1.8GB)
3b-base-q3_K_M (1.7GB)
3b-base-q3_K_S (1.4GB)
3b-base-q4_0 (1.8GB)
3b-base-q4_1 (2.0GB)
3b-base-q4_K_M (2.0GB)
3b-base-q4_K_S (1.8GB)
3b-base-q5_0 (2.2GB)
3b-base-q5_1 (2.4GB)
3b-base-q5_K_M (2.3GB)
3b-base-q5_K_S (2.2GB)
3b-base-q6_K (2.6GB)
3b-base-q8_0 (3.4GB)
1b-base (726MB)
1b-base-fp16 (2.5GB)
1b-base-q2_K (552MB)
1b-base-q3_K_L (720MB)
1b-base-q3_K_M (661MB)
1b-base-q3_K_S (575MB)
1b-base-q4_0 (726MB)
1b-base-q4_1 (797MB)
1b-base-q4_K_M (792MB)
1b-base-q4_K_S (734MB)
1b-base-q5_0 (868MB)
1b-base-q5_1 (939MB)
1b-base-q5_K_M (910MB)
1b-base-q5_K_S (868MB)
1b-base-q6_K (1.0GB)
1b-base-q8_0 (1.3GB)
starcoder
StarCoder is a model designed for code generation, trained on over 80 programming languages.
Category: Tiny, Coding
Downloads: 162.1K
Last Updated: 1 year ago Read more about: StarCoder
latest (4.1GB)
8x22b (80GB)
7b (4.1GB)
8x22b-fp16 (281GB)
8x22b-q2_K (52GB)
8x22b-q4_0 (80GB)
8x22b-q8_0 (149GB)
7b-fp16 (14GB)
7b-q2_K (2.7GB)
7b-q3_K_L (3.8GB)
7b-q3_K_M (3.5GB)
7b-q3_K_S (3.2GB)
7b-q4_0 (4.1GB)
7b-q4_1 (4.6GB)
7b-q4_K_M (4.4GB)
7b-q4_K_S (4.1GB)
7b-q5_0 (5.0GB)
7b-q5_1 (5.4GB)
7b-q5_K_M (5.1GB)
7b-q5_K_S (5.0GB)
7b-q6_K (5.9GB)
7b-q8_0 (7.7GB)
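Across these listings, each quantization level maps to a fairly predictable footprint for a given parameter count. As a rough sketch (the bits-per-weight figures are approximations for common GGUF-style quantizations, not values stated in this catalog), the on-disk size can be estimated directly:

```python
def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough on-disk size: parameters x bits per weight, converted to gigabytes.
    Ignores metadata overhead, so listed sizes can run slightly larger."""
    return n_params * bits_per_weight / 8 / 1e9

# Approximate effective bit widths for common quantization levels (assumed values).
BPW = {"q2_K": 2.6, "q4_0": 4.5, "q5_K_M": 5.7, "q8_0": 8.5, "fp16": 16.0}

for quant, bpw in BPW.items():
    print(f"7b {quant}: ~{approx_size_gb(7e9, bpw):.1f}GB")
```

This explains why, for example, the fp16 variants above are roughly four times the size of the q4 variants of the same model.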
wizardlm2
A state-of-the-art large language model from Microsoft AI with improved performance on complex chat, multilingual interaction, reasoning, and agent tasks.
Category: Language
Downloads: 138.6K
Last Updated: 6 months ago Read more about: WizardLM-2
latest (4.1GB)
7b (4.1GB)
7b-v3.5-1210 (4.1GB)
7b-v3.5-0106 (4.1GB)
7b-v3.5 (4.1GB)
7b-v3.5-0106-fp16 (14GB)
7b-v3.5-0106-q2_K (3.1GB)
7b-v3.5-0106-q3_K_L (3.8GB)
7b-v3.5-0106-q3_K_M (3.5GB)
7b-v3.5-0106-q3_K_S (3.2GB)
7b-v3.5-0106-q4_0 (4.1GB)
7b-v3.5-0106-q4_1 (4.6GB)
7b-v3.5-0106-q4_K_M (4.4GB)
7b-v3.5-0106-q4_K_S (4.1GB)
7b-v3.5-0106-q5_0 (5.0GB)
7b-v3.5-0106-q5_1 (5.4GB)
7b-v3.5-0106-q5_K_M (5.1GB)
7b-v3.5-0106-q5_K_S (5.0GB)
7b-v3.5-0106-q6_K (5.9GB)
7b-v3.5-0106-q8_0 (7.7GB)
7b-v3.5-1210-fp16 (14GB)
7b-v3.5-1210-q2_K (3.1GB)
7b-v3.5-1210-q3_K_L (3.8GB)
7b-v3.5-1210-q3_K_M (3.5GB)
7b-v3.5-1210-q3_K_S (3.2GB)
7b-v3.5-q4_0 (4.1GB)
7b-v3.5-q3_K_S (3.2GB)
7b-v3.5-1210-q8_0 (7.7GB)
7b-v3.5-1210-q6_K (5.9GB)
7b-v3.5-1210-q4_1 (4.6GB)
7b-v3.5-fp16 (14GB)
7b-v3.5-1210-q5_0 (5.0GB)
7b-v3.5-1210-q5_1 (5.4GB)
7b-v3.5-q2_K (3.1GB)
7b-v3.5-q3_K_L (3.8GB)
7b-v3.5-q3_K_M (3.5GB)
7b-v3.5-1210-q4_0 (4.1GB)
7b-v3.5-1210-q5_K_M (5.1GB)
7b-v3.5-1210-q4_K_M (4.4GB)
7b-v3.5-1210-q4_K_S (4.1GB)
7b-v3.5-1210-q5_K_S (5.0GB)
7b-v3.5-q4_1 (4.6GB)
7b-v3.5-q4_K_M (4.4GB)
7b-v3.5-q4_K_S (4.1GB)
7b-v3.5-q5_0 (5.0GB)
7b-v3.5-q5_1 (5.4GB)
7b-v3.5-q5_K_M (5.1GB)
7b-v3.5-q5_K_S (5.0GB)
7b-v3.5-q6_K (5.9GB)
7b-v3.5-q8_0 (7.7GB)
openchat
A family of open-source models trained on diverse datasets that outperforms ChatGPT on multiple benchmarks. Updated to version 3.5-0106.
Category: Language
Downloads: 113.8K
Last Updated: 9 months ago Read more about: OpenChat
latest (637MB)
1.1b (637MB)
v2.8 (637MB)
1.1b-v2.8-fp16 (2.2GB)
1.1b-v2.8-q2_K (432MB)
1.1b-v2.8-q3_K_L (592MB)
1.1b-v2.8-q3_K_M (548MB)
1.1b-v2.8-q3_K_S (499MB)
1.1b-v2.8-q4_0 (637MB)
1.1b-v2.8-q4_1 (701MB)
1.1b-v2.8-q4_K_M (668MB)
1.1b-v2.8-q4_K_S (640MB)
1.1b-v2.8-q5_0 (766MB)
1.1b-v2.8-q5_1 (831MB)
1.1b-v2.8-q5_K_M (782MB)
1.1b-v2.8-q5_K_S (766MB)
1.1b-v2.8-q6_K (903MB)
1.1b-v2.8-q8_0 (1.2GB)
tinydolphin
An experimental 1.1B parameter model by Eric Hartford, based on TinyLlama and trained on the new Dolphin 2.8 dataset.
Category: Tiny
Downloads: 104.5K
Last Updated: 9 months ago Read more about: TinyDolphin
latest (4.1GB)
v2.5 (4.1GB)
v2 (4.1GB)
7b-mistral-v2-fp16 (14GB)
7b-mistral-v2-q2_K (3.1GB)
7b-mistral-v2-q3_K_L (3.8GB)
7b-mistral-v2-q3_K_M (3.5GB)
7b-mistral-v2-q3_K_S (3.2GB)
7b-mistral-v2-q4_0 (4.1GB)
7b-mistral-v2-q4_1 (4.6GB)
7b-mistral-v2-q4_K_M (4.4GB)
7b-mistral-v2-q4_K_S (4.1GB)
7b-mistral-v2-q5_0 (5.0GB)
7b-mistral-v2-q5_1 (5.4GB)
7b-mistral-v2-q5_K_M (5.1GB)
7b-mistral-v2-q5_K_S (5.0GB)
7b-mistral-v2-q6_K (5.9GB)
7b-mistral-v2-q8_0 (7.7GB)
7b-mistral-v2.5-fp16 (14GB)
7b-mistral-v2.5-q2_K (3.1GB)
7b-mistral-v2.5-q3_K_L (3.8GB)
7b-mistral-v2.5-q3_K_M (3.5GB)
7b-mistral-v2.5-q3_K_S (3.2GB)
7b-mistral-v2.5-q4_0 (4.1GB)
7b-mistral-v2.5-q4_1 (4.6GB)
7b-mistral-v2.5-q4_K_M (4.4GB)
7b-mistral-v2.5-q4_K_S (4.1GB)
7b-mistral-v2.5-q5_0 (5.0GB)
7b-mistral-v2.5-q5_1 (5.4GB)
7b-mistral-v2.5-q5_K_M (5.1GB)
7b-mistral-v2.5-q5_K_S (5.0GB)
7b-mistral-v2.5-q6_K (5.9GB)
7b-mistral-v2.5-q8_0 (7.7GB)
7b-v2 (4.1GB)
7b-v2.5 (4.1GB)
openhermes
OpenHermes 2.5 is a 7B model that has been fine-tuned by Teknium using Mistral and entirely open datasets.
Category: Language
Downloads: 99.7K
Last Updated: 10 months ago Read more about: OpenHermes 2.5
latest (3.8GB)
33b (19GB)
python (3.8GB)
34b-python (19GB)
34b-python-fp16 (67GB)
34b-python-q2_K (14GB)
34b-python-q3_K_L (18GB)
34b-python-q3_K_M (16GB)
34b-python-q3_K_S (15GB)
34b-python-q4_0 (19GB)
34b-python-q4_1 (21GB)
34b-python-q4_K_M (20GB)
34b-python-q4_K_S (19GB)
34b-python-q5_0 (23GB)
34b-python-q5_1 (25GB)
34b-python-q5_K_M (24GB)
34b-python-q5_K_S (23GB)
34b-python-q6_K (28GB)
34b-python-q8_0 (36GB)
33b-v1.1 (19GB)
33b-v1.1-fp16 (67GB)
33b-v1.1-q2_K (14GB)
33b-v1.1-q3_K_L (18GB)
33b-v1.1-q3_K_M (16GB)
33b-v1.1-q3_K_S (14GB)
33b-v1.1-q4_0 (19GB)
33b-v1.1-q4_1 (21GB)
33b-v1.1-q4_K_M (20GB)
33b-v1.1-q4_K_S (19GB)
33b-v1.1-q5_0 (23GB)
33b-v1.1-q5_1 (25GB)
33b-v1.1-q5_K_M (24GB)
33b-v1.1-q5_K_S (23GB)
33b-v1.1-q6_K (27GB)
33b-v1.1-q8_0 (35GB)
13b-python (7.4GB)
13b-python-fp16 (26GB)
13b-python-q2_K (5.4GB)
13b-python-q3_K_L (6.9GB)
13b-python-q3_K_M (6.3GB)
13b-python-q3_K_S (5.7GB)
13b-python-q4_0 (7.4GB)
13b-python-q4_1 (8.2GB)
13b-python-q4_K_M (7.9GB)
13b-python-q4_K_S (7.4GB)
13b-python-q5_0 (9.0GB)
13b-python-q5_1 (9.8GB)
13b-python-q5_K_M (9.2GB)
13b-python-q5_K_S (9.0GB)
13b-python-q6_K (11GB)
13b-python-q8_0 (14GB)
7b-python (3.8GB)
7b-python-fp16 (13GB)
7b-python-q2_K (2.8GB)
7b-python-q3_K_L (3.6GB)
7b-python-q3_K_M (3.3GB)
7b-python-q3_K_S (2.9GB)
7b-python-q4_0 (3.8GB)
7b-python-q4_1 (4.2GB)
7b-python-q4_K_M (4.1GB)
7b-python-q4_K_S (3.9GB)
7b-python-q5_0 (4.7GB)
7b-python-q5_1 (5.1GB)
7b-python-q5_K_M (4.8GB)
7b-python-q5_K_S (4.7GB)
7b-python-q6_K (5.5GB)
7b-python-q8_0 (7.2GB)
wizardcoder
Advanced code generation model.
Category: Coding
Downloads: 103.4K
Last Updated: 10 months ago Read more about: Wizard Coder
latest (1.6GB)
3b (1.6GB)
code (1.6GB)
instruct (1.6GB)
3b-code (1.6GB)
3b-code-fp16 (5.6GB)
3b-code-q2_K (1.1GB)
3b-code-q3_K_L (1.5GB)
3b-code-q3_K_M (1.4GB)
3b-code-q3_K_S (1.3GB)
3b-code-q4_0 (1.6GB)
3b-code-q4_1 (1.8GB)
3b-code-q4_K_M (1.7GB)
3b-code-q4_K_S (1.6GB)
3b-code-q5_0 (1.9GB)
3b-code-q5_1 (2.1GB)
3b-code-q5_K_M (2.0GB)
3b-code-q5_K_S (1.9GB)
3b-code-q6_K (2.3GB)
3b-code-q8_0 (3.0GB)
3b-instruct (1.6GB)
3b-instruct-fp16 (5.6GB)
3b-instruct-q2_K (1.1GB)
3b-instruct-q3_K_L (1.5GB)
3b-instruct-q3_K_M (1.4GB)
3b-instruct-q3_K_S (1.3GB)
3b-instruct-q4_0 (1.6GB)
3b-instruct-q4_1 (1.8GB)
3b-instruct-q4_K_M (1.7GB)
3b-instruct-q4_K_S (1.6GB)
3b-instruct-q5_0 (1.9GB)
3b-instruct-q5_1 (2.1GB)
3b-instruct-q5_K_M (2.0GB)
3b-instruct-q5_K_S (1.9GB)
3b-instruct-q6_K (2.3GB)
3b-instruct-q8_0 (3.0GB)
stable-code
Stable Code 3B is a coding model that offers instruct and code completion options comparable to larger models like Code Llama 7B, which is 2.5 times its size.
Category: Coding
Downloads: 101.6K
Last Updated: 7 months ago Read more about: Stable Code 3B
latest (4.2GB)
7b (4.2GB)
chat (4.2GB)
code (4.2GB)
v1.5 (4.2GB)
7b-chat (4.2GB)
7b-chat-v1.5-fp16 (15GB)
7b-chat-v1.5-q2_K (3.1GB)
7b-chat-v1.5-q3_K_L (4.0GB)
7b-chat-v1.5-q3_K_M (3.8GB)
7b-chat-v1.5-q3_K_S (3.5GB)
7b-chat-v1.5-q4_0 (4.2GB)
7b-chat-v1.5-q4_1 (4.6GB)
7b-chat-v1.5-q4_K_M (4.7GB)
7b-chat-v1.5-q4_K_S (4.4GB)
7b-chat-v1.5-q5_0 (5.0GB)
7b-chat-v1.5-q5_1 (5.5GB)
7b-chat-v1.5-q5_K_M (5.4GB)
7b-chat-v1.5-q5_K_S (5.1GB)
7b-chat-v1.5-q6_K (6.4GB)
7b-chat-v1.5-q8_0 (7.7GB)
7b-code (4.2GB)
7b-code-v1.5-fp16 (15GB)
7b-code-v1.5-q4_0 (4.2GB)
7b-code-v1.5-q4_1 (4.6GB)
7b-code-v1.5-q5_0 (5.0GB)
7b-code-v1.5-q5_1 (5.5GB)
7b-code-v1.5-q8_0 (7.7GB)
v1.5-chat (4.2GB)
v1.5-code (4.2GB)
codeqwen
CodeQwen1.5 is a large language model pretrained on extensive code datasets.
Category: Coding
Downloads: 111.2K
Last Updated: 4 months ago Read more about: CodeQwen 1.5
latest (4.1GB)
7b (4.1GB)
7b-v3.1 (4.1GB)
7b-v3.1-fp16 (14GB)
7b-v3.1-q2_K (3.1GB)
7b-v3.1-q3_K_L (3.8GB)
7b-v3.1-q3_K_M (3.5GB)
7b-v3.1-q3_K_S (3.2GB)
7b-v3.1-q4_0 (4.1GB)
7b-v3.1-q4_1 (4.6GB)
7b-v3.1-q4_K_M (4.4GB)
7b-v3.1-q4_K_S (4.1GB)
7b-v3.1-q5_0 (5.0GB)
7b-v3.1-q5_1 (5.4GB)
7b-v3.1-q5_K_M (5.1GB)
7b-v3.1-q5_K_S (5.0GB)
7b-v3.1-q6_K (5.9GB)
7b-v3.1-q8_0 (7.7GB)
7b-v3.2 (4.1GB)
7b-v3.2-fp16 (14GB)
7b-v3.2-q2_K (3.1GB)
7b-v3.2-q3_K_L (3.8GB)
7b-v3.2-q3_K_M (3.5GB)
7b-v3.2-q3_K_S (3.2GB)
7b-v3.2-q4_0 (4.1GB)
7b-v3.2-q4_1 (4.6GB)
7b-v3.2-q4_K_M (4.4GB)
7b-v3.2-q4_K_S (4.1GB)
7b-v3.2-q5_0 (5.0GB)
7b-v3.2-q5_1 (5.4GB)
7b-v3.2-q5_K_M (5.1GB)
7b-v3.2-q5_K_S (5.0GB)
7b-v3.2-q6_K (5.9GB)
7b-v3.2-q8_0 (7.7GB)
7b-v3.3 (4.1GB)
7b-v3.3-fp16 (14GB)
7b-v3.3-q2_K (3.1GB)
7b-v3.3-q3_K_L (3.8GB)
7b-v3.3-q3_K_M (3.5GB)
7b-v3.3-q3_K_S (3.2GB)
7b-v3.3-q4_0 (4.1GB)
7b-v3.3-q4_1 (4.6GB)
7b-v3.3-q4_K_M (4.4GB)
7b-v3.3-q4_K_S (4.1GB)
7b-v3.3-q5_0 (5.0GB)
7b-v3.3-q5_1 (5.4GB)
7b-v3.3-q5_K_M (5.1GB)
7b-v3.3-q5_K_S (5.0GB)
7b-v3.3-q6_K (5.9GB)
7b-v3.3-q8_0 (7.7GB)
neural-chat
A fine-tuned model based on Mistral with good coverage of both domain and language.
Category: Language
Downloads: 80.7K
Last Updated: 10 months ago Read more about: NeuralChat
latest (4.1GB)
70b (39GB)
13b (7.4GB)
7b (4.1GB)
70b-fp16 (138GB)
70b-q2_K (29GB)
70b-q3_K_L (36GB)
70b-q3_K_M (33GB)
70b-q3_K_S (30GB)
70b-q4_0 (39GB)
70b-q4_1 (43GB)
70b-q4_K_M (41GB)
70b-q4_K_S (39GB)
70b-q5_0 (47GB)
70b-q5_1 (52GB)
70b-q5_K_M (49GB)
70b-q5_K_S (47GB)
70b-q6_K (57GB)
70b-q8_0 (73GB)
13b-fp16 (26GB)
13b-q2_K (5.4GB)
13b-q3_K_L (6.9GB)
13b-q3_K_M (6.3GB)
13b-q3_K_S (5.7GB)
13b-q4_0 (7.4GB)
13b-q4_1 (8.2GB)
13b-q4_K_M (7.9GB)
13b-q4_K_S (7.4GB)
13b-q5_0 (9.0GB)
13b-q5_1 (9.8GB)
13b-q5_K_M (9.2GB)
13b-q5_K_S (9.0GB)
13b-q6_K (11GB)
13b-q8_0 (14GB)
7b-v1.1-fp16 (14GB)
7b-v1.1-q2_K (3.1GB)
7b-v1.1-q3_K_L (3.8GB)
7b-v1.1-q3_K_M (3.5GB)
7b-v1.1-q3_K_S (3.2GB)
7b-v1.1-q4_0 (4.1GB)
7b-v1.1-q4_1 (4.6GB)
7b-v1.1-q4_K_M (4.4GB)
7b-v1.1-q4_K_S (4.1GB)
7b-v1.1-q5_0 (5.0GB)
7b-v1.1-q5_1 (5.4GB)
7b-v1.1-q5_K_M (5.1GB)
7b-v1.1-q5_K_S (5.0GB)
7b-v1.1-q6_K (5.9GB)
7b-v1.1-q8_0 (7.7GB)
7b-fp16 (13GB)
7b-q2_K (2.8GB)
7b-q3_K_L (3.6GB)
7b-q3_K_M (3.3GB)
7b-q3_K_S (2.9GB)
7b-q4_0 (3.8GB)
7b-q4_1 (4.2GB)
7b-q4_K_M (4.1GB)
7b-q4_K_S (3.9GB)
7b-q5_0 (4.7GB)
7b-q5_1 (5.1GB)
7b-q5_K_M (4.8GB)
7b-q5_K_S (4.7GB)
7b-q6_K (5.5GB)
7b-q8_0 (7.2GB)
wizard-math
A model that specializes in mathematical and logical problems, emphasizing analytical reasoning and problem-solving.
Category: Specialized
Downloads: 87.6K
Last Updated: 10 months ago Read more about: WizardMath
latest (983MB)
12b (7.0GB)
1.6b (983MB)
chat (983MB)
zephyr (983MB)
12b-chat (7.0GB)
12b-chat-fp16 (24GB)
12b-chat-q2_K (4.7GB)
12b-chat-q3_K_L (6.5GB)
12b-chat-q3_K_M (6.0GB)
12b-chat-q3_K_S (5.4GB)
12b-chat-q4_0 (7.0GB)
12b-chat-q4_1 (7.7GB)
12b-chat-q4_K_M (7.4GB)
12b-chat-q4_K_S (7.0GB)
12b-chat-q5_0 (8.4GB)
12b-chat-q5_1 (9.1GB)
12b-chat-q5_K_M (8.6GB)
12b-chat-q5_K_S (8.4GB)
12b-chat-q6_K (10.0GB)
12b-chat-q8_0 (13GB)
12b-text (7.0GB)
1.6b-chat (983MB)
12b-fp16 (24GB)
12b-q2_K (4.7GB)
12b-q3_K_L (6.5GB)
12b-q3_K_M (6.0GB)
12b-q3_K_S (5.4GB)
12b-q4_0 (7.0GB)
12b-q4_1 (7.7GB)
12b-q4_K_M (7.4GB)
12b-q4_K_S (7.0GB)
12b-q5_0 (8.4GB)
12b-q5_1 (9.1GB)
12b-q5_K_M (8.6GB)
12b-q5_K_S (8.4GB)
12b-q6_K (10.0GB)
12b-q8_0 (13GB)
1.6b-chat-fp16 (3.3GB)
1.6b-chat-q2_K (694MB)
1.6b-chat-q3_K_L (915MB)
1.6b-chat-q3_K_M (858MB)
1.6b-chat-q3_K_S (792MB)
1.6b-chat-q4_0 (983MB)
1.6b-chat-q4_1 (1.1GB)
1.6b-chat-q4_K_M (1.0GB)
1.6b-chat-q4_K_S (989MB)
1.6b-chat-q5_0 (1.2GB)
1.6b-chat-q5_1 (1.3GB)
1.6b-chat-q5_K_M (1.2GB)
1.6b-chat-q5_K_S (1.2GB)
1.6b-chat-q6_K (1.4GB)
1.6b-chat-q8_0 (1.8GB)
1.6b-zephyr (983MB)
1.6b-zephyr-fp16 (3.3GB)
1.6b-zephyr-q2_K (694MB)
1.6b-zephyr-q3_K_L (915MB)
1.6b-zephyr-q3_K_M (858MB)
1.6b-zephyr-q3_K_S (792MB)
1.6b-zephyr-q4_0 (983MB)
1.6b-zephyr-q4_1 (1.1GB)
1.6b-zephyr-q4_K_M (1.0GB)
1.6b-zephyr-q4_K_S (989MB)
1.6b-zephyr-q5_0 (1.2GB)
1.6b-zephyr-q5_1 (1.3GB)
1.6b-zephyr-q5_K_M (1.2GB)
1.6b-zephyr-q5_K_S (1.2GB)
1.6b-zephyr-q6_K (1.4GB)
1.6b-zephyr-q8_0 (1.8GB)
1.6b-fp16 (3.3GB)
1.6b-q2_K (694MB)
1.6b-q3_K_L (915MB)
1.6b-q3_K_M (858MB)
1.6b-q3_K_S (792MB)
1.6b-q4_0 (983MB)
1.6b-q4_1 (1.1GB)
1.6b-q4_K_M (1.0GB)
1.6b-q4_K_S (989MB)
1.6b-q5_0 (1.2GB)
1.6b-q5_1 (1.3GB)
1.6b-q5_K_M (1.2GB)
1.6b-q5_K_S (1.2GB)
1.6b-q6_K (1.4GB)
1.6b-q8_0 (1.8GB)
stablelm2
Stable LM 2 is a language model with 1.6 billion and 12 billion parameter variants, trained on multilingual data in English, Spanish, German, Italian, French, Portuguese, and Dutch.
Category: Tiny
Downloads: 94.2K
Last Updated: 6 months ago Read more about: Stable LM 2
latest (19GB)
34b (19GB)
34b-python (19GB)
34b-python-fp16 (67GB)
34b-python-q2_K (14GB)
34b-python-q3_K_L (18GB)
34b-python-q3_K_M (16GB)
34b-python-q3_K_S (15GB)
34b-python-q4_0 (19GB)
34b-python-q4_1 (21GB)
34b-python-q4_K_M (20GB)
34b-python-q4_K_S (19GB)
34b-python-q5_0 (23GB)
34b-python-q5_1 (25GB)
34b-python-q5_K_M (24GB)
34b-python-q5_K_S (23GB)
34b-python-q6_K (28GB)
34b-python-q8_0 (36GB)
34b-v2 (19GB)
34b-v2-fp16 (67GB)
34b-v2-q2_K (14GB)
34b-v2-q3_K_L (18GB)
34b-v2-q3_K_M (16GB)
34b-v2-q3_K_S (15GB)
34b-v2-q4_0 (19GB)
34b-v2-q4_1 (21GB)
34b-v2-q4_K_M (20GB)
34b-v2-q4_K_S (19GB)
34b-v2-q5_0 (23GB)
34b-v2-q5_1 (25GB)
34b-v2-q5_K_M (24GB)
34b-v2-q5_K_S (23GB)
34b-v2-q6_K (28GB)
34b-v2-q8_0 (36GB)
34b-fp16 (67GB)
34b-q2_K (14GB)
34b-q3_K_L (18GB)
34b-q3_K_M (16GB)
34b-q3_K_S (15GB)
34b-q4_0 (19GB)
34b-q4_1 (21GB)
34b-q4_K_M (20GB)
34b-q4_K_S (19GB)
34b-q5_0 (23GB)
34b-q5_1 (25GB)
34b-q5_K_M (24GB)
34b-q5_K_S (23GB)
34b-q6_K (28GB)
34b-q8_0 (36GB)
phind-codellama
A code generation model based on Code Llama.
Category: Coding
Downloads: 73.3K
Last Updated: 10 months ago Read more about: Phind CodeLlama
latest (2.0GB)
34b (19GB)
20b (12GB)
8b (4.6GB)
3b (2.0GB)
34b-base-f16 (68GB)
34b-base (19GB)
34b-base-q2_K (13GB)
34b-base-q3_K_L (20GB)
34b-base-q3_K_M (18GB)
34b-base-q3_K_S (15GB)
34b-base-q4_0 (19GB)
34b-base-q4_1 (21GB)
34b-base-q4_K_M (21GB)
34b-base-q4_K_S (19GB)
34b-base-q5_0 (23GB)
34b-base-q5_1 (25GB)
34b-base-q5_K_M (25GB)
34b-base-q5_K_S (23GB)
34b-base-q6_K (28GB)
34b-base-q8_0 (36GB)
34b-instruct (19GB)
34b-instruct-f16 (68GB)
34b-instruct-q2_K (13GB)
34b-instruct-q3_K_L (20GB)
34b-instruct-q3_K_M (18GB)
34b-instruct-q3_K_S (15GB)
34b-instruct-q4_0 (19GB)
34b-instruct-q4_1 (21GB)
34b-instruct-q4_K_M (21GB)
34b-instruct-q4_K_S (19GB)
34b-instruct-q5_0 (23GB)
34b-instruct-q5_1 (25GB)
34b-instruct-q5_K_M (25GB)
34b-instruct-q5_K_S (23GB)
34b-instruct-q6_K (28GB)
34b-instruct-q8_0 (36GB)
20b-base-f16 (40GB)
20b-base (12GB)
20b-base-fp16 (40GB)
20b-base-q2_K (7.9GB)
20b-base-q3_K_L (12GB)
20b-base-q3_K_M (11GB)
20b-base-q3_K_S (8.9GB)
20b-base-q4_0 (12GB)
20b-base-q4_1 (13GB)
20b-base-q4_K_M (13GB)
20b-base-q4_K_S (12GB)
20b-base-q5_0 (14GB)
20b-base-q5_1 (15GB)
20b-base-q5_K_M (15GB)
20b-base-q5_K_S (14GB)
20b-base-q6_K (17GB)
20b-base-q8_0 (21GB)
20b-instruct-f16 (40GB)
20b-instruct (12GB)
20b-instruct-q2_K (7.9GB)
20b-instruct-q3_K_L (12GB)
20b-instruct-q3_K_M (11GB)
20b-instruct-q3_K_S (8.9GB)
20b-instruct-q4_0 (12GB)
20b-instruct-q4_1 (13GB)
20b-instruct-q4_K_M (13GB)
20b-instruct-q4_K_S (12GB)
20b-instruct-q5_0 (14GB)
20b-instruct-q5_1 (15GB)
20b-instruct-q5_K_M (15GB)
20b-instruct-q5_K_S (14GB)
20b-instruct-q6_K (17GB)
20b-instruct-q8_0 (21GB)
8b-base-f16 (16GB)
8b-base (4.6GB)
8b-base-fp16 (16GB)
8b-base-q2_K (3.1GB)
8b-base-q3_K_L (4.3GB)
8b-base-q3_K_M (3.9GB)
8b-base-q3_K_S (3.5GB)
8b-base-q4_0 (4.6GB)
8b-base-q4_1 (5.1GB)
8b-base-q4_K_M (4.9GB)
8b-base-q4_K_S (4.6GB)
8b-base-q5_0 (5.6GB)
8b-base-q5_1 (6.1GB)
8b-base-q5_K_M (5.7GB)
8b-base-q5_K_S (5.6GB)
8b-base-q6_K (6.6GB)
8b-base-q8_0 (8.6GB)
8b-instruct (4.6GB)
8b-instruct-f16 (16GB)
8b-instruct-fp16 (16GB)
8b-instruct-q2_K (3.1GB)
8b-instruct-q3_K_L (4.3GB)
8b-instruct-q3_K_M (3.9GB)
8b-instruct-q3_K_S (3.5GB)
8b-instruct-q4_0 (4.6GB)
8b-instruct-q4_1 (5.1GB)
8b-instruct-q4_K_M (4.9GB)
8b-instruct-q4_K_S (4.6GB)
8b-instruct-q5_0 (5.6GB)
8b-instruct-q5_1 (6.1GB)
8b-instruct-q5_K_M (5.7GB)
8b-instruct-q5_K_S (5.6GB)
8b-instruct-q6_K (6.6GB)
8b-instruct-q8_0 (8.6GB)
3b-base (2.0GB)
3b-base-f16 (7.0GB)
3b-base-fp16 (7.0GB)
3b-base-q2_K (1.3GB)
3b-base-q3_K_L (1.9GB)
3b-base-q3_K_M (1.7GB)
3b-base-q3_K_S (1.6GB)
3b-base-q4_0 (2.0GB)
3b-base-q4_1 (2.2GB)
3b-base-q4_K_M (2.1GB)
3b-base-q4_K_S (2.0GB)
3b-base-q5_0 (2.4GB)
3b-base-q5_1 (2.6GB)
3b-base-q5_K_M (2.5GB)
3b-base-q5_K_S (2.4GB)
3b-base-q6_K (2.9GB)
3b-base-q8_0 (3.7GB)
3b-instruct-f16 (7.0GB)
3b-instruct (2.0GB)
3b-instruct-fp16 (7.0GB)
3b-instruct-q2_K (1.3GB)
3b-instruct-q3_K_L (1.9GB)
3b-instruct-q3_K_M (1.7GB)
3b-instruct-q3_K_S (1.6GB)
3b-instruct-q4_0 (2.0GB)
3b-instruct-q4_1 (2.2GB)
3b-instruct-q4_K_M (2.1GB)
3b-instruct-q4_K_S (2.0GB)
3b-instruct-q5_0 (2.4GB)
3b-instruct-q5_1 (2.6GB)
3b-instruct-q5_K_M (2.5GB)
3b-instruct-q5_K_S (2.4GB)
3b-instruct-q6_K (2.9GB)
3b-instruct-q8_0 (3.7GB)
granite-code
IBM has developed a family of open foundation models designed for code intelligence.
Category: Coding
Downloads: 143.1K
Last Updated: 2 months ago Read more about: Granite Code
latest (4.2GB)
15b (9.1GB)
7b (4.2GB)
15b-starcoder2 (9.1GB)
15b-starcoder2-fp16 (32GB)
15b-starcoder2-q2_K (6.2GB)
15b-starcoder2-q3_K_L (9.0GB)
15b-starcoder2-q3_K_M (8.1GB)
15b-starcoder2-q3_K_S (7.0GB)
15b-starcoder2-q4_0 (9.1GB)
15b-starcoder2-q4_1 (10GB)
15b-starcoder2-q4_K_M (9.9GB)
15b-starcoder2-q4_K_S (9.3GB)
15b-starcoder2-q5_0 (11GB)
15b-starcoder2-q5_1 (12GB)
15b-starcoder2-q5_K_M (11GB)
15b-starcoder2-q5_K_S (11GB)
15b-starcoder2-q6_K (13GB)
15b-starcoder2-q8_0 (17GB)
7b-starcoder2 (4.2GB)
7b-starcoder2-fp16 (15GB)
7b-starcoder2-q2_K (2.9GB)
7b-starcoder2-q3_K_L (4.2GB)
7b-starcoder2-q3_K_M (3.8GB)
7b-starcoder2-q3_K_S (3.3GB)
7b-starcoder2-q4_0 (4.2GB)
7b-starcoder2-q4_1 (4.7GB)
7b-starcoder2-q4_K_M (4.6GB)
7b-starcoder2-q4_K_S (4.3GB)
7b-starcoder2-q5_0 (5.1GB)
7b-starcoder2-q5_1 (5.6GB)
7b-starcoder2-q5_K_M (5.3GB)
7b-starcoder2-q5_K_S (5.1GB)
7b-starcoder2-q6_K (6.1GB)
7b-starcoder2-q8_0 (7.9GB)
dolphincoder
An uncensored 7B and 15B version of the Dolphin model family, based on StarCoder2 and highly effective at coding.
Category: Uncensored, Coding
Downloads: 70.4K
Last Updated: 7 months ago Read more about: Dolphin Coder
latest (3.8GB)
13b (7.4GB)
7b (3.8GB)
70b-llama2-fp16 (138GB)
70b-llama2-q2_K (29GB)
70b-llama2-q3_K_L (36GB)
70b-llama2-q3_K_M (33GB)
70b-llama2-q3_K_S (30GB)
70b-llama2-q4_0 (39GB)
70b-llama2-q4_1 (43GB)
70b-llama2-q4_K_M (41GB)
70b-llama2-q4_K_S (39GB)
70b-llama2-q5_0 (47GB)
70b-llama2-q5_1 (52GB)
70b-llama2-q5_K_M (49GB)
70b-llama2-q6_K (57GB)
13b-llama2 (7.4GB)
13b-llama2-fp16 (26GB)
13b-llama2-q2_K (5.4GB)
13b-llama2-q3_K_L (6.9GB)
13b-llama2-q3_K_M (6.3GB)
13b-llama2-q3_K_S (5.7GB)
13b-llama2-q4_0 (7.4GB)
13b-llama2-q4_1 (8.2GB)
13b-llama2-q4_K_M (7.9GB)
13b-llama2-q4_K_S (7.4GB)
13b-llama2-q5_0 (9.0GB)
13b-llama2-q5_1 (9.8GB)
13b-llama2-q5_K_M (9.2GB)
13b-llama2-q5_K_S (9.0GB)
13b-llama2-q6_K (11GB)
13b-llama2-q8_0 (14GB)
7b-llama2 (3.8GB)
13b-fp16 (26GB)
13b-q2_K (5.4GB)
13b-q3_K_L (6.9GB)
13b-q3_K_M (6.3GB)
13b-q3_K_S (5.7GB)
13b-q4_0 (7.4GB)
13b-q4_1 (8.2GB)
13b-q4_K_M (7.9GB)
13b-q4_K_S (7.4GB)
13b-q5_0 (9.0GB)
13b-q5_1 (9.8GB)
13b-q5_K_M (9.2GB)
13b-q5_K_S (9.0GB)
13b-q6_K (11GB)
13b-q8_0 (14GB)
7b-llama2-fp16 (13GB)
7b-llama2-q2_K (2.8GB)
7b-llama2-q3_K_L (3.6GB)
7b-llama2-q3_K_M (3.3GB)
7b-llama2-q3_K_S (2.9GB)
7b-llama2-q4_0 (3.8GB)
7b-llama2-q4_1 (4.2GB)
7b-llama2-q4_K_M (4.1GB)
7b-llama2-q4_K_S (3.9GB)
7b-llama2-q5_0 (4.7GB)
7b-llama2-q5_1 (5.1GB)
7b-llama2-q5_K_M (4.8GB)
7b-llama2-q5_K_S (4.7GB)
7b-llama2-q6_K (5.5GB)
7b-llama2-q8_0 (7.2GB)
nous-hermes
Nous Research has developed general use models based on Llama and Llama 2.
Category: Language
Downloads: 74.3K
Last Updated: 1 year ago Read more about: Nous Hermes
latest (4.1GB)
15b (9.0GB)
7b (4.1GB)
70b-alpha-fp16 (138GB)
70b-alpha-q2_K (25GB)
70b-alpha-q3_K_L (36GB)
70b-alpha-q3_K_M (33GB)
70b-alpha-q3_K_S (30GB)
70b-alpha-q4_0 (39GB)
70b-alpha-q4_1 (43GB)
70b-alpha-q4_K_M (41GB)
70b-alpha-q4_K_S (39GB)
70b-alpha-q5_0 (47GB)
70b-alpha-q5_1 (52GB)
70b-alpha-q5_K_M (49GB)
70b-alpha-q5_K_S (47GB)
70b-alpha-q6_K (57GB)
70b-alpha-q8_0 (73GB)
15b-fp16 (32GB)
15b-q2_K (6.7GB)
15b-q3_K_L (9.1GB)
15b-q3_K_M (8.2GB)
15b-q3_K_S (6.9GB)
15b-q4_0 (9.0GB)
15b-q4_1 (10.0GB)
15b-q4_K_M (10.0GB)
15b-q4_K_S (9.1GB)
15b-q5_0 (11GB)
15b-q5_1 (12GB)
15b-q5_K_M (12GB)
15b-q5_K_S (11GB)
15b-q6_K (13GB)
15b-q8_0 (17GB)
7b-fp16 (14GB)
7b-q2_K (3.1GB)
7b-q3_K_L (3.8GB)
7b-q3_K_M (3.5GB)
7b-q3_K_S (3.2GB)
7b-q4_0 (4.1GB)
7b-q4_1 (4.6GB)
7b-q4_K_M (4.4GB)
7b-q4_K_S (4.1GB)
7b-q5_0 (5.0GB)
7b-q5_1 (5.4GB)
7b-q5_K_M (5.1GB)
7b-q5_K_S (5.0GB)
7b-q6_K (5.9GB)
7b-q8_0 (7.7GB)
sqlcoder
SQLCoder is a code completion model specifically optimized for SQL generation tasks, built upon StarCoder. It offers enhanced capabilities for coding assistance in SQL.
Category: Coding, Specialized
Downloads: 74.5K
Last Updated: 9 months ago Read more about: SQLCoder
latest (4.7GB)
70b (40GB)
8b (4.7GB)
1048k (4.7GB)
instruct (4.7GB)
70b-instruct-1048k-fp16 (141GB)
70b-instruct-1048k-q2_K (26GB)
70b-instruct-1048k-q3_K_L (37GB)
70b-instruct-1048k-q3_K_M (34GB)
70b-instruct-1048k-q3_K_S (31GB)
70b-instruct-1048k-q4_0 (40GB)
70b-instruct-1048k-q4_1 (44GB)
70b-instruct-1048k-q4_K_M (43GB)
70b-instruct-1048k-q4_K_S (40GB)
70b-instruct-1048k-q5_0 (49GB)
70b-instruct-1048k-q5_1 (53GB)
70b-instruct-1048k-q5_K_M (50GB)
70b-instruct-1048k-q5_K_S (49GB)
70b-instruct-1048k-q6_K (58GB)
70b-instruct-1048k-q8_0 (75GB)
8b-instruct-1048k-fp16 (16GB)
8b-instruct-1048k-q2_K (3.2GB)
8b-instruct-1048k-q3_K_L (4.3GB)
8b-instruct-1048k-q3_K_M (4.0GB)
8b-instruct-1048k-q3_K_S (3.7GB)
8b-instruct-1048k-q4_0 (4.7GB)
8b-instruct-1048k-q4_1 (5.1GB)
8b-instruct-1048k-q4_K_M (4.9GB)
8b-instruct-1048k-q4_K_S (4.7GB)
8b-instruct-1048k-q5_0 (5.6GB)
8b-instruct-1048k-q5_1 (6.1GB)
8b-instruct-1048k-q5_K_M (5.7GB)
8b-instruct-1048k-q5_K_S (5.6GB)
8b-instruct-1048k-q6_K (6.6GB)
8b-instruct-1048k-q8_0 (8.5GB)
llama3-gradient
This model extends the context length of Llama 3 8B from 8,000 to more than 1,000,000 tokens.
Category: Language
Downloads: 88.6K
Last Updated: 6 months ago Read more about: Llama 3 Gradient
latest (4.1GB)
7b (4.1GB)
alpha (4.1GB)
beta (4.1GB)
7b-alpha (4.1GB)
7b-alpha-fp16 (14GB)
7b-alpha-q2_K (2.7GB)
7b-alpha-q3_K_L (3.8GB)
7b-alpha-q3_K_M (3.5GB)
7b-alpha-q3_K_S (3.2GB)
7b-alpha-q4_0 (4.1GB)
7b-alpha-q4_1 (4.6GB)
7b-alpha-q4_K_M (4.4GB)
7b-alpha-q4_K_S (4.1GB)
7b-alpha-q5_0 (5.0GB)
7b-alpha-q5_1 (5.4GB)
7b-alpha-q5_K_M (5.1GB)
7b-alpha-q5_K_S (5.0GB)
7b-alpha-q6_K (5.9GB)
7b-alpha-q8_0 (7.7GB)
7b-beta (4.1GB)
7b-beta-fp16 (14GB)
7b-beta-q2_K (2.7GB)
7b-beta-q3_K_L (3.8GB)
7b-beta-q3_K_M (3.5GB)
7b-beta-q3_K_S (3.2GB)
7b-beta-q4_0 (4.1GB)
7b-beta-q4_1 (4.6GB)
7b-beta-q4_K_M (4.4GB)
7b-beta-q4_K_S (4.1GB)
7b-beta-q5_0 (5.0GB)
7b-beta-q5_1 (5.4GB)
7b-beta-q5_K_M (5.1GB)
7b-beta-q5_K_S (5.0GB)
7b-beta-q6_K (5.9GB)
7b-beta-q8_0 (7.7GB)
starling-lm
Starling is a large language model trained by reinforcement learning from AI feedback, aimed at improving chatbot helpfulness.
Category: Language
Downloads: 61K
Last Updated: 7 months ago Read more about: Starling
latest (4.0GB)
67b (38GB)
7b (4.0GB)
67b-base (38GB)
67b-base-fp16 (135GB)
67b-base-q2_K (28GB)
67b-base-q3_K_L (36GB)
67b-base-q3_K_M (33GB)
67b-base-q3_K_S (29GB)
67b-base-q4_0 (38GB)
67b-base-q4_1 (42GB)
67b-base-q4_K_M (40GB)
67b-base-q4_K_S (38GB)
67b-base-q5_0 (46GB)
67b-base-q5_1 (51GB)
67b-base-q5_K_M (48GB)
67b-base-q5_K_S (46GB)
67b-base-q6_K (55GB)
67b-base-q8_0 (72GB)
67b-chat (38GB)
67b-chat-fp16 (135GB)
67b-chat-q2_K (28GB)
67b-chat-q3_K_L (36GB)
67b-chat-q3_K_M (33GB)
67b-chat-q3_K_S (29GB)
67b-chat-q4_0 (38GB)
67b-chat-q4_1 (42GB)
67b-chat-q4_K_M (40GB)
67b-chat-q4_K_S (38GB)
67b-chat-q5_0 (46GB)
67b-chat-q5_1 (51GB)
67b-chat-q5_K_S (46GB)
7b-base (4.0GB)
7b-base-fp16 (14GB)
7b-base-q2_K (3.0GB)
7b-base-q3_K_L (3.7GB)
7b-base-q3_K_M (3.5GB)
7b-base-q3_K_S (3.1GB)
7b-base-q4_0 (4.0GB)
7b-base-q4_1 (4.4GB)
7b-base-q4_K_M (4.2GB)
7b-base-q4_K_S (4.0GB)
7b-base-q5_0 (4.8GB)
7b-base-q5_1 (5.2GB)
7b-base-q5_K_M (4.9GB)
7b-base-q5_K_S (4.8GB)
7b-base-q6_K (5.7GB)
7b-base-q8_0 (7.3GB)
7b-chat (4.0GB)
7b-chat-fp16 (14GB)
7b-chat-q2_K (3.0GB)
7b-chat-q3_K_L (3.7GB)
7b-chat-q3_K_M (3.5GB)
7b-chat-q3_K_S (3.1GB)
7b-chat-q4_0 (4.0GB)
7b-chat-q4_1 (4.4GB)
7b-chat-q4_K_M (4.2GB)
7b-chat-q4_K_S (4.0GB)
7b-chat-q5_0 (4.8GB)
7b-chat-q5_1 (5.2GB)
7b-chat-q5_K_M (4.9GB)
7b-chat-q5_K_S (4.8GB)
7b-chat-q6_K (5.7GB)
7b-chat-q8_0 (7.3GB)
deepseek-llm
An advanced language model trained on 2 trillion bilingual tokens.
Category: Language
Downloads: 88.5K
Last Updated: 10 months ago Read more about: DeepSeek
latest (3.8GB)
13b (7.4GB)
7b (3.8GB)
13b-128k (7.4GB)
13b-128k-fp16 (26GB)
13b-128k-q2_K (5.4GB)
13b-128k-q3_K_L (6.9GB)
13b-128k-q3_K_M (6.3GB)
13b-128k-q3_K_S (5.7GB)
13b-128k-q4_0 (7.4GB)
13b-128k-q4_1 (8.2GB)
13b-128k-q4_K_M (7.9GB)
13b-128k-q4_K_S (7.4GB)
13b-128k-q5_0 (9.0GB)
13b-128k-q5_1 (9.8GB)
13b-128k-q5_K_M (9.2GB)
13b-128k-q5_K_S (9.0GB)
13b-128k-q6_K (11GB)
13b-128k-q8_0 (14GB)
13b-64k (7.4GB)
13b-64k-fp16 (26GB)
13b-64k-q2_K (5.4GB)
13b-64k-q3_K_L (6.9GB)
13b-64k-q3_K_M (6.3GB)
13b-64k-q3_K_S (5.7GB)
13b-64k-q4_0 (7.4GB)
13b-64k-q4_1 (8.2GB)
13b-64k-q4_K_M (7.9GB)
13b-64k-q4_K_S (7.4GB)
13b-64k-q5_0 (9.0GB)
13b-64k-q5_1 (9.8GB)
13b-64k-q5_K_M (9.2GB)
13b-64k-q5_K_S (9.0GB)
13b-64k-q6_K (11GB)
13b-64k-q8_0 (14GB)
7b-128k (3.8GB)
7b-128k-fp16 (13GB)
7b-128k-q2_K (2.8GB)
7b-128k-q3_K_L (3.6GB)
7b-128k-q3_K_M (3.3GB)
7b-128k-q3_K_S (2.9GB)
7b-128k-q4_0 (3.8GB)
7b-128k-q4_1 (4.2GB)
7b-128k-q4_K_M (4.1GB)
7b-128k-q4_K_S (3.9GB)
7b-128k-q5_0 (4.7GB)
7b-128k-q5_1 (5.1GB)
7b-128k-q5_K_M (4.8GB)
7b-128k-q5_K_S (4.7GB)
7b-128k-q6_K (5.5GB)
7b-128k-q8_0 (7.2GB)
7b-64k (3.8GB)
7b-64k-fp16 (13GB)
7b-64k-q2_K (2.8GB)
7b-64k-q3_K_L (3.6GB)
7b-64k-q3_K_M (3.3GB)
7b-64k-q3_K_S (2.9GB)
7b-64k-q4_0 (3.8GB)
7b-64k-q4_1 (4.2GB)
7b-64k-q4_K_M (4.1GB)
7b-64k-q4_K_S (3.9GB)
7b-64k-q5_0 (4.7GB)
7b-64k-q5_1 (5.1GB)
7b-64k-q5_K_M (4.8GB)
7b-64k-q5_K_S (4.7GB)
7b-64k-q6_K (5.5GB)
7b-64k-q8_0 (7.2GB)
yarn-llama2
A version of Llama 2 extended to support a context window of up to 128k tokens.
Category: Language
Downloads: 71.1K
Last Updated: 1 year ago Read more about: Yarn Llama 2
latest (3.8GB)
13b (7.4GB)
7b (3.8GB)
70b-v0.1 (39GB)
70b-v0.1-fp16 (138GB)
70b-v0.1-q2_K (29GB)
70b-v0.1-q3_K_L (36GB)
70b-v0.1-q3_K_M (33GB)
70b-v0.1-q3_K_S (30GB)
70b-v0.1-q4_0 (39GB)
70b-v0.1-q4_1 (43GB)
70b-v0.1-q4_K_M (41GB)
70b-v0.1-q4_K_S (39GB)
70b-v0.1-q5_0 (47GB)
70b-v0.1-q5_1 (52GB)
70b-v0.1-q5_K_S (47GB)
70b-v0.1-q6_K (57GB)
70b-v0.1-q8_0 (73GB)
13b-v0.1 (7.4GB)
13b-v0.1-fp16 (26GB)
13b-v0.1-q2_K (5.4GB)
13b-v0.1-q3_K_L (6.9GB)
13b-v0.1-q3_K_M (6.3GB)
13b-v0.1-q3_K_S (5.7GB)
13b-v0.1-q4_0 (7.4GB)
13b-v0.1-q4_1 (8.2GB)
13b-v0.1-q4_K_M (7.9GB)
13b-v0.1-q4_K_S (7.4GB)
13b-v0.1-q5_0 (9.0GB)
13b-v0.1-q5_1 (9.8GB)
13b-v0.1-q5_K_M (9.2GB)
13b-v0.1-q5_K_S (9.0GB)
13b-v0.1-q6_K (11GB)
13b-v0.1-q8_0 (14GB)
13b-v0.2 (7.4GB)
13b-v0.2-fp16 (26GB)
13b-v0.2-q2_K (5.4GB)
13b-v0.2-q3_K_L (6.9GB)
13b-v0.2-q3_K_M (6.3GB)
13b-v0.2-q3_K_S (5.7GB)
13b-v0.2-q4_0 (7.4GB)
13b-v0.2-q4_1 (8.2GB)
13b-v0.2-q4_K_M (7.9GB)
13b-v0.2-q4_K_S (7.4GB)
13b-v0.2-q5_0 (9.0GB)
13b-v0.2-q5_1 (9.8GB)
13b-v0.2-q5_K_M (9.2GB)
13b-v0.2-q5_K_S (9.0GB)
13b-v0.2-q6_K (11GB)
13b-v0.2-q8_0 (14GB)
7b-v0.1 (3.8GB)
7b-v0.1-fp16 (13GB)
7b-v0.1-q2_K (2.8GB)
7b-v0.1-q3_K_L (3.6GB)
7b-v0.1-q3_K_M (3.3GB)
7b-v0.1-q3_K_S (2.9GB)
7b-v0.1-q4_0 (3.8GB)
7b-v0.1-q4_1 (4.2GB)
7b-v0.1-q4_K_M (4.1GB)
7b-v0.1-q4_K_S (3.9GB)
7b-v0.1-q5_0 (4.7GB)
7b-v0.1-q5_1 (5.1GB)
7b-v0.1-q5_K_M (4.8GB)
7b-v0.1-q5_K_S (4.7GB)
7b-v0.1-q6_K (5.5GB)
7b-v0.1-q8_0 (7.2GB)
7b-v0.2 (3.8GB)
7b-v0.2-fp16 (13GB)
7b-v0.2-q2_K (2.8GB)
7b-v0.2-q3_K_L (3.6GB)
7b-v0.2-q3_K_S (2.9GB)
7b-v0.2-q4_0 (3.8GB)
7b-v0.2-q4_1 (4.2GB)
7b-v0.2-q4_K_M (4.1GB)
7b-v0.2-q4_K_S (3.9GB)
7b-v0.2-q5_0 (4.7GB)
7b-v0.2-q5_K_M (4.8GB)
7b-v0.2-q5_K_S (4.7GB)
7b-v0.2-q6_K (5.5GB)
7b-v0.2-q8_0 (7.2GB)
xwinlm
A conversational model based on Llama 2 that performs competitively across multiple benchmarks.
Category: Language
Downloads: 76K
Last Updated: 1 year ago Read more about: Xwin-LM
latest (4.7GB)
70b (40GB)
8b (4.7GB)
70b-v1.5 (40GB)
70b-v1.5-fp16 (141GB)
70b-v1.5-q2_K (26GB)
70b-v1.5-q3_K_L (37GB)
70b-v1.5-q3_K_M (34GB)
70b-v1.5-q3_K_S (31GB)
70b-v1.5-q4_0 (40GB)
70b-v1.5-q4_1 (44GB)
70b-v1.5-q4_K_M (43GB)
70b-v1.5-q4_K_S (40GB)
70b-v1.5-q5_0 (49GB)
70b-v1.5-q5_1 (53GB)
70b-v1.5-q5_K_M (50GB)
70b-v1.5-q5_K_S (49GB)
70b-v1.5-q6_K (58GB)
70b-v1.5-q8_0 (75GB)
8b-v1.5 (4.7GB)
8b-v1.5-fp16 (16GB)
8b-v1.5-q2_K (3.2GB)
8b-v1.5-q3_K_L (4.3GB)
8b-v1.5-q3_K_M (4.0GB)
8b-v1.5-q3_K_S (3.7GB)
8b-v1.5-q4_0 (4.7GB)
8b-v1.5-q4_1 (5.1GB)
8b-v1.5-q4_K_M (4.9GB)
8b-v1.5-q4_K_S (4.7GB)
8b-v1.5-q5_0 (5.6GB)
8b-v1.5-q5_1 (6.1GB)
8b-v1.5-q5_K_M (5.7GB)
8b-v1.5-q5_K_S (5.6GB)
8b-v1.5-q6_K (6.6GB)
8b-v1.5-q8_0 (8.5GB)
llama3-chatqa
A model from NVIDIA built on Llama 3 that excels at conversational question answering (QA) and retrieval-augmented generation (RAG).
Category: Specialized
Downloads: 76.5K
Last Updated: 5 months ago Read more about: LLama 3 ChatQA-1.5
latest (671MB)
335m (671MB)
335m-en-v1.5-fp16 (671MB)
bge-large
BAAI's embedding model converts texts into vectors.
Category: Tiny
Downloads: 10.4K
Last Updated: 3 months ago
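An embedding model such as bge-large maps each text to a fixed-length vector; downstream search then compares those vectors, most often by cosine similarity. A minimal sketch of that comparison (the vectors here are placeholders, not real bge-large output):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity between two embedding vectors:
    # dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0.
same = cosine_similarity([1.0, 0.0], [1.0, 0.0])
orthogonal = cosine_similarity([1.0, 0.0], [0.0, 1.0])
```

Semantic search over a corpus is then just ranking stored vectors by this score against the query vector.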
latest (563MB)
278m (563MB)
278m-mpnet-base-v2-fp16 (563MB)
paraphrase-multilingual
A sentence-transformers model suited to tasks such as clustering and semantic search.
Category: Tiny
Downloads: 5,995
Last Updated: 3 months ago
latest (3.8GB)
13b (7.4GB)
7b (3.8GB)
13b-fp16 (26GB)
13b-q2_K (5.4GB)
13b-q3_K_L (6.9GB)
13b-q3_K_M (6.3GB)
13b-q3_K_S (5.7GB)
13b-q4_0 (7.4GB)
13b-q4_1 (8.2GB)
13b-q4_K_M (7.9GB)
13b-q4_K_S (7.4GB)
13b-q5_0 (9.0GB)
13b-q5_1 (9.8GB)
13b-q5_K_M (9.2GB)
13b-q5_K_S (9.0GB)
13b-q6_K (11GB)
13b-q8_0 (14GB)
7b-fp16 (13GB)
7b-q2_K (2.8GB)
7b-q3_K_L (3.6GB)
7b-q3_K_M (3.3GB)
7b-q3_K_S (2.9GB)
7b-q4_0 (3.8GB)
7b-q4_1 (4.2GB)
7b-q4_K_M (4.1GB)
7b-q4_K_S (3.9GB)
7b-q5_0 (4.7GB)
7b-q5_1 (5.1GB)
7b-q5_K_M (4.8GB)
7b-q5_K_S (4.7GB)
7b-q6_K (5.5GB)
7b-q8_0 (7.2GB)
orca2
Orca 2, developed by Microsoft Research, is a fine-tuned version of Meta's Llama 2 models, crafted specifically to excel at reasoning tasks.
Category: Language
Downloads: 56K
Last Updated: 11 months ago Read more about: Orca 2
latest (1.2GB)
567m (1.2GB)
567m-fp16 (1.2GB)
bge-m3
BGE-M3 is a recent model from BAAI, notable for its multi-functionality, multi-linguality, and multi-granularity.
Category: Language
Downloads: 26.8K
Last Updated: 3 months ago
70b-llama2-q2_K (29GB)
70b-llama2-q3_K_L (36GB)
70b-llama2-q3_K_M (33GB)
70b-llama2-q3_K_S (30GB)
70b-llama2-q4_0 (39GB)
70b-llama2-q4_1 (43GB)
70b-llama2-q4_K_M (41GB)
70b-llama2-q4_K_S (39GB)
70b-llama2-q5_0 (47GB)
70b-llama2-q5_K_M (49GB)
70b-llama2-q5_K_S (47GB)
70b-llama2-q6_K (57GB)
70b-llama2-q8_0 (73GB)
30b-fp16 (65GB)
30b-q2_K (14GB)
30b-q3_K_L (17GB)
30b-q3_K_M (16GB)
30b-q3_K_S (14GB)
30b-q4_0 (18GB)
30b-q4_1 (20GB)
30b-q4_K_M (20GB)
30b-q4_K_S (18GB)
30b-q5_0 (22GB)
30b-q5_1 (24GB)
30b-q5_K_M (23GB)
30b-q5_K_S (22GB)
30b-q6_K (27GB)
30b-q8_0 (35GB)
13b-llama2-fp16 (26GB)
13b-llama2-q2_K (5.4GB)
13b-llama2-q3_K_L (6.9GB)
13b-llama2-q3_K_M (6.3GB)
13b-llama2-q3_K_S (5.7GB)
13b-llama2-q4_0 (7.4GB)
13b-llama2-q4_1 (8.2GB)
13b-llama2-q4_K_M (7.9GB)
13b-llama2-q4_K_S (7.4GB)
13b-llama2-q5_0 (9.0GB)
13b-llama2-q5_1 (9.8GB)
13b-llama2-q5_K_M (9.2GB)
13b-llama2-q5_K_S (9.0GB)
13b-llama2-q6_K (11GB)
13b-llama2-q8_0 (14GB)
13b-fp16 (26GB)
13b-q2_K (5.4GB)
13b-q3_K_L (6.9GB)
13b-q3_K_M (6.3GB)
13b-q3_K_S (5.7GB)
13b-q4_0 (7.4GB)
13b-q4_1 (8.2GB)
13b-q4_K_M (7.9GB)
13b-q4_K_S (7.4GB)
13b-q5_0 (9.0GB)
13b-q5_1 (9.8GB)
13b-q5_K_M (9.2GB)
13b-q5_K_S (9.0GB)
13b-q6_K (11GB)
13b-q8_0 (14GB)
7b-fp16 (13GB)
7b-q2_K (2.8GB)
7b-q3_K_L (3.6GB)
7b-q3_K_M (3.3GB)
7b-q3_K_S (2.9GB)
7b-q4_0 (3.8GB)
7b-q4_1 (4.2GB)
7b-q4_K_M (4.1GB)
7b-q4_K_S (3.9GB)
7b-q5_0 (4.7GB)
7b-q5_1 (5.1GB)
7b-q5_K_M (4.8GB)
7b-q5_K_S (4.7GB)
7b-q6_K (5.5GB)
7b-q8_0 (7.2GB)
wizardlm
A general-purpose model based on Llama 2.
Category: Language
Downloads: 69.7K
Last Updated: 1 year ago Read more about: WizardLM
latest (4.1GB)
7b (4.1GB)
7b-instruct-fp16 (14GB)
7b-instruct-q2_K (3.1GB)
7b-instruct-q3_K_L (3.8GB)
7b-instruct-q3_K_M (3.5GB)
7b-instruct-q3_K_S (3.2GB)
7b-instruct-q4_0 (4.1GB)
7b-instruct-q4_1 (4.6GB)
7b-instruct-q4_K_M (4.4GB)
7b-instruct-q4_K_S (4.1GB)
7b-instruct-q5_0 (5.0GB)
7b-instruct-q5_1 (5.4GB)
7b-instruct-q5_K_M (5.1GB)
7b-instruct-q5_K_S (5.0GB)
7b-instruct-q6_K (5.9GB)
7b-instruct-q8_0 (7.7GB)
7b-text (4.1GB)
7b-text-fp16 (14GB)
7b-text-q2_K (3.1GB)
7b-text-q3_K_L (3.8GB)
7b-text-q3_K_M (3.5GB)
7b-text-q3_K_S (3.2GB)
7b-text-q4_0 (4.1GB)
7b-text-q4_1 (4.6GB)
7b-text-q4_K_M (4.4GB)
7b-text-q4_K_S (4.1GB)
7b-text-q5_0 (5.0GB)
7b-text-q5_1 (5.4GB)
7b-text-q5_K_M (5.1GB)
7b-text-q5_K_S (5.0GB)
7b-text-q6_K (5.9GB)
7b-text-q8_0 (7.7GB)
7b-v1.2-text (4.1GB)
7b-v1.2-text-fp16 (14GB)
7b-v1.2-text-q2_K (3.1GB)
7b-v1.2-text-q3_K_L (3.8GB)
7b-v1.2-text-q3_K_M (3.5GB)
7b-v1.2-text-q3_K_S (3.2GB)
7b-v1.2-text-q4_0 (4.1GB)
7b-v1.2-text-q4_1 (4.6GB)
7b-v1.2-text-q4_K_M (4.4GB)
7b-v1.2-text-q4_K_S (4.1GB)
7b-v1.2-text-q5_0 (5.0GB)
7b-v1.2-text-q5_1 (5.4GB)
7b-v1.2-text-q5_K_M (5.1GB)
7b-v1.2-text-q5_K_S (5.0GB)
7b-v1.2-text-q6_K (5.9GB)
7b-v1.2-text-q8_0 (7.7GB)
samantha-mistral
An assistant specializing in philosophy, psychology, and personal relationships. Built on the Mistral model.
Category: Specialized
Downloads: 59.9K
Last Updated: 1 year ago Read more about: Samantha Mistral
latest (1.6GB)
2.7b (1.6GB)
2.7b-v2.6 (1.6GB)
2.7b-v2.6-q2_K (1.2GB)
2.7b-v2.6-q3_K_L (1.6GB)
2.7b-v2.6-q3_K_M (1.5GB)
2.7b-v2.6-q3_K_S (1.3GB)
2.7b-v2.6-q4_0 (1.6GB)
2.7b-v2.6-q4_K_M (1.8GB)
2.7b-v2.6-q4_K_S (1.6GB)
2.7b-v2.6-q5_0 (1.9GB)
2.7b-v2.6-q5_K_M (2.1GB)
2.7b-v2.6-q5_K_S (1.9GB)
2.7b-v2.6-q6_K (2.3GB)
2.7b-v2.6-q8_0 (3.0GB)
dolphin-phi
The uncensored Dolphin model, developed by Eric Hartford, is a 2.7 billion parameter model based on Microsoft's Phi language model.
Category: Uncensored
Downloads: 48.3K
Last Updated: 10 months ago Read more about: Dolphin Phi
latest (3.8GB)
70b (39GB)
13b (7.4GB)
7b (3.8GB)
70b-fp16 (138GB)
70b-q2_K (29GB)
70b-q3_K_L (36GB)
70b-q3_K_M (33GB)
70b-q3_K_S (30GB)
70b-q4_0 (39GB)
70b-q4_1 (43GB)
70b-q4_K_M (41GB)
70b-q4_K_S (39GB)
70b-q5_0 (47GB)
70b-q5_1 (52GB)
70b-q5_K_M (49GB)
70b-q5_K_S (47GB)
70b-q6_K (57GB)
70b-q8_0 (73GB)
13b-fp16 (26GB)
13b-q2_K (5.4GB)
13b-q3_K_L (6.9GB)
13b-q3_K_M (6.3GB)
13b-q3_K_S (5.7GB)
13b-q4_0 (7.4GB)
13b-q4_1 (8.2GB)
13b-q4_K_M (7.9GB)
13b-q4_K_S (7.4GB)
13b-q5_0 (9.0GB)
13b-q5_1 (9.8GB)
13b-q5_K_M (9.2GB)
13b-q5_K_S (9.0GB)
13b-q6_K (11GB)
13b-q8_0 (14GB)
7b-fp16 (13GB)
7b-q2_K (2.8GB)
7b-q3_K_L (3.6GB)
7b-q3_K_M (3.3GB)
7b-q3_K_S (2.9GB)
7b-q4_0 (3.8GB)
7b-q4_1 (4.2GB)
7b-q4_K_M (4.1GB)
7b-q4_K_S (3.9GB)
7b-q5_0 (4.7GB)
7b-q5_1 (5.1GB)
7b-q5_K_M (4.8GB)
7b-q5_K_S (4.7GB)
7b-q6_K (5.5GB)
7b-q8_0 (7.2GB)
stable-beluga
A Llama 2-based model fine-tuned on an Orca-style dataset; originally called Free Willy.
Category: Language
Downloads: 53.4K
Last Updated: 1 year ago Read more about: Stable Beluga
latest (4.7GB)
7b (4.7GB)
7b-v1-fp16 (15GB)
7b-v1-q2_K (3.7GB)
7b-v1-q3_K_L (4.4GB)
7b-v1-q3_K_M (4.1GB)
7b-v1-q3_K_S (3.8GB)
7b-v1-q4_0 (4.7GB)
7b-v1-q4_1 (5.2GB)
7b-v1-q4_K_M (5.0GB)
7b-v1-q4_K_S (4.8GB)
7b-v1-q5_0 (5.6GB)
7b-v1-q5_1 (6.1GB)
7b-v1-q5_K_M (5.8GB)
7b-v1-q5_K_S (5.6GB)
7b-v1-q6_K (6.6GB)
7b-v1-q8_0 (8.3GB)
bakllava
BakLLaVA is a multimodal model that combines the Mistral 7B base model with the LLaVA architecture.
Category: Multimodal
Downloads: 93.5K
Last Updated: 10 months ago Read more about: BakLLaVA
latest (7.4GB)
13b (7.4GB)
13b-llama2 (7.4GB)
13b-llama2-fp16 (26GB)
13b-llama2-q2_K (5.4GB)
13b-llama2-q3_K_L (6.9GB)
13b-llama2-q3_K_M (6.3GB)
13b-llama2-q3_K_S (5.7GB)
13b-llama2-q4_0 (7.4GB)
13b-llama2-q4_1 (8.2GB)
13b-llama2-q4_K_M (7.9GB)
13b-llama2-q4_K_S (7.4GB)
13b-llama2-q5_0 (9.0GB)
13b-llama2-q5_1 (9.8GB)
13b-llama2-q5_K_M (9.2GB)
13b-llama2-q5_K_S (9.0GB)
13b-llama2-q6_K (11GB)
13b-llama2-q8_0 (14GB)
wizardlm-uncensored
Uncensored version of the WizardLM model.
Category: Uncensored
Downloads: 45.5K
Last Updated: 1 year ago Read more about: WizardLM Uncensored
latest (3.8GB)
7b (3.8GB)
7b-fp16 (13GB)
7b-q2_K (2.8GB)
7b-q3_K_L (3.6GB)
7b-q3_K_M (3.3GB)
7b-q3_K_S (2.9GB)
7b-q4_0 (3.8GB)
7b-q4_1 (4.2GB)
7b-q4_K_M (4.1GB)
7b-q4_K_S (3.9GB)
7b-q5_0 (4.7GB)
7b-q5_1 (5.1GB)
7b-q5_K_M (4.8GB)
7b-q5_K_S (4.7GB)
7b-q6_K (5.5GB)
7b-q8_0 (7.2GB)
medllama2
Llama 2 fine-tuned on an open-source medical dataset to answer medical queries.
Category: Specialized
Downloads: 38.3K
Last Updated: 1 year ago Read more about: MedLlama2
latest (4.1GB)
7b (4.1GB)
7b-128k (4.1GB)
7b-128k-fp16 (14GB)
7b-128k-q2_K (3.1GB)
7b-128k-q3_K_L (3.8GB)
7b-128k-q3_K_M (3.5GB)
7b-128k-q3_K_S (3.2GB)
7b-128k-q4_0 (4.1GB)
7b-128k-q4_1 (4.6GB)
7b-128k-q4_K_M (4.4GB)
7b-128k-q4_K_S (4.1GB)
7b-128k-q5_0 (5.0GB)
7b-128k-q5_1 (5.4GB)
7b-128k-q5_K_M (5.1GB)
7b-128k-q5_K_S (5.0GB)
7b-128k-q6_K (5.9GB)
7b-128k-q8_0 (7.7GB)
7b-64k (4.1GB)
7b-64k-q2_K (3.1GB)
7b-64k-q3_K_L (3.8GB)
7b-64k-q3_K_M (3.5GB)
7b-64k-q3_K_S (3.2GB)
7b-64k-q4_0 (4.1GB)
7b-64k-q4_1 (4.6GB)
7b-64k-q4_K_M (4.4GB)
7b-64k-q4_K_S (4.1GB)
7b-64k-q5_0 (5.0GB)
7b-64k-q5_1 (5.4GB)
7b-64k-q5_K_M (5.1GB)
7b-64k-q5_K_S (5.0GB)
7b-64k-q6_K (5.9GB)
7b-64k-q8_0 (7.7GB)
yarn-mistral
A version of Mistral extended to support context windows of 64K or 128K tokens.
Category: Language
Downloads: 41.1K
Last Updated: 1 year ago Read more about: Yarn Mistral
latest (26GB)
8x7b (26GB)
dpo (26GB)
8x7b-dpo-fp16 (93GB)
8x7b-dpo-q2_K (16GB)
8x7b-dpo-q3_K_L (20GB)
8x7b-dpo-q3_K_M (20GB)
8x7b-dpo-q3_K_S (20GB)
8x7b-dpo-q4_0 (26GB)
8x7b-dpo-q4_1 (29GB)
8x7b-dpo-q4_K_M (26GB)
8x7b-dpo-q4_K_S (26GB)
8x7b-dpo-q5_0 (32GB)
8x7b-dpo-q5_1 (35GB)
8x7b-dpo-q5_K_M (32GB)
8x7b-dpo-q5_K_S (32GB)
8x7b-dpo-q6_K (38GB)
8x7b-dpo-q8_0 (50GB)
nous-hermes2-mixtral
The Nous Hermes 2 model by Nous Research has now been trained on Mixtral.
latest (4.7GB)
instruct (4.7GB)
text (4.7GB)
8b-instruct-fp16 (17GB)
8b-instruct-q2_K (3.5GB)
8b-instruct-q3_K_L (4.5GB)
8b-instruct-q3_K_M (4.1GB)
8b-instruct-q3_K_S (3.6GB)
8b-instruct-q4_0 (4.7GB)
8b-instruct-q4_1 (5.3GB)
8b-instruct-q4_K_M (5.1GB)
8b-instruct-q4_K_S (4.8GB)
8b-instruct-q5_0 (5.8GB)
8b-instruct-q5_1 (6.3GB)
8b-instruct-q5_K_M (5.9GB)
8b-instruct-q5_K_S (5.8GB)
8b-instruct-q6_K (6.9GB)
8b-instruct-q8_0 (8.9GB)
8b-text-fp16 (17GB)
8b-text-q2_K (3.5GB)
8b-text-q3_K_L (4.5GB)
8b-text-q3_K_M (4.1GB)
8b-text-q3_K_S (3.6GB)
8b-text-q4_0 (4.7GB)
8b-text-q4_1 (5.3GB)
8b-text-q4_K_M (5.1GB)
8b-text-q4_K_S (4.8GB)
8b-text-q5_0 (5.8GB)
8b-text-q5_1 (6.3GB)
8b-text-q5_K_M (5.9GB)
8b-text-q5_K_S (5.8GB)
8b-text-q6_K (6.9GB)
8b-text-q8_0 (8.9GB)
llama-pro
An extended version of Llama 2 designed to combine general language comprehension with specialized knowledge, especially in programming and mathematics.
Category: Language
Downloads: 40.6K
Last Updated: 10 months ago Read more about: LLaMa-Pro
latest (8.9GB)
236b (133GB)
16b (8.9GB)
lite (8.9GB)
236b-chat-f16 (472GB)
236b-chat-fp16 (472GB)
236b-chat-q2_K (86GB)
236b-chat-q3_K_L (122GB)
236b-chat-q3_K_M (113GB)
236b-chat-q3_K_S (102GB)
236b-chat-q4_0 (133GB)
236b-chat-q4_1 (148GB)
236b-chat-q4_K_M (142GB)
236b-chat-q4_K_S (134GB)
236b-chat-q5_0 (162GB)
236b-chat-q5_1 (177GB)
236b-chat-q5_K_M (167GB)
236b-chat-q5_K_S (162GB)
236b-chat-q6_K (194GB)
236b-chat-q8_0 (251GB)
16b-lite-chat-f16 (31GB)
16b-lite-chat-fp16 (31GB)
16b-lite-chat-q2_K (6.4GB)
16b-lite-chat-q3_K_L (8.5GB)
16b-lite-chat-q3_K_M (8.1GB)
16b-lite-chat-q3_K_S (7.5GB)
16b-lite-chat-q4_0 (8.9GB)
16b-lite-chat-q4_1 (9.9GB)
16b-lite-chat-q4_K_M (10GB)
16b-lite-chat-q4_K_S (9.5GB)
16b-lite-chat-q5_0 (11GB)
16b-lite-chat-q5_1 (12GB)
16b-lite-chat-q5_K_M (12GB)
16b-lite-chat-q5_K_S (11GB)
16b-lite-chat-q6_K (14GB)
16b-lite-chat-q8_0 (17GB)
deepseek-v2
An efficient, cost-effective Mixture-of-Experts language model with strong performance.
Category: Language
Downloads: 64.8K
Last Updated: 4 months ago Read more about: DeepSeek-V2
latest (3.8GB)
70b (39GB)
7b (3.8GB)
70b-q4_0 (39GB)
70b-q4_1 (43GB)
70b-q4_K_S (39GB)
70b-q5_1 (52GB)
7b-fp16 (13GB)
7b-q2_K (2.8GB)
7b-q3_K_L (3.6GB)
7b-q3_K_M (3.3GB)
7b-q3_K_S (2.9GB)
7b-q4_0 (3.8GB)
7b-q4_1 (4.2GB)
7b-q4_K_M (4.1GB)
7b-q4_K_S (3.9GB)
7b-q5_0 (4.7GB)
7b-q5_1 (5.1GB)
7b-q5_K_M (4.8GB)
7b-q5_K_S (4.7GB)
7b-q6_K (5.5GB)
7b-q8_0 (7.2GB)
meditron
An open-source medical large language model, tailored from Llama 2 for healthcare applications.
Category: Specialized
Downloads: 37.6K
Last Updated: 11 months ago Read more about: Meditron
latest (7.4GB)
13b (7.4GB)
13b-llama2-chat (7.4GB)
13b-llama2 (7.4GB)
13b-llama2-chat-fp16 (26GB)
13b-llama2-chat-q2_K (5.4GB)
13b-llama2-chat-q3_K_L (6.9GB)
13b-llama2-chat-q3_K_M (6.3GB)
13b-llama2-chat-q3_K_S (5.7GB)
13b-llama2-chat-q4_0 (7.4GB)
13b-llama2-chat-q4_1 (8.2GB)
13b-llama2-chat-q4_K_M (7.9GB)
13b-llama2-chat-q4_K_S (7.4GB)
13b-llama2-chat-q5_0 (9.0GB)
13b-llama2-chat-q5_1 (9.8GB)
13b-llama2-chat-q5_K_M (9.2GB)
13b-llama2-chat-q5_K_S (9.0GB)
13b-llama2-chat-q6_K (11GB)
13b-llama2-chat-q8_0 (14GB)
codeup
An excellent code generation model built on Llama 2.
Category: Coding
Downloads: 32.9K
Last Updated: 1 year ago Read more about: CodeUp
latest (7.4GB)
13b (7.4GB)
13b-v2-fp16 (26GB)
13b-v2-q2_K (5.4GB)
13b-v2-q3_K_L (6.9GB)
13b-v2-q3_K_M (6.3GB)
13b-v2-q3_K_S (5.7GB)
13b-v2-q4_0 (7.4GB)
13b-v2-q4_1 (8.2GB)
13b-v2-q4_K_M (7.9GB)
13b-v2-q4_K_S (7.4GB)
13b-v2-q5_0 (9.0GB)
13b-v2-q5_1 (9.8GB)
13b-v2-q5_K_M (9.2GB)
13b-v2-q5_K_S (9.0GB)
13b-v2-q6_K (11GB)
13b-v2-q8_0 (14GB)
13b-fp16 (26GB)
13b-q2_K (5.4GB)
13b-q3_K_L (6.9GB)
13b-q3_K_M (6.3GB)
13b-q3_K_S (5.7GB)
13b-q4_0 (7.4GB)
13b-q4_1 (8.2GB)
13b-q4_K_M (7.9GB)
13b-q4_K_S (7.4GB)
13b-q5_0 (9.0GB)
13b-q5_1 (9.8GB)
13b-q5_K_M (9.2GB)
13b-q5_K_S (9.0GB)
13b-q6_K (11GB)
13b-q8_0 (14GB)
nexusraven
Nexus Raven is a 13B instruction-tuned model designed specifically for function-calling tasks.
Category: Specialized
Downloads: 37.2K
Last Updated: 9 months ago Read more about: Nexus Raven 13B
latest (7.4GB)
13b (7.4GB)
13b-16k (7.4GB)
13b-16k-fp16 (26GB)
13b-16k-q2_K (5.4GB)
13b-16k-q3_K_L (6.9GB)
13b-16k-q3_K_M (6.3GB)
13b-16k-q3_K_S (5.7GB)
13b-16k-q4_0 (7.4GB)
13b-16k-q4_1 (8.2GB)
13b-16k-q4_K_M (7.9GB)
13b-16k-q4_K_S (7.4GB)
13b-16k-q5_0 (9.0GB)
13b-16k-q5_1 (9.8GB)
13b-16k-q5_K_M (9.2GB)
13b-16k-q5_K_S (9.0GB)
13b-16k-q6_K (11GB)
13b-16k-q8_0 (14GB)
everythinglm
An uncensored model based on Llama 2 that supports a 16K context window.
Category: Uncensored
Downloads: 31.1K
Last Updated: 10 months ago Read more about: Everything LM
latest (2.9GB)
3.8b (2.9GB)
3.8b-mini-fp16 (8.3GB)
3.8b-mini-q4_0 (2.9GB)
llava-phi3
A compact LLaVA model fine-tuned from Phi 3 Mini.
Category: Multimodal
Downloads: 48.3K
Last Updated: 6 months ago Read more about: LLaVa Phi-3
latest (3.8GB)
7b (3.8GB)
7b-s-cl (3.8GB)
7b-s-cl-fp16 (13GB)
7b-s-cl-q2_K (2.8GB)
7b-s-cl-q3_K_L (3.6GB)
7b-s-cl-q3_K_M (3.3GB)
7b-s-cl-q3_K_S (2.9GB)
7b-s-cl-q4_0 (3.8GB)
7b-s-cl-q4_1 (4.2GB)
7b-s-cl-q4_K_M (4.1GB)
7b-s-cl-q4_K_S (3.9GB)
7b-s-cl-q5_0 (4.7GB)
7b-s-cl-q5_1 (5.1GB)
7b-s-cl-q5_K_M (4.8GB)
7b-s-cl-q5_K_S (4.7GB)
7b-s-cl-q6_K (5.5GB)
7b-s-cl-q8_0 (7.2GB)
magicoder
Category: Language
Downloads: 28.3K
Last Updated: 11 months ago Read more about: Magicoder
latest (19GB)
34b (19GB)
34b-v0.1-fp16 (67GB)
34b-v0.1-q2_K (14GB)
34b-v0.1-q3_K_L (18GB)
34b-v0.1-q3_K_M (16GB)
34b-v0.1-q3_K_S (15GB)
34b-v0.1-q4_0 (19GB)
34b-v0.1-q4_1 (21GB)
34b-v0.1-q4_K_M (20GB)
34b-v0.1-q5_0 (23GB)
34b-v0.1-q5_1 (25GB)
34b-v0.1-q5_K_M (24GB)
34b-v0.1-q5_K_S (23GB)
34b-v0.1-q6_K (28GB)
34b-v0.1-q8_0 (36GB)
codebooga
A high-performing code instruction model created by merging two existing code models.
Category: Coding
Downloads: 27.2K
Last Updated: 1 year ago Read more about: Codebooga
latest (4.1GB)
7b (4.1GB)
7b-v0.1-fp16 (14GB)
7b-v0.1-q2_K (3.1GB)
7b-v0.1-q3_K_L (3.8GB)
7b-v0.1-q3_K_M (3.5GB)
7b-v0.1-q3_K_S (3.2GB)
7b-v0.1-q4_0 (4.1GB)
7b-v0.1-q4_1 (4.6GB)
7b-v0.1-q4_K_M (4.4GB)
7b-v0.1-q4_K_S (4.1GB)
7b-v0.1-q5_0 (5.0GB)
7b-v0.1-q5_1 (5.4GB)
7b-v0.1-q5_K_M (5.1GB)
7b-v0.1-q5_K_S (5.0GB)
7b-v0.1-q6_K (5.9GB)
7b-v0.1-q8_0 (7.7GB)
mistrallite
MistralLite is a fine-tuned version of Mistral designed for better handling of long contexts.
Category: Language
Downloads: 25.7K
Last Updated: 1 year ago Read more about: Mistrallite
latest (7.4GB)
13b (7.4GB)
13b-fp16 (26GB)
13b-q2_K (5.4GB)
13b-q3_K_L (6.9GB)
13b-q3_K_M (6.3GB)
13b-q3_K_S (5.7GB)
13b-q4_0 (7.4GB)
13b-q4_1 (8.2GB)
13b-q4_K_M (7.9GB)
13b-q4_K_S (7.4GB)
13b-q5_0 (9.0GB)
13b-q5_1 (9.8GB)
13b-q5_K_M (9.2GB)
13b-q5_K_S (9.0GB)
13b-q6_K (11GB)
13b-q8_0 (14GB)
wizard-vicuna
Wizard Vicuna is a 13 billion parameter model built on Llama 2, developed by MelodysDreamj.
Category: Language
Downloads: 26.3K
Last Updated: 1 year ago Read more about: Wizard Vicuna
latest (5.5GB)
9b (5.5GB)
9b-chat-fp16 (19GB)
9b-chat-q2_K (4.0GB)
9b-chat-q3_K_L (5.3GB)
9b-chat-q3_K_M (5.1GB)
9b-chat-q3_K_S (4.6GB)
9b-chat-q4_0 (5.5GB)
9b-chat-q4_1 (6.0GB)
9b-chat-q4_K_M (6.3GB)
9b-chat-q4_K_S (5.8GB)
9b-chat-q5_0 (6.6GB)
9b-chat-q5_1 (7.1GB)
9b-chat-q5_K_M (7.1GB)
9b-chat-q5_K_S (6.7GB)
9b-chat-q6_K (8.3GB)
9b-chat-q8_0 (10.0GB)
9b-text-fp16 (19GB)
9b-text-q2_K (4.0GB)
9b-text-q3_K_L (5.3GB)
9b-text-q3_K_M (5.1GB)
9b-text-q3_K_S (4.6GB)
9b-text-q4_0 (5.5GB)
9b-text-q4_1 (6.0GB)
9b-text-q4_K_M (6.3GB)
9b-text-q4_K_S (5.8GB)
9b-text-q5_0 (6.6GB)
9b-text-q5_1 (7.1GB)
9b-text-q5_K_M (7.1GB)
9b-text-q5_K_S (6.7GB)
9b-text-q6_K (8.3GB)
9b-text-q8_0 (10.0GB)
glm4
A strong multilingual general-purpose language model, competitive with Llama 3.
Category: Language
Downloads: 89.4K
Last Updated: 3 months ago Read more about: GLM4
latest (3.8GB)
7b (3.8GB)
7b-fp16 (13GB)
7b-q2_K (2.5GB)
7b-q3_K_L (3.6GB)
7b-q3_K_M (3.3GB)
7b-q3_K_S (2.9GB)
7b-q4_0 (3.8GB)
7b-q4_1 (4.2GB)
7b-q4_K_M (4.1GB)
7b-q4_K_S (3.9GB)
7b-q5_0 (4.7GB)
7b-q5_1 (5.1GB)
7b-q5_K_M (4.8GB)
7b-q5_K_S (4.7GB)
7b-q6_K (5.5GB)
7b-q8_0 (7.2GB)
duckdb-nsql
A 7B parameter text-to-SQL model developed jointly by MotherDuck and Numbers Station.
Category: Specialized
Downloads: 24.4K
Last Updated: 9 months ago Read more about: DuckDB-NSQL
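A text-to-SQL model like this one is typically prompted with the table schema followed by the question, and it completes the query. A minimal sketch of building such a prompt; the exact template expected by duckdb-nsql is an assumption here, so check the model card before relying on it:

```python
def text_to_sql_prompt(schema: str, question: str) -> str:
    # Give the model the schema, then the question as a SQL comment,
    # and leave it to complete the query starting from SELECT.
    # This template shape is an assumption, not duckdb-nsql's
    # documented format.
    return f"{schema}\n\n-- {question}\nSELECT"

prompt = text_to_sql_prompt(
    "CREATE TABLE taxi (fare DOUBLE, tip DOUBLE);",
    "What is the average tip?",
)
```

The model's completion (e.g. the rest of a `SELECT AVG(tip) FROM taxi` statement) is then appended to the trailing `SELECT` to form the final query.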
latest (6.4GB)
11b (6.4GB)
11b-fp16 (22GB)
11b-q2_K (4.3GB)
11b-q3_K_L (5.8GB)
11b-q3_K_M (5.4GB)
11b-q3_K_S (4.9GB)
11b-q4_0 (6.4GB)
11b-q4_1 (7.1GB)
11b-q4_K_M (6.8GB)
11b-q4_K_S (6.4GB)
11b-q5_0 (7.7GB)
11b-q5_1 (8.4GB)
11b-q5_K_M (8.2GB)
11b-q5_K_S (7.7GB)
11b-q6_K (9.2GB)
11b-q8_0 (12GB)
falcon2
Falcon2 is a decoder-only model with 11 billion parameters, developed by TII and trained on 5 trillion tokens.
Category: Language
Downloads: 26K
Last Updated: 5 months ago Read more about: Falcon2
latest (74GB)
132b (74GB)
instruct (74GB)
132b-instruct-fp16 (263GB)
132b-instruct-q2_K (48GB)
132b-instruct-q4_0 (74GB)
132b-instruct-q8_0 (140GB)
dbrx
DBRX is a versatile, open-source large language model developed by Databricks. It is designed for a wide range of applications.
Category: Language
Downloads: 16.1K
Last Updated: 6 months ago Read more about: DBRX
latest (5.5GB)
9b (5.5GB)
9b-all-fp16 (19GB)
9b-all-q2_K (4.0GB)
9b-all-q3_K_L (5.3GB)
9b-all-q3_K_M (5.1GB)
9b-all-q3_K_S (4.6GB)
9b-all-q4_0 (5.5GB)
9b-all-q4_1 (6.0GB)
9b-all-q4_K_M (6.3GB)
9b-all-q4_K_S (5.8GB)
9b-all-q5_0 (6.6GB)
9b-all-q5_1 (7.1GB)
9b-all-q5_K_M (7.1GB)
9b-all-q5_K_S (6.7GB)
9b-all-q6_K (8.3GB)
9b-all-q8_0 (10.0GB)
codegeex4
A versatile model for AI software-development scenarios, including code completion.
Category: Coding
Downloads: 122.6K
Last Updated: 3 months ago Read more about: Codegeex4
latest (4.5GB)
7b (4.5GB)
7b-chat-v2.5-fp16 (15GB)
7b-chat-v2.5-q2_K (3.0GB)
7b-chat-v2.5-q3_K_L (4.1GB)
7b-chat-v2.5-q3_K_M (3.8GB)
7b-chat-v2.5-q3_K_S (3.5GB)
7b-chat-v2.5-q4_0 (4.5GB)
7b-chat-v2.5-q4_1 (4.9GB)
7b-chat-v2.5-q4_K_M (4.7GB)
7b-chat-v2.5-q4_K_S (4.5GB)
7b-chat-v2.5-q5_0 (5.4GB)
7b-chat-v2.5-q5_1 (5.8GB)
7b-chat-v2.5-q5_K_M (5.5GB)
7b-chat-v2.5-q5_K_S (5.4GB)
7b-chat-v2.5-q6_K (6.4GB)
7b-chat-v2.5-q8_0 (8.2GB)
internlm2
InternLM2.5 is a 7B parameter model designed for real-world applications, showcasing exceptional reasoning abilities.
Category: Language
Downloads: 52.8K
Last Updated: 2 months ago Read more about: InternLM2.5
latest (4.1GB)
7b (4.1GB)
7b-v0.1-fp16 (14GB)
7b-v0.1-q2_K (2.7GB)
7b-v0.1-q3_K_L (3.8GB)
7b-v0.1-q3_K_M (3.5GB)
7b-v0.1-q3_K_S (3.2GB)
7b-v0.1-q4_0 (4.1GB)
7b-v0.1-q4_1 (4.6GB)
7b-v0.1-q4_K_M (4.4GB)
7b-v0.1-q4_K_S (4.1GB)
7b-v0.1-q5_0 (5.0GB)
7b-v0.1-q5_1 (5.4GB)
7b-v0.1-q5_K_M (5.1GB)
7b-v0.1-q5_K_S (5.0GB)
7b-v0.1-q6_K (5.9GB)
7b-v0.1-q8_0 (7.7GB)
mathstral
MathΣtral is a 7 billion parameter model from Mistral AI, designed to advance mathematical reasoning and scientific discovery.
Category: Specialized
Downloads: 20.2K
Last Updated: 3 months ago Read more about: Mathstral
latest (4.7GB)
70b (40GB)
8b (4.7GB)
70b-fp16 (141GB)
70b-q2_K (26GB)
70b-q3_K_L (37GB)
70b-q3_K_M (34GB)
70b-q3_K_S (31GB)
70b-q4_0 (40GB)
70b-q4_1 (44GB)
70b-q4_K_M (43GB)
70b-q4_K_S (40GB)
70b-q5_0 (49GB)
70b-q5_1 (53GB)
70b-q5_K_M (50GB)
70b-q5_K_S (49GB)
70b-q6_K (58GB)
70b-q8_0 (75GB)
8b-fp16 (16GB)
8b-q2_K (3.2GB)
8b-q3_K_L (4.3GB)
8b-q3_K_M (4.0GB)
8b-q3_K_S (3.7GB)
8b-q4_0 (4.7GB)
8b-q4_1 (5.1GB)
8b-q4_K_M (4.9GB)
8b-q4_K_S (4.7GB)
8b-q5_0 (5.6GB)
8b-q5_1 (6.1GB)
8b-q5_K_M (5.7GB)
8b-q5_K_S (5.6GB)
8b-q6_K (6.6GB)
8b-q8_0 (8.5GB)
llama3-groq-tool-use
A series of models from Groq that represent a major advance in open-source AI capabilities for tool use and function calling.
Category: Specialized
Downloads: 34.9K
Last Updated: 3 months ago Read more about: Llama3-groq-tool-use
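Function-calling models like this one are given a list of tool definitions, each a JSON-schema description of a callable, and respond with the tool name and arguments to invoke. A minimal sketch of one such definition, using the common OpenAI-compatible layout; the `get_weather` tool and the payload shape are illustrative assumptions, not part of this model's documented API:

```python
import json

def tool_spec(name: str, description: str, parameters: dict) -> dict:
    # JSON-schema style tool definition in the layout most
    # function-calling APIs accept (an assumption for this model).
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": parameters,
        },
    }

weather = tool_spec(
    "get_weather",
    "Look up the current weather for a city.",
    {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
)
# A chat request would pass the tool list alongside the messages.
payload = json.dumps({"model": "llama3-groq-tool-use", "tools": [weather]})
```

The model's reply then names a tool and supplies arguments matching the declared schema, which the caller executes and feeds back into the conversation.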
latest (40GB)
70b (40GB)
70b-fp16 (141GB)
70b-q2_K (26GB)
70b-q3_K_L (37GB)
70b-q3_K_M (34GB)
70b-q3_K_S (31GB)
70b-q4_0 (40GB)
70b-q4_1 (44GB)
70b-q4_K_M (43GB)
70b-q4_K_S (40GB)
70b-q5_0 (49GB)
70b-q5_1 (53GB)
70b-q5_K_M (50GB)
70b-q5_K_S (49GB)
70b-q6_K (58GB)
70b-q8_0 (75GB)
firefunction-v2
An open-weights function-calling model based on Llama 3, with capabilities competitive with GPT-4 function calling.
Category: Language
Downloads: 13.5K
Last Updated: 3 months ago Read more about: Firefunction-v2
latest (2.2GB)
3.8b (2.2GB)
3.8b-fp16 (7.6GB)
3.8b-q2_K (1.4GB)
3.8b-q3_K_L (2.1GB)
3.8b-q3_K_M (2.0GB)
3.8b-q3_K_S (1.7GB)
3.8b-q4_0 (2.2GB)
3.8b-q4_1 (2.4GB)
3.8b-q4_K_M (2.4GB)
3.8b-q4_K_S (2.2GB)
3.8b-q5_0 (2.6GB)
3.8b-q5_1 (2.9GB)
3.8b-q5_K_M (2.8GB)
3.8b-q5_K_S (2.6GB)
3.8b-q6_K (3.1GB)
3.8b-q8_0 (4.1GB)
nuextract
A 3.8 billion parameter model fine-tuned from Phi-3 on a private, high-quality synthetic dataset for information extraction.
Category: Language
Downloads: 15.8K
Last Updated: 3 months ago Read more about: NuExtract
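Extraction models of this kind are prompted with a JSON template naming the fields to fill, followed by the source text; the model returns the template with values filled in. A minimal sketch of assembling such a prompt; the `### Template:` / `### Text:` delimiters are an assumption here, so verify them against the model card:

```python
import json

def extraction_prompt(template: dict, text: str) -> str:
    # Template-then-text prompt shape for information extraction.
    # The delimiters below are assumed, not taken from NuExtract's
    # documented format.
    return (
        "### Template:\n"
        + json.dumps(template, indent=2)
        + "\n### Text:\n"
        + text
    )

prompt = extraction_prompt(
    {"name": "", "founded": ""},
    "Acme Corp was founded in 1999 by Jane Doe.",
)
```

Empty strings in the template mark the fields the model should populate from the text.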