AI Library
Nous Hermes, developed by Nous Research, comes in two primary variants based on different architectures: a 13-billion-parameter model built on the original Llama architecture, and 7-billion- and 13-billion-parameter models based on Llama 2. All of these models are general-purpose and were trained on the same datasets, so they behave consistently across applications.
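As a quick orientation, the sketch below sends a single prompt to a locally served Nous Hermes model through Ollama's REST API. It assumes the Ollama server is running on its default port (11434) and that the model has already been pulled (for example with `ollama pull nous-hermes`); the tag shown is just one of the options listed later in this section.

```python
# Minimal sketch: send one prompt to a locally served Nous Hermes model via
# Ollama's REST API. Assumes `ollama serve` is running on the default port
# and the model has been pulled, e.g. `ollama pull nous-hermes`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "nous-hermes",  # or a specific tag such as "nous-hermes:13b-llama2"
        "prompt": "Explain 4-bit quantization in one sentence.",
        "stream": False,         # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```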
Before running Nous Hermes, it is important to be aware of the memory requirements: as a rule of thumb, the 7B models generally require at least 8 GB of RAM, and the 13B models at least 16 GB.
If you encounter issues when using higher quantization levels, consider switching to a q4_0 variant or closing other memory-intensive applications.
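If memory pressure is a recurring problem, one option is to attempt the larger build first and fall back to the q4_0 tag automatically. The sketch below illustrates that pattern against the same local REST API; the `q8_0` tag name here is an assumption, so substitute whichever higher-quantization tag you actually have installed.

```python
# Sketch of a memory-aware fallback: try a higher-precision build first, then
# drop to the q4_0 build if the request fails (e.g. the model cannot be loaded).
# The q8_0 tag name is an assumption; check `ollama list` for your local tags.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
CANDIDATE_TAGS = ["nous-hermes:13b-llama2-q8_0", "nous-hermes:13b-llama2-q4_0"]

def generate(prompt: str) -> str:
    last_error = None
    for tag in CANDIDATE_TAGS:
        try:
            resp = requests.post(
                OLLAMA_URL,
                json={"model": tag, "prompt": prompt, "stream": False},
                timeout=300,
            )
            resp.raise_for_status()
            return resp.json()["response"]
        except requests.RequestException as err:
            last_error = err  # typically a load failure surfaced as an HTTP error
    raise RuntimeError(f"all candidate tags failed, last error: {last_error}")

print(generate("Summarize the trade-offs of 4-bit quantization."))
```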
Ollama provides various quantized variants of the Nous Hermes models, optimized for local execution; the most commonly used tags are listed in the alias table below.
For your convenience, Nous Hermes models come with various aliases:
| Aliases |
| --- |
| latest, 7b, 7b-llama2, 7b-llama2-q4_0 |
| 13b, 13b-llama2, 13b-llama2-q4_0 |
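To see which Nous Hermes tags (and therefore which quantizations) are already installed locally, you can query Ollama's `/api/tags` endpoint. The sketch below is a minimal example; it assumes the Ollama server is running and reports the on-disk size of each matching model.

```python
# Minimal sketch: list locally installed Nous Hermes tags and their on-disk
# size, using Ollama's /api/tags endpoint (assumes `ollama serve` is running).
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=30)
resp.raise_for_status()

for model in resp.json().get("models", []):
    if model["name"].startswith("nous-hermes"):
        print(f"{model['name']}: {model['size'] / 1e9:.1f} GB on disk")
```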