GPT4All
Demo, data, and code to train an assistant-style large language model on ~800k GPT-3.5-Turbo generations, based on LLaMa


Demo: running on an M1 Mac (not sped up!)
Try it yourself
Clone this repository and download the CPU-quantized gpt4all model.
Place the quantized model in the chat directory and start chatting by running:
- ./chat/gpt4all-lora-quantized-OSX-m1 on M1 Mac/OSX
- ./chat/gpt4all-lora-quantized-linux-x86 on Windows/Linux
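As a concrete end-to-end example on an M1 Mac (the model file name gpt4all-lora-quantized.bin and its download location are assumptions; substitute whatever file you downloaded):

# assumed workflow: clone the repo, move the downloaded model into chat/, then run
git clone https://github.com/nomic-ai/gpt4all.git
cd gpt4all
mv ~/Downloads/gpt4all-lora-quantized.bin chat/
./chat/gpt4all-lora-quantized-OSX-m1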
To compile for custom hardware, see our fork of the Alpaca C++ repo.
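As a rough sketch of that route (the fork URL below is a placeholder, and the build target is assumed to match upstream alpaca.cpp; follow the fork's own instructions where they differ):

# hypothetical build; <alpaca-cpp-fork-url> is a placeholder for the fork linked above
git clone <alpaca-cpp-fork-url>
cd alpaca.cpp
make chat
./chat -m gpt4all-lora-quantized.bin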
Note: the full model on GPU (16GB of RAM required) performs much better in our qualitative evaluations.
Reproducibility
Trained LoRA Weights:
- gpt4all-lora: https://huggingface.co/nomic-ai/gpt4all-lora
- gpt4all-lora-epoch-2: https://huggingface.co/nomic-ai/gpt4all-lora-epoch-2
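The adapter weights can be pulled locally straight from the Hugging Face Hub, for example with git (git-lfs is needed for the large weight files):

# requires git-lfs for the binary weight files
git lfs install
git clone https://huggingface.co/nomic-ai/gpt4all-lora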
Raw Data:
We are not distributing a LLaMa 7B checkpoint.
You can reproduce our trained model by doing the following:
Setup
Clone the repo
git clone --recurse-submodules git@github.com:nomic-ai/gpt4all.git
git submodule init && git submodule update
Set up the environment
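The dependency install is not spelled out here; a plausible setup, assuming a requirements.txt at the repo root and the transformers and peft submodules checked out by the commands above, would be:

# install project dependencies (requirements.txt location is an assumption)
python -m pip install -r requirements.txt
# install the vendored submodules in editable mode (assumed layout)
cd transformers && pip install -e . && cd ..
cd peft && pip install -e . && cd ..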
Training
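The launch command is not reproduced here; as a hedged sketch, LoRA fine-tuning with accelerate might look like the following, where the script name and config path are assumptions to be checked against the repo:

# hypothetical launch; script name, flags, and config path are assumptions
accelerate launch --mixed_precision=bf16 train.py --config configs/train/finetune-7b.yaml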
Generate
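Likewise, generation from the fine-tuned model might be invoked roughly as follows (generate.py, the config path, and the --prompt flag are assumptions; the prompt is one of the samples below):

# hypothetical generation command; script name, flags, and config path are assumptions
python generate.py --config configs/generate/generate.yaml --prompt "Reverse a string in python."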
Sample Generations
- Provide instructions for the given exercise. Leg Raises
- A color description has been provided. Find the CSS code associated with that color. A light red color with a medium light shade of pink
- Come up with an interesting idea for a new movie plot. Your plot should be described with a title and a summary.
- Reverse a string in python.
- List 10 dogs.
- Write me a poem about the fall of Julius Ceasar into a ceasar salad in iambic pentameter.
- What is a three word topic describing the following keywords: baseball, football, soccer:
If you utilize this repository, models, or data in a downstream project, please consider citing it with: