AI
LLaMA
https://rentry.org/llama-tard-v2
https://hackmd.io/@reneil1337/alpaca
https://find.4chan.org/?q=AI+Dynamic+Storytelling+General
https://find.4chan.org/?q=AI+Chatbot+General
https://find.4chan.org/?q=%2Flmg%2F (local models general)
https://boards.4channel.org/g/thread/92400764#p92400764
https://files.catbox.moe/lvefgy.json
https://pytorch.org/hub/nvidia_deeplearningexamples_tacotron2/
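The PyTorch Hub link above is NVIDIA's Tacotron 2 text-to-speech model. Below is a minimal loading sketch in the spirit of that hub page; the entry-point names (nvidia_tacotron2, nvidia_waveglow, nvidia_tts_utils) and arguments follow the hub example, and a CUDA GPU is assumed.

import torch

# Load Tacotron 2 (text -> mel spectrogram) and WaveGlow (mel -> audio)
# from the NVIDIA DeepLearningExamples torch.hub repository.
tacotron2 = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub',
                           'nvidia_tacotron2', model_math='fp16')
waveglow = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub',
                          'nvidia_waveglow', model_math='fp16')
tacotron2 = tacotron2.to('cuda').eval()
waveglow = waveglow.remove_weightnorm(waveglow).to('cuda').eval()

# Helper utilities for turning text into padded input sequences.
utils = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_tts_utils')
sequences, lengths = utils.prepare_input_sequence(["Hello, this is a test."])

with torch.no_grad():
    mel, _, _ = tacotron2.infer(sequences, lengths)  # text -> mel spectrogram
    audio = waveglow.infer(mel)                      # mel -> 22050 Hz waveform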
python server.py --model llama-7b-4bit --wbits 4
python server.py --model llama-13b-4bit-128g --wbits 4 --groupsize 128
https://github.com/qwopqwop200/GPTQ-for-LLaMa/issues/59 (workaround if installation fails with an out-of-space error)
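The two server.py invocations above start the web UI (presumably oobabooga's text-generation-webui) with 4-bit GPTQ LLaMA checkpoints: --wbits sets the quantization bit width and --groupsize the GPTQ group size. Below is a small Python wrapper that runs the same commands; only the model names and flags come from this page, while the webui directory name and the helper function are illustrative assumptions.

import subprocess

WEBUI_DIR = "text-generation-webui"  # assumed path to the webui checkout

def launch(model, wbits=4, groupsize=None):
    """Build and run the server.py command line for a quantized model."""
    cmd = ["python", "server.py", "--model", model, "--wbits", str(wbits)]
    if groupsize is not None:
        cmd += ["--groupsize", str(groupsize)]
    subprocess.run(cmd, cwd=WEBUI_DIR, check=True)

# The two configurations from this page:
# launch("llama-7b-4bit")                       # 7B, 4-bit GPTQ
# launch("llama-13b-4bit-128g", groupsize=128)  # 13B, 4-bit GPTQ, group size 128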