AI

To launch a 4-bit GPTQ LLaMA model in text-generation-webui:

python server.py --model llama-13b-4bit-128g --wbits 4 --groupsize 128

See https://github.com/qwopqwop200/GPTQ-for-LLaMa/issues/59 if the install fails with an out-of-space error.

Setup instructions for 4-bit mode: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#4-bit-mode
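The linked wiki page has the authoritative steps; as a rough sketch of the typical flow at the time of this revision (assuming the GPTQ-for-LLaMa cuda branch and the default repositories/ layout used by text-generation-webui, and that the quantized model files are already downloaded):

# install the web UI and its dependencies
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt

# build the GPTQ CUDA kernel inside repositories/
mkdir -p repositories && cd repositories
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
cd GPTQ-for-LLaMa
python setup_cuda.py install
cd ../..

# put the quantized model under models/llama-13b-4bit-128g, then launch in 4-bit mode
python server.py --model llama-13b-4bit-128g --wbits 4 --groupsize 128

Exact directory names, branches, and flags may differ from the current upstream instructions, so check the wiki link above before following this.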
