AI: Difference between revisions

From Hegemon Wiki
python server.py --model llama-7b-4bit --wbits 4


python server.py --model llama-13b-4bit-128g --wbits 4 --groupsize 128

See https://github.com/qwopqwop200/GPTQ-for-LLaMa/issues/59 for resolving an out-of-space error during installation.
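The linked issue concerns the install step running out of disk space (typically pip building dependencies in a small temporary directory). As a general, hedged workaround sketch rather than the exact fix from the issue, pointing pip's temporary and cache directories at a volume with more free space usually avoids the error; /mnt/bigdisk below is a placeholder path, not something taken from this wiki page:

# Hedged workaround sketch for "no space left on device" during pip install;
# /mnt/bigdisk/tmp is a placeholder location with enough free space.
mkdir -p /mnt/bigdisk/tmp
TMPDIR=/mnt/bigdisk/tmp pip install --no-cache-dir -r requirements.txt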

Revision as of 15:39, 1 April 2023