The Rhizome Microgrants 2025-26 program has been funded by small donations from our community. Octant-funded projects were ...
Vector Post-Training Quantization (VPTQ) is a novel Post-Training Quantization method that leverages Vector Quantization to achieve high accuracy on LLMs at an extremely low bit-width (<2-bit). VPTQ can ...
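To make the codebook idea behind vector quantization concrete, here is a minimal, generic sketch of quantizing a weight matrix with a learned codebook. This is not VPTQ's actual algorithm; the function names, the vector length of 8, and the 256-entry codebook are illustrative assumptions only.

```python
# Generic codebook-based vector quantization sketch (NOT the VPTQ algorithm):
# weights are grouped into short vectors and each vector is replaced by the
# index of its nearest centroid in a codebook learned with plain k-means.
import numpy as np

def vector_quantize(weights, vector_len=8, num_centroids=256, iters=20, seed=0):
    """Quantize a 2-D weight matrix by clustering its length-`vector_len`
    chunks into `num_centroids` codewords. Returns (codebook, indices)."""
    rng = np.random.default_rng(seed)
    flat = weights.reshape(-1, vector_len)                    # (N, vector_len)
    codebook = flat[rng.choice(len(flat), num_centroids, replace=False)].copy()
    for _ in range(iters):
        # assign each vector to its nearest codeword (squared L2 distance)
        dist = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = dist.argmin(1)
        # move each codeword to the mean of its assigned vectors
        for k in range(num_centroids):
            members = flat[assign == k]
            if len(members):
                codebook[k] = members.mean(0)
    # final assignment against the updated codebook;
    # storage is ~log2(num_centroids) / vector_len bits per weight
    # (8 bits / 8 weights = 1 bit per weight here, ignoring the codebook)
    dist = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = dist.argmin(1).astype(np.uint8)
    return codebook, indices

def dequantize(codebook, indices, shape):
    """Reconstruct an approximate weight matrix from codebook lookups."""
    return codebook[indices].reshape(shape)

W = np.random.randn(128, 64).astype(np.float32)
cb, idx = vector_quantize(W)
W_hat = dequantize(cb, idx, W.shape)
print("mean squared reconstruction error:", float(((W - W_hat) ** 2).mean()))
```

Because each group of 8 weights is stored as a single 8-bit index, the effective rate is about 1 bit per weight, which is how codebook-based schemes reach sub-2-bit budgets; VPTQ itself adds further machinery on top of this basic idea.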
Abstract: With the rapidly growing number of vehicles on the road, traffic flow is a pressing concern that hinders economic development and affects the quality of ...
A paper co-authored by Prof. Alex Lew has been selected as one of four "Outstanding Papers" at this year's Conference on Language Modeling (COLM 2025), held in Montreal in October. Lew and his ...
Abstract: Large language models (LLMs), such as GPT-4, have demonstrated their ability to understand natural language and generate complex code snippets. This article introduces a novel LLM ...