54% faster LLM training. SCAO is a sparse, second-order PyTorch optimizer designed as a high-throughput, drop-in replacement for AdamW. - whispering3/scao
About SCAO — Optimizer on Product Hunt
“I built a 2nd-order optimizer for LLMs.”
SCAO — Optimizer was submitted on Product Hunt and earned 2 upvotes and 1 comment, placing #158 on the daily leaderboard.
SCAO — Optimizer was featured in Artificial Intelligence (467.3k followers) and GitHub (41.2k followers) on Product Hunt. Together, these topics include over 109.9k products, making this a competitive space to launch in.
Who hunted SCAO — Optimizer?
SCAO — Optimizer was hunted by Danilo Souza. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.
Want to see how SCAO — Optimizer stacked up against nearby launches in real time? Check out the live launch dashboard for upvote speed charts, proximity comparisons, and more analytics.
Hi everyone! Danilo here, the creator of SCAO.
The journey from a rejected Pull Request to a functional standalone tool taught me a lot about the gap between academic papers and what we actually need at home on our own GPUs.
I built this because I wanted to see if we could make "expensive" math affordable for devs with modest setups.
Some quick tips for testing:
Grab scao.py from the repo.
If you have <8GB VRAM, use the train_local.py example (it uses LoRA).
If you want to see raw speed, try train_1m.py.
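Since SCAO is pitched as a drop-in replacement for AdamW, swapping it in should be a one-line change at the optimizer call site. Here is a minimal sketch of that pattern; the class name `SCAO` and its constructor arguments are assumptions on my part (the repo's scao.py is the authority on the actual API), so the sketch runs with stock AdamW and marks where the swap would go.

```python
# Minimal sketch of the "drop-in replacement for AdamW" pattern.
# NOTE: the `SCAO` class name and its signature below are ASSUMED,
# not confirmed by the repo -- check scao.py for the real API.
import torch
import torch.nn as nn

# from scao import SCAO  # once scao.py is on your path

model = nn.Linear(128, 128)

# Drop-in means the call site mirrors torch.optim.AdamW exactly:
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.01)
# optimizer = SCAO(model.parameters(), lr=3e-4, weight_decay=0.01)  # hypothetical swap

# One standard training step -- unchanged regardless of optimizer:
x = torch.randn(32, 128)
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Because the training loop itself never changes, you can A/B the two optimizers on the same script by flipping that single line.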
I'll be around all day to answer questions about the preconditioner math, the INT8 implementation, or just to chat about LLM architecture.
Can’t wait to hear your feedback!