LLMs do suck at math. If you look into it, the o1 models actually escape the LLM output and write a Python function to calculate the result. I've been able to break this approach by asking for functions that use math not in the standard Python library.
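A minimal sketch of the pattern being described, with entirely hypothetical "generated" code (this is not OpenAI's actual tooling): the model emits a small stdlib-only helper that the runtime executes as scratch work, and the approach falls over the moment the generated code needs a package outside the standard library.

```python
import math

# Hypothetical example of code a model might emit: stdlib-only,
# so it runs in a bare sandbox.
def continuous_growth(principal, rate, years):
    """Continuously compounded growth using only the math module."""
    return principal * math.exp(rate * years)

print(continuous_growth(1000, 0.05, 10))

# The failure mode mentioned above: generated code that assumes a
# non-stdlib package (sympy here, as an example) raises ImportError
# in a sandbox that only ships the standard library.
try:
    import sympy  # not part of the Python standard library
except ImportError:
    print("sandbox lacks sympy; the scratch-paper approach breaks")
```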
I know someone also wrote a Wolfram integration to help LLMs solve math problems.
Terence Tao (one of the most famous and active mathematicians) recently posted his thoughts on Mastodon about o1's mathematical capabilities. Interesting read: https://mathstodon.xyz/@tao/113132502735585408
Thanks for sharing. I knew him from some Numberphile vids; cool to see he has a Mastodon account. Good to know that LLMs are crawling from "incompetent graduate" to "mediocre graduate". Which basically means it's already smarter than most people for many kinds of reasoning tasks.
I'm not a big fan of the way the guy writes, though. As is common for super-intelligent academic types, they use overly complicated wording to formally describe even the most basic opinions, while mixing in hints of inflated ego and intellectual superiority. He should start experimenting with having o1 as his editor to summarize his toots.
The language wasn't that complex.
Wow, that's really clever actually. Basically using the library as digital scratch paper.