My original plan for ts-wolfram was to quickly write a toy Wolfram Language interpreter in TypeScript to better understand Mathematica internals, then abandon it as an exhaust product of learning. But one feature of interpreters is that building them is really fun: once you get something working, you want to keep hacking on it. So last weekend I decided to let myself get nerd-sniped and worked on ts-wolfram some more.
I wanted to find out how much slower ts-wolfram is than Mathematica. To measure this I added two more commands: Do, which evaluates an expression a given number of times, and Timing, which does the actual measurement. I then measured the performance of a simple (and deliberately very inefficient) Fibonacci function on my Apple M1:
fib[1] := 1
fib[2] := 1
fib[n_] := fib[n-2] + fib[n-1]
Timing[Do[fib[15], 1000]]
(* Mathematica: 0.44s *)
(* ts-wolfram: 3.4s *)
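The two commands are thin wrappers around the evaluator. A minimal sketch of how they might look as interpreter builtins (the Expr and Evaluate shapes here are my assumptions, not ts-wolfram's actual types):

```typescript
// Hypothetical sketch of Do and Timing as interpreter builtins; Expr and
// Evaluate are assumed shapes, not ts-wolfram's real API.
type Expr = unknown;
type Evaluate = (e: Expr) => Expr;

// Do[expr, n]: evaluate expr n times, discarding results. Like the real
// Do, it must receive expr unevaluated (HoldAll) for the loop to matter.
function doBuiltin(evaluate: Evaluate, expr: Expr, n: number): null {
  for (let i = 0; i < n; i++) evaluate(expr);
  return null; // Wolfram's Do returns Null
}

// Timing[expr]: evaluate expr once, returning [seconds, result].
function timingBuiltin(evaluate: Evaluate, expr: Expr): [number, Expr] {
  const start = performance.now();
  const result = evaluate(expr);
  return [(performance.now() - start) / 1000, result];
}
```

The important detail is that both builtins take the expression tree itself, not its evaluated value, so the evaluator runs inside the measured region.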
A mere ~8x slowdown was surprising! I had put no effort into efficiency and expected something closer to a ~100x slowdown. Still, I was curious how much time I could shave off with simple optimizations. I reran the code with node --inspect, connected Chrome's excellent profile visualizer, and narrowed down the hot spots. I then made the following changes:
- Eliminated Map allocations in the inner loop.

All of these changes combined got me down to… 0.98s, an only ~2.2x slowdown!¹ I find this incredible. Certainly Mathematica's term-rewriting loop is optimized to death, and I only spent an hour or two making the most basic optimizations. The fact that V8 runs my barely optimized term-rewriting code only ~2.2x slower than Mathematica's hyper-optimized engine is a testament to the incredible work done by V8's performance engineers.
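The Map change is representative of this kind of optimization. Here is an illustrative before/after (my own sketch, not the actual ts-wolfram diff): allocating a fresh Map on every rule application creates garbage-collector pressure in the hot loop, while reusing a single cleared Map builds the same bindings with no per-iteration allocation.

```typescript
// Illustrative sketch (not ts-wolfram's real code): names/bindings stand
// in for the pattern-match environment built on every rule application.

// Before: a fresh Map allocated on every iteration of the hot loop.
function runBefore(names: string[], iterations: number): number {
  let total = 0;
  for (let i = 0; i < iterations; i++) {
    const env = new Map<string, number>();
    for (let k = 0; k < names.length; k++) env.set(names[k], k + 1);
    total += env.get(names[0]) ?? 0;
  }
  return total;
}

// After: one preallocated Map, cleared between iterations — same result,
// but nothing new for the garbage collector to chew through.
const reusedEnv = new Map<string, number>();
function runAfter(names: string[], iterations: number): number {
  let total = 0;
  for (let i = 0; i < iterations; i++) {
    reusedEnv.clear();
    for (let k = 0; k < names.length; k++) reusedEnv.set(names[k], k + 1);
    total += reusedEnv.get(names[0]) ?? 0;
  }
  return total;
}
```

This pattern only pays off when the reused structure never escapes the loop; if a caller holds on to the environment, it must be copied out first.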
EDIT: Thanks to Aaron O’Mullan’s PR, ts-wolfram now has performance parity with Mathematica on the fib benchmark. I find this absolutely mindblowing.
As I write this I still find myself surprised. I always vaguely knew that V8 is fast, but it never sank in exactly how fast until now.
All the usual benchmarking disclaimers apply. This is meant to be a smoke test of a pet project rather than a serious industry benchmark.↩︎
Oct 21, 2024