hashBits() is used to generate a hash code from the same 64 bits used
to represent a Wren number as a double. When building a map containing
a large number of integer keys, it's important for this to do a good
job scattering the bits across the 32-bit key space.
Alas, it does not. Worse, the benchmark that exercises this happens to
stop just before performance falls off a cliff, so the problem was easy
to overlook.
This replaces it with the hash function V8 uses, which has much better
performance across the numeric range.
I've got some ideas on how to tweak the embedding API, but I want to
see what performance impact they have first, so this adds a little
benchmark that just calls a foreign method a ton of times.
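For illustration only, a host program for that kind of micro-benchmark
could look like the sketch below. It uses today's slot-based embedding
API, which may differ from the API as it stood at the time of this
commit, and the Bench class and echo() method are made-up names:

    #include <string.h>
    #include "wren.h"

    // The foreign method under test: just hands its numeric argument back.
    static void echo(WrenVM* vm)
    {
      wrenSetSlotDouble(vm, 0, wrenGetSlotDouble(vm, 1));
    }

    // Resolve "Bench.echo(_)" to the C function above.
    static WrenForeignMethodFn bindMethod(WrenVM* vm, const char* module,
        const char* className, bool isStatic, const char* signature)
    {
      (void)vm; (void)module;
      if (isStatic && strcmp(className, "Bench") == 0 &&
          strcmp(signature, "echo(_)") == 0) return echo;
      return NULL;
    }

    int main(void)
    {
      WrenConfiguration config;
      wrenInitConfiguration(&config);
      config.bindForeignMethodFn = bindMethod;
      WrenVM* vm = wrenNewVM(&config);

      // The Wren side: hammer the foreign method in a tight loop.
      wrenInterpret(vm, "main",
          "class Bench {\n"
          "  foreign static echo(n)\n"
          "}\n"
          "for (i in 1..1000000) Bench.echo(i)\n");

      wrenFreeVM(vm);
      return 0;
    }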
This allows "%(...)" inside a string literal to interpolate the
stringified result of an expression, so "three plus four is %(3 + 4)"
becomes "three plus four is 7".
It doesn't support custom interpolators or format strings, but we can
consider extending that later.
Get rid of the separate opt-in IO class and replace it with a core
System class.
- Remove wren_io.c, wren_io.h, and io.wren.
- Remove the flags that disable it.
- Remove the overloads for print() with different arities. (They were an
experiment, but I don't think they're that useful.)
- Remove IO.read(). That will reappear using libuv in the CLI at some
point.
- Remove IO.time. Doesn't seem to have been used.
- Update all of the tests, docs, etc.
I'm sorry for all the breakage this causes, but I think "System" is a
better name for this class (it makes it natural to add things like
"System.gc()"), and it frees up "IO" for the CLI's IO module.
- Create separate libs for each architecture. OS X doesn't need this
(we just build a universal binary), but it will help on Linux.
- Move the libuv build stuff into wren.mk, where the actual dependency
on the lib is.
- Download libuv to deps/ instead of build/. That way "make clean"
doesn't blow it away.
- Don't redownload libuv unless needed.
* Eliminate the "new" reserved word.
* Allow "this" before a method definition to define a constructor.
* Only create a default constructor for classes that don't define one.
Previously, fibers had a hard-coded limit on how big their stack could
grow. That limit exists in two forms: the number of call frames
(basically the maximum call depth) and the number of stack slots.
This fixes the first half by dynamically allocating the call frame
array and growing it as needed. That makes new fibers smaller, since
they can start with a very small array. Checking and growing the array
doesn't noticeably regress perf on the other benchmarks, and it makes a
new fiber benchmark about 45% faster.
The stack array is still a hard-coded size; that will be fixed in
another commit.
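The growth strategy is the usual doubling one. A minimal sketch of the
idea, with hypothetical struct and function names rather than the
actual ones in wren_vm.c (error handling elided):

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct
    {
      uint8_t* ip;      // Where this frame resumes executing.
      void* fn;         // The function or closure being called.
      int stackStart;   // The frame's base slot in the fiber's value stack.
    } CallFrame;

    typedef struct
    {
      CallFrame* frames;  // Starts small (say 4 frames) and grows on demand.
      int numFrames;
      int frameCapacity;
    } Fiber;

    // Reserve room for one more frame, doubling the array when it's full
    // so the amortized cost per call stays O(1).
    static CallFrame* pushFrame(Fiber* fiber)
    {
      if (fiber->numFrames + 1 > fiber->frameCapacity)
      {
        int newCapacity = fiber->frameCapacity * 2;
        fiber->frames = realloc(fiber->frames,
                                sizeof(CallFrame) * newCapacity);
        fiber->frameCapacity = newCapacity;
      }

      return &fiber->frames[fiber->numFrames++];
    }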