Previously, if a test expected an error and at least one occurred, the test script considered that check satisfied and would not fail even when other expected errors never showed up. Also, some tests were expecting a compile-time error message even though the test script doesn't validate those (yet). The test function was getting monolithic, so I went ahead and split it into a separate little class.
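To illustrate the idea, here is a minimal hypothetical Python sketch, not the actual test script; the `ExpectedErrorCheck` class and its methods are invented for this example. It tracks each expected error individually so a test fails both on unexpected errors and on expected errors that never occur:

```python
# Hypothetical sketch -- not the real Wren test script. It tracks every
# expected error individually so a test fails when an expected error never
# occurs, not only when an unexpected one does.
class ExpectedErrorCheck:
    def __init__(self, expected_errors):
        # Error messages the test declares it expects to see.
        self.remaining = set(expected_errors)
        self.unexpected = []

    def observe(self, error):
        """Record an error actually reported while running the test."""
        if error in self.remaining:
            self.remaining.discard(error)
        else:
            self.unexpected.append(error)

    def failures(self):
        """Return failure messages; an empty list means the test passed."""
        messages = ["Missing expected error: " + e for e in sorted(self.remaining)]
        messages += ["Unexpected error: " + e for e in self.unexpected]
        return messages


# Example: one expected error occurred, the other did not, so the test fails.
check = ExpectedErrorCheck(["Undefined variable 'foo'.", "Expect ')'."])
check.observe("Undefined variable 'foo'.")
for message in check.failures():
    print(message)
```

Splitting this bookkeeping into its own small class mirrors the refactoring described above: the main test function no longer has to carry the expected-error state itself.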
This contains the automated validation suite for the VM and built-in libraries, organized into the directories below. (A sketch of how a runner might walk these directories follows the list.)
- `benchmark/` - Performance tests. These aren't strictly pass/fail, but let us compare performance both against other languages and against previous builds of Wren itself.
- `core/` - Tests for the built-in core library, mainly methods on the core classes. If a bug is in `wren_core.c` or `wren_value.c`, it will most likely break one of these tests.
- `io/` - Tests for the built-in IO library. In other words, methods on the `IO` class. If a bug is in `wren_io.c`, it should break one of these tests.
- `language/` - Tests of the language itself, its grammar and runtime semantics. If a bug is in `wren_compiler.c` or `wren_vm.c`, it will most likely break one of these tests. This includes tests for the syntax for the literal forms of the core classes.
- `limit/` - Tests for various hardcoded limits. The language doesn't officially specify these limits, but the Wren implementation has them. These tests ensure that the behavior at those limits is well-defined.
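For a rough picture of how these directories could drive an automated run, here is a hedged Python sketch; the `test/` root, the `wren` executable name, and the exit-status pass/fail rule are assumptions for this example, not a description of the real test script. It runs every `.wren` file under the pass/fail directories and skips `benchmark/`, since benchmarks aren't strictly pass/fail:

```python
# Hypothetical sketch -- not the real test harness. It walks the pass/fail test
# directories, runs each .wren file with an assumed `wren` executable, and
# reports any test whose process exits with a non-zero status.
import pathlib
import subprocess

# benchmark/ is deliberately excluded: it measures performance, not correctness.
TEST_DIRS = ["core", "io", "language", "limit"]

def run_suite(test_root="test"):
    failures = []
    for directory in TEST_DIRS:
        for path in sorted(pathlib.Path(test_root, directory).rglob("*.wren")):
            result = subprocess.run(["wren", str(path)], capture_output=True)
            if result.returncode != 0:
                failures.append(str(path))
    return failures

if __name__ == "__main__":
    failed = run_suite()
    print("{} test(s) failed".format(len(failed)))
```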