Welcome

Welcome to Schema Benchmarks, an open source project aiming to provide detailed and transparent comparisons of schema validation libraries.
You can explore the results through the sidebar (and the homepage), and view the full codebase over on GitHub.
I'm @eskimojo, the main developer of this project, working with @fabian-hiller, as part of Open Circle.
TL;DR
Schema Benchmarks provides comprehensive, transparent comparisons of TypeScript schema validation libraries across four key metrics:
- Bundle Size: Typia (~1.9KB) and Valibot (~1.9KB) are the smallest; Zod classic (58KB) is the largest
- Initialization: Typia leads comfortably but all libraries except AJV are fairly fast; AJV is notably slow (~4ms, 90x slower than Valibot)
- Validation: Libraries with certain optimizations (Typia, TypeBox, AJV) are fastest; but all perform reasonably well except Yup
- Parsing: Libraries that abort early have an advantage on invalid data; TypeBox, VineJS, and Yup are consistently slowest
Caveats
We aim to provide benchmarks that match realistic usage of these libraries; however, there are some inherent differences: our benchmarks run in series in a Node.js environment, and each test case is looped to get an average.
It's also worth noting that different use cases will benefit from different advantages (for example, bundle size matters a lot more on the client than the server).
There are also some aspects that might be important to you that we cannot measure, such as DX and ecosystem size.
We do provide minimal code snippets for each library to give a loose idea of DX, along with links to the source code used for the bundle size tests.
Why Schema Benchmarks?
Fabian, in his role as the creator of Valibot, is well known for his passion for comparing schema validation libraries. However, these comparisons were largely confined to his social media posts and Valibot's website.
Other schema validation libraries have their own comparisons listed on their websites too - but there was no central resource for them.
In early November, Fabian approached me with his idea for a dedicated schema benchmarking project, one that measures all of the different aspects of what can make a schema fast or slow.
For our benchmarks, we separate out each step of a schema's process. This includes:
- Download - How much a schema adds to your app's bundle size, and thus how much download time it can add.
- Initialization - How long it takes to create a schema.
- Validation - How long it takes to check whether a value matches a schema (and return a boolean).
- Parsing - How long it takes to check if a value matches a schema, and return a new (typed, and maybe transformed) value.
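To make the last two steps concrete, here is a plain-TypeScript sketch of the validation/parsing distinction for a hypothetical { name, age } person shape (illustrative only; the names and shape are not any particular library's API):

```typescript
interface Person {
  name: string;
  age: number;
}

// Validation: checks the value and returns a boolean (a type guard here).
function validatePerson(input: unknown): input is Person {
  const o = input as Person | null;
  return (
    typeof o === "object" &&
    o !== null &&
    typeof o.name === "string" &&
    typeof o.age === "number"
  );
}

// Parsing: returns a new, typed value (throwing on failure in this sketch).
function parsePerson(input: unknown): Person {
  if (!validatePerson(input)) {
    throw new Error("Invalid person");
  }
  return { name: input.name, age: input.age };
}
```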
We also track the optimizations these libraries use (for example, whether a library hooks into the build process like typia, or generates JIT code at runtime like arktype), and whether they return the first issue found or all of them.
We run these benchmarks in a GitHub action, using rolldown and tinybench.
Current Findings
Download
For schema libraries used on the client, bundle size can matter a lot, as it has an effect on how long it takes for your website to load. On the server, it may matter less.
In order to measure these, we create example usage files for each library, and measure the size of the compiled output (both minified and unminified).
Here are the current results for minified and gzipped bundle size:
At the time of writing, two clear leaders emerge:
- typia - A transformer library that hooks into the build process, turning TypeScript types into runtime functions.
- valibot - A schema library with a focus on bundle size.
Zod's new zod/mini variant follows behind, ending up at around 5KB (vs the 1.9KB achieved by both aforementioned libraries).
On the opposite end, zod's "classic" API clocks in with the largest size of 58KB, with joi not far behind at 53KB, and effect/Schema at 50KB.
Initialization
schemas.ts
```typescript
import * as v from "valibot";

export const personSchema = v.object({
  name: v.string(),
  age: v.number(),
});
```
Creating a schema can be a one-time cost, but it depends on usage: long-lived servers probably don't care, while short-lived processes like CLI tools or browser extensions will want to avoid doing too much work at startup.
For each library, we create a getSchema function that returns a schema (with the same validations as the other libraries, to keep the comparison fair). We then benchmark how long it takes to call this function.
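As a rough sketch of what benchmarking a getSchema factory involves (the names below are illustrative, not the project's actual harness, which uses tinybench), the factory is called repeatedly and the elapsed time averaged:

```typescript
// Stand-in for a library call such as v.object({ ... }).
function getSchema() {
  return { type: "object", entries: { name: "string", age: "number" } };
}

// Call the factory `iterations` times and return the average
// time per call in milliseconds.
function averageInitTime(iterations: number): number {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    getSchema();
  }
  return (performance.now() - start) / iterations;
}
```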
Here are the current results:
typia is notably faster to initialize - which is to be expected, as it generates optimized plain functions at build time. sury and typebox are also very fast, followed by valibot.
A clear outlier here is ajv, taking 4ms to initialize (roughly 90x slower than valibot, for example).
Validation
validate.ts
```typescript
import * as v from "valibot";
import { personSchema } from "./schemas";

if (v.is(personSchema, data)) {
  // data is narrowed to Person
}
```
Checking if a given value matches the schema. This is different to parsing because it doesn't return a new value.
As validation methods are expected to return a boolean, it's not tracked whether they abort early or not (as it is assumed they would if possible).
Note
Not every library supports validation - for example, zod only supports parsing.
In these cases, we categorise them accordingly.
Validating valid data:
Validating invalid data:
As is to be expected, the libraries that use optimizations like precompilation (typia) or JIT (typebox, ajv) are among the fastest.
effect, arktype, and valibot are not too far behind, with yup significantly slower than the rest.
Parsing
parse.ts
```typescript
import * as v from "valibot";
import { personSchema } from "./schemas";

const person = v.parse(personSchema, data);
// person is of type Person
```
Parsing a value to match the schema. This is different to validation because it returns a new value, instead of a boolean.
Note
Libraries that throw an error (instead of returning one) have to be wrapped in a try/catch for the benchmark to work. This is noted in the graph below with an asterisk (*), as this may have an unknown performance impact.
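Such a wrapper might look like the following plain-TypeScript sketch (the function names are hypothetical, not any library's API):

```typescript
// Stand-in for a library whose parse throws on invalid input.
function throwingParse(input: unknown): { name: string } {
  const o = input as { name?: unknown } | null;
  if (typeof o?.name !== "string") {
    throw new Error("invalid input");
  }
  return { name: o.name };
}

// Wrap the throwing parser so the benchmark can call every
// library uniformly and always receive a result object.
function safeParse(input: unknown) {
  try {
    return { success: true as const, data: throwingParse(input) };
  } catch (error) {
    return { success: false as const, error };
  }
}
```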
Benchmarks that abort early (i.e. return the first error found) are noted in the graph below with a dagger (†) - this is relevant when parsing invalid data, as it will (usually) be faster than libraries that return all errors.
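The difference between the two strategies can be sketched in plain TypeScript for a hypothetical { name, age } shape (illustrative, not any library's error format):

```typescript
type Issue = { path: string; message: string };

// Collect-all: keeps checking after the first failure and
// returns every issue found.
function checkAll(input: Record<string, unknown>): Issue[] {
  const issues: Issue[] = [];
  if (typeof input.name !== "string") {
    issues.push({ path: "name", message: "expected string" });
  }
  if (typeof input.age !== "number") {
    issues.push({ path: "age", message: "expected number" });
  }
  return issues;
}

// Abort-early: returns as soon as the first issue is found,
// skipping the remaining checks entirely.
function checkFirst(input: Record<string, unknown>): Issue | null {
  if (typeof input.name !== "string") {
    return { path: "name", message: "expected string" };
  }
  if (typeof input.age !== "number") {
    return { path: "age", message: "expected number" };
  }
  return null;
}
```

On deeply invalid data, the abort-early variant does strictly less work, which is why it tends to win the invalid-data comparison.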
Parsing valid data:
Parsing invalid data:
Unsurprisingly, aborting early when possible assists with speed of parsing invalid data. For example, zod and arktype suffer in this comparison, as they don't provide an option to abort early.
When filtering out all results that abort early, the playing field levels out a bit:
Consistently slowest in these tests appears to be typebox, followed by vinejs, then yup - though two of these involve a try/catch wrapper, which may be slowing them down.
Future Plans
We'll continue to contribute new features and improvements to this project over time.
For example, we're looking into adding stack trace testing, inspired by this tweet by @trav:
Contributing
Think you can help us out? Great! Check out our contributing guide.