Ajv: Another JSON Schema Validator
The fastest JSON Schema validator for Node.js and browser. Supports draft-04/06/07.
Using version 6
JSON Schema draft-07 is published.
Ajv version 6.0.0, which supports draft-07, is released. It may require either migrating your schemas or updating your code to continue using draft-04 and v5 schemas; draft-06 schemas will be supported without changes.
Please note: To use Ajv with draft-06 schemas you need to explicitly add the meta-schema to the validator instance:
ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-06.json'));
To use Ajv with draft-04 schemas in addition to explicitly adding meta-schema you also need to use option schemaId:
var ajv = new Ajv({schemaId: 'id'});
// If you want to use both draft-04 and draft-06/07 schemas:
// var ajv = new Ajv({schemaId: 'auto'});
ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-04.json'));
Contents
- Performance
- Features
- Getting started
- Frequently Asked Questions
- Using in browser
- Command line interface
- Validation
- Security considerations
- Modifying data during validation
- API
- Plugins
- Related packages
- Some packages using Ajv
- Tests, Contributing, History, Support, License
Performance
Ajv generates code using doT templates to turn JSON Schemas into super-fast validation functions that are efficient for v8 optimization.
Currently Ajv is the fastest and the most standards-compliant validator according to these benchmarks:
- json-schema-benchmark - 50% faster than the second place
- jsck benchmark - 20-190% faster
- z-schema benchmark
- themis benchmark
Performance of different validators by json-schema-benchmark:
Features
- Ajv implements full JSON Schema draft-06/07 and draft-04 standards:
- all validation keywords (see JSON Schema validation keywords)
- full support of remote refs (remote schemas have to be added with `addSchema` or compiled to be available)
- support of circular references between schemas
- correct string lengths for strings with unicode pairs (can be turned off)
- formats defined by JSON Schema draft-07 standard and custom formats (can be turned off)
- validates schemas against meta-schema
- supports browsers and Node.js 0.10-8.x
- asynchronous loading of referenced schemas during compilation
- "All errors" validation mode with option allErrors
- error messages with parameters describing error reasons to allow creating custom error messages
- i18n error messages support with ajv-i18n package
- filtering data from additional properties
- assigning defaults to missing properties and items
- coercing data to the types specified in `type` keywords
- custom keywords
- draft-06/07 keywords `const`, `contains`, `propertyNames` and `if`/`then`/`else`
- draft-06 boolean schemas (`true`/`false` as a schema to always pass/fail)
- keywords `switch`, `patternRequired`, `formatMaximum`/`formatMinimum` and `formatExclusiveMaximum`/`formatExclusiveMinimum` from JSON Schema extension proposals with ajv-keywords package
- $data reference to use values from the validated data as values for the schema keywords
- asynchronous validation of custom formats and keywords
Currently Ajv is the only validator that passes all the tests from JSON Schema Test Suite (according to json-schema-benchmark, apart from the test requiring that `1.0` is not an integer, which is impossible to satisfy in JavaScript).
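The JavaScript limitation behind that one failing test is easy to demonstrate: `1.0` and `1` parse to the same IEEE 754 double, so no JavaScript validator can tell them apart:

```javascript
// In JavaScript, 1.0 and 1 are the same double-precision value,
// so a validator cannot treat 1.0 as a non-integer.
const areSame = 1.0 === 1;           // true
const isInt = Number.isInteger(1.0); // true
console.log(areSame, isInt);
```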
Install
npm install ajv
Getting started
Try it in the Node.js REPL: https://tonicdev.com/npm/ajv
The fastest validation call:
var Ajv = require('ajv');
var ajv = new Ajv(); // options can be passed, e.g. {allErrors: true}
var validate = ajv.compile(schema);
var valid = validate(data);
if (!valid) console.log(validate.errors);
or with less code
// ...
var valid = ajv.validate(schema, data);
if (!valid) console.log(ajv.errors);
// ...
or
// ...
var valid = ajv.addSchema(schema, 'mySchema')
.validate('mySchema', data);
if (!valid) console.log(ajv.errorsText());
// ...
See API and Options for more details.
Ajv compiles schemas to functions and caches them in all cases (using schema serialized with fast-json-stable-stringify or a custom function as a key), so that the next time the same schema is used (not necessarily the same object instance) it won't be compiled again.
The best performance is achieved when using compiled functions returned by `compile` or `getSchema` methods (there is no additional function call).
Please note: every time a validation function or `ajv.validate` is called, the `errors` property is overwritten. You need to copy the `errors` array reference to another variable if you want to use it later (e.g., in the callback). See Validation errors.
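The caching behaviour described above can be sketched without Ajv internals. The helper below is hypothetical (not an Ajv API), and plain `JSON.stringify` stands in for fast-json-stable-stringify, which additionally normalizes key order:

```javascript
// Hypothetical sketch of schema caching keyed by the serialized schema.
// Two structurally equal schema objects map to the same cache entry,
// so the second call does not compile again.
const cache = new Map();

function compileCached(schema, compile) {
  const key = JSON.stringify(schema); // Ajv uses a stable stringify instead
  if (!cache.has(key)) cache.set(key, compile(schema));
  return cache.get(key);
}

let compilations = 0;
const compile = (schema) => {
  compilations++;
  return (data) => typeof data === schema.type;
};

const v1 = compileCached({ type: 'string' }, compile);
const v2 = compileCached({ type: 'string' }, compile); // different object, same key
console.log(compilations); // 1
console.log(v1 === v2);    // true
```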
Using in browser
You can require Ajv directly from the code you browserify - in this case Ajv will be a part of your bundle.
If you need to use Ajv in several bundles you can create a separate UMD bundle using npm run bundle
script (thanks to siddo420).
Then you need to load Ajv in the browser:
<script src="ajv.min.js"></script>
This bundle can be used with different module systems; it creates global Ajv
if no module system is found.
The browser bundle is available on cdnjs.
Ajv is tested with these browsers:
Please note: some frameworks, e.g. Dojo, may redefine global require in such way that is not compatible with CommonJS module format. In such case Ajv bundle has to be loaded before the framework and then you can use global Ajv (see issue #234).
Command line interface
CLI is available as a separate npm package ajv-cli. It supports:
- compiling JSON Schemas to test their validity
- BETA: generating standalone module exporting a validation function to be used without Ajv (using ajv-pack)
- migrating schemas to draft-07 (using json-schema-migrate)
- validating data file(s) against JSON Schema
- testing expected validity of data against JSON Schema
- referenced schemas
- custom meta-schemas
- files in JSON and JavaScript format
- all Ajv options
- reporting changes in data after validation in JSON-patch format
Validation keywords
Ajv supports all validation keywords from draft-07 of JSON Schema standard:
- type
- for numbers - maximum, minimum, exclusiveMaximum, exclusiveMinimum, multipleOf
- for strings - maxLength, minLength, pattern, format
- for arrays - maxItems, minItems, uniqueItems, items, additionalItems, contains
- for objects - maxProperties, minProperties, required, properties, patternProperties, additionalProperties, dependencies, propertyNames
- for all types - enum, const
- compound keywords - not, oneOf, anyOf, allOf, if/then/else
With ajv-keywords package Ajv also supports validation keywords from JSON Schema extension proposals for JSON Schema standard:
- patternRequired - like `required`, but with patterns that some property should match
- formatMaximum, formatMinimum, formatExclusiveMaximum, formatExclusiveMinimum - setting limits for date, time, etc.
See JSON Schema validation keywords for more details.
Annotation keywords
JSON Schema specification defines several annotation keywords that describe the schema itself but do not perform any validation.
- `title` and `description`: information about the data represented by that schema
- `$comment` (NEW in draft-07): information for developers. With option `$comment` Ajv logs or passes the comment string to the user-supplied function. See Options.
- `default`: a default value of the data instance, see Assigning defaults.
- `examples` (NEW in draft-06): an array of data instances. Ajv does not check the validity of these instances against the schema.
- `readOnly` and `writeOnly` (NEW in draft-07): marks data-instance as read-only or write-only in relation to the source of the data (database, api, etc.).
- `contentEncoding`: RFC 2045, e.g., "base64".
- `contentMediaType`: RFC 2046, e.g., "image/png".
Please note: Ajv does not implement validation of the keywords `examples`, `contentEncoding` and `contentMediaType` but it reserves them. If you want to create a plugin that implements some of them, it should remove these keywords from the instance.
Formats
The following formats are supported for string validation with "format" keyword:
- date: full-date according to RFC3339.
- time: time with optional time-zone.
- date-time: date-time from the same source (time-zone is mandatory). `date`, `time` and `date-time` validate ranges in `full` mode and only regexp in `fast` mode (see options).
- uri: full URI.
- uri-reference: URI reference, including full and relative URIs.
- uri-template: URI template according to RFC6570
- url (deprecated): URL record.
- email: email address.
- hostname: host name according to RFC1034.
- ipv4: IP address v4.
- ipv6: IP address v6.
- regex: tests whether a string is a valid regular expression by passing it to RegExp constructor.
- uuid: Universally Unique IDentifier according to RFC4122.
- json-pointer: JSON-pointer according to RFC6901.
- relative-json-pointer: relative JSON-pointer according to this draft.
Please note: JSON Schema draft-07 also defines formats `iri`, `iri-reference`, `idn-hostname` and `idn-email` for URLs, hostnames and emails with international characters. Ajv does not implement these formats. If you create an Ajv plugin that implements them, please make a PR to mention this plugin here.
There are two modes of format validation: `fast` and `full`. This mode affects formats `date`, `time`, `date-time`, `uri`, `uri-reference`, `email`, and `hostname`. See Options for details.
You can add additional formats and replace any of the formats above using addFormat method.
The option `unknownFormats` allows changing the default behaviour when an unknown format is encountered. In this case Ajv can either fail schema compilation (default) or ignore it (default in versions before 5.0.0). You can also whitelist specific format(s) to be ignored. See Options for details.
You can find regular expressions used for format validation and the sources that were used in formats.js.
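The difference between `fast` and `full` modes can be illustrated with the date format. This is a simplified sketch (the regexp and logic are not Ajv's actual implementation): fast mode only checks the shape of the string, while full mode also checks that the values form a real calendar date:

```javascript
// Simplified sketch of "fast" vs "full" date format checking.
// Not Ajv's actual regexps - for illustration only.
const FAST_DATE = /^\d{4}-\d{2}-\d{2}$/;

function fastDate(str) {
  return FAST_DATE.test(str); // shape only
}

function fullDate(str) {
  if (!FAST_DATE.test(str)) return false;
  const [y, m, d] = str.split('-').map(Number);
  if (m < 1 || m > 12) return false;
  const daysInMonth = new Date(y, m, 0).getDate(); // day 0 of the next month
  return d >= 1 && d <= daysInMonth;
}

console.log(fastDate('2018-02-30')); // true - the shape is fine
console.log(fullDate('2018-02-30')); // false - February has no day 30
```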
Combining schemas with $ref
You can structure your validation logic across multiple schema files and have schemas reference each other using the `$ref` keyword.
Example:
var schema = {
"$id": "http://example.com/schemas/schema.json",
"type": "object",
"properties": {
"foo": { "$ref": "defs.json#/definitions/int" },
"bar": { "$ref": "defs.json#/definitions/str" }
}
};
var defsSchema = {
"$id": "http://example.com/schemas/defs.json",
"definitions": {
"int": { "type": "integer" },
"str": { "type": "string" }
}
};
Now to compile your schema you can either pass all schemas to Ajv instance:
var ajv = new Ajv({schemas: [schema, defsSchema]});
var validate = ajv.getSchema('http://example.com/schemas/schema.json');
or use `addSchema` method:
var ajv = new Ajv;
var validate = ajv.addSchema(defsSchema)
.compile(schema);
See Options and addSchema method.
Please note:
- `$ref` is resolved as the uri-reference using schema $id as the base URI (see the example).
- References can be recursive (and mutually recursive) to implement the schemas for different data structures (such as linked lists, trees, graphs, etc.).
- You don't have to host your schema files at the URIs that you use as schema $id. These URIs are only used to identify the schemas, and according to JSON Schema specification validators should not expect to be able to download the schemas from these URIs.
- The actual location of the schema file in the file system is not used.
- You can pass the identifier of the schema as the second parameter of `addSchema` method or as a property name in `schemas` option. This identifier can be used instead of (or in addition to) schema $id.
- You cannot have the same $id (or the schema identifier) used for more than one schema - an exception will be thrown.
- You can implement dynamic resolution of the referenced schemas using `compileAsync` method. In this way you can store schemas in any system (files, web, database, etc.) and reference them without explicitly adding them to the Ajv instance. See Asynchronous schema compilation.
$data reference
With the `$data` option you can use values from the validated data as the values for the schema keywords. See the proposal for more information about how it works.
`$data` reference is supported in the keywords: const, enum, format, maximum/minimum, exclusiveMaximum/exclusiveMinimum, maxLength/minLength, maxItems/minItems, maxProperties/minProperties, formatMaximum/formatMinimum, formatExclusiveMaximum/formatExclusiveMinimum, multipleOf, pattern, required, uniqueItems.
The value of "$data" should be a JSON-pointer to the data (the root is always the top-level data object, even if the $data reference is inside a referenced subschema) or a relative JSON-pointer (it is relative to the current point in data; if the $data reference is inside a referenced subschema it cannot point to the data outside of the root level for this subschema).
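How a relative JSON-pointer such as "1/larger" resolves can be sketched as follows (a simplified illustration, not Ajv's implementation): the leading integer says how many levels to go up from the current point in the data, and the rest is applied as a normal pointer from there. Resolution is safe - a missing property yields `undefined` rather than an error:

```javascript
// Simplified sketch of relative JSON-pointer resolution (not Ajv's code).
// `path` is the array of keys from the root to the current data location.
function resolveRelativePointer(root, path, pointer) {
  const [up, ...rest] = pointer.split('/');
  const base = path.slice(0, path.length - Number(up)); // go N levels up
  let node = root;
  for (const key of base.concat(rest)) {
    if (node == null) return undefined; // safe: never throws
    node = node[key];
  }
  return node;
}

const data = { smaller: 5, larger: 7 };
// While validating data.smaller, "1/larger" goes one level up, then to "larger":
console.log(resolveRelativePointer(data, ['smaller'], '1/larger'));  // 7
console.log(resolveRelativePointer(data, ['smaller'], '1/missing')); // undefined
```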
Examples.
This schema requires that the value in property `smaller` is less than or equal to the value in the property `larger`:
var ajv = new Ajv({$data: true});
var schema = {
"properties": {
"smaller": {
"type": "number",
"maximum": { "$data": "1/larger" }
},
"larger": { "type": "number" }
}
};
var validData = {
smaller: 5,
larger: 7
};
ajv.validate(schema, validData); // true
This schema requires that the properties have the same format as their field names:
var schema = {
"additionalProperties": {
"type": "string",
"format": { "$data": "0#" }
}
};
var validData = {
'date-time': '1963-06-19T08:30:06.283185Z',
email: 'joe.bloggs@example.com'
}
`$data` reference is resolved safely - it won't throw even if some property is undefined. If `$data` resolves to `undefined` the validation succeeds (with the exclusion of the `const` keyword). If `$data` resolves to an incorrect type (e.g. not "number" for the maximum keyword) the validation fails.
$merge and $patch keywords
With the package ajv-merge-patch you can use the keywords `$merge` and `$patch` that allow extending JSON Schemas with patches using formats JSON Merge Patch (RFC 7396) and JSON Patch (RFC 6902).
To add keywords `$merge` and `$patch` to the Ajv instance use this code:
require('ajv-merge-patch')(ajv);
Examples.
Using `$merge`:
{
"$merge": {
"source": {
"type": "object",
"properties": { "p": { "type": "string" } },
"additionalProperties": false
},
"with": {
"properties": { "q": { "type": "number" } }
}
}
}
Using `$patch`:
{
"$patch": {
"source": {
"type": "object",
"properties": { "p": { "type": "string" } },
"additionalProperties": false
},
"with": [
{ "op": "add", "path": "/properties/q", "value": { "type": "number" } }
]
}
}
The schemas above are equivalent to this schema:
{
"type": "object",
"properties": {
"p": { "type": "string" },
"q": { "type": "number" }
},
"additionalProperties": false
}
The properties `source` and `with` in the keywords `$merge` and `$patch` can use absolute or relative `$ref` to point to other schemas previously added to the Ajv instance or to the fragments of the current schema.
See the package ajv-merge-patch for more information.
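The merge semantics behind `$merge` follow RFC 7396 and can be sketched in a few lines (a minimal illustration, not the ajv-merge-patch implementation): objects are merged recursively, a `null` value deletes a member, and anything else replaces the target:

```javascript
// Minimal JSON Merge Patch (RFC 7396) sketch - not the ajv-merge-patch code.
function mergePatch(target, patch) {
  if (patch === null || typeof patch !== 'object' || Array.isArray(patch)) {
    return patch; // non-objects (and arrays) replace the target wholesale
  }
  const result =
    target && typeof target === 'object' && !Array.isArray(target)
      ? { ...target }
      : {};
  for (const [key, value] of Object.entries(patch)) {
    if (value === null) delete result[key]; // null removes the member
    else result[key] = mergePatch(result[key], value);
  }
  return result;
}

const source = {
  type: 'object',
  properties: { p: { type: 'string' } },
  additionalProperties: false
};
const merged = mergePatch(source, { properties: { q: { type: 'number' } } });
console.log(JSON.stringify(merged.properties)); // both "p" and "q" present
```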
Defining custom keywords
The advantages of using custom keywords are:
- allow creating validation scenarios that cannot be expressed using JSON Schema
- simplify your schemas
- help bringing a bigger part of the validation logic to your schemas
- make your schemas more expressive, less verbose and closer to your application domain
- implement custom data processors that modify your data (`modifying` option MUST be used in keyword definition) and/or create side effects while the data is being validated
If a keyword is used only for side-effects and its validation result is pre-defined, use option `valid: true/false` in keyword definition to simplify both generated code (no error handling in case of `valid: true`) and your keyword functions (no need to return any validation result).
The concerns you have to be aware of when extending JSON Schema standard with custom keywords are the portability and understanding of your schemas. You will have to support these custom keywords on other platforms and to properly document these keywords so that everybody can understand them in your schemas.
You can define custom keywords with addKeyword method. Keywords are defined on the `ajv` instance level - new instances will not have previously defined keywords.
Ajv allows defining keywords with:
- validation function
- compilation function
- macro function
- inline compilation function that should return code (as string) that will be inlined in the currently compiled schema.
Example. `range` and `exclusiveRange` keywords using compiled schema:
ajv.addKeyword('range', {
type: 'number',
compile: function (sch, parentSchema) {
var min = sch[0];
var max = sch[1];
return parentSchema.exclusiveRange === true
? function (data) { return data > min && data < max; }
: function (data) { return data >= min && data <= max; }
}
});
var schema = { "range": [2, 4], "exclusiveRange": true };
var validate = ajv.compile(schema);
console.log(validate(2.01)); // true
console.log(validate(3.99)); // true
console.log(validate(2)); // false
console.log(validate(4)); // false
Several custom keywords (typeof, instanceof, range and propertyNames) are defined in ajv-keywords package - they can be used for your schemas and as a starting point for your own custom keywords.
See Defining custom keywords for more details.
Asynchronous schema compilation
During asynchronous compilation remote references are loaded using the supplied function. See `compileAsync` method and `loadSchema` option.
Example:
var ajv = new Ajv({ loadSchema: loadSchema });
ajv.compileAsync(schema).then(function (validate) {
var valid = validate(data);
// ...
});
function loadSchema(uri) {
return request.json(uri).then(function (res) {
if (res.statusCode >= 400)
throw new Error('Loading error: ' + res.statusCode);
return res.body;
});
}
Please note: Option `missingRefs` should NOT be set to `"ignore"` or `"fail"` for asynchronous compilation to work.
Asynchronous validation
Example in Node.js REPL: https://tonicdev.com/esp/ajv-asynchronous-validation
You can define custom formats and keywords that perform validation asynchronously by accessing a database or some other service. You should add `async: true` in the keyword or format definition (see addFormat, addKeyword and Defining custom keywords).
If your schema uses asynchronous formats/keywords or refers to some schema that contains them, it should have the `"$async": true` keyword so that Ajv can compile it correctly. If an asynchronous format/keyword or a reference to an asynchronous schema is used in a schema without the `$async` keyword, Ajv will throw an exception during schema compilation.
Please note: all asynchronous subschemas that are referenced from the current or other schemas should have the `"$async": true` keyword as well, otherwise the schema compilation will fail.
The validation function for an asynchronous custom format/keyword should return a promise that resolves with `true` or `false` (or rejects with `new Ajv.ValidationError(errors)` if you want to return custom errors from the keyword function).
Ajv compiles asynchronous schemas to es7 async functions that can optionally be transpiled with nodent. Async functions are supported in Node.js 7+ and all modern browsers. You can also supply any other transpiler as a function via the `processCode` option. See Options.
The compiled validation function has the `$async: true` property (if the schema is asynchronous), so you can differentiate these functions if you are using both synchronous and asynchronous schemas.
The validation result will be a promise that resolves with the validated data or rejects with an exception `Ajv.ValidationError` that contains the array of validation errors in the `errors` property.
Example:
var ajv = new Ajv;
// require('ajv-async')(ajv);
ajv.addKeyword('idExists', {
async: true,
type: 'number',
validate: checkIdExists
});
function checkIdExists(schema, data) {
return knex(schema.table)
.select('id')
.where('id', data)
.then(function (rows) {
return !!rows.length; // true if record is found
});
}
var schema = {
"$async": true,
"properties": {
"userId": {
"type": "integer",
"idExists": { "table": "users" }
},
"postId": {
"type": "integer",
"idExists": { "table": "posts" }
}
}
};
var validate = ajv.compile(schema);
validate({ userId: 1, postId: 19 })
.then(function (data) {
console.log('Data is valid', data); // { userId: 1, postId: 19 }
})
.catch(function (err) {
if (!(err instanceof Ajv.ValidationError)) throw err;
// data is invalid
console.log('Validation errors:', err.errors);
});
Using transpilers with asynchronous validation functions.
ajv-async uses nodent to transpile async functions. To use another transpiler you should separately install it (or load its bundle in the browser).
Using nodent
var ajv = new Ajv;
require('ajv-async')(ajv);
// in the browser if you want to load ajv-async bundle separately you can:
// window.ajvAsync(ajv);
var validate = ajv.compile(schema); // transpiled es7 async function
validate(data).then(successFunc).catch(errorFunc);
Using other transpilers
var ajv = new Ajv({ processCode: transpileFunc });
var validate = ajv.compile(schema); // transpiled es7 async function
validate(data).then(successFunc).catch(errorFunc);
See Options.
Security considerations
JSON Schema, if properly used, can replace data sanitisation. It doesn't replace other API security considerations. It also introduces additional security aspects to consider.
Security contact
To report a security vulnerability, please use the Tidelift security contact. Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues.
Untrusted schemas
Ajv treats JSON schemas as trusted as your application code. This security model is based on the most common use case, when the schemas are static and bundled together with the application.
If your schemas are received from untrusted sources (or generated from untrusted data) there are several scenarios you need to prevent:
- compiling schemas can cause stack overflow (if they are too deep)
- compiling schemas can be slow (e.g. #557)
- validating certain data can be slow
It is difficult to predict all the scenarios, but at the very least it may help to limit the size of untrusted schemas (e.g. limit JSON string length) and also the maximum schema object depth (that can be high for relatively small JSON strings). You may also want to mitigate slow regular expressions in `pattern` and `patternProperties` keywords.
Regardless of the measures you take, using untrusted schemas increases security risks.
Circular references in JavaScript objects
Ajv does not support schemas and validated data that have circular references in objects. See issue #802.
An attempt to compile such schemas or validate such data would cause stack overflow (or will not complete in case of asynchronous validation). Depending on the parser you use, untrusted data can lead to circular references.
Security risks of trusted schemas
Some keywords in JSON Schemas can lead to very slow validation for certain data. These keywords include (but may not be limited to):
- `pattern` and `format` for large strings - use `maxLength` to mitigate
- `uniqueItems` for large non-scalar arrays - use `maxItems` to mitigate
- `patternProperties` for large property names - use `propertyNames` to mitigate
Please note: The suggestions above to prevent slow validation would only work if you do NOT use `allErrors: true` in production code (using it would continue validation after validation errors).
You can validate your JSON schemas against this meta-schema to check that these recommendations are followed:
const isSchemaSecure = ajv.compile(require('ajv/lib/refs/json-schema-secure.json'));
const schema1 = {format: 'email'};
isSchemaSecure(schema1); // false
const schema2 = {format: 'email', maxLength: 256};
isSchemaSecure(schema2); // true
Please note: following all these recommendations is not a guarantee that validation of untrusted data is safe - it can still lead to some undesirable results.
Filtering data
With option `removeAdditional` (added by andyscott) you can filter data during the validation.
This option modifies original data.
Example:
var ajv = new Ajv({ removeAdditional: true });
var schema = {
"additionalProperties": false,
"properties": {
"foo": { "type": "number" },
"bar": {
"additionalProperties": { "type": "number" },
"properties": {
"baz": { "type": "string" }
}
}
}
}
var data = {
"foo": 0,
"additional1": 1, // will be removed; `additionalProperties` == false
"bar": {
"baz": "abc",
"additional2": 2 // will NOT be removed; `additionalProperties` != false
},
}
var validate = ajv.compile(schema);
console.log(validate(data)); // true
console.log(data); // { "foo": 0, "bar": { "baz": "abc", "additional2": 2 } }
If the `removeAdditional` option in the example above were `"all"` then both `additional1` and `additional2` properties would have been removed.
If the option were `"failing"` then property `additional1` would have been removed regardless of its value and property `additional2` would have been removed only if its value were failing the schema in the inner `additionalProperties` (so in the example above it would have stayed because it passes the schema, but any non-number would have been removed).
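The filtering behaviour of `removeAdditional: true` can be sketched without Ajv. This is a simplified, hypothetical illustration (not Ajv's generated code) that only handles the `properties` / `additionalProperties: false` case:

```javascript
// Simplified sketch of removeAdditional: true (not Ajv's generated code).
// Deletes properties not listed in `properties` when additionalProperties
// is false, then recurses into object-valued properties.
function removeAdditional(schema, data) {
  if (schema.additionalProperties === false) {
    for (const key of Object.keys(data)) {
      if (!(schema.properties && key in schema.properties)) delete data[key];
    }
  }
  for (const key of Object.keys(schema.properties || {})) {
    if (data[key] && typeof data[key] === 'object') {
      removeAdditional(schema.properties[key], data[key]);
    }
  }
  return data;
}

const schema = {
  additionalProperties: false,
  properties: { foo: {}, bar: { properties: { baz: {} } } }
};
const data = { foo: 0, additional1: 1, bar: { baz: 'abc', additional2: 2 } };
removeAdditional(schema, data);
console.log(JSON.stringify(data)); // {"foo":0,"bar":{"baz":"abc","additional2":2}}
```

As in the Ajv example above, `additional1` is removed at the top level (where `additionalProperties` is `false`), while `additional2` survives inside `bar` (where it is not `false`).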
Please note: If you use the `removeAdditional` option with the `additionalProperties` keyword inside `anyOf`/`oneOf` keywords your validation can fail with this schema, for example:
{
"type": "object",
"oneOf": [
{
"properties": {
"foo": { "type": "string" }
},
"required": [ "foo" ],
"additionalProperties": false
},
{
"properties": {
"bar": { "type": "integer" }
},
"required": [ "bar" ],
"additionalProperties": false
}
]
}
The intention of the schema above is to allow objects with either the string property "foo" or the integer property "bar", but not with both and not with any other properties.
With the option `removeAdditional: true` the validation will pass for the object `{ "foo": "abc" }` but will fail for the object `{ "bar": 1 }`. It happens because while the first subschema in `oneOf` is validated, the property `bar` is removed - it is an additional property according to the standard (it is not included in the `properties` keyword of the same schema).
While this behaviour is unexpected (issues #129, #134), it is correct. To have the expected behaviour (both objects are allowed and additional properties are removed) the schema has to be refactored in this way:
{
"type": "object",
"properties": {
"foo": { "type": "string" },
"bar": { "type": "integer" }
},
"additionalProperties": false,
"oneOf": [
{ "required": [ "foo" ] },
{ "required": [ "bar" ] }
]
}
The schema above is also more efficient - it will compile into a faster function.
Assigning defaults
With the option `useDefaults` Ajv will assign values from the `default` keyword in the schemas of `properties` and `items` (when it is the array of schemas) to the missing properties and items.
With the option value `"empty"` properties and items equal to `null` or `""` (empty string) will be considered missing and assigned defaults.
This option modifies original data.
Please note: the default value is inserted in the generated validation code as a literal, so the value inserted in the data will be the deep clone of the default in the schema.
Example 1 (`default` in `properties`):
var ajv = new Ajv({ useDefaults: true });
var schema = {
"type": "object",
"properties": {
"foo": { "type": "number" },
"bar": { "type": "string", "default": "baz" }
},
"required": [ "foo", "bar" ]
};
var data = { "foo": 1 };
var validate = ajv.compile(schema);
console.log(validate(data)); // true
console.log(data); // { "foo": 1, "bar": "baz" }
Example 2 (`default` in `items`):
var schema = {
"type": "array",
"items": [
{ "type": "number" },
{ "type": "string", "default": "foo" }
]
}
var data = [ 1 ];
var validate = ajv.compile(schema);
console.log(validate(data)); // true
console.log(data); // [ 1, "foo" ]
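The deep-clone behaviour noted above (defaults inserted as literals) can be sketched with a hypothetical helper (not Ajv code): each missing property gets its own copy of the default, so mutating one validated object never leaks into another:

```javascript
// Sketch of useDefaults with deep-cloned default values (not Ajv's code).
function applyDefaults(schema, data) {
  for (const [key, sub] of Object.entries(schema.properties || {})) {
    if (data[key] === undefined && 'default' in sub) {
      data[key] = JSON.parse(JSON.stringify(sub.default)); // deep clone
    }
  }
  return data;
}

const schema = { properties: { tags: { default: [] } } };
const a = applyDefaults(schema, {});
const b = applyDefaults(schema, {});
a.tags.push('x');
console.log(b.tags.length); // 0 - each object got its own clone
```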
`default` keywords in other cases are ignored:
- not in `properties` or `items` subschemas
- in schemas inside `anyOf`, `oneOf` and `not` (see #42)
- in `if` subschema of `switch` keyword
- in schemas generated by custom macro keywords
The `strictDefaults` option customizes Ajv's behavior for the defaults that Ajv ignores (`true` raises an error, and `"log"` outputs a warning).
Coercing data types
When you are validating user inputs all your data properties are usually strings. The option `coerceTypes` allows you to have your data types coerced to the types specified in your schema `type` keywords, both to pass the validation and to use the correctly typed data afterwards.
This option modifies original data.
Please note: if you pass a scalar value to the validating function its type will be coerced and it will pass the validation, but the value of the variable you pass won't be updated because scalars are passed by value.
Example 1:
var ajv = new Ajv({ coerceTypes: true });
var schema = {
"type": "object",
"properties": {
"foo": { "type": "number" },
"bar": { "type": "boolean" }
},
"required": [ "foo", "bar" ]
};
var data = { "foo": "1", "bar": "false" };
var validate = ajv.compile(schema);
console.log(validate(data)); // true
console.log(data); // { "foo": 1, "bar": false }
Example 2 (array coercions):
var ajv = new Ajv({ coerceTypes: 'array' });
var schema = {
"properties": {
"foo": { "type": "array", "items": { "type": "number" } },
"bar": { "type": "boolean" }
}
};
var data = { "foo": "1", "bar": ["false"] };
var validate = ajv.compile(schema);
console.log(validate(data)); // true
console.log(data); // { "foo": [1], "bar": false }
The coercion rules, as you can see from the example, are different from JavaScript both to validate user input as expected and to have the coercion reversible (to correctly validate cases where different types are defined in subschemas of "anyOf" and other compound keywords).
See Coercion rules for details.
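A small subset of the coercion rules can be sketched for the string-to-number and string-to-boolean cases (a simplified illustration only; Ajv's full rule table covers more type pairs - see Coercion rules):

```javascript
// Simplified subset of type coercion rules (string -> number/boolean only).
// Not the full Ajv rule table - see the "Coercion rules" docs.
function coerce(value, type) {
  if (typeof value !== 'string') return { ok: false };
  if (type === 'number') {
    const n = Number(value);
    // only a non-empty, fully numeric string coerces to a number
    return value !== '' && !Number.isNaN(n) ? { ok: true, value: n } : { ok: false };
  }
  if (type === 'boolean') {
    // only the exact strings "true"/"false" coerce to booleans
    if (value === 'true') return { ok: true, value: true };
    if (value === 'false') return { ok: true, value: false };
  }
  return { ok: false };
}

console.log(coerce('1', 'number'));      // { ok: true, value: 1 }
console.log(coerce('false', 'boolean')); // { ok: true, value: false }
console.log(coerce('abc', 'number'));    // { ok: false }
```

Note how these rules are stricter than JavaScript's own conversions (`Boolean('false')` is `true` in plain JavaScript), which is what makes the coercion reversible.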
API
new Ajv(Object options) -> Object
Create Ajv instance.
.compile(Object schema) -> Function<Object data>
Generate validating function and cache the compiled schema for future use.
The validating function returns a boolean value. This function has properties `errors` and `schema`. Errors encountered during the last validation are assigned to the `errors` property (it is assigned `null` if there were no errors). The `schema` property contains the reference to the original schema.
The schema passed to this method will be validated against the meta-schema unless the `validateSchema` option is false. If the schema is invalid, an error will be thrown. See options.
.compileAsync(Object schema [, Boolean meta] [, Function callback]) -> Promise
Asynchronous version of the `compile` method that loads missing remote schemas using the asynchronous function in `options.loadSchema`. This function returns a Promise that resolves to a validation function. An optional callback passed to `compileAsync` will be called with 2 parameters: error (or null) and validating function. The returned promise will reject (and the callback will be called with an error) when:
- a missing schema can't be loaded (`loadSchema` returns a Promise that rejects).
- a schema containing a missing reference is loaded, but the reference cannot be resolved.
- the schema (or some loaded/referenced schema) is invalid.
The function compiles schema and loads the first missing schema (or meta-schema) until all missing schemas are loaded.
You can asynchronously compile meta-schema by passing true
as the second parameter.
See example in Asynchronous compilation.
.validate(Object schema|String key|String ref, data) -> Boolean
Validate data using passed schema (it will be compiled and cached).
Instead of the schema you can use the key that was previously passed to `addSchema`, the schema id if it was present in the schema, or any previously resolved reference.
Validation errors will be available in the `errors` property of the Ajv instance (`null` if there were no errors).
Please note: every time this method is called the errors are overwritten so you need to copy them to another variable if you want to use them later.
If the schema is asynchronous (has the `$async` keyword on the top level) this method returns a Promise. See Asynchronous validation.
#### .addSchema(Array&lt;Object&gt;|Object schema [, String key]) -&gt; Ajv

Add schema(s) to the validator instance. This method does not compile schemas (but it still validates them). Because of that, dependencies can be added in any order and circular dependencies are supported. It also prevents unnecessary compilation of schemas that are containers for other schemas but are not used as a whole.

An array of schemas can be passed (the schemas should have ids); the second parameter will be ignored.

A key can be passed that can be used to reference the schema; it will be used as the schema id if there is no id inside the schema. If the key is not passed, the schema id will be used as the key.

Once the schema is added, it (and all the references inside it) can be referenced in other schemas and used to validate data.

Although `addSchema` does not compile schemas, explicit compilation is not required - the schema will be compiled when it is used for the first time.

By default the schema is validated against the meta-schema before it is added, and if the schema does not pass validation an exception is thrown. This behaviour is controlled by the `validateSchema` option.

**Please note**: Ajv uses the method chaining syntax for all methods with the prefix `add*` and `remove*`. This allows you to do nice things like the following:

```javascript
var validate = new Ajv().addSchema(schema).addFormat(name, regex).getSchema(uri);
```
#### .addMetaSchema(Array&lt;Object&gt;|Object schema [, String key]) -&gt; Ajv

Adds meta schema(s) that can be used to validate other schemas. This function should be used instead of `addSchema` because there may be instance options that would compile a meta schema incorrectly (at the moment it is the `removeAdditional` option).

There is no need to explicitly add the draft-07 meta schema (http://json-schema.org/draft-07/schema) - it is added by default, unless the option `meta` is set to `false`. You only need to use it if you have a changed meta-schema that you want to use to validate your schemas. See `validateSchema`.
#### .validateSchema(Object schema) -&gt; Boolean

Validates a schema. This method should be used to validate schemas rather than `validate` due to the inconsistency of the `uri` format in the JSON Schema standard.

By default this method is called automatically when the schema is added, so you rarely need to use it directly.

If the schema doesn't have a `$schema` property, it is validated against the draft-07 meta-schema (the option `meta` should not be false).

If the schema has a `$schema` property, then the schema with this id (which should have been previously added) is used to validate the passed schema.

Errors will be available at `ajv.errors`.
#### .getSchema(String key) -&gt; Function&lt;Object data&gt;

Retrieve a compiled schema previously added with `addSchema` by the key passed to `addSchema` or by its full reference (id). The returned validating function has a `schema` property with the reference to the original schema.
#### .removeSchema([Object schema|String key|String ref|RegExp pattern]) -&gt; Ajv

Remove an added/cached schema. Even if a schema is referenced by other schemas it can be safely removed, as dependent schemas have local references.

A schema can be removed using:

- the key passed to `addSchema`
- its full reference (id)
- a RegExp that should match the schema id or key (meta-schemas won't be removed)
- the actual schema object, which will be stable-stringified to remove the schema from cache

If no parameter is passed, all schemas except meta-schemas will be removed and the cache will be cleared.
#### .addFormat(String name, String|RegExp|Function|Object format) -&gt; Ajv

Add a custom format to validate strings or numbers. It can also be used to replace pre-defined formats for the Ajv instance.

Strings are converted to RegExp.

A function should return the validation result as `true` or `false`.

If an object is passed it should have the properties `validate`, `compare` and `async`:

- _validate_: a string, RegExp or a function as described above.
- _compare_: an optional comparison function that accepts two strings and compares them according to the format meaning. This function is used with the keywords `formatMaximum`/`formatMinimum` (defined in the ajv-keywords package). It should return `1` if the first value is bigger than the second value, `-1` if it is smaller and `0` if it is equal.
- _async_: an optional `true` value if `validate` is an asynchronous function; in this case it should return a promise that resolves with a value `true` or `false`.
- _type_: an optional type of data that the format applies to. It can be `"string"` (default) or `"number"` (see https://github.com/epoberezkin/ajv/issues/291#issuecomment-259923858). If the type of data is different, the validation will pass.

Custom formats can also be added via the `formats` option.
#### .addKeyword(String keyword, Object definition) -&gt; Ajv

Add a custom validation keyword to the Ajv instance.

The keyword should be different from all standard JSON Schema keywords and from previously defined keywords. There is no way to redefine keywords or to remove a keyword definition from the instance.

The keyword must start with a letter, `_` or `$`, and may continue with letters, numbers, `_`, `$`, or `-`. It is recommended to use an application-specific prefix for keywords to avoid current and future name collisions.

Example keywords:

- `"xyz-example"`: valid, and uses a prefix for the xyz project to avoid name collisions.
- `"example"`: valid, but not recommended as it could collide with future versions of JSON Schema etc.
- `"3-example"`: invalid, as numbers are not allowed to be the first character in a keyword.

The keyword definition is an object with the following properties:

- _type_: an optional string or array of strings with the data type(s) that the keyword applies to. If not present, the keyword will apply to all types.
- _validate_: validating function
- _compile_: compiling function
- _macro_: macro function
- _inline_: compiling function that returns code (as string)
- _schema_: an optional `false` value used with the "validate" keyword to not pass the schema
- _metaSchema_: an optional meta-schema for the keyword schema
- _dependencies_: an optional list of properties that must be present in the parent schema - it will be checked during schema compilation
- _modifying_: `true` MUST be passed if the keyword modifies data
- _statements_: `true` can be passed in case an inline keyword generates statements (as opposed to an expression)
- _valid_: pass `true`/`false` to pre-define the validation result; the result returned from the validation function will be ignored. This option cannot be used with macro keywords.
- _$data_: an optional `true` value to support the $data reference as the value of the custom keyword. The reference will be resolved at validation time. If the keyword has a meta-schema it will be extended to allow $data and will be used to validate the resolved value. Supporting the $data reference requires that the keyword has a validating function (as the only option or in addition to the compile, macro or inline function).
- _async_: an optional `true` value if the validation function is asynchronous (whether it is compiled or passed in the validate property); in this case it should return a promise that resolves with a value `true` or `false`. This option is ignored in case of "macro" and "inline" keywords.
- _errors_: an optional boolean or string `"full"` indicating whether the keyword returns errors. If this property is not set, Ajv will determine whether the errors were set in case of failed validation.

_compile_, _macro_ and _inline_ are mutually exclusive; only one should be used at a time. _validate_ can be used separately or in addition to them to support the $data reference.

**Please note**: If the keyword is validating a data type that is different from the type(s) in its definition, the validation function will not be called (and the expanded macro will not be used), so there is no need to check for the data type inside the validation function or inside the schema returned by a macro function (unless you want to enforce a specific type and for some reason do not want to use a separate `type` keyword for that). In the same way as standard keywords work, if the keyword does not apply to the data type being validated, the validation of this keyword will succeed.

See Defining custom keywords for more details.
#### .getKeyword(String keyword) -&gt; Object|Boolean

Returns the custom keyword definition, `true` for pre-defined keywords and `false` if the keyword is unknown.
#### .removeKeyword(String keyword) -&gt; Ajv

Removes a custom or pre-defined keyword so you can redefine it.

While this method can be used to extend pre-defined keywords, it can also be used to completely change their meaning - this may lead to unexpected results.

**Please note**: schemas compiled before the keyword is removed will continue to work without changes. To recompile schemas use the `removeSchema` method and compile them again.
#### .errorsText([Array&lt;Object&gt; errors [, Object options]]) -&gt; String

Returns the text with all errors in a String.

Options can have the properties `separator` (string used to separate errors, ", " by default) and `dataVar` (the variable name that dataPaths are prefixed with, "data" by default).
## Options

Defaults:

```javascript
{
  // validation and reporting options:
  $data:            false,
  allErrors:        false,
  verbose:          false,
  $comment:         false, // NEW in Ajv version 6.0
  jsonPointers:     false,
  uniqueItems:      true,
  unicode:          true,
  nullable:         false,
  format:           'fast',
  formats:          {},
  unknownFormats:   true,
  schemas:          {},
  logger:           undefined,
  // referenced schema options:
  schemaId:         '$id',
  missingRefs:      true,
  extendRefs:       'ignore', // recommended 'fail'
  loadSchema:       undefined, // function(uri: string): Promise {}
  // options to modify validated data:
  removeAdditional: false,
  useDefaults:      false,
  coerceTypes:      false,
  // strict mode options
  strictDefaults:   false,
  strictKeywords:   false,
  // asynchronous validation options:
  transpile:        undefined, // requires ajv-async package
  // advanced options:
  meta:             true,
  validateSchema:   true,
  addUsedSchema:    true,
  inlineRefs:       true,
  passContext:      false,
  loopRequired:     Infinity,
  ownProperties:    false,
  multipleOfPrecision: false,
  errorDataPath:    'object', // deprecated
  messages:         true,
  sourceCode:       false,
  processCode:      undefined, // function (str: string): string {}
  cache:            new Cache,
  serialize:        undefined
}
```
### Validation and reporting options

- _$data_: support $data references. The draft-07 meta-schema that is added by default will be extended to allow them. If you want to use another meta-schema you need to use the $dataMetaSchema method to add support for the $data reference. See API.
- _allErrors_: check all rules, collecting all errors. The default is to return after the first error.
- _verbose_: include the reference to the part of the schema (`schema` and `parentSchema`) and the validated data in errors (false by default).
- _$comment_ (NEW in Ajv version 6.0): log or pass the value of the `$comment` keyword to a function. Option values:
  - `false` (default): ignore the $comment keyword.
  - `true`: log the keyword value to console.
  - function: pass the keyword value, its schema path and the root schema to the specified function.
- _jsonPointers_: set the `dataPath` property of errors using JSON Pointers instead of JavaScript property access notation.
- _uniqueItems_: validate the `uniqueItems` keyword (true by default).
- _unicode_: calculate the correct length of strings with unicode pairs (true by default). Pass `false` to use `.length` of strings, which is faster but gives "incorrect" lengths of strings with unicode pairs - each unicode pair is counted as two characters.
- _nullable_: support the keyword "nullable" from the Open API 3 specification.
- _format_: formats validation mode. Option values:
  - `"fast"` (default) - simplified and fast validation (see Formats for details of which formats are available and affected by this option).
  - `"full"` - more restrictive and slow validation. E.g., 25:00:00 and 2015/14/33 will be invalid time and date in 'full' mode but valid in 'fast' mode.
  - `false` - ignore all format keywords.
- _formats_: an object with custom formats. Keys and values will be passed to the `addFormat` method.
- _unknownFormats_: handling of unknown formats. Option values:
  - `true` (default) - if an unknown format is encountered, an exception is thrown during schema compilation. If the `format` keyword value is a $data reference and it is unknown, the validation will fail.
  - `[String]` - an array of unknown format names that will be ignored. This option can be used to allow usage of third party schemas with format(s) for which you don't have definitions, but still fail if another unknown format is used. If the `format` keyword value is a $data reference and it is not in this array, the validation will fail.
  - `"ignore"` - log a warning during schema compilation and always pass validation (the default behaviour in versions before 5.0.0). This option is not recommended, as it allows you to mistype a format name and it won't be validated, without any error message. This behaviour is required by the JSON Schema specification.
- _schemas_: an array or object of schemas that will be added to the instance. If you pass an array, the schemas must have IDs in them. When an object is passed, the method `addSchema(value, key)` will be called for each schema in this object.
- _logger_: sets the logging method. The default is the global `console` object, which should have the methods `log`, `warn` and `error`. Option values:
  - custom logger - it should have the methods `log`, `warn` and `error`. If any of these methods is missing, an exception will be thrown.
  - `false` - logging is disabled.
### Referenced schema options

- _schemaId_: this option defines which keywords are used as schema URI. Option values:
  - `"$id"` (default) - use only the `$id` keyword as the schema URI (as specified in JSON Schema draft-06/07); ignore the `id` keyword (if it is present a warning will be logged).
  - `"id"` - use only the `id` keyword as the schema URI (as specified in JSON Schema draft-04); ignore the `$id` keyword (if it is present a warning will be logged).
  - `"auto"` - use both `$id` and `id` keywords as the schema URI. If both are present (in the same schema object) and different, an exception will be thrown during schema compilation.
- _missingRefs_: handling of missing referenced schemas. Option values:
  - `true` (default) - if the reference cannot be resolved during compilation, an exception is thrown. The thrown error has properties `missingRef` (with hash fragment) and `missingSchema` (without it). Both properties are resolved relative to the current base id (usually the schema id, unless it was substituted).
  - `"ignore"` - log an error during compilation and always pass validation.
  - `"fail"` - log an error and successfully compile the schema but fail validation if this rule is checked.
- _extendRefs_: validation of other keywords when `$ref` is present in the schema. Option values:
  - `"ignore"` (default) - when `$ref` is used, other keywords are ignored (as per the JSON Reference standard). A warning will be logged during schema compilation.
  - `"fail"` (recommended) - if other validation keywords are used together with `$ref`, an exception will be thrown when the schema is compiled. This option is recommended to make sure the schema has no keywords that are ignored, which can be confusing.
  - `true` - validate all keywords in schemas with `$ref` (the default behaviour in versions before 5.0.0).
- _loadSchema_: an asynchronous function that will be used to load remote schemas when the `compileAsync` method is used and some reference is missing (the option `missingRefs` should NOT be 'fail' or 'ignore'). This function should accept a remote schema uri as a parameter and return a Promise that resolves to a schema. See example in Asynchronous compilation.
### Options to modify validated data

- _removeAdditional_: remove additional properties - see example in Filtering data. This option is not used if the schema is added with the `addMetaSchema` method. Option values:
  - `false` (default) - do not remove additional properties.
  - `"all"` - all additional properties are removed, regardless of the `additionalProperties` keyword in the schema (and no validation is made for them).
  - `true` - only additional properties with the `additionalProperties` keyword equal to `false` are removed.
  - `"failing"` - additional properties that fail schema validation will be removed (where the `additionalProperties` keyword is `false` or a schema).
- _useDefaults_: replace missing or undefined properties and items with the values from the corresponding `default` keywords. The default behaviour is to ignore `default` keywords. This option is not used if the schema is added with the `addMetaSchema` method. See examples in Assigning defaults. Option values:
  - `false` (default) - do not use defaults.
  - `true` - insert defaults by value (an object literal is used).
  - `"empty"` - in addition to missing or undefined, use defaults for properties and items that are equal to `null` or `""` (an empty string).
  - `"shared"` (deprecated) - insert defaults by reference. If the default is an object, it will be shared by all instances of validated data. If you modify the inserted default in the validated data, it will be modified in the schema as well.
- _coerceTypes_: change the data type of data to match the `type` keyword. See the example in Coercing data types and coercion rules. Option values:
  - `false` (default) - no type coercion.
  - `true` - coerce scalar data types.
  - `"array"` - in addition to coercions between scalar types, coerce scalar data to an array with one element and vice versa (as required by the schema).
### Strict mode options

- _strictDefaults_: report ignored `default` keywords in schemas. Option values:
  - `false` (default) - ignored defaults are not reported.
  - `true` - if an ignored default is present, throw an error.
  - `"log"` - if an ignored default is present, log a warning.
- _strictKeywords_: report unknown keywords in schemas. Option values:
  - `false` (default) - unknown keywords are not reported.
  - `true` - if an unknown keyword is present, throw an error.
  - `"log"` - if an unknown keyword is present, log a warning.
### Asynchronous validation options

- _transpile_: requires the ajv-async package. It determines whether Ajv transpiles the compiled asynchronous validation function. Option values:
  - `undefined` (default) - transpile with nodent if async functions are not supported.
  - `true` - always transpile with nodent.
  - `false` - do not transpile; if async functions are not supported an exception will be thrown.
### Advanced options

- _meta_: add the meta-schema so it can be used by other schemas (true by default). If an object is passed, it will be used as the default meta-schema for schemas that have no `$schema` keyword. This default meta-schema MUST have a `$schema` keyword.
- _validateSchema_: validate added/compiled schemas against the meta-schema (true by default). The `$schema` property in the schema can be http://json-schema.org/draft-07/schema or absent (the draft-07 meta-schema will be used) or can be a reference to a schema previously added with the `addMetaSchema` method. Option values:
  - `true` (default) - if the validation fails, throw an exception.
  - `"log"` - if the validation fails, log an error.
  - `false` - skip schema validation.
- _addUsedSchema_: by default the methods `compile` and `validate` add schemas to the instance if they have an `$id` (or `id`) property that doesn't start with "#". If `$id` is present and it is not unique, an exception will be thrown. Set this option to `false` to skip adding schemas to the instance and the `$id` uniqueness check when these methods are used. This option does not affect the `addSchema` method.
- _inlineRefs_: affects the compilation of referenced schemas. Option values:
  - `true` (default) - referenced schemas that don't have refs in them are inlined, regardless of their size - this substantially improves performance at the cost of a bigger size of compiled schema functions.
  - `false` - do not inline referenced schemas (they will be compiled as separate functions).
  - integer number - limit the maximum number of keywords of a schema that will be inlined.
- _passContext_: pass the validation context to custom keyword functions. If this option is `true` and you pass some context to the compiled validation function with `validate.call(context, data)`, the `context` will be available as `this` in your custom keywords. By default `this` is the Ajv instance.
- _loopRequired_: by default the `required` keyword is compiled into a single expression (or a sequence of statements in `allErrors` mode). In case of a very large number of properties in this keyword it may result in a very big validation function. Pass an integer to set the number of properties above which the `required` keyword will be validated in a loop - a smaller validation function size but also worse performance.
- _ownProperties_: by default Ajv iterates over all enumerable object properties; when this option is `true` only own enumerable object properties (i.e. found directly on the object rather than on its prototype) are iterated. Contributed by @mbroadst.
- _multipleOfPrecision_: by default the `multipleOf` keyword is validated by comparing the result of division with parseInt() of that result. It works for dividers that are bigger than 1. For small dividers such as 0.01 the result of the division is usually not an integer (even when it should be an integer, see issue #84). If you need to use fractional dividers set this option to some positive integer N to have `multipleOf` validated using the formula `Math.abs(Math.round(division) - division) < 1e-N` (it is slower but allows for float arithmetic deviations).
- _errorDataPath_ (deprecated): set `dataPath` to point to 'object' (default) or to 'property' when validating the keywords `required`, `additionalProperties` and `dependencies`.
- _messages_: include human-readable messages in errors. `true` by default. `false` can be passed when custom messages are used (e.g. with ajv-i18n).
- _sourceCode_: add a `sourceCode` property to the validating function (for debugging; this code can be different from the result of a toString call).
- _processCode_: an optional function to process generated code before it is passed to the Function constructor. It can be used to either beautify (the validating function is generated without line-breaks) or to transpile code. Starting from version 5.0.0 this option replaced the options:
  - `beautify` that formatted the generated function using js-beautify. If you want to beautify the generated code pass `require('js-beautify').js_beautify`.
  - `transpile` that transpiled the asynchronous validation function. You can still use the `transpile` option with the ajv-async package. See Asynchronous validation for more information.
- _cache_: an optional instance of a cache to store compiled schemas, using the stable-stringified schema as a key. For example, the set-associative cache sacjs can be used. If not passed then a simple hash is used, which is good enough for the common use case (a limited number of statically defined schemas). The cache should have the methods `put(key, value)`, `get(key)`, `del(key)` and `clear()`.
- _serialize_: an optional function to serialize a schema to a cache key. Pass `false` to use the schema itself as a key (e.g., if a WeakMap is used as a cache). By default fast-json-stable-stringify is used.
## Validation errors

In case of validation failure, Ajv assigns the array of errors to the `errors` property of the validation function (or to the `errors` property of the Ajv instance when the `validate` or `validateSchema` methods were called). In case of asynchronous validation, the returned promise is rejected with an exception `Ajv.ValidationError` that has an `errors` property.
### Error objects

Each error is an object with the following properties:

- _keyword_: validation keyword.
- _dataPath_: the path to the part of the data that was validated. By default `dataPath` uses JavaScript property access notation (e.g., `".prop[1].subProp"`). When the option `jsonPointers` is true (see Options) `dataPath` will be set using the JSON Pointer standard (e.g., `"/prop/1/subProp"`).
- _schemaPath_: the path (JSON Pointer as a URI fragment) to the schema of the keyword that failed validation.
- _params_: the object with additional information about the error that can be used to create custom error messages (e.g., using the ajv-i18n package). See below for parameters set by all keywords.
- _message_: the standard error message (can be excluded with the option `messages` set to false).
- _schema_: the schema of the keyword (added with the `verbose` option).
- _parentSchema_: the schema containing the keyword (added with the `verbose` option).
- _data_: the data validated by the keyword (added with the `verbose` option).

**Please note**: `propertyNames` keyword schema validation errors have an additional property `propertyName`; `dataPath` points to the object. After schema validation for each property name, if it is invalid an additional error is added with the property `keyword` equal to `"propertyNames"`.
### Error parameters

Properties of the `params` object in errors depend on the keyword that failed validation.

- `maxItems`, `minItems`, `maxLength`, `minLength`, `maxProperties`, `minProperties` - property `limit` (number, the schema of the keyword).
- `additionalItems` - property `limit` (the maximum number of allowed items in case when the `items` keyword is an array of schemas and `additionalItems` is false).
- `additionalProperties` - property `additionalProperty` (the property not used in the `properties` and `patternProperties` keywords).
- `dependencies` - properties:
  - `property` (dependent property),
  - `missingProperty` (required missing dependency - only the first one is reported currently),
  - `deps` (required dependencies, comma-separated list as a string),
  - `depsCount` (the number of required dependencies).
- `format` - property `format` (the schema of the keyword).
- `maximum`, `minimum` - properties:
  - `limit` (number, the schema of the keyword),
  - `exclusive` (boolean, the schema of `exclusiveMaximum` or `exclusiveMinimum`),
  - `comparison` (string, the comparison operation to compare the data to the limit, with the data on the left and the limit on the right; can be "&lt;", "&lt;=", "&gt;", "&gt;=").
- `multipleOf` - property `multipleOf` (the schema of the keyword).
- `pattern` - property `pattern` (the schema of the keyword).
- `required` - property `missingProperty` (the required property that is missing).
- `propertyNames` - property `propertyName` (an invalid property name).
- `patternRequired` (in ajv-keywords) - property `missingPattern` (the required pattern that did not match any property).
- `type` - property `type` (required type(s), a string, can be a comma-separated list).
- `uniqueItems` - properties `i` and `j` (indices of duplicate items).
- `const` - property `allowedValue` pointing to the value (the schema of the keyword).
- `enum` - property `allowedValues` pointing to the array of values (the schema of the keyword).
- `$ref` - property `ref` with the referenced schema URI.
- `oneOf` - property `passingSchemas` (an array of indices of passing schemas, null if no schema passes).
- custom keywords (in case the keyword definition doesn't create errors) - property `keyword` (the keyword name).
## Plugins

Ajv can be extended with plugins that add custom keywords, formats or functions to process generated code. When such a plugin is published as an npm package, it is recommended that it follows these conventions:

- it exports a function;
- this function accepts an ajv instance as the first parameter and returns the same instance to allow chaining;
- this function can accept an optional configuration as the second parameter.

If you have published a useful plugin please submit a PR to add it to the next section.
## Related packages
- ajv-async - plugin to configure async validation mode
- ajv-bsontype - plugin to validate mongodb's bsonType formats
- ajv-cli - command line interface
- ajv-errors - plugin for custom error messages
- ajv-i18n - internationalised error messages
- ajv-istanbul - plugin to instrument generated validation code to measure test coverage of your schemas
- ajv-keywords - plugin with custom validation keywords (select, typeof, etc.)
- ajv-merge-patch - plugin with keywords $merge and $patch
- ajv-pack - produces a compact module exporting validation functions
## Some packages using Ajv
- webpack - a module bundler. Its main purpose is to bundle JavaScript files for usage in a browser
- jsonscript-js - the interpreter for JSONScript - scripted processing of existing endpoints and services
- osprey-method-handler - Express middleware for validating requests and responses based on a RAML method object, used in osprey - validating API proxy generated from a RAML definition
- har-validator - HTTP Archive (HAR) validator
- jsoneditor - a web-based tool to view, edit, format, and validate JSON http://jsoneditoronline.org
- JSON Schema Lint - a web tool to validate JSON/YAML document against a single JSON Schema http://jsonschemalint.com
- objection - SQL-friendly ORM for Node.js
- table - formats data into a string table
- ripple-lib - a JavaScript API for interacting with Ripple in Node.js and the browser
- restbase - distributed storage with REST API & dispatcher for backend services built to provide a low-latency & high-throughput API for Wikipedia / Wikimedia content
- hippie-swagger - Hippie wrapper that provides end to end API testing with swagger validation
- react-form-controlled - React controlled form components with validation
- rabbitmq-schema - a schema definition module for RabbitMQ graphs and messages
- @query/schema - stream filtering with a URI-safe query syntax parsing to JSON Schema
- chai-ajv-json-schema - chai plugin to use JSON Schema with expect in mocha tests
- grunt-jsonschema-ajv - Grunt plugin for validating files against JSON Schema
- extract-text-webpack-plugin - extract text from bundle into a file
- electron-builder - a solution to package and build a ready for distribution Electron app
- addons-linter - Mozilla Add-ons Linter
- gh-pages-generator - multi-page site generator converting markdown files to GitHub pages
- ESLint - the pluggable linting utility for JavaScript and JSX
## Tests

```
npm install
git submodule update --init
npm test
```
## Contributing

All validation functions are generated using doT templates in the dot folder. Templates are precompiled, so doT is not a run-time dependency.

- `npm run build` - compiles templates to the dotjs folder.
- `npm run watch` - automatically compiles templates when files in the dot folder change.

Please see Contributing guidelines.
## Changes history

See https://github.com/epoberezkin/ajv/releases

**Please note**: Changes in version 6.0.0.
## Open-source software support

Ajv is a part of the Tidelift subscription - it provides centralised support to open-source software users, in addition to the support provided by software maintainers.