How It Works

Arkiver is built by devs, for devs. Creating and accessing a custom web3 API consists of three steps:

Defining Entities

Entities can be treated like tables in a relational database. These definitions describe the format and structure in which you want the data presented. Entities can be created, read, queried, and updated within the indexing scripts.
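As a sketch of the idea, an entity definition can be thought of as a typed row schema. The interface and field names below are illustrative assumptions, not Arkiver's actual API:

```typescript
// Hypothetical entity definition: analogous to a table schema in a
// relational database. Names and fields are illustrative only.
interface Transfer {
  id: string;     // primary key
  from: string;   // sender address
  to: string;     // recipient address
  amount: string; // token amount kept as a string to avoid precision loss
  block: number;  // block number the event was emitted in
}

// Creating an entity instance inside an indexing script might look like:
const transfer: Transfer = {
  id: "0xabc-0",
  from: "0x1111111111111111111111111111111111111111",
  to: "0x2222222222222222222222222222222222222222",
  amount: "1000000000000000000",
  block: 17_000_000,
};
```

Each instance of such an entity would then be stored, queried, and updated by the indexing scripts described below.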

Custom Indexing Scripts

The first step in writing custom scripts is specifying the data sources. Arkiver currently supports two data sources on any EVM chain: Events and Blocks.

Event Data Source: Triggered each time a specific event is emitted

Block Data Source: Triggered every ‘n’ blocks
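The two trigger rules above can be sketched as simple predicates. The type and function names here are assumptions for illustration, not Arkiver's actual types:

```typescript
// Hypothetical shapes for the two data-source kinds; illustrative only.
type EventDataSource = { kind: "event"; contract: string; eventName: string };
type BlockDataSource = { kind: "block"; interval: number }; // every n blocks

// An event source triggers when a log's event name matches the one specified.
function eventTriggers(src: EventDataSource, logEventName: string): boolean {
  return src.eventName === logEventName;
}

// A block source triggers on every nth block.
function blockTriggers(src: BlockDataSource, blockNumber: number): boolean {
  return blockNumber % src.interval === 0;
}
```

So a block source with `interval: 100` would fire at blocks 100, 200, 300, and so on, while an event source fires only for matching logs.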

Each time a data source is triggered, a handler function in your custom script is called. This is where the magic happens. Indexing scripts are written in TypeScript with no restrictions: they have access to all NPM modules and can even query external endpoints while executing. The key purpose of a script is to query the desired raw data from the blockchain and apply any required manipulation and enrichment before storing it in entities.
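A minimal sketch of such a handler, with an enrichment step before storage. The event shape, entity shape, and in-memory store are all hypothetical stand-ins for whatever the real runtime provides:

```typescript
// Hypothetical raw event payload; illustrative only.
type RawTransferEvent = {
  txHash: string;
  from: string;
  to: string;
  value: bigint; // raw amount in wei
};

// Hypothetical entity with an enriched, human-readable amount field.
type TransferEntity = {
  id: string;
  from: string;
  to: string;
  amountEth: string; // enriched: converted from wei to whole ether
};

// In-memory stand-in for the entity store the real scripts would write to.
const store: TransferEntity[] = [];

function handleTransfer(event: RawTransferEvent): TransferEntity {
  // Enrichment step: convert wei (1e18 per ether) before storing,
  // so consumers of the API never deal with raw units.
  const entity: TransferEntity = {
    id: event.txHash,
    from: event.from,
    to: event.to,
    amountEth: (event.value / 10n ** 18n).toString(),
  };
  store.push(entity);
  return entity;
}
```

The same pattern applies to block handlers: receive the trigger payload, transform or enrich it, and write the result into entities.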

The indexer can be run locally in a test mode with a single command from the Arkiver CLI, which spins up a local database container and a GraphQL playground container. This enables thorough testing of both the indexing scripts and the GraphQL API prior to deployment or updates.

Once the scripts are complete, the Arkiver job can be pushed to production for indexing on optimised indexing servers.


Serverless GraphQL API

When the Arkiver job begins indexing, a serverless GraphQL endpoint is made available to the user. This endpoint provides public access to the data through a simple API, whose features include:

  • Pagination

  • Filters & Queries

  • Entity Relations

  • Querying Entity Relations

among others, with more on the roadmap.
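As an illustration of how a client might combine pagination and filtering against the generated endpoint, the snippet below builds a query string. The entity and argument names (`transfers`, `first`, `skip`, `where`) are assumptions in the style of common GraphQL indexer APIs, not Arkiver's actual schema:

```typescript
// Build a GraphQL query exercising pagination (first/skip) and a filter.
// All field and argument names are hypothetical.
function buildTransfersQuery(first: number, skip: number, from: string): string {
  return `
    query {
      transfers(first: ${first}, skip: ${skip}, where: { from: "${from}" }) {
        id
        from
        to
        amountEth
      }
    }`;
}

const query = buildTransfersQuery(
  10,
  0,
  "0x1111111111111111111111111111111111111111",
);
```

The resulting string would be sent as the body of a POST request to the serverless endpoint in the usual GraphQL fashion.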
