Monitoring Operations

With Hive, you can not only persist schemas and documents but also measure and collect data for all your GraphQL operations and generate analytics from them. You can do this by enabling the usage option available in the Hive Client.

Requirements

Hive Client comes with a generic client and plugins for Envelop and Apollo Server.

npm install @graphql-hive/client
yarn add @graphql-hive/client

Using Hive Client

Usage options

The full configuration is available under the HiveUsagePluginOptions interface. You can explore it through your IDE or on unpkg.
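
Instead of usage: true, you can pass an options object. Here is a rough sketch that reuses the exclude and clientInfo options documented later on this page, and assumes (as in the Apollo example below) that your context exposes the request headers; verify the exact shape against HiveUsagePluginOptions:

import { createHive } from '@graphql-hive/client'

const hive = createHive({
  enabled: true,
  token: 'YOUR-TOKEN',
  usage: {
    // Skip operations you don't want reported (see "Excluding operations" below)
    exclude: ['IntrospectionQuery'],
    // Attribute operations to a consumer (see "Client info" below)
    clientInfo(context) {
      const name = context.headers['x-graphql-client-name']
      const version = context.headers['x-graphql-client-version']
      return name && version ? { name, version } : null
    }
  }
})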

Apollo Server

Thanks to Apollo Server's plugin system, it's just a matter of adding the hiveApollo plugin to your ApolloServer instance:

import { ApolloServer } from 'apollo-server'
import { hiveApollo } from '@graphql-hive/client'

const server = new ApolloServer({
  typeDefs,
  resolvers,
  plugins: [
    hiveApollo({
      enabled: true,
      debug: true, // or false
      token: 'YOUR-TOKEN',
      usage: true // or { ... usage options }
    })
  ]
})

Envelop

If you're not familiar with Envelop: in short, it's a lightweight JavaScript library for wrapping the GraphQL execution layer and flow, allowing developers to develop, share, and collaborate on GraphQL-related plugins while filling in the missing pieces of GraphQL implementations.

Here's more on that topic.

import { envelop } from '@envelop/core'
import { useHive } from '@graphql-hive/client'

const envelopProxy = envelop({
  plugins: [
    useHive({
      enabled: true,
      debug: true, // or false
      token: 'YOUR-TOKEN',
      usage: true // or { ... usage options }
    })
  ]
})
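
The envelopProxy above only wires up the plugins; your HTTP layer calls it per request. Here is a rough, framework-agnostic sketch of how it is typically consumed. The schema is a placeholder for illustration (in practice you'd provide your own, for example via the useSchema plugin), and query/variables stand in for the incoming request data:

import { buildSchema } from 'graphql'
import { envelop, useSchema } from '@envelop/core'
import { useHive } from '@graphql-hive/client'

// Placeholder schema for illustration only
const mySchema = buildSchema('type Query { hello: String }')

const envelopProxy = envelop({
  plugins: [
    useSchema(mySchema),
    useHive({ enabled: true, token: 'YOUR-TOKEN', usage: true })
  ]
})

async function handleGraphQLRequest(req, query, variables) {
  // Per request: get the wrapped functions (Hive hooks into execute to collect usage)
  const { parse, validate, contextFactory, execute, schema } = envelopProxy({ req })
  const document = parse(query)
  const errors = validate(schema, document)

  if (errors.length > 0) {
    return { errors }
  }

  return execute({
    schema,
    document,
    contextValue: await contextFactory(),
    variableValues: variables
  })
}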

Apollo Router

GraphQL Hive ships a custom build of Apollo Router, because extending Apollo Router requires writing a native Rust plugin.

Download Apollo Router for Linux (x86_64), macOS (x86_64) or Windows (x86_64):

curl -fsSL https://graphql-hive.com/apollo-router-download.sh | bash

Write a router.yaml file and enable the hive.usage plugin:

plugins:
  hive.usage:
    {}
    # Sample rate to determine sampling.
    # 0.0 = 0% chance of being sent
    # 1.0 = 100% chance of being sent.
    # Default: 1.0
    # sample_rate: "0.5",
    #
    # A list of operations (by name) to be ignored by Hive.
    # exclude: ["IntrospectionQuery", "MeQuery"],
    #
    # Uses graphql-client-name by default
    # client_name_header: "x-client-name",
    # Uses graphql-client-version by default
    # client_version_header: "x-client-version",

You can also enable other features, such as sampling, via the commented options above.

Now, start the router:

HIVE_TOKEN="your-token" ./router --config router.yaml

If everything worked correctly, you should see the first operations in GraphQL Hive in 1-2 minutes.

Other servers

The createHive function creates a generic Hive Client to be used with any GraphQL flow.

The collectUsage method accepts the same arguments as the execute function of graphql-js and returns a function that expects the execution result object.

  • collectUsage(args) - should be called when a GraphQL execution starts.
  • finish(result) (function returned by collectUsage(args)) - has to be invoked right after execution finishes.

import express from 'express'
import { execute } from 'graphql'
import { graphqlHTTP } from 'express-graphql'
import { createHive } from '@graphql-hive/client'

const app = express()

const hive = createHive({
  enabled: true,
  debug: true, // or false
  token: 'YOUR-TOKEN',
  usage: true // or { ... usage options }
})

app.post(
  '/graphql',
  graphqlHTTP({
    schema: yourSchema,
    async customExecuteFn(args) {
      const finish = hive.collectUsage(args)
      const result = await execute(args)
      finish(result)
      return result
    }
  })
)
💡

We wrapped the execute function with Hive and used it as a custom execute function in express-graphql. This is the easiest and most direct way to hook into GraphQL execution.
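
If you're not using express-graphql, the same two-step contract applies anywhere you call execute yourself. A bare-bones sketch (mySchema is a placeholder for your executable schema):

import { parse, execute } from 'graphql'
import { createHive } from '@graphql-hive/client'

const hive = createHive({
  enabled: true,
  token: 'YOUR-TOKEN',
  usage: true
})

async function executeWithHive(mySchema, query, variableValues, contextValue) {
  const args = {
    schema: mySchema,
    document: parse(query),
    variableValues,
    contextValue
  }

  const finish = hive.collectUsage(args) // call when execution starts
  const result = await execute(args)
  finish(result) // call right after execution finishes

  return result
}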

Client info

Hive lets you attribute collected operations to an actual GraphQL client (consumer).

import { ApolloServer } from 'apollo-server'
import { hiveApollo } from '@graphql-hive/client'

const server = new ApolloServer({
  typeDefs,
  resolvers,
  plugins: [
    hiveApollo({
      enabled: true,
      debug: true, // or false
      token: 'YOUR-TOKEN',
      usage: {
        clientInfo(context) {
          const name = context.headers['x-graphql-client-name']
          const version = context.headers['x-graphql-client-version']

          if (name && version) {
            return { name, version }
          }

          return null
        }
      }
    })
  ],
  context({ req }) {
    return {
      headers: req.headers
    }
  }
})
💡

The clientInfo function receives the context object as its first and only argument and is expected to return either null, undefined, or a { name, version } object.

Excluding operations

In case you want to skip some operations, here's how to do it:

import { ApolloServer } from 'apollo-server'
import { hiveApollo } from '@graphql-hive/client'

const server = new ApolloServer({
  typeDefs,
  resolvers,
  plugins: [
    hiveApollo({
      enabled: true,
      debug: true, // or false
      token: 'YOUR-TOKEN',
      usage: {
        exclude: ['unwantedOperationName', 'anotherOperationName']
      }
    })
  ]
})

Using the Dashboard

Once this is set up and usage data starts coming in, you will find all the collected analytics in the Operations dashboard, shown below, which you can then drill into.

Monitoring view

Other languages

GraphQL Ruby

Our graphql-hive gem allows GraphQL Ruby users to use Hive to both monitor operations and publish schemas.

1. Install the graphql-hive gem

gem install graphql-hive

2. Configure GraphQL::Hive in your Schema

Add GraphQL::Hive at the end of your schema definition:

class Schema < GraphQL::Schema
  query QueryType

  use(
    GraphQL::Hive,
    {
      token: '<YOUR_TOKEN>',
      reporting: {
        author: ENV['GITHUB_USER'],
        commit: ENV['GITHUB_COMMIT']
      }
    }
  )
end

The reporting configuration is required to push your GraphQL schema to the Hive registry, which improves breaking-change detection and enables more upcoming features. If you only want operations monitoring, replace the reporting option with report_schema: false.

More information is available on the graphql-hive repository: https://github.com/charlypoly/graphql-ruby-hive

Laravel Lighthouse (PHP)

The Lighthouse GraphQL Hive third-party integration can be used to measure and collect data for all your GraphQL operations. At the moment, it does not support schema publishing.

Get started on their repository.