
Resolve merge conflict

This commit is contained in:
Akshay 2022-02-21 12:48:45 +01:00
commit 3284b521ed
16 changed files with 1481 additions and 868 deletions

View File

@@ -5,6 +5,11 @@ slug: /concepts/compute-to-data/
section: concepts
---
## Quick Start
- [Compute-to-Data example](https://github.com/oceanprotocol/ocean.py/blob/main/READMEs/c2d-flow.md)
## Motivation
The most basic scenario for a Publisher is to provide access to the datasets they own or manage. However, a Publisher may offer a service to execute some computation on top of their data. This has some benefits:
@@ -15,105 +20,9 @@ The most basic scenario for a Publisher is to provide access to the datasets the
[This page](https://oceanprotocol.com/technology/compute-to-data) elaborates on the benefits.
## Datasets & Algorithms
With Compute-to-Data, datasets never leave the premises of the data holder; only algorithms can be permitted to run on them, under certain conditions, within an isolated and secure environment. Algorithms are an asset type just like datasets. They too can have a pool or a fixed price to determine their price whenever they are used.
Algorithms can be public or private, depending on the value of `"attributes.main.type"`:
- `"access"` - public. The algorithm can be downloaded, given the appropriate datatoken.
- `"compute"` - private. The algorithm is only available for use as part of a compute job, without any way to download it. The algorithm must be published on the same Ocean Provider as the dataset it's targeted to run on.
For each dataset, publishers can choose to allow various permission levels for algorithms to run:
- allow selected algorithms, referenced by their DID
- allow all algorithms published within a network or marketplace
- allow raw algorithms, for advanced use cases that bypass algorithms as an asset type, but the option most prone to data escape
All implementations should set permissions to private by default: upon publishing a compute dataset, no algorithms should be allowed to run on it. This prevents data escape via a rogue algorithm written to extract all data from a dataset.
## Architecture Overview
Here's the sequence diagram for starting a new compute job.
![Sequence Diagram for computing services](images/Starting New Compute Job.png)
The Consumer calls the Provider with `start(did, algorithm, additionalDIDs)`. It returns job id `XXXX`. The Provider oversees the rest of the work. At any point, the Consumer can query the Provider for the job status via `getJobDetails(XXXX)`.
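For a concrete feel of this flow, here is a hypothetical sketch of those two calls as plain HTTP requests against a Provider. The endpoint path, parameter names, and response shape are illustrative assumptions, not the exact Provider API:
```bash
# Start a compute job (illustrative endpoint and parameters)
curl -X POST "https://provider.example.com/api/v1/services/compute" \
  -H "Content-Type: application/json" \
  -d '{"documentId": "did:op:1234", "algorithmDid": "did:op:5678", "consumerAddress": "0xA32C...A071"}'
# => {"jobId": "XXXX", "status": "started"}

# Poll the job status, i.e. getJobDetails(XXXX) (illustrative)
curl "https://provider.example.com/api/v1/services/compute?documentId=did:op:1234&jobId=XXXX"
```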
Here's how the Provider works. First, it ensures that the Consumer has sent the appropriate datatokens to get access. Then, it asks the Operator-Service (a microservice) to start the job, which passes the request on to the Operator-Engine (the actual compute system). The Operator-Engine runs Kubernetes compute jobs as needed, and reports back to the Operator-Service when the job has finished.
Here are the actors/components:
- Consumers - The end users who need the computing services offered by the same Publisher that publishes the data.
- Operator-Service - Microservice that handles the compute requests.
- Operator-Engine - The computing system where the compute jobs are executed.
- Kubernetes - a K8s cluster
Before the flow can begin, these pre-conditions must be met:
- The Asset DDO has a `compute` service.
- The Asset DDO compute service must permit algorithms to run on it.
- The Asset DDO must specify an Ocean Provider endpoint exposed by the Publisher.
## Access Control using Ocean Provider
As [with the `access` service](/concepts/architecture/#datatokens--access-control-tools), the `compute` service requires the **Ocean Provider** as a component handled by Publishers. Ocean Provider is in charge of interacting with users and managing the basics of a Publisher's infrastructure to integrate this infrastructure into Ocean Protocol. The direct interaction with the infrastructure where the data resides happens through this component only.
Ocean Provider includes the credentials to interact with the infrastructure (initially in cloud providers, but it could be on-premise).
<repo name="provider"></repo>
## Compute-to-Data Environment
### Operator Service
The **Operator Service** is a microservice in charge of managing workflow execution requests.
The main responsibilities are:
- Expose an HTTP API allowing for the execution of data access and compute endpoints.
- Interact with the infrastructure (cloud/on-premise) using the Publisher's credentials.
- Start/stop/execute computing instances with the algorithms provided by users.
- Retrieve the logs generated during executions.
Typically the Operator Service is integrated with Ocean Provider, but it can be called independently of it.
The Operator Service is in charge of establishing the communication with the K8s cluster, allowing it to:
- Register new compute jobs
- List the current compute jobs
- Get a detailed result for a given job
- Stop a running job
The Operator Service doesn't provide any storage capability; all state is stored directly in the K8s cluster.
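As an illustrative sketch of those operations over HTTP (the route and query parameters are assumptions, not the exact Operator Service API; port 8050 matches the port used when exposing the service in the setup tutorial):
```bash
# List compute jobs for a consumer (illustrative route and parameters)
curl "http://localhost:8050/api/v1/operator/compute?consumerAddress=0xA32C...A071"

# Get a detailed result for a given job (illustrative)
curl "http://localhost:8050/api/v1/operator/compute?jobId=XXXX"
```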
<repo name="operator-service"></repo>
### Operator Engine
The **Operator Engine** is in charge of orchestrating the compute infrastructure, using Kubernetes as the backend, where each compute job runs in an isolated [Kubernetes Pod](https://kubernetes.io/docs/concepts/workloads/pods/). Typically the Operator Engine retrieves the workflows created by the Operator Service in Kubernetes, and manages the infrastructure necessary to complete the execution of the compute workflows.
The Operator Engine is in charge of retrieving all the workflows registered in a K8s cluster, allowing it to:
- Orchestrate the flow of the execution
- Start the configuration pod, in charge of downloading the workflow dependencies (datasets and algorithms)
- Start the pod including the algorithm to execute
- Start the publishing pod, which publishes the new assets created in the Ocean Protocol network
The Operator Engine doesn't provide any storage capability; all state is stored directly in the K8s cluster.
<repo name="operator-engine"></repo>
### Pod: Configuration
<repo name="pod-configuration"></repo>
### Pod: Publishing
<repo name="pod-publishing"></repo>
## Further Reading
- [Compute-to-Data architecture](/tutorials/compute-to-data-architecture/)
- [Tutorial: Writing Algorithms](/tutorials/compute-to-data-algorithms/)
- [Tutorial: Set Up a Compute-to-Data Environment](/tutorials/compute-to-data/)
- [Use Compute-to-Data in Ocean Market](https://blog.oceanprotocol.com/compute-to-data-is-now-available-in-ocean-market-58868be52ef7)

View File

@@ -1,349 +0,0 @@
---
title: DDO Metadata
description: Specification of the DDO subset dedicated to asset metadata
slug: /concepts/ddo-metadata/
section: concepts
---
## Overview
This page defines the schema for asset _metadata_. Metadata is the subset of an Ocean DDO that holds information about the asset.
The schema is based on public schema.org [DataSet schema](https://schema.org/Dataset).
Standardizing labels is key to effective searching, sorting and filtering (discovery).
This page specifies metadata attributes that _must_ be included, and that _may_ be included. These attributes are organized hierarchically, from top-layer attributes like `"main"` to sub-level attributes like `"main.type"`. This page also provides DDO metadata examples.
## Rules for Metadata Storage and Control in Ocean
The publisher publishes an asset DDO (including metadata) onto the chain.
The publisher may be the asset owner, or a marketplace acting on behalf of the owner.
Most metadata fields may be modified after creation. The blockchain records the provenance of changes.
DDOs (including metadata) are found in two places:
- _Remote_ - main storage, on-chain. File URLs are always encrypted. One may actually encrypt all metadata, at a severe cost to discoverability.
- _Local_ - local cache. All fields are in plaintext.
Ocean Aquarius helps manage metadata: it can write DDOs to the chain, read them from the chain, and maintains a local plaintext cache of DDOs with fast search.
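For example, fetching the cached plaintext DDO for a given DID from a local Aquarius instance looks like this (the host and port match the `serviceEndpoint` used in the examples below; the DID is a placeholder):
```bash
# Read the cached DDO (including metadata) for an asset by DID
curl "http://aquarius:5000/api/v1/aquarius/assets/ddo/did:op:1234"
```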
## Fields for Metadata
An asset represents a resource in Ocean, e.g. a dataset or an algorithm.
A `metadata` object has the following attributes, all of which are objects. Some are only required for local or remote, and are specified as such.
| Attribute | Required | Description |
| --------------------------- | -------- | ---------------------------------------------------------- |
| **`main`** | **Yes** | Main attributes |
| **`encryptedFiles`** | Remote | Encrypted string of the `attributes.main.files` object. |
| **`encryptedServices`** | Remote | Encrypted string of the `attributes.main.services` object. |
| **`status`** | No | Status attributes |
| **`additionalInformation`** | No | Optional attributes |
The `main` and `additionalInformation` attributes are independent of the asset type.
## Fields for `attributes.main`
The `main` object has the following attributes.
| Attribute | Type | Required | Description |
| ------------------- | --------------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`name`** | Text |**Yes** | Descriptive name or title of the asset. |
| **`type`** | Text |**Yes** | Asset type. Includes `"dataset"` (e.g. csv file), `"algorithm"` (e.g. Python script). Each type needs a different subset of metadata attributes. |
| **`author`** | Text |**Yes** | Name of the entity generating this data (e.g. Tfl, Disney Corp, etc.). |
| **`license`** | Text |**Yes** | Short name referencing the license of the asset (e.g. Public Domain, CC-0, CC-BY, No License Specified, etc.). If it's not specified, the following value will be added: "No License Specified". |
| **`files`** | Array of files object |**Yes** | Array of `File` objects including the encrypted file urls. |
| **`dateCreated`** | DateTime |**Yes** | The date on which the asset was created by the originator. ISO 8601 format, Coordinated Universal Time, e.g. `2019-01-31T08:38:32Z`. |
| **`datePublished`** | DateTime | Remote | The date on which the asset DDO is registered into the metadata store (Aquarius) |
## Fields for `attributes.main.files`
The `files` object has a list of `file` objects.
Each `file` object has the following attributes, with the details necessary to consume and validate the data.
| Attribute | Required | Description |
| -------------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`index`** |**Yes** | Index number starting from 0 of the file. |
| **`contentType`** |**Yes** | File format. |
| **`url`** | Local | Content URL. Omitted from the remote metadata. Supports `http(s)://` and `ipfs://` URLs. |
| **`name`** | No | File name. |
| **`checksum`** | No | Checksum of the file using your preferred format (e.g. MD5). Format specified in `checksumType`. If it's not provided, it can't be verified that the file was unmodified after registering. |
| **`checksumType`** | No | Format of the provided checksum. Can vary according to the server (e.g. Amazon vs. Azure). |
| **`contentLength`** | No | Size of the file in bytes. |
| **`encoding`** | No | File encoding (e.g. UTF-8). |
| **`compression`** | No | File compression (e.g. no, gzip, bzip2, etc). |
| **`encrypted`** | No | Boolean. Is the file encrypted? If not set, the file is assumed to be unencrypted. |
| **`encryptionMode`** | No | Encryption mode used. Only valid if `encrypted=true`. |
| **`resourceId`** | No | Remote identifier of the file in the external provider. It is typically the remote id in the cloud provider. |
| **`attributes`** | No | Key-Value hash map with additional attributes describing the asset file. It could include details like the Amazon S3 bucket, region, etc. |
## Fields for `attributes.status`
A `status` object has the following attributes.
| Attribute | Type | Required | Description |
| --------------------- | ------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`isListed`** | Boolean | No | Used to flag unsuitable content. True by default. If it's false, the content must not be returned. |
| **`isRetired`** | Boolean | No | Flag retired content. False by default. If it's true, the content may either not be returned, or returned with a note about retirement. |
| **`isOrderDisabled`** | Boolean | No | For temporarily disabling ordering assets, e.g. when file host is in maintenance. False by default. If it's true, no ordering of assets for download or compute should be allowed. |
## Fields for `attributes.additionalInformation`
All the additional information will be stored as part of the `additionalInformation` section.
| Attribute | Type | Required | Description |
| --------------------- | ------------- | -------- | ----------- |
| **`tags`** | Array of Text | No | Array of keywords or tags used to describe this content. Empty by default. |
| **`description`** | Text | No | Details of what the resource is. For a dataset, this attribute explains what the data represents and what it can be used for. |
| **`copyrightHolder`** | Text | No | The party holding the legal copyright. Empty by default. |
| **`workExample`** | Text | No | Example of the concept of this asset. This example is part of the metadata, not an external link. |
| **`links`** | Array of Link | No | Mapping of links for data samples, or links to find out more information. Links may be to either a URL or another Asset. We expect marketplaces to converge on agreements of typical formats for linked data: The Ocean Protocol itself does not mandate any specific formats as these requirements are likely to be domain-specific. The links array can be an empty array, but if there is a link object in it, then a "url" is required in that link object. |
| **`inLanguage`** | Text | No | The language of the content. Please use one of the language codes from the [IETF BCP 47 standard](https://tools.ietf.org/html/bcp47). |
| **`categories`** | Array of Text | No | Optional array of categories associated with the asset. Note: it's recommended to use `"tags"` instead of this. |
## Fields - Other Suggestions
Here are example attributes that can improve an asset's discoverability.
| Attribute | Description |
| ---------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`updateFrequency`** | An indication of update latency - i.e. how often updates are expected (seldom, annually, quarterly, etc.), or whether the resource is static and never expected to be updated. |
| **`structuredMarkup`** | A link to machine-readable structured markup (such as ttl/json-ld/rdf) describing the dataset. |
## DDO Metadata Example - Local
This is what the DDO metadata looks like. All fields are in plaintext. This is before it's stored on-chain or when it's retrieved and decrypted into a local cache.
```json
{
"main": {
"name": "Madrid Weather forecast",
"dateCreated": "2019-05-16T12:36:14.535Z",
"author": "Norwegian Meteorological Institute",
"type": "dataset",
"license": "Public Domain",
"price": "123000000000000000000",
"files": [
{
"index": 0,
"url": "https://example-url.net/weather/forecast/madrid/350750305731.xml",
"contentLength": "0",
"contentType": "text/xml",
"compression": "none"
}
]
},
"additionalInformation": {
"description": "Weather forecast of Europe/Madrid in XML format",
"copyrightHolder": "Norwegian Meteorological Institute",
"categories": ["Other"],
"links": [],
"tags": [],
"updateFrequency": null,
"structuredMarkup": []
},
"status": {
"isListed": true,
"isRetired": false,
"isOrderDisabled": false
}
}
```
## DDO Metadata Example - Remote
The previous example was for a local cache, with all fields in plaintext.
Here's the same example, for remote on-chain storage. That is, it's how metadata looks as a response to querying Aquarius (remote metadata).
How the remote metadata differs from the local version:
- `url` is removed from all objects in the `files` array
- `encryptedFiles` is added.
```json
{
"service": [
{
"index": 0,
"serviceEndpoint": "http://aquarius:5000/api/v1/aquarius/assets/ddo/{did}",
"type": "metadata",
"attributes": {
"main": {
"type": "dataset",
"name": "Madrid Weather forecast",
"dateCreated": "2019-05-16T12:36:14.535Z",
"author": "Norwegian Meteorological Institute",
"license": "Public Domain",
"files": [
{
"contentLength": "0",
"contentType": "text/xml",
"compression": "none",
"index": 0
}
],
"datePublished": "2019-05-16T12:41:01Z"
},
"encryptedFiles": "0x7a0d1c66ae861…df43aa9",
"additionalInformation": {
"description": "Weather forecast of Europe/Madrid in XML format",
"copyrightHolder": "Norwegian Meteorological Institute",
"categories": ["Other"],
"links": [],
"tags": [],
"updateFrequency": null,
"structuredMarkup": []
},
"status": {
"isListed": true,
"isRetired": false,
"isOrderDisabled": false
}
}
}
]
}
```
## Fields when `attributes.main.type = algorithm`
An asset of type `algorithm` has the following additional attributes under `main.algorithm`:
| Attribute | Type | Required | Description |
| --------------- | -------- | -------- | --------------------------------------------- |
| **`container`** | `Object` |**Yes** | Object describing the Docker container image. |
| **`language`** | `string` | No | Language used to implement the software |
| **`format`** | `string` | No | Packaging format of the software. |
| **`version`** | `string` | No | Version of the software. |
The `container` object has the following attributes:
| Attribute | Type | Required | Description |
| ---------------- | -------- | -------- | ----------------------------------------------------------------- |
| **`entrypoint`** | `string` |**Yes** | The command to execute, or script to run inside the Docker image. |
| **`image`** | `string` |**Yes** | Name of the Docker image. |
| **`tag`** | `string` |**Yes** | Tag of the Docker image. |
| **`checksum`** | `string` |**Yes** | Checksum of the Docker image. |
```json
{
"index": 0,
"serviceEndpoint": "http://localhost:5000/api/v1/aquarius/assets/ddo/{did}",
"type": "metadata",
"attributes": {
"main": {
"author": "John Doe",
"dateCreated": "2019-02-08T08:13:49Z",
"license": "CC-BY",
"name": "My super algorithm",
"type": "algorithm",
"algorithm": {
"language": "scala",
"format": "docker-image",
"version": "0.1",
"container": {
"entrypoint": "node $ALGO",
"image": "node",
"tag": "10",
"checksum": "efb2c764274b745f5fc37f97c6b0e761"
}
},
"files": [
{
"name": "build_model",
"url": "https://raw.gith ubusercontent.com/oceanprotocol/test-algorithm/master/javascript/algo.js",
"index": 0,
"checksum": "efb2c764274b745f5fc37f97c6b0e761",
"contentLength": "4535431",
"contentType": "text/plain",
"encoding": "UTF-8",
"compression": "zip"
}
]
},
"additionalInformation": {
"description": "Workflow to aggregate weather information",
"tags": ["weather", "uk", "2011", "workflow", "aggregation"],
"copyrightHolder": "John Doe"
}
}
}
```
## Fields when `attributes.main.type = compute`
An asset with a service of type `compute` has the following additional attributes under `main.privacy`:
| Attribute | Type | Required | Description |
| --------------------------------- | ------------------ | -------- | ---------------------------------------------------------- |
| **`allowRawAlgorithm`** | `boolean` |**Yes** | If true, a raw (drag & drop) algorithm can be run. |
| **`allowNetworkAccess`** | `boolean` |**Yes** | If true, the algorithm job will have network access (still WIP). |
| **`publisherTrustedAlgorithms`** | Array of `Objects` |**Yes** | If empty, then any published algorithm is allowed (see below). |
The `publisherTrustedAlgorithms` attribute is an array of objects with the following structure:
| Attribute | Type | Required | Description |
| ------------------------------ | -------- | -------- | ------------------------------------------------------------------ |
| **`did`** | `string` |**Yes** | The DID of the algorithm which is trusted by the publisher. |
| **`filesChecksum`** | `string` |**Yes** | Hash of the algorithm's `encryptedFiles`, concatenated with its `files` section (as a string). |
| **`containerSectionChecksum`** | `string` |**Yes** | Hash of the algorithm's container section (as a string). |
To produce `filesChecksum`:
```javascript
sha256(
algorithm_ddo.service['metadata'].attributes.encryptedFiles +
JSON.stringify(algorithm_ddo.service['metadata'].attributes.main.files)
)
```
To produce `containerSectionChecksum`:
```javascript
sha256(
JSON.stringify(
algorithm_ddo.service['metadata'].attributes.main.algorithm.container
)
)
```
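As a minimal shell sketch of the same two computations, assuming `algorithm_ddo.json` holds the algorithm DDO with the metadata service at index 0. Note that the `jq -c` serialization must match the `JSON.stringify` output byte-for-byte for the hashes to agree:
```bash
ENCRYPTED_FILES=$(jq -r '.service[0].attributes.encryptedFiles' algorithm_ddo.json)
FILES_JSON=$(jq -c '.service[0].attributes.main.files' algorithm_ddo.json)
CONTAINER_JSON=$(jq -c '.service[0].attributes.main.algorithm.container' algorithm_ddo.json)

printf '%s%s' "$ENCRYPTED_FILES" "$FILES_JSON" | sha256sum    # filesChecksum
printf '%s' "$CONTAINER_JSON" | sha256sum                     # containerSectionChecksum
```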
### Example of a compute service
```json
{
"type": "compute",
"index": 1,
"serviceEndpoint": "https://provider.oceanprotocol.com",
"attributes": {
"main": {
"name": "dataAssetComputingService",
"creator": "0xA32C84D2B44C041F3a56afC07a33f8AC5BF1A071",
"datePublished": "2021-02-17T06:31:33Z",
"cost": "1",
"timeout": 3600,
"privacy": {
"allowRawAlgorithm": true,
"allowNetworkAccess": false,
"publisherTrustedAlgorithms": [
{
"did": "0xxxxx",
"filesChecksum": "1234",
"containerSectionChecksum": "7676"
},
{
"did": "0xxxxx",
"filesChecksum": "1232334",
"containerSectionChecksum": "98787"
}
]
}
}
}
}
```

File diff suppressed because it is too large.

Binary image file not shown. Size: 30 KiB (before) → 57 KiB (after)

View File

@@ -0,0 +1,83 @@
---
title: Compute-to-Data
description: Architecture overview
---
## Architecture Overview
Here's the sequence diagram for starting a new compute job.
![Sequence Diagram for computing services](images/Starting New Compute Job.png)
The Consumer calls the Provider with `start(did, algorithm, additionalDIDs)`. It returns job id `XXXX`. The Provider oversees the rest of the work. At any point, the Consumer can query the Provider for the job status via `getJobDetails(XXXX)`.
Here's how the Provider works. First, it ensures that the Consumer has sent the appropriate datatokens to get access. Then, it asks the Operator-Service (a microservice) to start the job, which passes the request on to the Operator-Engine (the actual compute system). The Operator-Engine runs Kubernetes compute jobs as needed, and reports back to the Operator-Service when the job has finished.
Here are the actors/components:
- Consumers - The end users who need the computing services offered by the same Publisher that publishes the data.
- Operator-Service - Microservice that handles the compute requests.
- Operator-Engine - The computing system where the compute jobs are executed.
- Kubernetes - a K8s cluster
Before the flow can begin, these pre-conditions must be met:
- The Asset DDO has a `compute` service.
- The Asset DDO compute service must permit algorithms to run on it.
- The Asset DDO must specify an Ocean Provider endpoint exposed by the Publisher.
## Access Control using Ocean Provider
As [with the `access` service](/concepts/architecture/#datatokens--access-control-tools), the `compute` service requires the **Ocean Provider** as a component handled by Publishers. Ocean Provider is in charge of interacting with users and managing the basics of a Publisher's infrastructure to integrate this infrastructure into Ocean Protocol. The direct interaction with the infrastructure where the data resides happens through this component only.
Ocean Provider includes the credentials to interact with the infrastructure (initially in cloud providers, but it could be on-premise).
<repo name="provider"></repo>
## Compute-to-Data Environment
### Operator Service
The **Operator Service** is a microservice in charge of managing workflow execution requests.
The main responsibilities are:
- Expose an HTTP API allowing for the execution of data access and compute endpoints.
- Interact with the infrastructure (cloud/on-premise) using the Publisher's credentials.
- Start/stop/execute computing instances with the algorithms provided by users.
- Retrieve the logs generated during executions.
Typically the Operator Service is integrated with Ocean Provider, but it can be called independently of it.
The Operator Service is in charge of establishing the communication with the K8s cluster, allowing it to:
- Register new compute jobs
- List the current compute jobs
- Get a detailed result for a given job
- Stop a running job
The Operator Service doesn't provide any storage capability; all state is stored directly in the K8s cluster.
<repo name="operator-service"></repo>
### Operator Engine
The **Operator Engine** is in charge of orchestrating the compute infrastructure, using Kubernetes as the backend, where each compute job runs in an isolated [Kubernetes Pod](https://kubernetes.io/docs/concepts/workloads/pods/). Typically the Operator Engine retrieves the workflows created by the Operator Service in Kubernetes, and manages the infrastructure necessary to complete the execution of the compute workflows.
The Operator Engine is in charge of retrieving all the workflows registered in a K8s cluster, allowing it to:
- Orchestrate the flow of the execution
- Start the configuration pod, in charge of downloading the workflow dependencies (datasets and algorithms)
- Start the pod including the algorithm to execute
- Start the publishing pod, which publishes the new assets created in the Ocean Protocol network
The Operator Engine doesn't provide any storage capability; all state is stored directly in the K8s cluster.
<repo name="operator-engine"></repo>
### Pod: Configuration
<repo name="pod-configuration"></repo>
### Pod: Publishing
<repo name="pod-publishing"></repo>

View File

@@ -0,0 +1,27 @@
---
title: Compute-to-Data
description: Datasets and Algorithms
---
## Datasets & Algorithms
With Compute-to-Data, datasets never leave the premises of the data holder; only algorithms can be permitted to run on them, under certain conditions, within an isolated and secure environment. Algorithms are an asset type just like datasets. They too can have a pool or a fixed price to determine their price whenever they are used.
Algorithms can be public or private, depending on the `"attributes.main.type"` value in the DDO:
- `"access"` - public. The algorithm can be downloaded, given the appropriate datatoken.
- `"compute"` - private. The algorithm is only available for use as part of a compute job, without any way to download it. The algorithm must be published on the same Ocean Provider as the dataset it's targeted to run on.
For each dataset, publishers can choose to allow various permission levels for algorithms to run:
- allow selected algorithms, referenced by their DID
- allow all algorithms published within a network or marketplace
- allow raw algorithms, for advanced use cases that bypass algorithms as an asset type, but the option most prone to data escape
All implementations should set permissions to private by default: upon publishing a compute dataset, no algorithms should be allowed to run on it. This prevents data escape via a rogue algorithm written to extract all data from a dataset.
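To illustrate, one way to check whether a published algorithm is public or private is to read its cached DDO from Aquarius and inspect this field (the Aquarius host and the DID below are placeholders):
```bash
# "access" means the algorithm is public; "compute" means it is private
curl -s "http://aquarius:5000/api/v1/aquarius/assets/ddo/did:op:1234" \
  | jq -r '.service[] | select(.type == "metadata") | .attributes.main.type'
```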
## DDO Links
- [Algorithm DDO](/concepts/ddo-metadata/#fields-when-attributesmaintype--algorithm)
- [Compute DDO](/concepts/ddo-metadata/#fields-when-attributesmaintype--compute)

View File

@@ -0,0 +1,318 @@
---
title: Setting up private docker registry for Compute-to-Data environment
description: Learn how to set up your own Docker registry and push images for running algorithms in a C2D environment.
---
This document is intended for a production setup. The tutorial provides the steps to set up a private Docker registry on the server for the following scenarios:
- Allow registry access only to the C2D environment.
- Anyone can pull images from the registry, but only authenticated users can push images to it.
## Setup 1: Allow registry access only to the C2D environment
To implement this use case, one domain is required:
- **example.com**: This domain will allow only image pull operations.
_Note: Please change the domain names to your application-specific domain names._
### 1.1 Prerequisites
- A running Docker environment on the Linux server.
- Docker Compose is installed.
- The C2D environment is running.
- The domain name is mapped to the server hosting the registry.
### 1.2 Generate certificates
```bash
# install certbot: https://certbot.eff.org/
sudo certbot certonly --standalone --cert-name example.com -d example.com
```
_Note: Do check the access rights of the files/directories where the certificates are stored. Usually, they are at `/etc/letsencrypt/`._
### 1.3 Generate password file
Replace content in `<>` with appropriate content.
```bash
docker run \
--entrypoint htpasswd \
httpd:2 -Bbn <username> <password> > <path>/auth/htpasswd
```
### 1.4 Docker compose template file for registry
Copy the YAML content below into a `docker-compose.yml` file and replace the content in `<>`.
```yml
version: '3'
services:
  registry:
    restart: always
    container_name: my-docker-registry
    image: registry:2
    ports:
      - 5050:5000
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
      REGISTRY_HTTP_SECRET: <secret>
    volumes:
      - <path>/data:/var/lib/registry
      - <path>/auth:/auth
  nginx:
    image: nginx:latest
    container_name: nginx
    volumes:
      - <path>/nginx/logs:/app/logs/
      - ./nginx.conf:/etc/nginx/nginx.conf
      - /etc/letsencrypt/:/etc/letsencrypt/
    ports:
      - 80:80
      - 443:443
    depends_on:
      - registry
```
### 1.5 Nginx configuration
Copy the nginx configuration below into a `nginx.conf` file.
```conf
events {}
http {
  access_log /app/logs/access.log;
  error_log /app/logs/error.log;
  server {
    client_max_body_size 4096M;
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
  }
  server {
    # Allowed request size should be large enough to allow pull operations
    client_max_body_size 4096M;
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    location / {
      proxy_connect_timeout 75s;
      proxy_pass http://registry:5000;
    }
  }
}
```
### 1.6 Create a Kubernetes secret on the C2D server
Log in to the Compute-to-Data environment and run the following command with appropriate credentials:
```bash
kubectl create secret docker-registry regcred --docker-server=example.com --docker-username=<username> --docker-password=<password> --docker-email=<email_id> -n ocean-compute
```
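To confirm the secret was created:
```bash
kubectl get secret regcred -n ocean-compute
```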
### 1.7 Update operator-engine configuration
Add the `PULL_SECRET` property with the value `regcred` in the [operator.yml](https://github.com/oceanprotocol/operator-engine/blob/main/kubernetes/operator.yml) file of the operator-engine configuration.
For more details on operator-engine properties, refer to this [link](https://github.com/oceanprotocol/operator-engine/blob/177ca7185c34aa2a503afbe026abb19c62c69e6d/README.md?plain=1#L106).
Apply the updated operator-engine configuration.
```bash
kubectl config set-context --current --namespace ocean-compute
kubectl apply -f operator-engine/kubernetes/operator.yml
```
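Then verify that the operator-engine pod reaches the `Running` state:
```bash
kubectl get pods -n ocean-compute
```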
## Setup 2: Allow anonymous `pull` operations
To implement this use case, 2 domains will be required:
- **example.com**: This domain will allow image push/pull operations only for authenticated users.
- **readonly.example.com**: This domain will allow only image pull operations.
_Note: Please change the domain names to your application-specific domain names._
### 2.1 Prerequisites
- A running Docker environment on the Linux server.
- Docker Compose is installed.
- Both domain names are mapped to the same server IP address.
### 2.2 Generate certificates
```bash
# install certbot: https://certbot.eff.org/
sudo certbot certonly --standalone --cert-name example.com -d example.com
sudo certbot certonly --standalone --cert-name readonly.example.com -d readonly.example.com
```
_Note: Do check the access rights of the files/directories where the certificates are stored. Usually, they are at `/etc/letsencrypt/`._
### 2.3 Generate password file
Replace content in `<>` with appropriate content.
```bash
docker run \
--entrypoint htpasswd \
httpd:2 -Bbn <username> <password> > <path>/auth/htpasswd
```
### 2.4 Docker compose template file for registry
Copy the YAML content below into a `docker-compose.yml` file and replace the content in `<>`.
Here, we create two docker registry services, so that anyone can `pull` images from the registry, but only authenticated users can `push` images.
```yml
version: '3'
services:
  registry:
    restart: always
    container_name: my-docker-registry
    image: registry:2
    ports:
      - 5050:5000
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
      REGISTRY_HTTP_SECRET: <secret>
    volumes:
      - <path>/data:/var/lib/registry
      - <path>/auth:/auth
  registry-read-only:
    restart: always
    container_name: my-registry-read-only
    image: registry:2
    read_only: true
    ports:
      - 5051:5000
    environment:
      REGISTRY_HTTP_SECRET: <secret>
    volumes:
      - <path>/data:/var/lib/registry:ro
    depends_on:
      - registry
  nginx:
    image: nginx:latest
    container_name: nginx
    volumes:
      - <path>/nginx/logs:/app/logs/
      - ./nginx.conf:/etc/nginx/nginx.conf
      - /etc/letsencrypt/:/etc/letsencrypt/
    ports:
      - 80:80
      - 443:443
    depends_on:
      - registry-read-only
```
### 2.5 Nginx configuration
Copy the nginx configuration below into a `nginx.conf` file.
```conf
events {}
http {
  access_log /app/logs/access.log;
  error_log /app/logs/error.log;
  server {
    client_max_body_size 4096M;
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
  }
  server {
    # Allowed request size should be large enough to allow pull operations
    client_max_body_size 4096M;
    listen 443 ssl;
    server_name readonly.example.com;
    ssl_certificate /etc/letsencrypt/live/readonly.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/readonly.example.com/privkey.pem;
    location / {
      proxy_connect_timeout 75s;
      proxy_pass http://registry-read-only:5000;
    }
  }
  server {
    # Allowed request size should be large enough to allow push operations
    client_max_body_size 4096M;
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    location / {
      proxy_connect_timeout 75s;
      proxy_pass http://registry:5000;
    }
  }
}
```
## Start the registry
```bash
docker-compose -f docker-compose.yml up
```
## Working with registry
### Login to registry
```bash
docker login example.com -u <username> -p <password>
```
### Build and push an image to the registry
Use the commands below to build an image from a `Dockerfile` and push it to your private registry.
```bash
docker build . -t example.com/my-algo:latest
docker image push example.com/my-algo:latest
```
### List images in the registry
```bash
curl -X GET -u <username>:<password> https://example.com/v2/_catalog
```
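The registry API can also list the tags of a specific repository, e.g. for the image pushed above:
```bash
curl -X GET -u <username>:<password> https://example.com/v2/my-algo/tags/list
```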
### Pull an image from the registry
Use the commands below to pull an image from the registry.
```bash
# requires login
docker image pull example.com/my-algo:latest
# allows anonymous pull if 2nd setup scenario is implemented
docker image pull readonly.example.com/my-algo:latest
```
### Next step
You can now publish an algorithm asset whose metadata contains the registry URL, image, and tag information, enabling users to run C2D jobs.
## Further references
- [Setup Compute-to-Data environment](/tutorials/compute-to-data-minikube/)
- [Writing algorithms](/tutorials/compute-to-data-algorithms/)
- [C2D example](/references/read-the-docs/ocean-py/READMEs/c2d-flow.md)

View File

@@ -24,8 +24,78 @@ wget -q --show-progress https://github.com/kubernetes/minikube/releases/download
sudo dpkg -i minikube_1.22.0-0_amd64.deb
```
## Start Minikube
The first command is important: it solves a [PersistentVolumeClaims problem](https://github.com/kubernetes/minikube/issues/7828).
```bash
minikube config set kubernetes-version v1.16.0
minikube start --cni=calico --driver=docker --container-runtime=docker
```
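Confirm the cluster is up:
```bash
minikube status
```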
## Install kubectl
```bash
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(<kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
```
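A quick sanity check that the binary was installed:
```bash
kubectl version --client
```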
Wait until all the default pods are running (1/1).
```bash
watch kubectl get pods --all-namespaces
```
### Run IPFS host
```bash
export ipfs_staging=~/ipfs_staging
export ipfs_data=~/ipfs_data
docker run -d --name ipfs_host -v $ipfs_staging:/export -v $ipfs_data:/data/ipfs -p 4001:4001 -p 4001:4001/udp -p 127.0.0.1:8080:8080 -p 127.0.0.1:5001:5001 ipfs/go-ipfs:latest
sudo /bin/sh -c 'echo "127.0.0.1 youripfsserver" >> /etc/hosts'
```
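To verify that the IPFS node came up, watch its logs; it should eventually report that the daemon is ready:
```bash
docker logs -f ipfs_host
```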
## Storage class (Optional)
For minikube, you can use the default 'standard' class.
For AWS, please make sure that your class allocates volumes in the same region and zone in which you are running your pods.
We created our own 'standard' class in AWS:
```bash
kubectl get storageclass standard -o yaml
```
```yaml
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-east-1a
apiVersion: storage.k8s.io/v1
kind: StorageClass
parameters:
  fsType: ext4
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
For more information, please visit https://kubernetes.io/docs/concepts/storage/storage-classes/
## Download and Configure Operator Service
Open a new terminal and run the command below.
```bash
git clone https://github.com/oceanprotocol/operator-service.git
```
@@ -68,30 +138,6 @@ Check the [README](https://github.com/oceanprotocol/operator-engine#customize-yo
At a minimum you should add your IPFS URLs or AWS settings, and add (or remove) notification URLs.
## Install kubectl
```bash
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(<kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
```
## Start Minikube
The first command is important: it solves a [PersistentVolumeClaims problem](https://github.com/kubernetes/minikube/issues/7828).
```bash
minikube config set kubernetes-version v1.16.0
minikube start --cni=calico --driver=docker --container-runtime=docker
```
Wait until all the default pods are running (1/1).
```bash
watch kubectl get pods --all-namespaces
```
## Create namespaces

View File

@@ -1,132 +0,0 @@
---
title: Set Up a Compute-to-Data Environment
description:
---
## Requirements
First, create a folder with the following structure:
```text
ocean/
  barge/
  operator-service/
  operator-engine/
```
Then you need the following parts:
- a working [Barge](https://github.com/oceanprotocol/barge). For this setup, we will assume Barge is installed in `/ocean/barge/`
- a working Kubernetes (K8s) cluster ([Minikube](../compute-to-data-minikube/) is a good start)
- a working `kubectl` connected to the K8s cluster
- one folder (`/ocean/operator-service/`), into which we will download the following:
- [postgres-configmap.yaml](https://raw.githubusercontent.com/oceanprotocol/operator-service/main/kubernetes/postgres-configmap.yaml)
- [postgres-storage.yaml](https://raw.githubusercontent.com/oceanprotocol/operator-service/main/kubernetes/postgres-storage.yaml)
- [postgres-deployment.yaml](https://raw.githubusercontent.com/oceanprotocol/operator-service/main/kubernetes/postgres-deployment.yaml)
- [postgresql-service.yaml](https://raw.githubusercontent.com/oceanprotocol/operator-service/main/kubernetes/postgresql-service.yaml)
- [deployment.yaml](https://raw.githubusercontent.com/oceanprotocol/operator-service/main/kubernetes/deployment.yaml)
- one folder (`/ocean/operator-engine/`), into which we will download the following:
- [sa.yaml](https://raw.githubusercontent.com/oceanprotocol/operator-engine/main/kubernetes/sa.yml)
- [binding.yaml](https://raw.githubusercontent.com/oceanprotocol/operator-engine/main/kubernetes/binding.yml)
- [operator.yaml](https://raw.githubusercontent.com/oceanprotocol/operator-engine/main/kubernetes/operator.yml)
## Customize your Operator Service deployment
The following resources need attention:
| Resource | Variable | Description |
| ------------------------- | ------------------ | ------------------------------------------------------------------------------------------------------ |
| `postgres-configmap.yaml` | | Contains secrets for the PostgreSQL deployment. |
| `deployment.yaml` | `ALGO_POD_TIMEOUT` | Allowed time (in minutes) for an algorithm to run. If a job exceeds this value, it gets killed. |
## Customize your Operator Engine deployment
Check the [README](https://github.com/oceanprotocol/operator-engine#customize-your-operator-engine-deployment) section of the operator engine to customize your deployment.
## Storage class
For minikube, you can use the default 'standard' class.
For AWS, please make sure that your class allocates volumes in the same region and zone in which you are running your pods.
We created our own 'standard' class in AWS:
```bash
kubectl get storageclass standard -o yaml
```
```yaml
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-east-1a
apiVersion: storage.k8s.io/v1
kind: StorageClass
parameters:
  fsType: ext4
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
For more information, please visit https://kubernetes.io/docs/concepts/storage/storage-classes/
## Create namespaces
```bash
kubectl create ns ocean-operator
kubectl create ns ocean-compute
```
## Deploy Operator Service
```bash
kubectl config set-context --current --namespace ocean-operator
kubectl create -f /ocean/operator-service/postgres-configmap.yaml
kubectl create -f /ocean/operator-service/postgres-storage.yaml
kubectl create -f /ocean/operator-service/postgres-deployment.yaml
kubectl create -f /ocean/operator-service/postgresql-service.yaml
kubectl apply -f /ocean/operator-service/deployment.yaml
```
## Deploy Operator Engine
```bash
kubectl config set-context --current --namespace ocean-compute
kubectl apply -f /ocean/operator-engine/sa.yml
kubectl apply -f /ocean/operator-engine/binding.yml
kubectl apply -f /ocean/operator-engine/operator.yml
kubectl create -f /ocean/operator-service/postgres-configmap.yaml
```
## Expose Operator Service
```bash
kubectl expose deployment operator-api --namespace=ocean-operator --port=8050
```
Run a port forward or create your ingress service (not covered here):
```bash
kubectl -n ocean-operator port-forward svc/operator-api 8050
```
## Initialize database
If your cluster is running on example.com:
```bash
curl -X POST "http://example.com:8050/api/v1/operator/pgsqlinit" -H "accept: application/json"
```
## Update Barge for local testing
Update Barge's Provider by adding or updating the `OPERATOR_SERVICE_URL` env var in `/ocean/barge/compose-files/provider.yaml`:
```yaml
OPERATOR_SERVICE_URL: http://example.com:8050/
```
Restart Barge with the updated Provider configuration.
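For example, assuming Barge lives in `/ocean/barge/` and is started with its default script (adjust flags to your local setup):
```bash
cd /ocean/barge
./start_ocean.sh
```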

View File

Binary image file: 117 KiB (before) → 117 KiB (after)

View File

@@ -10,27 +10,20 @@
link: /concepts/datanft-and-datatoken/
- title: Roles
link: /concepts/roles/
- title: DIDs & DDOs
link: /concepts/did-ddo/
- title: Supported Networks
link: /concepts/networks/
- title: Deployments
link: /concepts/deployments/
- title: Projects using Ocean
link: /concepts/projects-using-ocean/
- group: Compute-to-Data
items:
- title: Compute-to-Data Overview
- title: Overview
link: /concepts/compute-to-data/
- group: Specifying Assets
items:
- title: DIDs & DDOs
link: /concepts/did-ddo/
- title: DDO Metadata
link: /concepts/ddo-metadata/
- group: Contribute
items:
- title: Projects using Ocean
link: /concepts/projects-using-ocean/
- title: Ways to Contribute
link: /concepts/contributing/
- title: Get Funding

View File

@@ -37,12 +37,16 @@
- group: Compute-to-Data
items:
- title: Architecture Overview
link: /tutorials/compute-to-data-architecture/
- title: Run a Compute-to-Data Environment
link: /tutorials/compute-to-data-minikube/
- title: Datasets and algorithms
link: /tutorials/compute-to-data-datasets-algorithms/
- title: Writing Algorithms
link: /tutorials/compute-to-data-algorithms/
- title: Run a Compute-to-Data Environment
link: /tutorials/compute-to-data/
- title: Minikube Compute-to-Data Environment
link: /tutorials/compute-to-data-minikube/
- title: Setting up docker registry
link: /tutorials/compute-to-data-docker-registry/
- group: Storage Setup
items:

package-lock.json (generated)
View File

@@ -5798,18 +5798,11 @@
"integrity": "sha512-1uIESzroqpaTzt9uX48HO+6gfnKu3RwvWdCcWSrX4csMInJfCo1yvKPNXCwXFRpJqRW25tiASb6No0YH57PXqg=="
},
"axios": {
"version": "0.24.0",
"resolved": "https://registry.npmjs.org/axios/-/axios-0.24.0.tgz",
"integrity": "sha512-Q6cWsys88HoPgAaFAVUb0WpPk0O8iTeisR9IMqy9G8AbO4NlpVknrnQS03zzF9PGAWgO3cgletO3VjV/P7VztA==",
"version": "0.25.0",
"resolved": "https://registry.npmjs.org/axios/-/axios-0.25.0.tgz",
"integrity": "sha512-cD8FOb0tRH3uuEe6+evtAbgJtfxr7ly3fQjYcMcuPlgkwVS9xboaVIpcDV+cYQe+yGykgwZCs1pzjntcGa6l5g==",
"requires": {
"follow-redirects": "^1.14.4"
},
"dependencies": {
"follow-redirects": {
"version": "1.14.5",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.5.tgz",
"integrity": "sha512-wtphSXy7d4/OR+MvIFbCVBDzZ5520qV8XfPklSN5QtxuMUJZ+b0Wnst1e1lCDocfzuCkHqj8k0FpZqO+UIaKNA=="
}
"follow-redirects": "^1.14.7"
}
},
"axobject-query": {
@@ -7703,11 +7696,6 @@
"follow-redirects": "^1.14.0"
}
},
"follow-redirects": {
"version": "1.14.4",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.4.tgz",
"integrity": "sha512-zwGkiSXC1MUJG/qmeIFH2HBJx9u0V46QGUe3YR1fXG8bXQxq7fLj0RjLZQ5nubr9qNJUZrH+xUcwXEoXNpfS+g=="
},
"type-fest": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/type-fest/-/type-fest-1.0.2.tgz",
@@ -9017,9 +9005,9 @@
}
},
"dotenv": {
"version": "10.0.0",
"resolved": "https://registry.npmjs.org/dotenv/-/dotenv-10.0.0.tgz",
"integrity": "sha512-rlBi9d8jpv9Sf1klPjNfFAuWDjKLwTIJJ/VxtoTwIR6hnZxcEOQCZg2oIL3MWBYw5GpUDKOEnND7LXTbIpQ03Q==",
"version": "16.0.0",
"resolved": "https://registry.npmjs.org/dotenv/-/dotenv-16.0.0.tgz",
"integrity": "sha512-qD9WU0MPM4SWLPJy/r2Be+2WgQj8plChsyrCNQzW/0WjvcJQiKQJ9mH3ZgB3fxbUUxgc/11ZJ0Fi5KiimWGz2Q==",
"dev": true
},
"download": {
@@ -9235,9 +9223,9 @@
}
},
"engine.io": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/engine.io/-/engine.io-4.1.1.tgz",
"integrity": "sha512-t2E9wLlssQjGw0nluF6aYyfX8LwYU8Jj0xct+pAhfWfv/YrBn6TSNtEYsgxHIfaMqfrLx07czcMg9bMN6di+3w==",
"version": "4.1.2",
"resolved": "https://registry.npmjs.org/engine.io/-/engine.io-4.1.2.tgz",
"integrity": "sha512-t5z6zjXuVLhXDMiFJPYsPOWEER8B0tIsD3ETgw19S1yg9zryvUfY3Vhtk3Gf4sihw/bQGIqQ//gjvVlu+Ca0bQ==",
"requires": {
"accepts": "~1.3.4",
"base64id": "2.0.0",
@@ -9254,9 +9242,9 @@
"integrity": "sha512-ZwrFkGJxUR3EIoXtO+yVE69Eb7KlixbaeAWfBQB9vVsNn/o+Yw69gBWSSDK825hQNdN+wF8zELf3dFNl/kxkUA=="
},
"debug": {
"version": "4.3.1",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.3.1.tgz",
"integrity": "sha512-doEwdvm4PCeK4K3RQN2ZC2BYUBaxwLARCqZmMjtF8a51J2Rb0xpVloFRnCODwqjpwnAoao4pelN8l3RJdv3gRQ==",
"version": "4.3.3",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.3.3.tgz",
"integrity": "sha512-/zxw5+vh1Tfv+4Qn7a5nsbcJKPaSvCDhojn6FEl9vupwK2VCSDtEiEtqr8DFtzYFOdz63LBkxec7DYuc2jon6Q==",
"requires": {
"ms": "2.1.2"
}
@@ -11406,9 +11394,9 @@
}
},
"follow-redirects": {
"version": "1.13.1",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.13.1.tgz",
"integrity": "sha512-SSG5xmZh1mkPGyKzjZP8zLjltIfpW32Y5QpdNJyjcfGxK3qo3NDDkZOZSFiGn1A6SclQxY9GzEwAHQ3dmYRWpg=="
"version": "1.14.7",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.7.tgz",
"integrity": "sha512-+hbxoLbFMbRKDwohX8GkTataGqO6Jb7jGwpAlwgy2bIz25XtRm7KEzJM76R1WiNT5SwZkX4Y75SwBolkpmE7iQ=="
},
"for-in": {
"version": "1.0.2",
@@ -11433,14 +11421,14 @@
}
},
"form-data-encoder": {
"version": "1.6.0",
"resolved": "https://registry.npmjs.org/form-data-encoder/-/form-data-encoder-1.6.0.tgz",
"integrity": "sha512-P97AVaOB8hZaniiKK3f46zxQcchQXI8EgBnX+2+719gLv5ZbDSf3J1XtIuAQ8xbGLU4vZYhy7xwhFtK8U5u9Nw=="
"version": "1.7.1",
"resolved": "https://registry.npmjs.org/form-data-encoder/-/form-data-encoder-1.7.1.tgz",
"integrity": "sha512-EFRDrsMm/kyqbTQocNvRXMLjc7Es2Vk+IQFx/YW7hkUH1eBl4J1fqiP34l74Yt0pFLCNpc06fkbVk00008mzjg=="
},
"formdata-node": {
"version": "4.3.0",
"resolved": "https://registry.npmjs.org/formdata-node/-/formdata-node-4.3.0.tgz",
"integrity": "sha512-TwqhWUZd2jB5l0kUhhcy1XYNsXq46NH6k60zmiu7xsxMztul+cCMuPSAQrSDV62zznhBKJdA9O+zeWj5i5Pbfg==",
"version": "4.3.2",
"resolved": "https://registry.npmjs.org/formdata-node/-/formdata-node-4.3.2.tgz",
"integrity": "sha512-k7lYJyzDOSL6h917favP8j1L0/wNyylzU+x+1w4p5haGVHNlP58dbpdJhiCUsDbWsa9HwEtLp89obQgXl2e0qg==",
"requires": {
"node-domexception": "1.0.0",
"web-streams-polyfill": "4.0.0-beta.1"
@@ -12123,11 +12111,6 @@
"locate-path": "^2.0.0"
}
},
"follow-redirects": {
"version": "1.14.4",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.4.tgz",
"integrity": "sha512-zwGkiSXC1MUJG/qmeIFH2HBJx9u0V46QGUe3YR1fXG8bXQxq7fLj0RjLZQ5nubr9qNJUZrH+xUcwXEoXNpfS+g=="
},
"gatsby-cli": {
"version": "2.19.3",
"resolved": "https://registry.npmjs.org/gatsby-cli/-/gatsby-cli-2.19.3.tgz",
@@ -13720,9 +13703,9 @@
}
},
"gatsby-remark-vscode": {
"version": "3.3.0",
"resolved": "https://registry.npmjs.org/gatsby-remark-vscode/-/gatsby-remark-vscode-3.3.0.tgz",
"integrity": "sha512-55ucO1KryOwz9UlvQzsdNC6mI8wiWqSrE8pkV/fvHP9Q4NBttOGShU7pLuIUiWlSrzBFGWwtZSvVRTnklbPeCw==",
"version": "3.3.1",
"resolved": "https://registry.npmjs.org/gatsby-remark-vscode/-/gatsby-remark-vscode-3.3.1.tgz",
"integrity": "sha512-KUCDU8KauLikURYuGyAZ0aLhHhd/BP2XwzP/WP0wkgyKjh650PwewQghwpD9g2wVkN+fThFJDs4G64K4fduDGQ==",
"requires": {
"decompress": "^4.2.0",
"json5": "^2.1.1",
@@ -14617,9 +14600,9 @@
}
},
"git-format-staged": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/git-format-staged/-/git-format-staged-2.1.2.tgz",
"integrity": "sha512-ieP6iEyMJQ9xPKJGFSmK4HELcDdYwUO84dG4NBKdjaSTOdsZgrW9paLaEau2D4daPQjLwSsgwdqtYjqoVxz3Lw==",
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/git-format-staged/-/git-format-staged-2.1.3.tgz",
"integrity": "sha512-M9q3W4CCQShYPHUiINhYUtHPJ3E1/aa3Ajbk8q2OAaCgqEmqZ6gBI6P1fnwD54/Fs9SA2MaOvDxpYRNa1OVGIA==",
"dev": true
},
"git-up": {
@@ -19889,13 +19872,12 @@
}
},
"plist": {
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/plist/-/plist-3.0.3.tgz",
"integrity": "sha512-ghdOKN99hh1oEmAlwBmPYo4L+tSQ7O3jRpkhWqOrMz86CWotpVzMevvQ+czo7oPDpOZyA6K06Ci7QVHpoh9gaA==",
"version": "3.0.4",
"resolved": "https://registry.npmjs.org/plist/-/plist-3.0.4.tgz",
"integrity": "sha512-ksrr8y9+nXOxQB2osVNqrgvX/XQPOXaU4BQMKjYq8PvaY1U18mo+fKgBSwzK+luSyinOuPae956lSVcBwxlAMg==",
"requires": {
"base64-js": "^1.5.1",
"xmlbuilder": "^9.0.7",
"xmldom": "^0.6.0"
"xmlbuilder": "^9.0.7"
},
"dependencies": {
"xmlbuilder": {
@@ -20691,9 +20673,9 @@
"integrity": "sha1-6SQ0v6XqjBn0HN/UAddBo8gZ2Jc="
},
"prettier": {
"version": "2.5.0",
"resolved": "https://registry.npmjs.org/prettier/-/prettier-2.5.0.tgz",
"integrity": "sha512-FM/zAKgWTxj40rH03VxzIPdXmj39SwSjwG0heUcNFwI+EMZJnY93yAiKXM3dObIKAM5TA88werc8T/EwhB45eg=="
"version": "2.5.1",
"resolved": "https://registry.npmjs.org/prettier/-/prettier-2.5.1.tgz",
"integrity": "sha512-vBZcPRUR5MZJwoyi3ZoyQlc1rXeEck8KgeC9AwwOn+exuxLxq5toTRDTSaVrXHxelDMHy9zlicw8u66yxoSUFg=="
},
"prettier-linter-helpers": {
"version": "1.0.0",
@@ -21681,9 +21663,9 @@
}
},
"rehype-react": {
"version": "7.0.3",
"resolved": "https://registry.npmjs.org/rehype-react/-/rehype-react-7.0.3.tgz",
"integrity": "sha512-nrn2fAYAPv/XD3mFe9Z2cfra1UY0a9TutNYdb5dAHsfz4HAzSVxf1LbyGins/1UtvKBzvNS/0FQJknjp/d+iEg==",
"version": "7.0.4",
"resolved": "https://registry.npmjs.org/rehype-react/-/rehype-react-7.0.4.tgz",
"integrity": "sha512-mC3gT/EVmxB8mgwz6XkupjF/UAhA2NOai/bYvTQYC+AW0jvomXB+LGpC4UcX3vsY327nM29BttEDG4lLrtqu/g==",
"requires": {
"@mapbox/hast-util-table-cell-style": "^0.2.0",
"@types/hast": "^2.0.0",
@@ -21702,9 +21684,9 @@
}
},
"bail": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/bail/-/bail-2.0.1.tgz",
"integrity": "sha512-d5FoTAr2S5DSUPKl85WNm2yUwsINN8eidIdIwsOge2t33DaOfOdSmmsI11jMN3GmALCXaw+Y6HMVHDzePshFAA=="
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/bail/-/bail-2.0.2.tgz",
"integrity": "sha512-0xO6mYd7JB2YesxDKplafRpsiOzPt9V02ddPCLbY1xYGPOX24NTyN50qnUxgCPcSoYMhKpAuBTjQoRZCAkUDRw=="
},
"comma-separated-tokens": {
"version": "2.0.2",
@@ -21741,9 +21723,9 @@
"integrity": "sha512-NXRbBtUdBioI73y/HmOhogw/U5msYPC9DAtGkJXeFcFWSFZw0mCUsPxk/snTuJHzNKA8kLBK4rH97RMB1BfCXw=="
},
"property-information": {
"version": "6.0.1",
"resolved": "https://registry.npmjs.org/property-information/-/property-information-6.0.1.tgz",
"integrity": "sha512-F4WUUAF7fMeF4/JUFHNBWDaKDXi2jbvqBW/y6o5wsf3j19wTZ7S60TmtB5HoBhtgw7NKQRMWuz5vk2PR0CygUg=="
"version": "6.1.1",
"resolved": "https://registry.npmjs.org/property-information/-/property-information-6.1.1.tgz",
"integrity": "sha512-hrzC564QIl0r0vy4l6MvRLhafmUowhO/O3KgVSoXIbbA2Sz4j8HGpJc6T2cubRVwMwpdiG/vKGfhT4IixmKN9w=="
},
"space-separated-tokens": {
"version": "2.0.1",
@@ -21756,9 +21738,9 @@
"integrity": "sha512-FnHq5sTMxC0sk957wHDzRnemFnNBvt/gSY99HzK8F7UP5WAbvP70yX5bd7CjEQkN+TjdxwI7g7lJ6podqrG2/w=="
},
"unified": {
"version": "10.1.0",
"resolved": "https://registry.npmjs.org/unified/-/unified-10.1.0.tgz",
"integrity": "sha512-4U3ru/BRXYYhKbwXV6lU6bufLikoAavTwev89H5UxY8enDFaAT2VXmIXYNm6hb5oHPng/EXr77PVyDFcptbk5g==",
"version": "10.1.1",
"resolved": "https://registry.npmjs.org/unified/-/unified-10.1.1.tgz",
"integrity": "sha512-v4ky1+6BN9X3pQrOdkFIPWAaeDsHPE1svRDxq7YpTc2plkIqFMwukfqM+l0ewpP9EfwARlt9pPFAeWYhHm8X9w==",
"requires": {
"@types/unist": "^2.0.0",
"bail": "^2.0.0",
@@ -21783,9 +21765,9 @@
}
},
"vfile": {
"version": "5.2.0",
"resolved": "https://registry.npmjs.org/vfile/-/vfile-5.2.0.tgz",
"integrity": "sha512-ftCpb6pU8Jrzcqku8zE6N3Gi4/RkDhRwEXSWudzZzA2eEOn/cBpsfk9aulCUR+j1raRSAykYQap9u6j6rhUaCA==",
"version": "5.2.1",
"resolved": "https://registry.npmjs.org/vfile/-/vfile-5.2.1.tgz",
"integrity": "sha512-vXW5XKbELM6mLj88kmkJ+gjFGZ/2gTmpdqPDjs3y+qbvI5i7md7rba/+pbYEawa7t22W7ynywPV6lUUAS1WiYg==",
"requires": {
"@types/unist": "^2.0.0",
"is-buffer": "^2.0.0",
@@ -21794,18 +21776,18 @@
}
},
"vfile-message": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/vfile-message/-/vfile-message-3.0.2.tgz",
"integrity": "sha512-UUjZYIOg9lDRwwiBAuezLIsu9KlXntdxwG+nXnjuQAHvBpcX3x0eN8h+I7TkY5nkCXj+cWVp4ZqebtGBvok8ww==",
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/vfile-message/-/vfile-message-3.1.0.tgz",
"integrity": "sha512-4QJbBk+DkPEhBXq3f260xSaWtjE4gPKOfulzfMFF8ZNwaPZieWsg3iVlcmF04+eebzpcpeXOOFMfrYzJHVYg+g==",
"requires": {
"@types/unist": "^2.0.0",
"unist-util-stringify-position": "^3.0.0"
}
},
"web-namespaces": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/web-namespaces/-/web-namespaces-2.0.0.tgz",
"integrity": "sha512-dE7ELZRVWh0ceQsRgkjLgsAvwTuv3kcjSY/hLjqL0llleUlQBDjE9JkB9FCBY5F2mnFEwiyJoowl8+NVGHe8dw=="
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/web-namespaces/-/web-namespaces-2.0.1.tgz",
"integrity": "sha512-bKr1DkiNa2krS7qxNtdrtHAmzuYGFQLiQ13TsorsdT6ULTkPLKuu5+GsFpDlg6JFjUTwX2DyhMPG2be8uPrqsQ=="
}
}
},
@@ -23049,11 +23031,6 @@
"requires": {
"follow-redirects": "^1.14.0"
}
},
"follow-redirects": {
"version": "1.14.4",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.4.tgz",
"integrity": "sha512-zwGkiSXC1MUJG/qmeIFH2HBJx9u0V46QGUe3YR1fXG8bXQxq7fLj0RjLZQ5nubr9qNJUZrH+xUcwXEoXNpfS+g=="
}
}
},
@@ -23073,9 +23050,9 @@
}
},
"slugify": {
"version": "1.6.3",
"resolved": "https://registry.npmjs.org/slugify/-/slugify-1.6.3.tgz",
"integrity": "sha512-1MPyqnIhgiq+/0iDJyqSJHENdnH5MMIlgJIBxmkRMzTNKlS/QsN5dXsB+MdDq4E6w0g9jFA4XOTRkVDjDae/2w=="
"version": "1.6.5",
"resolved": "https://registry.npmjs.org/slugify/-/slugify-1.6.5.tgz",
"integrity": "sha512-8mo9bslnBO3tr5PEVFzMPIWwWnipGS0xVbYf65zxDqfNwmzYn1LpiKNrR6DlClusuvo+hDHd1zKpmfAe83NQSQ=="
},
"smoothscroll-polyfill": {
"version": "0.4.4",
@@ -24025,31 +24002,32 @@
}
},
"swagger-client": {
"version": "3.17.0",
"resolved": "https://registry.npmjs.org/swagger-client/-/swagger-client-3.17.0.tgz",
"integrity": "sha512-d8DOEME49wTXm+uT+lBAjJ5D6IDjEHdbkqa7MbcslR2c+oHIhi13ObwleVWGfr89MPkWgBl6RBq9VUHmrBJRbg==",
"version": "3.18.4",
"resolved": "https://registry.npmjs.org/swagger-client/-/swagger-client-3.18.4.tgz",
"integrity": "sha512-Wj26oEctONq/u0uM+eSj18675YM5e2vFnx7Kr4neLeXEHKUsfceVQ/OdtrBXdrT3VbtdBbZfMTfl1JOBpix2MA==",
"requires": {
"@babel/runtime-corejs3": "^7.11.2",
"btoa": "^1.2.1",
"cookie": "~0.4.1",
"cross-fetch": "^3.1.4",
"deep-extend": "~0.6.0",
"cross-fetch": "^3.1.5",
"deepmerge": "~4.2.2",
"fast-json-patch": "^3.0.0-1",
"form-data-encoder": "^1.4.3",
"formdata-node": "^4.0.0",
"is-plain-object": "^5.0.0",
"js-yaml": "^4.1.0",
"lodash": "^4.17.21",
"qs": "^6.9.4",
"qs": "^6.10.2",
"traverse": "~0.6.6",
"url": "~0.11.0"
},
"dependencies": {
"@babel/runtime-corejs3": {
"version": "7.15.4",
"resolved": "https://registry.npmjs.org/@babel/runtime-corejs3/-/runtime-corejs3-7.15.4.tgz",
"integrity": "sha512-lWcAqKeB624/twtTc3w6w/2o9RqJPaNBhPGK6DKLSiwuVWC7WFkypWyNg+CpZoyJH0jVzv1uMtXZ/5/lQOLtCg==",
"version": "7.17.0",
"resolved": "https://registry.npmjs.org/@babel/runtime-corejs3/-/runtime-corejs3-7.17.0.tgz",
"integrity": "sha512-qeydncU80ravKzovVncW3EYaC1ji3GpntdPgNcJy9g7hHSY6KX+ne1cbV3ov7Zzm4F1z0+QreZPCuw1ynkmYNg==",
"requires": {
"core-js-pure": "^3.16.0",
"core-js-pure": "^3.20.2",
"regenerator-runtime": "^0.13.4"
}
},
@@ -24059,23 +24037,28 @@
"integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q=="
},
"cookie": {
"version": "0.4.1",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.4.1.tgz",
"integrity": "sha512-ZwrFkGJxUR3EIoXtO+yVE69Eb7KlixbaeAWfBQB9vVsNn/o+Yw69gBWSSDK825hQNdN+wF8zELf3dFNl/kxkUA=="
"version": "0.4.2",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.4.2.tgz",
"integrity": "sha512-aSWTXFzaKWkvHO1Ny/s+ePFpvKsPnjc551iI41v3ny/ow6tBG5Vd+FuqGNhh1LxOmVzOlGUriIlOaokOvhaStA=="
},
"core-js-pure": {
"version": "3.18.3",
"resolved": "https://registry.npmjs.org/core-js-pure/-/core-js-pure-3.18.3.tgz",
"integrity": "sha512-qfskyO/KjtbYn09bn1IPkuhHl5PlJ6IzJ9s9sraJ1EqcuGyLGKzhSM1cY0zgyL9hx42eulQLZ6WaeK5ycJCkqw=="
"version": "3.21.0",
"resolved": "https://registry.npmjs.org/core-js-pure/-/core-js-pure-3.21.0.tgz",
"integrity": "sha512-VaJUunCZLnxuDbo1rNOzwbet9E1K9joiXS5+DQMPtgxd24wfsZbJZMMfQLGYMlCUvSxLfsRUUhoOR2x28mFfeg=="
},
"cross-fetch": {
"version": "3.1.4",
"resolved": "https://registry.npmjs.org/cross-fetch/-/cross-fetch-3.1.4.tgz",
"integrity": "sha512-1eAtFWdIubi6T4XPy6ei9iUFoKpUkIF971QLN8lIvvvwueI65+Nw5haMNKUwfJxabqlIIDODJKGrQ66gxC0PbQ==",
"version": "3.1.5",
"resolved": "https://registry.npmjs.org/cross-fetch/-/cross-fetch-3.1.5.tgz",
"integrity": "sha512-lvb1SBsI0Z7GDwmuid+mU3kWVBwTVUbe7S0H52yaaAdQOXq2YktTCZdlAcNKFzE6QtRz0snpw9bNiPeOIkkQvw==",
"requires": {
"node-fetch": "2.6.1"
"node-fetch": "2.6.7"
}
},
"is-plain-object": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/is-plain-object/-/is-plain-object-5.0.0.tgz",
"integrity": "sha512-VRSzKkbMm5jMDoKLbltAkFQ5Qr7VDiTFGXxYFXXowVj387GeGNOCsOH6Msy00SGZ3Fp84b1Naa1psqgcCIEP5Q=="
},
"js-yaml": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.0.tgz",
@@ -24084,15 +24067,23 @@
"argparse": "^2.0.1"
}
},
"node-fetch": {
"version": "2.6.7",
"resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.7.tgz",
"integrity": "sha512-ZjMPFEfVx5j+y2yF35Kzx5sF7kDzxuDj6ziH4FFbOp87zKDZNx8yExJIb05OGF4Nlt9IHFIMBkRl41VdvcNdbQ==",
"requires": {
"whatwg-url": "^5.0.0"
}
},
"object-inspect": {
"version": "1.11.0",
"resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.11.0.tgz",
"integrity": "sha512-jp7ikS6Sd3GxQfZJPyH3cjcbJF6GZPClgdV+EFygjFLQ5FmW/dRUnTd9PQ9k0JhoNDabWFbpF1yCdSWCC6gexg=="
"version": "1.12.0",
"resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.12.0.tgz",
"integrity": "sha512-Ho2z80bVIvJloH+YzRmpZVQe87+qASmBUKZDWgx9cu+KDrX2ZDH/3tMy+gXbZETVGs2M8YdxObOh7XAtim9Y0g=="
},
"qs": {
"version": "6.10.1",
"resolved": "https://registry.npmjs.org/qs/-/qs-6.10.1.tgz",
"integrity": "sha512-M528Hph6wsSVOBiYUnGf+K/7w0hNshs/duGsNXPUCLH5XAqjEtiPGwNONLV0tBH8NoGb0mvD5JubnUTrujKDTg==",
"version": "6.10.3",
"resolved": "https://registry.npmjs.org/qs/-/qs-6.10.3.tgz",
"integrity": "sha512-wr7M2E0OFRfIfJZjKGieI8lBKb7fRCH4Fv5KNPEs7gJ8jadvotdsS08PzOKR7opXhZ/Xkjtt3WF9g38drmyRqQ==",
"requires": {
"side-channel": "^1.0.4"
}
@@ -24469,6 +24460,11 @@
"punycode": "^2.1.1"
}
},
"tr46": {
"version": "0.0.3",
"resolved": "https://registry.npmjs.org/tr46/-/tr46-0.0.3.tgz",
"integrity": "sha1-gYT9NH2snNwYWZLzpmIuFLnZq2o="
},
"traverse": {
"version": "0.6.6",
"resolved": "https://registry.npmjs.org/traverse/-/traverse-0.6.6.tgz",
@@ -25372,14 +25368,14 @@
"integrity": "sha512-2ham8XPWTONajOR0ohOKOHXkm3+gaBmGut3SRuu75xLd/RRaY6vqgh8NBYYk7+RW3u5AtzPQZG8F10LHkl0lAQ=="
},
"vscode-oniguruma": {
"version": "1.5.1",
"resolved": "https://registry.npmjs.org/vscode-oniguruma/-/vscode-oniguruma-1.5.1.tgz",
"integrity": "sha512-JrBZH8DCC262TEYcYdeyZusiETu0Vli0xFgdRwNJjDcObcRjbmJP+IFcA3ScBwIXwgFHYKbAgfxtM/Cl+3Spjw=="
"version": "1.6.1",
"resolved": "https://registry.npmjs.org/vscode-oniguruma/-/vscode-oniguruma-1.6.1.tgz",
"integrity": "sha512-vc4WhSIaVpgJ0jJIejjYxPvURJavX6QG41vu0mGhqywMkQqulezEqEQ3cO3gc8GvcOpX6ycmKGqRoROEMBNXTQ=="
},
"vscode-textmate": {
"version": "5.4.0",
"resolved": "https://registry.npmjs.org/vscode-textmate/-/vscode-textmate-5.4.0.tgz",
"integrity": "sha512-c0Q4zYZkcLizeYJ3hNyaVUM2AA8KDhNCA3JvXY8CeZSJuBdAy3bAvSbv46RClC4P3dSO9BdwhnKEx2zOo6vP/w=="
"version": "5.5.0",
"resolved": "https://registry.npmjs.org/vscode-textmate/-/vscode-textmate-5.5.0.tgz",
"integrity": "sha512-jToQkPGMNKn0eyKyitYeINJF0NoD240aYyKPIWJv5W2jfPt++jIRg0OSergubtGhbw6SoefkvBYEpX7TsfoSUQ=="
},
"warning": {
"version": "4.0.3",
@@ -25630,6 +25626,11 @@
"resolved": "https://registry.npmjs.org/web-streams-polyfill/-/web-streams-polyfill-4.0.0-beta.1.tgz",
"integrity": "sha512-3ux37gEX670UUphBF9AMCq8XM6iQ8Ac6A+DSRRjDoRBm1ufCkaCDdNVbaqq60PsEkdNlLKrGtv/YBP4EJXqNtQ=="
},
"webidl-conversions": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-3.0.1.tgz",
"integrity": "sha1-JFNCdeKnvGvnvIZhHMFq4KVlSHE="
},
"webpack": {
"version": "4.46.0",
"resolved": "https://registry.npmjs.org/webpack/-/webpack-4.46.0.tgz",
@@ -26410,6 +26411,15 @@
"resolved": "https://registry.npmjs.org/whatwg-fetch/-/whatwg-fetch-2.0.4.tgz",
"integrity": "sha512-dcQ1GWpOD/eEQ97k66aiEVpNnapVj90/+R+SXTPYGHpYBBypfKJEQjLrvMZ7YXbKm21gXd4NcuxUTjiv1YtLng=="
},
"whatwg-url": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/whatwg-url/-/whatwg-url-5.0.0.tgz",
"integrity": "sha1-lmRU6HZUYuN2RNNib2dCzotwll0=",
"requires": {
"tr46": "~0.0.3",
"webidl-conversions": "^3.0.0"
}
},
"which": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/which/-/which-1.3.1.tgz",
@@ -26777,11 +26787,6 @@
"resolved": "https://registry.npmjs.org/xmlbuilder/-/xmlbuilder-11.0.1.tgz",
"integrity": "sha512-fDlsI/kFEx7gLvbecc0/ohLG50fugQp8ryHzMTuW9vSa1GJ0XYWKnhsUx7oie3G98+r56aTQIUB4kht42R3JvA=="
},
"xmldom": {
"version": "0.6.0",
"resolved": "https://registry.npmjs.org/xmldom/-/xmldom-0.6.0.tgz",
"integrity": "sha512-iAcin401y58LckRZ0TkI4k0VSM1Qg0KGSc3i8rU+xrxe19A/BN1zHyVSJY7uoutVlaTSzYyk/v5AmkewAP7jtg=="
},
"xmlhttprequest-ssl": {
"version": "1.6.2",
"resolved": "https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.6.2.tgz",

View File

@@ -17,7 +17,7 @@
},
"dependencies": {
"@oceanprotocol/art": "^3.2.0",
"axios": "^0.24.0",
"axios": "^0.25.0",
"classnames": "^2.3.1",
"gatsby": "^2.32.13",
"gatsby-image": "^3.11.0",
@@ -38,7 +38,7 @@
"gatsby-remark-images": "^3.11.1",
"gatsby-remark-responsive-iframe": "^2.11.0",
"gatsby-remark-smartypants": "^2.10.0",
"gatsby-remark-vscode": "^3.3.0",
"gatsby-remark-vscode": "^3.3.1",
"gatsby-source-filesystem": "^2.11.1",
"gatsby-source-git": "^1.1.0",
"gatsby-source-graphql": "^2.14.0",
@@ -55,28 +55,28 @@
"react-helmet": "^6.1.0",
"react-json-view": "^1.21.3",
"react-scrollspy": "^3.4.3",
"rehype-react": "^7.0.3",
"rehype-react": "^7.0.4",
"remark": "^13.0.0",
"remark-github-plugin": "^1.4.0",
"remark-react": "^8.0.0",
"shortid": "^2.2.16",
"slugify": "^1.6.3",
"slugify": "^1.6.5",
"smoothscroll-polyfill": "^0.4.4",
"swagger-client": "^3.17.0"
"swagger-client": "^3.18.4"
},
"devDependencies": {
"@svgr/webpack": "^5.5.0",
"dotenv": "^10.0.0",
"dotenv": "^16.0.0",
"eslint": "^7.32.0",
"eslint-config-oceanprotocol": "^1.5.0",
"eslint-config-prettier": "^8.3.0",
"eslint-plugin-prettier": "^4.0.0",
"git-format-staged": "^2.1.2",
"git-format-staged": "^2.1.3",
"husky": "^7.0.4",
"markdownlint-cli": "^0.30.0",
"node-sass": "^5.0.0",
"npm-run-all": "^4.1.5",
"prettier": "^2.5.0"
"prettier": "^2.5.1"
},
"repository": {
"type": "git",

View File

@@ -118,7 +118,7 @@ export const pageQuery = graphql`
query DocBySlug($slug: String!) {
markdownRemark(fields: { slug: { eq: $slug } }) {
id
- tableOfContents(maxDepth: 2)
+ tableOfContents(maxDepth: 3)
html
htmlAst
frontmatter {