mirror of https://github.com/oceanprotocol/docs.git synced 2024-11-26 19:49:26 +01:00

Merge branch 'main' into issue-695-add-sample-responses

This commit is contained in:
Matthias Kretschmann 2021-09-01 20:17:01 +02:00
commit 168fd84be2
Signed by: m
GPG Key ID: 606EEEF3C479A91F
43 changed files with 1503 additions and 226 deletions

View File

@ -116,3 +116,4 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

View File

@ -67,6 +67,36 @@ module.exports = {
{
from: '/concepts/connect-to-networks/',
to: '/concepts/networks/'
},
{
from: '/concepts/oeps-did/',
to: '/concepts/did-ddo/'
},
{
from: '/concepts/oeps-asset-ddo/',
to: '/concepts/ddo-metadata/'
},
{
from: '/tutorials/azure-for-brizo/',
to: '/tutorials/azure-for-provider/'
},
{
from: '/tutorials/amazon-s3-for-brizo/',
to: '/tutorials/amazon-s3-for-provider/'
},
{
from: '/tutorials/on-premise-for-brizo/',
to: '/tutorials/on-premise-for-provider/'
}
],
swaggerComponents: [
{
name: 'aquarius',
url: 'https://aquarius.oceanprotocol.com/spec'
},
{
name: 'provider',
url: 'https://provider.mainnet.oceanprotocol.com/spec'
}
]
}

View File

@ -56,7 +56,8 @@ Complementary to Ocean Market, Ocean has reference code to ease building **third
## Metadata Tools
Metadata (name of dataset, date created etc.) is used by marketplaces for data asset discovery. Each data asset can have a [decentralized identifier](https://w3c-ccg.github.io/did-spec/) (DID) that resolves to a DID document (DDO) for associated metadata. The DDO is essentially [JSON](https://www.json.org/) filling in metadata fields. [OEP7](https://github.com/oceanprotocol/OEPs/tree/master/7) formalizes Ocean DID usage.
Metadata (name of dataset, date created etc.) is used by marketplaces for data asset discovery. Each data asset can have a [decentralized identifier](https://w3c-ccg.github.io/did-spec/) (DID) that resolves to a DID document (DDO) for associated metadata. The DDO is essentially [JSON](https://www.json.org/) filling in metadata fields. For more details on working with OCEAN DIDs check out the [DID concept documentation](https://docs.oceanprotocol.com/concepts/did-ddo/).
The [DDO Metadata documentation](https://docs.oceanprotocol.com/concepts/ddo-metadata/) goes into more depth regarding metadata structure.
[OEP8](https://github.com/oceanprotocol/OEPs/tree/master/8) specifies Ocean metadata schema, including fields that must be filled. It's based on the public [DataSet schema from schema.org](https://schema.org/Dataset).

View File

@ -15,34 +15,42 @@ The most basic scenario for a Publisher is to provide access to the datasets the
[This page](https://oceanprotocol.com/technology/compute-to-data) elaborates on the benefits.
## Data Sets & Algorithms
## Datasets & Algorithms
With Compute-to-Data, data sets are not allowed to leave the premises of the data holder, only algorithms can be permitted to run on them under certain conditions within an isolated and secure environment. Algorithms are an asset type just like data sets and they too can have a pool or a fixed price to determine their price whenever they are used.
With Compute-to-Data, datasets are not allowed to leave the premises of the data holder; only algorithms can be permitted to run on them, under certain conditions, within an isolated and secure environment. Algorithms are an asset type just like datasets. They too can have a pool or a fixed price to determine their price whenever they are used.
Algorithms can be either public or private by setting either an `access` or a `compute` service in their DDO. An algorithm set to public can be downloaded for its set price, while an algorithm set to private is only available as part of a compute job without any way to download it. If an algorithm is set to private, then the dataset must be published on the same Ocean Provider as the data set it should run on.
For each data set, publishers can choose to allow various permission levels for algorithms to run:
Algorithms can be public or private by setting `"attributes.main.type"` value as follows:
- `"access"` - public. The algorithm can be downloaded, given appropriate datatoken.
- `"compute"` - private. The algorithm is only available to use as part of a compute job, without any way to download it. The algorithm must be published on the same Ocean Provider as the dataset it's targeted to run on.
For each dataset, publishers can choose to allow various permission levels for algorithms to run:
- allow selected algorithms, referenced by their DID
- allow all algorithms published within a network or marketplace
- allow raw algorithms, for advanced use cases circumventing algorithm as an asset type, but most prone to data escape
All implementations should set permissions to private by default: upon publishing a compute data set, no algorithms should be allowed to run on it. This is to prevent data escape by a rogue algorithm being written in a way to extract all data from a data set.
All implementations should set permissions to private by default: upon publishing a compute dataset, no algorithms should be allowed to run on it. This is to prevent data escape by a rogue algorithm being written in a way to extract all data from a dataset.
## Architecture Overview
The architecture follows [OEP-12: Compute-to-Data](https://github.com/oceanprotocol/OEPs/tree/master/12) as a spec.
Here's the sequence diagram for starting a new compute job.
![Sequence Diagram for computing services](images/Starting%20New%20Compute%20Job.png)
In the above diagram you can see the initial integration supported. It involves the following components/actors:
The Consumer calls the Provider with `start(did, algorithm, additionalDIDs)`. It returns job id `XXXX`. The Provider oversees the rest of the work. At any point, the Consumer can query the Provider for the job status via `getJobDetails(XXXX)`.
Here's how the Provider works. First, it ensures that the Consumer has sent the appropriate datatokens to get access. Then, it asks the Operator-Service (a microservice) to start the job, which passes the request on to the Operator-Engine (the actual compute system). The Operator-Engine runs Kubernetes compute jobs as needed, and reports back to the Operator-Service when the job has finished.
Here's the actors/components:
- Consumers - The end users who need to use some computing services offered by the same Publisher as the data Publisher.
- Operator-Service - Micro-service that is handling the compute requests.
- Operator-Engine - The computing systems where the compute will be executed.
- Kubernetes - a K8s cluster
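The start/poll interaction described above can be sketched as a minimal in-memory simulation. The function names `start` and `getJobDetails` come from the text; the job store and status values are illustrative assumptions, not the actual Operator-Service API:

```javascript
// Minimal in-memory simulation of the Consumer/Provider compute flow.
// The job store and status values are illustrative assumptions.
const jobs = new Map();
let nextJobId = 0;

// Consumer calls start(did, algorithm, additionalDIDs); Provider returns a job id.
function start(did, algorithm, additionalDIDs = []) {
  const jobId = `job-${++nextJobId}`;
  jobs.set(jobId, { did, algorithm, additionalDIDs, status: 'running' });
  return jobId;
}

// At any point, the Consumer can poll the Provider for the job status.
function getJobDetails(jobId) {
  return jobs.get(jobId);
}

// The Operator-Engine reports back when the job has finished.
function markFinished(jobId) {
  jobs.get(jobId).status = 'finished';
}

const id = start('did:op:1234', 'did:op:algo');
markFinished(id);
```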
Before the flow can begin, the following pre-conditions must be met:
Before the flow can begin, these pre-conditions must be met:
- The Asset DDO has a `compute` service.
- The Asset DDO compute service must permit algorithms to run on it.
@ -109,3 +117,4 @@ The Operator Engine is in charge of retrieving all the workflows registered in a
- [Tutorial: Writing Algorithms](/tutorials/compute-to-data-algorithms/)
- [Tutorial: Set Up a Compute-to-Data Environment](/tutorials/compute-to-data/)
- [Compute-to-Data in Ocean Market](https://blog.oceanprotocol.com)
- [(Old) Compute-to-Data specs](https://github.com/oceanprotocol-archive/OEPs/tree/master/12) (OEP12)

View File

@ -0,0 +1,349 @@
---
title: DDO Metadata
description: Specification of the DDO subset dedicated to asset metadata
slug: /concepts/ddo-metadata/
section: concepts
---
## Overview
This page defines the schema for asset _metadata_. Metadata is the subset of an Ocean DDO that holds information about the asset.
The schema is based on public schema.org [DataSet schema](https://schema.org/Dataset).
Standardizing labels is key to effective searching, sorting and filtering (discovery).
This page specifies metadata attributes that _must_ be included, and that _may_ be included. These attributes are organized hierarchically, from top-layer attributes like `"main"` to sub-level attributes like `"main.type"`. This page also provides DDO metadata examples.
## Rules for Metadata Storage and Control in Ocean
The publisher publishes an asset DDO (including metadata) onto the chain.
The publisher may be the asset owner, or a marketplace acting on behalf of the owner.
Most metadata fields may be modified after creation. The blockchain records the provenance of changes.
DDOs (including metadata) are found in two places:
- _Remote_ - main storage, on-chain. File URLs are always encrypted. One may actually encrypt all metadata, at a severe cost to discoverability.
- _Local_ - local cache. All fields are in plaintext.
Ocean Aquarius helps manage metadata. It can be used to write DDOs to the chain, read from the chain, and has a local cache of the DDO in plaintext with fast search.
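For illustration, resolving a DDO through an Aquarius cache boils down to one GET request per DID. The URL path below is taken from the `serviceEndpoint` examples later on this page; the base URL is an assumption that depends on your deployment:

```javascript
// Build the Aquarius DDO resolution URL for a given DID.
// The path comes from this page's serviceEndpoint examples;
// the base URL is deployment-specific.
function ddoUrl(baseUrl, did) {
  return `${baseUrl}/api/v1/aquarius/assets/ddo/${did}`;
}

const url = ddoUrl(
  'https://aquarius.oceanprotocol.com',
  'did:op:0ebed8226ada17fde24b6bf2b95d27f8f05fcce09139ff5cec31f6d81a7cd2ea'
);
```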
## Fields for Metadata
An asset represents a resource in Ocean, e.g. a dataset or an algorithm.
A `metadata` object has the following attributes, all of which are objects. Some are only required for local or remote, and are specified as such.
| Attribute | Required | Description |
| --------------------------- | -------- | ---------------------------------------------------------- |
| **`main`** | **Yes** | Main attributes |
| **`encryptedFiles`** | Remote | Encrypted string of the `attributes.main.files` object. |
| **`encryptedServices`** | Remote | Encrypted string of the `attributes.main.services` object. |
| **`status`** | No | Status attributes |
| **`additionalInformation`** | No | Optional attributes |
The `main` and `additionalInformation` attributes are independent of the asset type.
## Fields for `attributes.main`
The `main` object has the following attributes.
| Attribute | Type | Required | Description |
| ------------------- | --------------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`name`** | Text |**Yes** | Descriptive name or title of the asset. |
| **`type`** | Text |**Yes** | Asset type. Includes `"dataset"` (e.g. csv file), `"algorithm"` (e.g. Python script). Each type needs a different subset of metadata attributes. |
| **`author`** | Text |**Yes** | Name of the entity generating this data (e.g. Tfl, Disney Corp, etc.). |
| **`license`** | Text |**Yes** | Short name referencing the license of the asset (e.g. Public Domain, CC-0, CC-BY, No License Specified, etc.). If it's not specified, the following value will be added: "No License Specified". |
| **`files`** | Array of files object |**Yes** | Array of `File` objects including the encrypted file urls. |
| **`dateCreated`** | DateTime |**Yes** | The date on which the asset was created by the originator. ISO 8601 format, Coordinated Universal Time, e.g. `2019-01-31T08:38:32Z`. |
| **`datePublished`** | DateTime | Remote | The date on which the asset DDO is registered into the metadata store (Aquarius). |
## Fields for `attributes.main.files`
The `files` object has a list of `file` objects.
Each `file` object has the following attributes, with the details necessary to consume and validate the data.
| Attribute | Required | Description |
| -------------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`index`** |**Yes** | Index number starting from 0 of the file. |
| **`contentType`** |**Yes** | File format. |
| **`url`** | Local | Content URL. Omitted from the remote metadata. Supports `http(s)://` and `ipfs://` URLs. |
| **`name`** | No | File name. |
| **`checksum`**       | No       | Checksum of the file using your preferred format (e.g. MD5). Format specified in `checksumType`. If not provided, it cannot be verified whether the file was modified after registering. |
| **`checksumType`**   | No       | Format of the provided checksum. Can vary according to server (e.g. Amazon vs. Azure). |
| **`contentLength`** | No | Size of the file in bytes. |
| **`encoding`** | No | File encoding (e.g. UTF-8). |
| **`compression`** | No | File compression (e.g. no, gzip, bzip2, etc). |
| **`encrypted`**      | No       | Boolean. Is the file encrypted? If not set, the file is assumed to be unencrypted. |
| **`encryptionMode`** | No       | Encryption mode used. Only valid if `encrypted=true`. |
| **`resourceId`** | No | Remote identifier of the file in the external provider. It is typically the remote id in the cloud provider. |
| **`attributes`** | No | Key-Value hash map with additional attributes describing the asset file. It could include details like the Amazon S3 bucket, region, etc. |
## Fields for `attributes.status`
A `status` object has the following attributes.
| Attribute | Type | Required | Description |
| --------------------- | ------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`isListed`**        | Boolean | No       | Used to flag unsuitable content. True by default. If it's false, the content must not be returned. |
| **`isRetired`** | Boolean | No | Flag retired content. False by default. If it's true, the content may either not be returned, or returned with a note about retirement. |
| **`isOrderDisabled`** | Boolean | No | For temporarily disabling ordering assets, e.g. when file host is in maintenance. False by default. If it's true, no ordering of assets for download or compute should be allowed. |
## Fields for `attributes.additionalInformation`
All the additional information will be stored as part of the `additionalInformation` section.
| Attribute             | Type          | Required | Description |
| --------------------- | ------------- | -------- | ----------- |
| **`tags`** | Array of Text | No | Array of keywords or tags used to describe this content. Empty by default. |
| **`description`** | Text | No | Details of what the resource is. For a dataset, this attribute explains what the data represents and what it can be used for. |
| **`copyrightHolder`** | Text | No | The party holding the legal copyright. Empty by default. |
| **`workExample`** | Text | No | Example of the concept of this asset. This example is part of the metadata, not an external link. |
| **`links`** | Array of Link | No | Mapping of links for data samples, or links to find out more information. Links may be to either a URL or another Asset. We expect marketplaces to converge on agreements of typical formats for linked data: The Ocean Protocol itself does not mandate any specific formats as these requirements are likely to be domain-specific. The links array can be an empty array, but if there is a link object in it, then an "url" is required in that link object. |
| **`inLanguage`** | Text | No | The language of the content. Please use one of the language codes from the [IETF BCP 47 standard](https://tools.ietf.org/html/bcp47). |
| **`categories`** | Array of Text | No | Optional array of categories associated to the asset. Note: recommended to use `"tags"` instead of this. |
## Fields - Other Suggestions
Here are example attributes to help an asset's discoverability.
| Attribute | Description |
| ---------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`updateFrequency`**  | An indication of update latency, i.e. how often updates are expected (seldom, annually, quarterly, etc.), or whether the resource is static and never expected to be updated. |
| **`structuredMarkup`** | A link to machine-readable structured markup (such as ttl/json-ld/rdf) describing the dataset. |
## DDO Metadata Example - Local
This is what the DDO metadata looks like. All fields are in plaintext. This is before it's stored on-chain or when it's retrieved and decrypted into a local cache.
```json
{
"main": {
"name": "Madrid Weather forecast",
"dateCreated": "2019-05-16T12:36:14.535Z",
"author": "Norwegian Meteorological Institute",
"type": "dataset",
"license": "Public Domain",
"price": "123000000000000000000",
"files": [
{
"index": 0,
"url": "https://example-url.net/weather/forecast/madrid/350750305731.xml",
"contentLength": "0",
"contentType": "text/xml",
"compression": "none"
}
]
},
"additionalInformation": {
"description": "Weather forecast of Europe/Madrid in XML format",
"copyrightHolder": "Norwegian Meteorological Institute",
"categories": ["Other"],
"links": [],
"tags": [],
"updateFrequency": null,
"structuredMarkup": []
},
"status": {
"isListed": true,
"isRetired": false,
"isOrderDisabled": false
}
}
```
## DDO Metadata Example - Remote
The previous example was for a local cache, with all fields in plaintext.
Here's the same example, for remote on-chain storage. That is, it's how metadata looks as a response to querying Aquarius (remote metadata).
Here's how the remote metadata differs from the local version:
- `url` is removed from all objects in the `files` array
- `encryptedFiles` is added.
```json
{
"service": [
{
"index": 0,
"serviceEndpoint": "http://aquarius:5000/api/v1/aquarius/assets/ddo/{did}",
"type": "metadata",
"attributes": {
"main": {
"type": "dataset",
"name": "Madrid Weather forecast",
"dateCreated": "2019-05-16T12:36:14.535Z",
"author": "Norwegian Meteorological Institute",
"license": "Public Domain",
"files": [
{
"contentLength": "0",
"contentType": "text/xml",
"compression": "none",
"index": 0
}
],
"datePublished": "2019-05-16T12:41:01Z"
},
"encryptedFiles": "0x7a0d1c66ae861…df43aa9",
"additionalInformation": {
"description": "Weather forecast of Europe/Madrid in XML format",
"copyrightHolder": "Norwegian Meteorological Institute",
"categories": ["Other"],
"links": [],
"tags": [],
"updateFrequency": null,
"structuredMarkup": []
},
"status": {
"isListed": true,
"isRetired": false,
"isOrderDisabled": false
}
}
}
]
}
```
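The local-to-remote transformation described above can be sketched as follows. The `encryptFiles` callback is a stand-in for the Provider's actual encryption, which is not specified here:

```javascript
// Sketch: derive the remote form of metadata from the local form.
// Per the text: drop `url` from each files entry and add `encryptedFiles`.
// encryptFiles is a placeholder for the Provider's real encryption.
function toRemote(localMain, encryptFiles) {
  const files = localMain.files.map(({ url, ...rest }) => rest); // remove url
  return {
    main: { ...localMain, files },
    encryptedFiles: encryptFiles(JSON.stringify(localMain.files))
  };
}

const local = {
  name: 'Madrid Weather forecast',
  files: [
    { index: 0, url: 'https://example-url.net/data.xml', contentType: 'text/xml' }
  ]
};

// Stub encryption for illustration only.
const remote = toRemote(local, (plaintext) =>
  '0x' + Buffer.from(plaintext).toString('hex').slice(0, 16)
);
```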
## Fields when `attributes.main.type = algorithm`
An asset of type `algorithm` has the following additional attributes under `main.algorithm`:
| Attribute | Type | Required | Description |
| --------------- | -------- | -------- | --------------------------------------------- |
| **`container`** | `Object` |**Yes** | Object describing the Docker container image. |
| **`language`** | `string` | No | Language used to implement the software |
| **`format`** | `string` | No | Packaging format of the software. |
| **`version`** | `string` | No | Version of the software. |
The `container` object has the following attributes:
| Attribute | Type | Required | Description |
| ---------------- | -------- | -------- | ----------------------------------------------------------------- |
| **`entrypoint`** | `string` |**Yes** | The command to execute, or script to run inside the Docker image. |
| **`image`** | `string` |**Yes** | Name of the Docker image. |
| **`tag`** | `string` |**Yes** | Tag of the Docker image. |
| **`checksum`** | `string` |**Yes** | Checksum of the Docker image. |
```json
{
"index": 0,
"serviceEndpoint": "http://localhost:5000/api/v1/aquarius/assets/ddo/{did}",
"type": "metadata",
"attributes": {
"main": {
"author": "John Doe",
"dateCreated": "2019-02-08T08:13:49Z",
"license": "CC-BY",
"name": "My super algorithm",
"type": "algorithm",
"algorithm": {
"language": "scala",
"format": "docker-image",
"version": "0.1",
"container": {
"entrypoint": "node $ALGO",
"image": "node",
"tag": "10",
"checksum": "efb2c764274b745f5fc37f97c6b0e761"
}
},
"files": [
{
"name": "build_model",
"url": "https://raw.githubusercontent.com/oceanprotocol/test-algorithm/master/javascript/algo.js",
"index": 0,
"checksum": "efb2c764274b745f5fc37f97c6b0e761",
"contentLength": "4535431",
"contentType": "text/plain",
"encoding": "UTF-8",
"compression": "zip"
}
]
},
"additionalInformation": {
"description": "Workflow to aggregate weather information",
"tags": ["weather", "uk", "2011", "workflow", "aggregation"],
"copyrightHolder": "John Doe"
}
}
}
```
## Fields when `attributes.main.type = compute`
An asset with a service of type `compute` has the following additional attributes under `main.privacy`:
| Attribute | Type | Required | Description |
| --------------------------------- | ------------------ | -------- | ---------------------------------------------------------- |
| **`allowRawAlgorithm`**          | `boolean`          |**Yes**   | If true, a raw (drag & drop) algorithm can be run |
| **`allowNetworkAccess`**         | `boolean`          |**Yes**   | If true, the algorithm job will have network access (still a work in progress) |
| **`publisherTrustedAlgorithms`** | Array of `Objects` |**Yes**   | If empty, then any published algorithm is allowed (see below) |
`publisherTrustedAlgorithms` is an array of objects with the following structure:
| Attribute | Type | Required | Description |
| ------------------------------ | -------- | -------- | ------------------------------------------------------------------ |
| **`did`**                      | `string` |**Yes**   | The DID of the algorithm trusted by the publisher. |
| **`filesChecksum`**            | `string` |**Yes**   | Hash of the algorithm's `encryptedFiles` plus its `files` section (as a string). |
| **`containerSectionChecksum`** | `string` |**Yes**   | Hash of the algorithm's container section (as a string). |
To produce `filesChecksum`:
```javascript
sha256(
algorithm_ddo.service['metadata'].attributes.encryptedFiles +
JSON.stringify(algorithm_ddo.service['metadata'].attributes.main.files)
)
```
To produce `containerSectionChecksum`:
```javascript
sha256(
JSON.stringify(
algorithm_ddo.service['metadata'].attributes.main.algorithm.container
)
)
```
### Example of a compute service
```json
{
"type": "compute",
"index": 1,
"serviceEndpoint": "https://provider.oceanprotocol.com",
"attributes": {
"main": {
"name": "dataAssetComputingService",
"creator": "0xA32C84D2B44C041F3a56afC07a33f8AC5BF1A071",
"datePublished": "2021-02-17T06:31:33Z",
"cost": "1",
"timeout": 3600,
"privacy": {
"allowRawAlgorithm": true,
"allowNetworkAccess": false,
"publisherTrustedAlgorithms": [
{
"did": "0xxxxx",
"filesChecksum": "1234",
"containerSectionChecksum": "7676"
},
{
"did": "0xxxxx",
"filesChecksum": "1232334",
"containerSectionChecksum": "98787"
}
]
}
}
}
}
```

content/concepts/did-ddo.md Normal file
View File

@ -0,0 +1,173 @@
---
title: DIDs & DDOs - Asset Identifiers & Objects
description: Specification of Ocean asset identifiers and objects using DIDs & DDOs
slug: /concepts/did-ddo/
section: concepts
---
## Overview
This document describes how Ocean assets follow the DID/DDO spec, such that Ocean assets can inherit DID/DDO benefits and enhance interoperability.
Decentralized identifiers (DIDs) are a new type of identifier that enables verifiable, decentralized digital identity. Each DID is associated with a unique entity. DIDs may represent humans, objects, and more.
A DID Document (DDO) is a JSON blob that holds information about the DID. Given a DID, a _resolver_ will return the DDO of that DID.
If a DID is the index key in a key-value pair, then the DID Document is the value to which the index key points.
The combination of a DID and its associated DID Document forms the root record for a decentralized identifier.
DIDs and DDOs follow [this specification](https://w3c-ccg.github.io/did-spec/) defined by the World Wide Web Consortium (W3C).
## Rules for DIDs & DDOs in Ocean
- An _asset_ in Ocean represents a downloadable file, compute service, or similar. Each asset is a _resource_ under control of a _publisher_. The Ocean network itself does _not_ store the actual resource (e.g. files).
- An asset should have a DID and DDO. The DDO should include metadata about the asset.
- The DDO can only be modified by _owners_ or _delegated users_.
- There _must_ be at least one client library acting as _resolver_, to get a DDO from a DID.
- The DDO is stored on-chain. It's stored in plaintext, with two exceptions: (1) the field for the resource-access URL is encrypted; (2) the whole DDO may be encrypted, if the publisher is willing to lose 100% of discoverability.
- A metadata cache like Aquarius can help in reading and writing DDO data from the chain.
## DID Structure
In Ocean, a DID is a string that looks like:
```text
did:op:0ebed8226ada17fde24b6bf2b95d27f8f05fcce09139ff5cec31f6d81a7cd2ea
```
It follows [the generic DID scheme](https://w3c-ccg.github.io/did-spec/#the-generic-did-scheme).
The part after `did:op:` is the asset's on-chain Ethereum address (minus the "0x"). One can be computed from the other; therefore there is a 1:1 mapping between DID and Ethereum address.
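That 1:1 mapping can be expressed as two tiny conversions. This is a sketch; it assumes the hex casing of the address is preserved as-is:

```javascript
// Derive a DID from an on-chain Ethereum address by stripping the "0x"
// prefix, and recover the address by re-adding it (per the text above).
function didFromAddress(address) {
  return 'did:op:' + address.replace(/^0x/, '');
}

function addressFromDid(did) {
  return '0x' + did.replace(/^did:op:/, '');
}
```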
## DDO Attributes
![DDO Content](images/ddo-content.png)
A DDO has these standard attributes:
- `@context`
- `id`
- `created`
- `updated`
- `publicKey`
- `authentication`
- `proof`
- `verifiableCredential`
In Ocean, the DDO also has:
- `dataToken`
- `service`
- `credentials` - optional flag, which describes the credentials needed to access a dataset (see below)
Asset metadata must be included as one of the objects inside the `"service"` array, with type `"metadata"`.
## DDO Service Types
There are many possible service types for a DDO.
- `metadata` - describing the asset
- `access` - describing how the asset can be downloaded
- `compute` - describing how the asset can be computed upon
Each asset has a `metadata` service and at least one other service.
Each service is distinguished by the `DDO.service.type` attribute.
Each service has an `attributes` section holding the information related to the service. That section _must_ have a `main` sub-section, holding all the mandatory information that a service has to provide.
Apart from the `attributes.main` sub-section, other optional sub-sections like `attributes.extra` can be added. These depend on the service type.
Each service has a `timeout` (in seconds) section describing how long the service can be used after consumption is initiated. A timeout of 0 represents no time limit.
The `cost` attribute is obsolete, as of Ocean V3. As of V3, to consume an asset, one sends exactly 1.0 datatokens of the asset, so a `cost` is not needed.
## DDO Service Example
Here is an example DDO service:
```json
"service": [
{
"index": 0,
"type": "metadata",
"serviceEndpoint": "https://service/api/v1/metadata/assets/ddo/did:op:0ebed8226ada17fde24b6bf2b95d27f8f05fcce09139ff5cec31f6d81a7cd2ea",
"attributes": {
"main": {},
"additionalInformation": {},
"curation": {}
}
},
{
"index": 1,
"type": "access",
"serviceEndpoint": "http://localhost:8030/api/v1/provider/services/consume",
"attributes": {
"main": {
"cost":"10",
"timeout":0
},
"additionalInformation": {}
}
},
{
"index": 2,
"type": "compute",
"serviceEndpoint": "http://localhost:8030/api/v1/provider/services/compute",
"attributes": {
"main": {
"cost":"10",
"timeout":3600
},
"additionalInformation": {}
}
}
]
```
## DDO Credentials for Fine-Grained Permissions
By default, a consumer can access a resource if they have 1.0 datatokens. _Credentials_ allow the publisher to optionally specify finer-grained permissions.
Consider a medical data use case, where only a credentialed EU researcher can legally access a given dataset. Ocean supports this as follows: a consumer can only access the resource if they have 1.0 datatokens _and_ one of the specified `"allow"` credentials.
This is like going to an R-rated movie, where you can only get in if you show both your movie ticket (datatoken) _and_ an ID showing you're old enough (credential).
Only credentials that can be proven are supported. This includes Ethereum public addresses, and (in the future) W3C Verifiable Credentials and more.
Ocean also supports `"deny"` credentials: if a consumer has any of these credentials, they cannot access the resource.
Here's an example object with both `"allow"` and `"deny"` entries.
```json
"credentials": {
  "allow": [
    {
      "type": "address",
      "values": [
        "0x123",
        "0x456"
      ]
    }
  ],
  "deny": [
    {
      "type": "address",
      "values": [
        "0x2222",
        "0x333"
      ]
    }
  ]
}
```
For future usage, we can extend that with different credentials types. Example:
```json
{
"type": "credential3Box",
"values": ["profile1", "profile2"]
}
```
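The allow/deny semantics above can be sketched as a small access check. This is a hypothetical helper, not part of any Ocean library; it encodes the two rules from the text (a deny match blocks access, and a non-empty allow list requires a match):

```javascript
// Hypothetical helper: apply allow/deny rules to an Ethereum address credential.
function mayAccess(credentials, address) {
  const matches = (list) =>
    (list || []).some(
      (entry) => entry.type === 'address' && entry.values.includes(address)
    );
  if (matches(credentials.deny)) return false; // any deny match blocks access
  if ((credentials.allow || []).length > 0) return matches(credentials.allow);
  return true; // no allow list: anyone not denied may access
}

const credentials = {
  allow: [{ type: 'address', values: ['0x123', '0x456'] }],
  deny: [{ type: 'address', values: ['0x2222', '0x333'] }]
};
```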

Binary file not shown.


View File

@ -7,6 +7,9 @@ Ocean Protocol contracts are deployed on multiple public networks. You can alway
In each network, you'll need ETH to pay for gas, and OCEAN for certain Ocean actions. Because the Ethereum mainnet is a network for production settings, ETH and OCEAN tokens have real value there. The ETH and OCEAN tokens in each test network don't have real value and are used for testing purposes only. They can be obtained from _faucets_ that dole out ETH and OCEAN.
The universal Aquarius Endpoint is `https://aquarius.oceanprotocol.com`.
## Ethereum Mainnet
The Ethereum Mainnet is Ocean's production network.
@ -29,7 +32,6 @@ MetaMask and other ERC20 wallets default to Ethereum mainnet, therefore your wal
| Explorer | https://etherscan.io |
| Ocean Market | https://market.oceanprotocol.com |
| Provider | `https://provider.mainnet.oceanprotocol.com` |
| Aquarius | `https://aquarius.mainnet.oceanprotocol.com` |
| Subgraph | `https://subgraph.mainnet.oceanprotocol.com` |
## Polygon Mainnet
@ -63,7 +65,6 @@ If you don't find Polygon as a predefined network in your wallet, you can connec
| Explorer | https://polygonscan.com/ |
| Ocean Market | Point wallet to Polygon network, at https://market.oceanprotocol.com |
| Provider | `https://provider.polygon.oceanprotocol.com` |
| Aquarius | `https://aquarius.polygon.oceanprotocol.com` |
| Subgraph | `https://subgraph.polygon.oceanprotocol.com` |
**Bridge**
@ -100,7 +101,6 @@ If you don't find BSC as a predefined network in your wallet, you can connect to
| Explorer | https://bscscan.com/ |
| Ocean Market | Point wallet to BSC network, at https://market.oceanprotocol.com |
| Provider | `https://provider.bsc.oceanprotocol.com` |
| Aquarius | `https://aquarius.bsc.oceanprotocol.com` |
| Subgraph | `https://subgraph.bsc.oceanprotocol.com` |
**Bridge**
@ -129,7 +129,6 @@ In MetaMask and other ERC20 wallets, click on the network name dropdown, then se
| Explorer | https://ropsten.etherscan.io |
| Ocean Market | Point wallet to Ropsten network, at https://market.oceanprotocol.com |
| Provider | `https://provider.ropsten.oceanprotocol.com` |
| Aquarius | `https://aquarius.ropsten.oceanprotocol.com` |
| Subgraph | `https://subgraph.ropsten.oceanprotocol.com` |
## Rinkeby
@ -154,7 +153,6 @@ In MetaMask and other ERC20 wallets, click on the network name dropdown, then se
| Explorer | https://rinkeby.etherscan.io |
| Ocean Market | Point wallet to Rinkeby network, at https://market.oceanprotocol.com |
| Provider | `https://provider.rinkeby.oceanprotocol.com` |
| Aquarius | `https://aquarius.rinkeby.oceanprotocol.com` |
| Subgraph | `https://subgraph.rinkeby.oceanprotocol.com` |
@ -180,7 +178,6 @@ If you don't find Mumbai as a predefined network in your wallet, you can connect
| Explorer | https://mumbai.polygonscan.com |
| Ocean Market | Point wallet to Mumbai network, at https://market.oceanprotocol.com |
| Provider | `https://provider.mumbai.oceanprotocol.com` |
| Aquarius | `https://aquarius.mumbai.oceanprotocol.com` |
| Subgraph | `https://subgraph.mumbai.oceanprotocol.com` |
## Local / Ganache

View File

@ -0,0 +1,41 @@
---
title: Allow and Deny Lists
description: Restrict access to individual assets
---
Allow and deny lists are advanced features that allow publishers to control access to individual data assets. Publishers can restrict assets so that they can only be accessed by approved users (allow lists) or they can restrict assets so that they can be accessed by anyone except certain users (deny lists).
## Setup
Allow and deny lists are not enabled by default in Ocean Market. You need to edit the environmental variables to enable this feature in your fork of Ocean Market:
- To enable allow and deny lists you need to add the following environmental variable to your .env file in your fork of Ocean Market: `GATSBY_ALLOW_ADVANCED_SETTINGS="true"`
- Publishers in your market will now have the ability to restrict who can consume their datasets.
## Usage
To use allow or deny lists you need to navigate to your data asset and click on "Advanced Settings".
![Advanced Settings](images/allow-deny-lists/advanced-settings.png)
To add a user to an allow or deny list, you first need to know their Ethereum address. You can then enter the address into the input section and click the "ADD" button.
![Add address to allow list](images/allow-deny-lists/add-allow-list.png)
To remove a user from an allow or deny list, click the cross next to their Ethereum address.
![Removing a user from allow or deny list](images/allow-deny-lists/removing-allow-deny.png)
Any changes you make on the advanced settings page need to be submitted and signed in a transaction. To do this, first click the "SUBMIT" button.
![Submit changes to allow or deny lists](images/allow-deny-lists/submit.png)
Next you will need to sign the transaction in MetaMask, or the wallet of your choice.
![Sign Metamask transaction](images/allow-deny-lists/metamask-transaction.png)
When the process of updating the allow or deny lists is complete you will receive a success message.
![Update allow or deny list success](images/allow-deny-lists/update-success.png)

View File

@ -1,16 +0,0 @@
---
title: Set Up Amazon S3 Storage
description: Tutorial about how to set up Amazon S3 storage for use with Ocean Protocol.
---
*Note: This needs updating for Ocean V3. As a workaround: Brizo has been renamed to provider-py; it should work similarly.*
To enable Brizo to use files stored in Amazon S3 (i.e. files with an URL containing `s3://`), you must:
1. have an Amazon AWS user account (IAM account) with permission to read those files from S3, and
1. set the AWS credentials on the machine where Brizo is running to those of the AWS user in question. Instructions are given below.
1. Note that you don't have to set any Brizo-specific configuration settings, e.g. in the `[osmosis]` section of the Brizo config file or in some special Brizo environment variables.
Under the hood, Brizo uses [boto3](https://aws.amazon.com/sdk-for-python/) (the Python library for interacting with AWS) to interact with AWS and boto3 has a whole process for determining AWS credentials. The easiest way to set the AWS credentials on the machine where Brizo is running is to install the [AWS CLI](https://aws.amazon.com/cli/) and then use the `aws configure` command.
For more details, see [the boto3 user guide about credentials](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html).

View File

@ -0,0 +1,15 @@
---
title: Set Up Amazon S3 Storage
description: Tutorial about how to set up Amazon S3 storage for use with Ocean Protocol.
---
*Note: This needs updating for Ocean V3.*
To enable Provider to use files stored in Amazon S3 (i.e. files with an URL containing `s3://`), you must:
1. have an Amazon AWS user account (IAM account) with permission to read those files from S3, and
1. set the AWS credentials on the machine where Provider is running to those of the AWS user in question. Instructions are given below.
1. Note that you don't have to set any Provider-specific configuration settings, e.g. in the `[osmosis]` section of the Provider config file or in some special Provider environment variables.
Under the hood, Provider uses [boto3](https://aws.amazon.com/sdk-for-python/) (the Python library for interacting with AWS) to interact with AWS and boto3 has a whole process for determining AWS credentials. The easiest way to set the AWS credentials on the machine where Provider is running is to install the [AWS CLI](https://aws.amazon.com/cli/) and then use the `aws configure` command.
For more details, see [the boto3 user guide about credentials](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html).
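As a hypothetical illustration of the URL form involved, an `s3://` URL splits into the bucket and key that boto3's `s3_client.get_object(Bucket=..., Key=...)` call expects (the helper name below is made up):

```python
from urllib.parse import urlparse

def parse_s3_url(url):
    """Split an s3://bucket/key URL into the (bucket, key) pair
    that boto3's get_object call expects."""
    parsed = urlparse(url)
    if parsed.scheme != "s3":
        raise ValueError(f"not an S3 URL: {url}")
    # netloc is the bucket; the path (minus its leading slash) is the key.
    return parsed.netloc, parsed.path.lstrip("/")
```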

View File

@ -0,0 +1,177 @@
---
title: Set Up Azure Storage
description: Tutorial about how to set up Azure storage for use with Ocean.
---
*Note: This needs updating for Ocean V3.*
This tutorial is for publishers who want to get started using Azure to store some of their data assets. (Some data assets could also be stored in other places.)
Publishers must run [Provider](https://github.com/oceanprotocol/provider) to mediate consumer access to data assets stored in Azure Storage. Provider needs the following Azure credentials from the publisher:
- `AZURE_ACCOUNT_NAME`: Azure Storage Account Name (for storing files)
- `AZURE_ACCOUNT_KEY`: Azure Storage Account key
- `AZURE_RESOURCE_GROUP`: Azure resource group
- `AZURE_LOCATION`: Azure Region
- `AZURE_CLIENT_ID`: Azure Application ID
- `AZURE_CLIENT_SECRET`: Azure Application Secret
- `AZURE_TENANT_ID`: Azure Tenant ID
- `AZURE_SUBSCRIPTION_ID`: Azure Subscription ID
If you go through this tutorial, then you will get all the Azure credentials listed above.
If you already have data assets stored in Azure, then you might already have, or be able to get, the above information. You could use this tutorial to get a sense of where to look (but don't create anything new).
To give the above Azure credentials to Provider, you either put them in a Provider config file or in environment variables with the above names. Environment variables should be used if you're running Provider inside a container. If you want to use the config file option, see [Provider README](https://github.com/oceanprotocol/provider).
If you're using [Barge](https://github.com/oceanprotocol/barge) to run Provider and other Ocean Protocol components, then the above Azure credentials should go in the file `barge/provider.env`. (That file gets used to set environment variables.)
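A quick way to sanity-check that all of the credentials above are set before starting Provider is a small script. This is a hypothetical helper, not part of Provider itself:

```python
import os

# The credentials Provider expects, as listed above.
REQUIRED_AZURE_VARS = [
    "AZURE_ACCOUNT_NAME", "AZURE_ACCOUNT_KEY", "AZURE_RESOURCE_GROUP",
    "AZURE_LOCATION", "AZURE_CLIENT_ID", "AZURE_CLIENT_SECRET",
    "AZURE_TENANT_ID", "AZURE_SUBSCRIPTION_ID",
]

def missing_azure_credentials(env=None):
    """Return the names of any required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_AZURE_VARS if not env.get(name)]
```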
This tutorial uses the [Microsoft Azure Portal](https://azure.microsoft.com/en-us/features/azure-portal/), but [there are many other ways to interact with Azure](https://docs.microsoft.com/en-us/azure/#pivot=sdkstools).
**Note: Azure is constantly changing. For that reason, we try to give links to official Azure documentation, since it _should_ stay up-to-date.**
## Sign in to Azure Portal
If you don't already have an Azure account, then you will have to create one. Go to the [Microsoft Azure website](https://azure.microsoft.com) and follow the links.
Once you have an Azure account, go to [https://portal.azure.com/](https://portal.azure.com/) and sign in.
## Get Your Subscription ID
The [Azure docs say](https://docs.microsoft.com/en-us/azure/guides/developer/azure-developer-guide), "A subscription is a logical grouping of Azure services that is linked to an Azure account. A single Azure account can contain multiple subscriptions."
If you see **Subscriptions** in the left sidebar of Azure Portal, then click that. If you don't see it, just type "Subscriptions" into the search bar at the top, then click on **Subscriptions** under the SERVICES heading.
You should see a list of one or more subscriptions. Click on the one you want to use for Azure storage. Remember to use that one for the rest of this tutorial (whenever you are asked for a subscription name).
Copy the `Subscription ID`. That's what Provider calls `AZURE_SUBSCRIPTION_ID`. You now have one of the Azure credentials!
```text
# Example AZURE_SUBSCRIPTION_ID (Azure Subscription ID)
479284be-0104-421a-8488-1aeac0caecaa
```
## Create an Azure Active Directory (AD) Application
See the Azure docs page:
[How to: Use the portal to create an Azure AD application and service principal that can access resources](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal)
The first step there is to **Create an Azure Active Directory application**. Do that.
The app `Name` and `Sign-on URL` can be totally made up. The URL doesn't need to be real.
Once the app is created, copy the `Application ID`: that's what Provider calls the `AZURE_CLIENT_ID`. It should look something like this:
```text
# Example AZURE_CLIENT_ID (Application ID)
5d25ee8a-da2c-4e6f-8fba-09b6dd091038
```
## Get Authentication Key for Your AD Application
On [the same Azure docs page](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal), find the section titled **Get application ID and authentication key** or similar. You already have your application ID, but you still need to generate an authentication key by following the instructions in that section.
You can make up whatever you like for the key's `Description`.
Once the application key is generated, copy its value: that's what Provider calls the `AZURE_CLIENT_SECRET`. It should look something like this:
```text
# Example AZURE_CLIENT_SECRET (Application key)
RVJ1H5gYOmnMitikmM5ehszqmgrY5BFkoalnjfWMuDM
```
## Get Tenant ID
On [the same Azure docs page](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal), find the section titled **Get tenant ID** or similar. Follow the instructions.
The tenant ID is what Provider calls `AZURE_TENANT_ID`.
```text
# Example AZURE_TENANT_ID (tenant ID, Directory ID)
2a4a3887-4e2e-4a31-8006-6e2b5877640e
```
## Create a Resource Group for Your Data Storage
See the Azure docs page:
[Manage Azure resources through portal](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-portal)
That page says how to create a new empty resource group. Do that.
You can make up whatever name you like, but it's good practice to avoid special characters and to include:
- some words to indicate what it's for, e.g. `Storage`
- your name
- the month and year it was created, e.g. `Nov2018`
to help you and others manage it. The Resource group name is what Provider calls the `AZURE_RESOURCE_GROUP` and the Resource group location is what Provider calls the `AZURE_LOCATION`. Here are examples of both:
```text
# Example AZURE_RESOURCE_GROUP (Resource group name)
StorageCreatedNov2018ByTroy
```
```text
# Example AZURE_LOCATION (Resource group location)
West Europe
```
## Give Your AD Application Access to Your Resource Group
Inside your new resource group:
- click **Access control (IAM)**
- click **+ Add role assignment**
- In the `Role` field, select `Contributor`. See the note below.
- Assign access to `Azure AD user, group, or service principal`
- In the `Select` field, begin entering the name of your AD application (created earlier). When it appears in the list, click on it there. It should now be listed as one of the "Selected members".
- Click **Save**
Note: You might want to give your application fewer permissions than what a `Contributor` role gets. The Azure docs have [a list of all the built-in roles for Azure resources](https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles).
## Create a Storage Account
Follow the instructions in the Azure docs page:
[Create a storage account](https://docs.microsoft.com/en-us/azure/storage/common/storage-quickstart-create-account?tabs=portal)
except you should use the _existing_ resource group you created earlier, i.e. don't create a new one.
The Storage account name you choose is what Provider calls the `AZURE_ACCOUNT_NAME`.
```text
# Example AZURE_ACCOUNT_NAME (Storage account name)
troystorageaccount1
```
Use the same `Location` as your resource group.
The other fields can be left with their default values unless you want to change them.
Wait for it to say, "Your deployment is complete."
## Get a Storage Account Access Key
See the Azure docs page:
[Manage storage account settings in the Azure portal](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-manage)
Go to the subsection about access keys and follow the instructions to view your new storage account's credentials.
Copy the value of one of the keys (e.g. key1, not the connection string). That's what Provider calls `AZURE_ACCOUNT_KEY`.
```text
# Example AZURE_ACCOUNT_KEY (Storage account access key)
93uKDkbjfnSUNPKw2tpe0LOM+3Wk+OSkNmgwhzjvzDw1d3sKVhMRTC5ikvN0r3zsx8eQrmT9Wgjz22iLPu3aGw==
```
You now have all the Azure credentials Provider needs. See the instructions near the top of this page about how to give those Azure credentials to Provider.
## Store Some Data in Azure Storage
You now have a storage account, but you don't have any data stored under it yet. To get some data stored in [Azure Storage](https://docs.microsoft.com/en-us/azure/storage/common/storage-introduction), the easiest option is to use [Azure Storage Explorer](https://azure.microsoft.com/en-us/features/storage-explorer/), a free desktop app that works on Windows, macOS and Linux.
[Get Azure Storage Explorer](https://azure.microsoft.com/en-us/features/storage-explorer/).
Azure Storage can store blobs, files, queues and tables. To work with Ocean Network, you should store your files in [Azure Blob storage (also called object storage)](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction), not Azure Files.
Besides Azure Storage Explorer, there are [many other Azure Storage APIs, libraries and tools](https://docs.microsoft.com/en-us/azure/storage/common/storage-introduction#storage-apis-libraries-and-tools).

Binary file not shown.

After

Width:  |  Height:  |  Size: 147 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 122 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 146 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 152 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 96 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 102 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 94 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 99 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 73 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 103 KiB

View File

@ -9,5 +9,6 @@ These tutorials cover:
- Set up a marketplace
- Run a compute-to-data environment
- Storage setup - Azure, AWS or local
- Fine-grained permissions

View File

@ -8,8 +8,8 @@ https://market.oceanprotocol.com/
## About Ocean Market
1. Ocean Market enables publishers to monetize their data and/or alogrithms thorugh blockchain technology.
1. Ocean Market enables publishers to monetize their data and/or algorithms through blockchain technology.
2. Consumers can purchase the access to data, algoritms, compute services.
2. Consumers can purchase access to data, algorithms, compute services.
3. Liquidity providers can stake their Ocean tokens to earn interest on the transactions going thorugh the Liqiuidy pool.
3. Liquidity providers can stake their Ocean tokens to earn interest on the transactions going through the Liquidity pool.

View File

@ -2,14 +2,14 @@
## What can be published?
Ocean Market provides a convinent interface for individuals as well as organisations to publish their data. Data set can be images, location information, audio, video, sales data or combinations of all! There is no exhaustive list of what type of data can be Published on Market. Please note that Ocean team maintains a purgatory list [here](https://github.com/oceanprotocol/list-purgatory) to block addresses and remove assets for any violations.
Ocean Market provides a convenient interface for individuals as well as organizations to publish their data. Data set can be images, location information, audio, video, sales data, or combinations of all! There is no exhaustive list of what type of data can be published on the Market. Please note that the Ocean Protocol team maintains a purgatory list [here](https://github.com/oceanprotocol/list-purgatory) to block addresses and remove assets for any violations.
## Tutorial
### Step 1 - Publish data asset
1. Go to https://market.oceanprotocol.com
2. Conntect wallet.
2. Connect wallet.
![connect wallet](images/marketplace/connect-wallet.png 'Connect wallet')
3. Go to publish page.
@ -22,20 +22,20 @@ Ocean Market provides a convinent interface for individuals as well as organisat
![publish form part-2](images/marketplace/publish-form-2.png 'Publish form part-2')
![publish form part-3](images/marketplace/publish-form-3.png 'Publish form part-3')
5. After clicking submit, approve the transcations in the wallet. Here, you can metamask window.
5. After clicking submit, approve the transactions in the wallet. Here, you can see the MetaMask window.
Deploy a new Datatoken contract.
![publish submit part-1](images/marketplace/submit-1.png 'Create Datatoken contract')
Contract interaction.
![publish submit part-2](images/marketplace/submit-2.png 'Contract interaction')
6. Now, after the transactions are completed, below screen will appear.
6. Now, after the transactions are completed, the below screen will appear.
![publish success](images/marketplace/submit-success.png 'Success')
### Step 2 - Create pricing
Once the data asset is published, user(s) can choose the pricing option as per their choice. Only the publisher can set the pricing option and cannot be changed once the publisher selects any one method.
Once the data asset is published, the user(s) can choose the pricing option as per their choice. Only the publisher can set the pricing option, and it cannot be changed once any one method is selected.
There are 2 options for setting the price of an asset on Ocean Marketplace.
@ -44,9 +44,9 @@ There are 2 options for settings the price of an asset on Ocean Marketplace.
#### Create fixed pricing for a data set
Connect to the Ocean Marketplace with the publisher account and go the published asset.
Connect to the Ocean Marketplace with the publisher account and go to the published asset.
If the pricing is not set Marketplace will provide an option to create a pricing. Click on the create pricing button as shown below.
If the pricing is not set, the Marketplace will provide an option to create pricing. Click on the create pricing button as shown below.
![pricing part-1](images/marketplace/pricing-1.png 'Create pricing page')
Select the pricing type. Here, we are selecting the **Fixed** pricing option. The publisher can set the value of the datatoken with respect to Ocean Tokens.

View File

@ -4,14 +4,14 @@
1. Search for the desired asset published on the [Ocean Marketplace](https://market.oceanprotocol.com/).
2. Select **Trade** option and enter amount of Ocean tokens you want to swap. The expected amount that the account will recieve wil be shown with the swap fees information.
2. Select **Trade** option and enter the amount of Ocean tokens you want to swap. The expected amount that the account will receive will be shown with the swap fees information.
![swap part-1](images/marketplace/Swap-1.png 'Select trade')
3. Approve the Contract transaction to Spend the Ocean Tokens.
![swap part-2](images/marketplace/Swap-2.png 'Approve spend limit')
4. Approve the Contract transaction to swap the tokens. After the transaction is completed, you can add the Datatoken address in the wallet to quickly view the balance in future.
4. Approve the Contract transaction to swap the tokens. After the transaction is completed, you can add the Datatoken address in the wallet to quickly view the balance in the future.
![swap part-3](images/marketplace/Swap-3.png 'Approve swap transaction')
@ -19,17 +19,17 @@
1. Search for the desired asset published on the [Ocean Marketplace](https://market.oceanprotocol.com/).
2. Select **Pool** option and click **ADD LIQUIDTY** button. The expected amount that the account will recieve wil be shown with the swap fees information.
2. Select **Pool** option and click **ADD LIQUIDITY** button.
![staking part-1](images/marketplace/Staking-1.png 'Select Pool option')
3. Enter the amount of **Ocean Tokens** you want to stake.
![staking part-2](images/marketplace/Staking-2.png 'Enter the amount to stake')
4. Approve the contract transaction. Make sure you account has sufficient **ETH** balance.
4. Approve the contract transaction. Make sure your account has sufficient **ETH** balance.
![staking part-2](images/marketplace/Staking-3.png 'Approve spend transaction')
5. Approve the contract transaction. Make sure you account has sufficient **ETH** balance.
5. Approve the contract transaction. Make sure your account has sufficient **ETH** balance.
![staking part-2](images/marketplace/Staking-4.png 'Approve contract transaction')
6. After the transactions are completed, below message will displayed.
6. After the transactions are completed, the below message will be displayed.
![staking part-2](images/marketplace/Staking-5.png 'Success')

View File

@ -25,7 +25,6 @@ When you deploy, you'll want some initial data assets for your market to offer.
Ocean supports several types, such as Azure and S3 storage. The [tutorials](/tutorials/) section provides more info.
## Deploy to Production
When developing your app, you'll likely use Barge to run all the Ocean Protocol components on your local machine.
@ -40,7 +39,7 @@ Of course, there are many other things that must be handled in production:
- Security of the infrastructure where the software is running
- Monitoring
- Log aggregation, storage and search
- Log aggregation, storage, and search
- Handling crashes or other faults
Each of those is beyond the scope of these docs.

View File

@ -1,10 +0,0 @@
---
title: Set Up On-Premise Storage
description: Tutorial about how to set up on-premise storage for use with Ocean.
---
*Note: This needs updating for Ocean V3. As a workaround: Brizo has been renamed to provider-py; it should work similarly.*
To enable Brizo to use files stored in on-premise storage (i.e. files with an URL not containing `core.windows.net` or `s3://`), there is _nothing to do, other than make sure Brizo can resolve the URLs_. In particular, you don't have to set any Brizo-specific configuration settings, e.g. in the `[osmosis]` section of the Brizo config file or in some special Brizo environment variables.
Local and private network URLs are fine so long as they can be resolved by Brizo. Potential examples include `http://localhost/helicopter_data.xls`, `http://192.168.12.34/almond_sales_2012.csv` and `http://10.12.34.56/duck_photos.zip`.

View File

@ -0,0 +1,9 @@
---
title: Set Up On-Premise Storage
description: Tutorial about how to set up on-premise storage for use with Ocean.
---
*Note: This needs updating for Ocean V3.*
To enable Provider to use files stored in on-premise storage (i.e. files with an URL not containing `core.windows.net` or `s3://`), there is _nothing to do, other than make sure Provider can resolve the URLs_. In particular, you don't have to set any Provider-specific configuration settings, e.g. in the `[osmosis]` section of the Provider config file or in some special Provider environment variables.
Local and private network URLs are fine so long as they can be resolved by Provider. Potential examples include `http://localhost/helicopter_data.xls`, `http://192.168.12.34/almond_sales_2012.csv` and `http://10.12.34.56/duck_photos.zip`.
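The routing rule described above can be sketched as follows. This is a hypothetical illustration of the URL test, not Provider's actual code:

```python
def storage_backend(url):
    """Route a file URL: S3 and Azure URLs need cloud credentials;
    everything else is treated as on-premise and only needs to resolve."""
    if url.startswith("s3://"):
        return "s3"
    if "core.windows.net" in url:
        return "azure"
    return "on-premise"
```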

View File

@ -0,0 +1,15 @@
---
title: Fine-Grained Permissions
description: Control who can publish, consume or browse data
---
Ocean Protocol supports fine-grained permissions across our technology stack which can be particularly useful for enterprise use-cases. There are two ways in which permissions are implemented:
- [Role based access control server.](./rbac)
- [Allow & deny lists.](./allow-deny-lists)
Neither is enabled in [Ocean Market](https://market.oceanprotocol.com/) but you can enable them in your own market by following the guides above.

122
content/tutorials/rbac.md Normal file
View File

@ -0,0 +1,122 @@
---
title: Role-Based Access Control Server
description: Control who can publish, consume or browse data
---
The primary mechanism for restricting your users' ability to publish, consume, or browse is the role-based access control (RBAC) server.
## Roles
The RBAC server defines four different roles:
- Admin
- Publisher
- Consumer
- User
### Admin / Publisher
Currently, users with either the admin or publisher role can use the Market without any restrictions. They can publish, consume, and browse datasets.
### Consumer
A user with the consumer role is able to browse datasets, purchase them, trade datatokens, and contribute to data pools. However, they are not able to publish datasets.
![Viewing the market without publish permission](images/rbac/without-publish-permission.png)
### Users
Users are able to browse and search datasets but they are not able to purchase datasets, trade datatokens, or contribute to data pools. They are also not able to publish datasets.
![Viewing the market without consume permission](images/rbac/without-consume-permission.png)
### Address without a role
If a user attempts to view the data market without a role, or without a wallet connected, they will not be able to view or search any of the datasets.
![Viewing the market without browse permission](images/rbac/without-browse-permission.png)
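The role matrix described above can be summarized in a small sketch. The names are hypothetical, not the RBAC server's actual code:

```python
# One row per role described above; an address with no role gets nothing.
ROLE_PERMISSIONS = {
    "admin":     {"browse", "consume", "publish"},
    "publisher": {"browse", "consume", "publish"},
    "consumer":  {"browse", "consume"},
    "user":      {"browse"},
}

def has_permission(role, event_type):
    """Return True if the given role grants the requested event type."""
    return event_type in ROLE_PERMISSIONS.get(role, set())
```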
### No wallet connected
When the RBAC server is enabled on the market, users are required to have a wallet connected to browse the datasets.
![Connect a wallet](images/rbac/connect-wallet.png)
## Mapping roles to addresses
Currently there are two ways that the RBAC server can be configured to map user roles to Ethereum addresses. The RBAC server is also built in such a way that it is easy for you to add your own authorization service. The two existing methods are:
1. Keycloak
If you already have a [Keycloak](https://www.keycloak.org/) identity and access management server running, you can configure the RBAC server to use it by adding the URL of your Keycloak server to the `KEYCLOAK_URL` environmental variable in the RBAC `.env` file.
2. JSON
Alternatively, if you are not already using Keycloak, the easiest way to map user roles to Ethereum addresses is in a JSON object that is saved as the `JSON_DATA` environmental variable in the RBAC `.env` file. There is an example of the format required for this JSON object in `.example.env`.
It is possible to configure both of these methods of mapping user roles to Ethereum addresses. In this case, requests to your RBAC server should specify which auth service they are using, e.g. `"authService": "json"` or `"authService": "keycloak"`
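A permission-check request might then carry the `authService` field like this. This is a hypothetical payload sketch: apart from `authService` and `eventType`, which the docs mention, the field names are illustrative, not the server's exact schema:

```python
import json

def build_permission_request(event_type, address, auth_service="json"):
    """Build an illustrative permission-check payload; only authService
    and eventType are taken from the docs, the rest is made up."""
    return json.dumps({
        "eventType": event_type,          # e.g. "browse", "consume", "publish"
        "authService": auth_service,      # selects "json" or "keycloak" mapping
        "credentials": {"type": "address", "address": address},
    })
```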
### Default Auth service
Additionally, you can also set an environmental variable within the RBAC server that specifies the default authorization method that will be used e.g. `DEFAULT_AUTH_SERVICE = "json"`. When this variable is specified, requests sent to your RBAC server don't need to include an `authService` and they will automatically use the default authorization method.
## Running the RBAC server locally
You can start running the RBAC server by following these steps:
1. Clone this repository:
```Bash
git clone https://github.com/oceanprotocol/RBAC-Server.git
cd RBAC-Server
```
2. Install the dependencies:
```Bash
npm install
```
3. Build the service
```Bash
npm run build
```
4. Start the server
```Bash
npm run start
```
## Running in Docker
When you are ready to deploy the RBAC server:
1. Replace the KEYCLOAK_URL in the Dockerfile with the correct URL for your hosting of [Keycloak](https://www.keycloak.org/).
2. Run the following command to build the RBAC service in a Docker container:
```Bash
npm run build:docker
```
3. Next, run the following command to start running the RBAC service in the Docker container:
```Bash
npm run start:docker
```
4. Now you are ready to send requests to the RBAC server via Postman. Make sure to replace the URL with `http://localhost:49160` in your requests.
## Setting up the RBAC in the Market
To use the RBAC server with the market you need to save the URL of your RBAC server as an environment variable within the market.
- First setup and host the Ocean role based access control (RBAC) server. Follow the instructions in the [RBAC repository](https://github.com/oceanprotocol/RBAC-Server)
- In your .env file in your fork of Ocean Market, set the value of the `GATSBY_RBAC_URL` environmental variable to the URL of the Ocean RBAC server that you have hosted, e.g. `GATSBY_RBAC_URL= "http://localhost:3000"`
- Users of your marketplace will now require the correct role ("user", "consumer", "publisher") to access features in your marketplace. The market will check the role that has been allocated to the user based on the address that they have connected to the market with.
- The following features have been wrapped in the `Permission` component and will be restricted once the `GATSBY_RBAC_URL` has been defined:
- Viewing or searching datasets requires the user to have permission to `browse`
- Purchasing or trading a datatoken, or adding liquidity to a pool require the user to have permission to `consume`
- Publishing a dataset requires the user to have permission to `publish`
- You can change the permission restrictions by either removing the `Permission` component or passing in a different eventType prop e.g. `<Permission eventType="browse">`.

View File

@ -8,6 +8,8 @@
link: /concepts/architecture/
- title: Supported Networks
link: /concepts/networks/
- title: Deployments
link: /concepts/deployments/
- title: Projects using Ocean
link: /concepts/projects-using-ocean/
@ -16,6 +18,13 @@
- title: Compute-to-Data Overview
link: /concepts/compute-to-data/
- group: Specifying Assets
items:
- title: DIDs & DDOs
link: /concepts/did-ddo/
- title: DDO Metadata
link: /concepts/ddo-metadata/
- group: Contribute
items:
- title: Ways to Contribute

View File

@ -23,11 +23,6 @@
- title: API Reference
link: /references/read-the-docs/provider/
- group: react
items:
- title: API Reference
link: https://github.com/oceanprotocol/react
- group: ocean.py
items:
- title: API Reference
@ -38,11 +33,6 @@
- title: API Reference
link: https://github.com/oceanprotocol/ocean-contracts
- group: provider-py
items:
- title: API Reference
link: https://github.com/oceanprotocol/provider-py
- group: Ocean Subgraph
items:
- title: Readme References

View File

@ -45,8 +45,17 @@
- group: Storage Setup
items:
- title: Set Up Azure Storage
link: /tutorials/azure-for-brizo/
link: /tutorials/azure-for-provider/
- title: Set Up Amazon S3 Storage
link: /tutorials/amazon-s3-for-brizo/
link: /tutorials/amazon-s3-for-provider/
- title: Set Up On-Premise Storage
link: /tutorials/on-premise-for-brizo/
link: /tutorials/on-premise-for-provider/
- group: Fine-Grained Permissions
items:
- title: Overview
link: /tutorials/permissions
- title: Role-Based Access Control
link: /tutorials/rbac
- title: Allow & Deny Lists
link: /tutorials/allow-deny-lists

View File

@ -17,7 +17,7 @@ The sidebar for those generated reference pages will automatically switch to inc
Reference pages based on Swagger specs are sourced from remotely hosted Swagger specs:
- [`https://aquarius.test.oceanprotocol.com/spec`](https://aquarius.test.oceanprotocol.com/spec)
- [`https://brizo.test.oceanprotocol.com/spec`](https://brizo.test.oceanprotocol.com/spec)
- [`https://provider.test.oceanprotocol.com/spec`](https://provider.test.oceanprotocol.com/spec)
They are fetched and updated automatically upon every site build. For more information about stylistic issues, take a look at the section in the test page:

View File

@ -14,7 +14,7 @@ The documentation is split in multiple sections whose content lives in this repo
- **Core concepts**: high-level explanation of concepts, assumptions, and components
- **Setup**: getting started for various stakeholders and use cases
- **Tutorials**: detailed tutorials
- **API References**: docs for the Aquarius & Brizo REST APIs, and docs for various Squid libraries
- **API References**: docs for ocean.js, ocean.py, Aquarius REST API, and Provider REST API
Those sections are defined in the [`/data/sections.yml`](../data/sections.yml) file.

View File

@ -3,7 +3,7 @@
const path = require('path')
const { createFilePath } = require('gatsby-source-filesystem')
const Swagger = require('swagger-client')
const { redirects } = require('./config')
const { redirects, swaggerComponents } = require('./config')
exports.onCreateNode = ({ node, getNode, actions }) => {
const { createNodeField } = actions
@ -128,9 +128,11 @@ exports.createPages = ({ graphql, actions }) => {
})
})
// API: brizo, aquarius
// API: aquarius
await createSwaggerPages(createPage)
await createDeploymentsPage(createPage)
// API: ocean.js
const lastRelease =
result.data.oceanJs.repository.releases.edges.filter(
@ -174,6 +176,15 @@ exports.createPages = ({ graphql, actions }) => {
})
}
const createDeploymentsPage = async (createPage) => {
const template = path.resolve('./src/components/Deployments.jsx')
const slug = `/concepts/deployments/`
createPage({
path: slug,
component: template
})
}
//
// Create pages from TypeDoc json files
//
@ -200,11 +211,9 @@ const createTypeDocPage = async (createPage, name, downloadUrl) => {
// Create pages from swagger json files
//
// https://github.com/swagger-api/swagger-js
const fetchSwaggerSpec = async (component) => {
const fetchSwaggerSpec = async (url) => {
try {
const client = await Swagger(
`https://${component}.mainnet.oceanprotocol.com/spec`
)
const client = await Swagger(url)
return client.spec // The resolved spec
// client.originalSpec // In case you need it
@ -221,21 +230,20 @@ const fetchSwaggerSpec = async (component) => {
}
const createSwaggerPages = async (createPage) => {
const swaggerComponents = ['aquarius', 'provider']
const apiSwaggerTemplate = path.resolve('./src/templates/Swagger/index.jsx')
const getSlug = (name) => `/references/${name}/`
for (const component of swaggerComponents) {
const slug = getSlug(component)
const slug = getSlug(component.name)
createPage({
path: slug,
component: apiSwaggerTemplate,
context: {
slug,
name: component,
api: await fetchSwaggerSpec(component)
name: component.name,
api: await fetchSwaggerSpec(component.url)
}
})
}
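
The refactor above replaces the hard-coded component list and host convention with explicit `{ name, url }` entries imported from `config.js`. A minimal sketch of how those entries drive page slugs, using the same `getSlug` helper as `createSwaggerPages` (the entries mirror the `swaggerComponents` array added to `config.js` in this commit):

```javascript
// Entries as added to config.js; each page slug now derives from the
// entry's name field instead of a hard-coded array in gatsby-node.js.
const swaggerComponents = [
  { name: 'aquarius', url: 'https://aquarius.oceanprotocol.com/spec' },
  { name: 'provider', url: 'https://provider.mainnet.oceanprotocol.com/spec' }
]

// Same slug helper used in createSwaggerPages
const getSlug = (name) => `/references/${name}/`

const slugs = swaggerComponents.map((c) => getSlug(c.name))
console.log(slugs) // [ '/references/aquarius/', '/references/provider/' ]
```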

package-lock.json (generated, 399 changes)
View File

@ -3190,9 +3190,9 @@
}
},
"globals": {
"version": "13.10.0",
"resolved": "https://registry.npmjs.org/globals/-/globals-13.10.0.tgz",
"integrity": "sha512-piHC3blgLGFjvOuMmWZX60f+na1lXFDhQXBf1UYp2fXPXqvEUbOhNwi6BsQ0bQishwedgnjkwv1d9zKf+MWw3g==",
"version": "13.11.0",
"resolved": "https://registry.npmjs.org/globals/-/globals-13.11.0.tgz",
"integrity": "sha512-08/xrJ7wQjK9kkkRoI3OFUBbLx4f+6x3SGwcPvQ0QH6goFDrOU2oyAWrmh3dJezu65buo+HBMzAMQy6rovVC3g==",
"dev": true,
"requires": {
"type-fest": "^0.20.2"
@ -4180,9 +4180,9 @@
}
},
"@oceanprotocol/art": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/@oceanprotocol/art/-/art-3.0.0.tgz",
"integrity": "sha512-j4PEZSVtKSqxDYMVh/hd5vk088Bg6a6QkrUMTXN9Q6OIFAMfHM235f1AxaakNrEyK0FKMD908KuJEdfFLRn9Hw=="
"version": "3.2.0",
"resolved": "https://registry.npmjs.org/@oceanprotocol/art/-/art-3.2.0.tgz",
"integrity": "sha512-aUQtg4m5hJlQ0u8C29O9TXJWcAenO3G9vP+vf6LNFkpTDOCMycN/F0SzHS89VNrvGUha8oTDEg7FAkfZBPv2WA=="
},
"@pieh/friendly-errors-webpack-plugin": {
"version": "1.7.0-chalk-2",
@ -4580,6 +4580,14 @@
"@types/node": "*"
}
},
"@types/hast": {
"version": "2.3.4",
"resolved": "https://registry.npmjs.org/@types/hast/-/hast-2.3.4.tgz",
"integrity": "sha512-wLEm0QvaoawEDoTRwzTXp4b4jpwiJDvR5KMnFnVodm3scufTlBOWRD6N1OBf9TZMhjlNsSfcO5V+7AF4+Vy+9g==",
"requires": {
"@types/unist": "*"
}
},
"@types/http-cache-semantics": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/@types/http-cache-semantics/-/http-cache-semantics-4.0.0.tgz",
@ -9461,9 +9469,9 @@
"integrity": "sha1-G2HAViGQqN/2rjuyzwIAyhMLhtQ="
},
"eslint": {
"version": "7.31.0",
"resolved": "https://registry.npmjs.org/eslint/-/eslint-7.31.0.tgz",
"integrity": "sha512-vafgJpSh2ia8tnTkNUkwxGmnumgckLh5aAbLa1xRmIn9+owi8qBNGKL+B881kNKNTy7FFqTEkpNkUvmw0n6PkA==",
"version": "7.32.0",
"resolved": "https://registry.npmjs.org/eslint/-/eslint-7.32.0.tgz",
"integrity": "sha512-VHZ8gX+EDfz+97jGcgyGCyRia/dPOd6Xh9yPv8Bl1+SoaIwD+a/vlrOmGRUyOYu7MwUhc7CxqeaDZU13S4+EpA==",
"dev": true,
"requires": {
"@babel/code-frame": "7.12.11",
@ -9518,9 +9526,9 @@
}
},
"@babel/helper-validator-identifier": {
"version": "7.14.5",
"resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.14.5.tgz",
"integrity": "sha512-5lsetuxCLilmVGyiLEfoHBRX8UCFD+1m2x3Rj97WrW3V7H3u4RWRXA4evMjImCsin2J2YT0QaVDGf+z8ondbAg==",
"version": "7.14.9",
"resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.14.9.tgz",
"integrity": "sha512-pQYxPY0UP6IHISRitNe8bsijHex4TWZXi2HwKVsjPiltzlhse2znVcm9Ace510VT1kxIHjGJCZZQBX2gJDbo0g==",
"dev": true
},
"@babel/highlight": {
@ -9566,9 +9574,9 @@
"dev": true
},
"chalk": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.1.tgz",
"integrity": "sha512-diHzdDKxcU+bAsUboHLPEDQiw0qEe0qd7SYUn3HgcFlWgbDcfLGswOHYeGrHKzG9z6UYf01d9VFMfZxPM1xZSg==",
"version": "4.1.2",
"resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.2.tgz",
"integrity": "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==",
"dev": true,
"requires": {
"ansi-styles": "^4.1.0",
@ -9726,9 +9734,9 @@
}
},
"flatted": {
"version": "3.2.1",
"resolved": "https://registry.npmjs.org/flatted/-/flatted-3.2.1.tgz",
"integrity": "sha512-OMQjaErSFHmHqZe+PSidH5n8j3O0F2DdnVh8JB4j4eUQ2k6KvB0qGfrKIhapvez5JerBbmWkaLYUYWISaESoXg==",
"version": "3.2.2",
"resolved": "https://registry.npmjs.org/flatted/-/flatted-3.2.2.tgz",
"integrity": "sha512-JaTY/wtrcSyvXJl4IMFHPKyFur1sE9AUqc0QnhOaJ0CxHtAoIV8pYDzeEfAaNEtGkOfq4gr3LBFmdXW5mOQFnA==",
"dev": true
},
"glob-parent": {
@ -9741,9 +9749,9 @@
}
},
"globals": {
"version": "13.10.0",
"resolved": "https://registry.npmjs.org/globals/-/globals-13.10.0.tgz",
"integrity": "sha512-piHC3blgLGFjvOuMmWZX60f+na1lXFDhQXBf1UYp2fXPXqvEUbOhNwi6BsQ0bQishwedgnjkwv1d9zKf+MWw3g==",
"version": "13.11.0",
"resolved": "https://registry.npmjs.org/globals/-/globals-13.11.0.tgz",
"integrity": "sha512-08/xrJ7wQjK9kkkRoI3OFUBbLx4f+6x3SGwcPvQ0QH6goFDrOU2oyAWrmh3dJezu65buo+HBMzAMQy6rovVC3g==",
"dev": true,
"requires": {
"type-fest": "^0.20.2"
@ -10470,9 +10478,9 @@
}
},
"eslint-plugin-prettier": {
"version": "3.4.0",
"resolved": "https://registry.npmjs.org/eslint-plugin-prettier/-/eslint-plugin-prettier-3.4.0.tgz",
"integrity": "sha512-UDK6rJT6INSfcOo545jiaOwB701uAIt2/dR7WnFQoGCVl1/EMqdANBmwUaqqQ45aXprsTGzSa39LI1PyuRBxxw==",
"version": "3.4.1",
"resolved": "https://registry.npmjs.org/eslint-plugin-prettier/-/eslint-plugin-prettier-3.4.1.tgz",
"integrity": "sha512-htg25EUYUeIhKHXjOinK4BgCcDwtLHjqaxCDsMy5nbnUMkKFvIhMVCp+5GFUXQ4Nr8lBsPqtGAqBenbpFqAA2g==",
"dev": true,
"requires": {
"prettier-linter-helpers": "^1.0.0"
@ -11093,9 +11101,9 @@
}
},
"fast-json-patch": {
"version": "3.0.0-1",
"resolved": "https://registry.npmjs.org/fast-json-patch/-/fast-json-patch-3.0.0-1.tgz",
"integrity": "sha512-6pdFb07cknxvPzCeLsFHStEy+MysPJPgZQ9LbQ/2O67unQF93SNqfdSqnPPl71YMHX+AD8gbl7iuoGFzHEdDuw=="
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/fast-json-patch/-/fast-json-patch-3.1.0.tgz",
"integrity": "sha512-IhpytlsVTRndz0hU5t0/MGzS/etxLlfrpG5V5M9mVbuj9TrJLWaMfsox9REM5rkuGX0T+5qjpe8XA1o0gZ42nA=="
},
"fast-json-stable-stringify": {
"version": "2.1.0",
@ -11173,6 +11181,11 @@
"pend": "~1.2.0"
}
},
"fetch-blob": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/fetch-blob/-/fetch-blob-2.1.2.tgz",
"integrity": "sha512-YKqtUDwqLyfyMnmbw8XD6Q8j9i/HggKtPEI+pZ1+8bvheBu78biSmNaXWusx1TauGqtUUGx/cBb1mKdq2rLYow=="
},
"figgy-pudding": {
"version": "3.5.2",
"resolved": "https://registry.npmjs.org/figgy-pudding/-/figgy-pudding-3.5.2.tgz",
@ -11397,12 +11410,27 @@
"version": "2.3.3",
"resolved": "https://registry.npmjs.org/form-data/-/form-data-2.3.3.tgz",
"integrity": "sha512-1lLKB2Mu3aGP1Q/2eCOx0fNbRMe7XdwktwOruhfqqd0rIJWwN4Dh+E3hrPSlDCXnSR7UtZ1N38rVXm+6+MEhJQ==",
"dev": true,
"requires": {
"asynckit": "^0.4.0",
"combined-stream": "^1.0.6",
"mime-types": "^2.1.12"
}
},
"form-data-encoder": {
"version": "1.4.4",
"resolved": "https://registry.npmjs.org/form-data-encoder/-/form-data-encoder-1.4.4.tgz",
"integrity": "sha512-7fHkKl/w+qxecNdv6Dy6gqAVuJ1Th4oyZd52nx0jGcgDBatMqCnIr5MtnuiFsLgEHs9HI2FufOmeHrj3obdhwA=="
},
"formdata-node": {
"version": "4.0.1",
"resolved": "https://registry.npmjs.org/formdata-node/-/formdata-node-4.0.1.tgz",
"integrity": "sha512-7qe/s/LQR4KE9zzPBg8HXRQQsgze4VtwTX9viuVOsodD5QSu7MKsNiSy5BWYDwV+kAcDTh3y7WnC5ZHK5t4Aqg==",
"requires": {
"fetch-blob": "2.1.2",
"node-domexception": "1.0.0"
}
},
"forwarded": {
"version": "0.1.2",
"resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.1.2.tgz",
@ -12637,19 +12665,19 @@
}
},
"gatsby-image": {
"version": "3.5.0",
"resolved": "https://registry.npmjs.org/gatsby-image/-/gatsby-image-3.5.0.tgz",
"integrity": "sha512-pr3P8+UiyL3nThVlqjlGbrDzaLx/aERjHoD4iVxrEsdoMmmb8fh+MT8/OkYe48NVJUiifNY1EmVUHP9RRCB6nw==",
"version": "3.11.0",
"resolved": "https://registry.npmjs.org/gatsby-image/-/gatsby-image-3.11.0.tgz",
"integrity": "sha512-vRMhGLrgyQRH2RYs8leyZ1UyWYIew+NOZEsKur1w6gnWDf0U9UVmYFa9OIE1Vedlo1W+on3AuZ3/KwM+cI69VQ==",
"requires": {
"@babel/runtime": "^7.12.5",
"@babel/runtime": "^7.14.6",
"object-fit-images": "^3.2.4",
"prop-types": "^15.7.2"
},
"dependencies": {
"@babel/runtime": {
"version": "7.14.0",
"resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.14.0.tgz",
"integrity": "sha512-JELkvo/DlpNdJ7dlyw/eY7E0suy5i5GQH+Vlxaq1nsNJ+H7f4Vtv3jMeCEgRhZZQFXTjldYfQgv2qmM6M1v5wA==",
"version": "7.14.8",
"resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.14.8.tgz",
"integrity": "sha512-twj3L8Og5SaCRCErB4x4ajbvBIVV77CGeFglHpeg5WC5FF8TZzBWXtTJ4MqaD9QszLYTtr+IsaAL2rEUevb+eg==",
"requires": {
"regenerator-runtime": "^0.13.4"
}
@ -13664,9 +13692,9 @@
}
},
"gatsby-remark-vscode": {
"version": "3.2.1",
"resolved": "https://registry.npmjs.org/gatsby-remark-vscode/-/gatsby-remark-vscode-3.2.1.tgz",
"integrity": "sha512-txzIOhfkBg49YLAw49L8PnkTu9ZK8gu61p/WbXelL0R9Abw96pmP+R4Bu1RJx3NSwikhC0nqwgORZl/qeaWwXQ==",
"version": "3.3.0",
"resolved": "https://registry.npmjs.org/gatsby-remark-vscode/-/gatsby-remark-vscode-3.3.0.tgz",
"integrity": "sha512-55ucO1KryOwz9UlvQzsdNC6mI8wiWqSrE8pkV/fvHP9Q4NBttOGShU7pLuIUiWlSrzBFGWwtZSvVRTnklbPeCw==",
"requires": {
"decompress": "^4.2.0",
"json5": "^2.1.1",
@ -14552,9 +14580,9 @@
}
},
"git-format-staged": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/git-format-staged/-/git-format-staged-2.1.1.tgz",
"integrity": "sha512-Db4QiAymao9BfpTBCdEcF53jBZfKuwIigqhNmtODD+KOKbxrdDVMDeAs+P7eqJl9udlBZejhRKRbyFRtKLbVmA==",
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/git-format-staged/-/git-format-staged-2.1.2.tgz",
"integrity": "sha512-ieP6iEyMJQ9xPKJGFSmK4HELcDdYwUO84dG4NBKdjaSTOdsZgrW9paLaEau2D4daPQjLwSsgwdqtYjqoVxz3Lw==",
"dev": true
},
"git-up": {
@ -15632,9 +15660,9 @@
"integrity": "sha512-SEQu7vl8KjNL2eoGBLF3+wAjpsNfA9XMlXAYj/3EdaNfAlxKthD1xjEQfGOUhllCGGJVNY34bRr6lPINhNjyZw=="
},
"husky": {
"version": "6.0.0",
"resolved": "https://registry.npmjs.org/husky/-/husky-6.0.0.tgz",
"integrity": "sha512-SQS2gDTB7tBN486QSoKPKQItZw97BMOd+Kdb6ghfpBc0yXyzrddI0oDV5MkDAbuB4X2mO3/nj60TRMcYxwzZeQ==",
"version": "7.0.2",
"resolved": "https://registry.npmjs.org/husky/-/husky-7.0.2.tgz",
"integrity": "sha512-8yKEWNX4z2YsofXAMT7KvA1g8p+GxtB1ffV8XtpAEGuXNAbCV5wdNKH+qTpw8SM9fh4aMPDR+yQuKfgnreyZlg==",
"dev": true
},
"iconv-lite": {
@ -16575,14 +16603,6 @@
}
}
},
"isomorphic-form-data": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/isomorphic-form-data/-/isomorphic-form-data-2.0.0.tgz",
"integrity": "sha512-TYgVnXWeESVmQSg4GLVbalmQ+B4NPi/H4eWxqALKj63KsUrcu301YDjBqaOw3h+cbak7Na4Xyps3BiptHtxTfg==",
"requires": {
"form-data": "^2.3.2"
}
},
"isomorphic-ws": {
"version": "4.0.1",
"resolved": "https://registry.npmjs.org/isomorphic-ws/-/isomorphic-ws-4.0.1.tgz",
@ -17397,17 +17417,17 @@
}
},
"markdownlint-cli": {
"version": "0.27.1",
"resolved": "https://registry.npmjs.org/markdownlint-cli/-/markdownlint-cli-0.27.1.tgz",
"integrity": "sha512-p1VV6aSbGrDlpUWzHizAnSNEQAweVR3qUI/AIUubxW7BGPXziSXkIED+uRtSohUlRS/jmqp3Wi4es5j6fIrdeQ==",
"version": "0.28.1",
"resolved": "https://registry.npmjs.org/markdownlint-cli/-/markdownlint-cli-0.28.1.tgz",
"integrity": "sha512-RBKtRRBzcuAF/H5wMSzb4zvEtbUkyYNEeaDtlQkyH9SoHWPL01emJ2Wrx6NEOa1ZDGwB+seBGvE157Qzc/t/vA==",
"dev": true,
"requires": {
"commander": "~7.1.0",
"commander": "~8.0.0",
"deep-extend": "~0.6.0",
"get-stdin": "~8.0.0",
"glob": "~7.1.6",
"glob": "~7.1.7",
"ignore": "~5.1.8",
"js-yaml": "^4.0.0",
"js-yaml": "^4.1.0",
"jsonc-parser": "~3.0.0",
"lodash.differencewith": "~4.5.0",
"lodash.flatten": "~4.4.0",
@ -17415,7 +17435,7 @@
"markdownlint-rule-helpers": "~0.14.0",
"minimatch": "~3.0.4",
"minimist": "~1.2.5",
"rc": "~1.2.8"
"run-con": "~1.2.10"
},
"dependencies": {
"argparse": {
@ -17425,9 +17445,9 @@
"dev": true
},
"commander": {
"version": "7.1.0",
"resolved": "https://registry.npmjs.org/commander/-/commander-7.1.0.tgz",
"integrity": "sha512-pRxBna3MJe6HKnBGsDyMv8ETbptw3axEdYHoqNh7gu5oDcew8fs0xnivZGm06Ogk8zGAJ9VX+OPEr2GXEQK4dg==",
"version": "8.0.0",
"resolved": "https://registry.npmjs.org/commander/-/commander-8.0.0.tgz",
"integrity": "sha512-Xvf85aAtu6v22+E5hfVoLHqyul/jyxh91zvqk/ioJTQuJR7Z78n7H558vMPKanPSRgIEeZemT92I2g9Y8LPbSQ==",
"dev": true
},
"get-stdin": {
@ -17436,6 +17456,20 @@
"integrity": "sha512-sY22aA6xchAzprjyqmSEQv4UbAAzRN0L2dQB0NlN5acTTK9Don6nhoc3eAbUnpZiCANAMfd/+40kVdKfFygohg==",
"dev": true
},
"glob": {
"version": "7.1.7",
"resolved": "https://registry.npmjs.org/glob/-/glob-7.1.7.tgz",
"integrity": "sha512-OvD9ENzPLbegENnYP5UUfJIirTg4+XwMWGaQfQTY0JenxNvvIKP3U3/tAQSPIu/lHxXYSZmpXlUHeqAIdKzBLQ==",
"dev": true,
"requires": {
"fs.realpath": "^1.0.0",
"inflight": "^1.0.4",
"inherits": "2",
"minimatch": "^3.0.4",
"once": "^1.3.0",
"path-is-absolute": "^1.0.0"
}
},
"ignore": {
"version": "5.1.8",
"resolved": "https://registry.npmjs.org/ignore/-/ignore-5.1.8.tgz",
@ -17443,9 +17477,9 @@
"dev": true
},
"js-yaml": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.0.0.tgz",
"integrity": "sha512-pqon0s+4ScYUvX30wxQi3PogGFAlUyH0awepWvwkj4jD4v+ova3RiYw8bmA6x2rDrEaj8i/oWKoRxpVNW+Re8Q==",
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.0.tgz",
"integrity": "sha512-wpxZs9NoxZaJESJGIZTyDEaYpl0FKSA+FB9aJiyemKhMwkxQg63h4T1KJgUGHpTqPDNRcmmYLugrRjJlBtWvRA==",
"dev": true,
"requires": {
"argparse": "^2.0.1"
@ -18612,6 +18646,11 @@
"resolved": "https://registry.npmjs.org/node-addon-api/-/node-addon-api-3.1.0.tgz",
"integrity": "sha512-flmrDNB06LIl5lywUz7YlNGZH/5p0M7W28k8hzd9Lshtdh1wshD2Y+U4h9LD6KObOy1f+fEVdgprPrEymjM5uw=="
},
"node-domexception": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/node-domexception/-/node-domexception-1.0.0.tgz",
"integrity": "sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ=="
},
"node-eta": {
"version": "0.9.0",
"resolved": "https://registry.npmjs.org/node-eta/-/node-eta-0.9.0.tgz",
@ -19808,13 +19847,13 @@
}
},
"plist": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/plist/-/plist-3.0.2.tgz",
"integrity": "sha512-MSrkwZBdQ6YapHy87/8hDU8MnIcyxBKjeF+McXnr5A9MtffPewTs7G3hlpodT5TacyfIyFTaJEhh3GGcmasTgQ==",
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/plist/-/plist-3.0.3.tgz",
"integrity": "sha512-ghdOKN99hh1oEmAlwBmPYo4L+tSQ7O3jRpkhWqOrMz86CWotpVzMevvQ+czo7oPDpOZyA6K06Ci7QVHpoh9gaA==",
"requires": {
"base64-js": "^1.5.1",
"xmlbuilder": "^9.0.7",
"xmldom": "^0.5.0"
"xmldom": "^0.6.0"
},
"dependencies": {
"xmlbuilder": {
@ -20610,9 +20649,9 @@
"integrity": "sha1-6SQ0v6XqjBn0HN/UAddBo8gZ2Jc="
},
"prettier": {
"version": "2.3.0",
"resolved": "https://registry.npmjs.org/prettier/-/prettier-2.3.0.tgz",
"integrity": "sha512-kXtO4s0Lz/DW/IJ9QdWhAf7/NmPWQXkFr/r/WkR3vyI+0v8amTDxiaQSLzs8NBlytfLWX/7uQUMIW677yLKl4w=="
"version": "2.3.2",
"resolved": "https://registry.npmjs.org/prettier/-/prettier-2.3.2.tgz",
"integrity": "sha512-lnJzDfJ66zkMy58OL5/NY5zp70S7Nz6KqcKkXYzn2tMVrNxvbqaBpg7H3qHaLxCJ5lNMsGuM8+ohS7cZrthdLQ=="
},
"prettier-linter-helpers": {
"version": "1.0.0",
@ -21605,12 +21644,126 @@
}
},
"rehype-react": {
"version": "6.2.0",
"resolved": "https://registry.npmjs.org/rehype-react/-/rehype-react-6.2.0.tgz",
"integrity": "sha512-XpR3p8ejdJ5CSEKqAfASIrkD+KaHLy0JOqXu9zM32tvkr1cUeM7AeidF6Q8eQ/wtMvcJb+h/L4QRwg1eFwBggQ==",
"version": "7.0.1",
"resolved": "https://registry.npmjs.org/rehype-react/-/rehype-react-7.0.1.tgz",
"integrity": "sha512-H1Dha9uGt2ThGEpWT3p1lOUpvvihhXoa0FfANfkAfKJVQH1E4dXGwaJTdYcNGaZUXl9eU9enbGLo6iW6VUPrlA==",
"requires": {
"@mapbox/hast-util-table-cell-style": "^0.1.3",
"hast-to-hyperscript": "^9.0.0"
"@mapbox/hast-util-table-cell-style": "^0.2.0",
"@types/hast": "^2.0.0",
"@types/react": "^17.0.0",
"hast-to-hyperscript": "^10.0.0",
"unified": "^10.0.0"
},
"dependencies": {
"@mapbox/hast-util-table-cell-style": {
"version": "0.2.0",
"resolved": "https://registry.npmjs.org/@mapbox/hast-util-table-cell-style/-/hast-util-table-cell-style-0.2.0.tgz",
"integrity": "sha512-gqaTIGC8My3LVSnU38IwjHVKJC94HSonjvFHDk8/aSrApL8v4uWgm8zJkK7MJIIbHuNOr/+Mv2KkQKcxs6LEZA==",
"requires": {
"unist-util-visit": "^1.4.1"
}
},
"bail": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/bail/-/bail-2.0.1.tgz",
"integrity": "sha512-d5FoTAr2S5DSUPKl85WNm2yUwsINN8eidIdIwsOge2t33DaOfOdSmmsI11jMN3GmALCXaw+Y6HMVHDzePshFAA=="
},
"comma-separated-tokens": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/comma-separated-tokens/-/comma-separated-tokens-2.0.2.tgz",
"integrity": "sha512-G5yTt3KQN4Yn7Yk4ed73hlZ1evrFKXeUW3086p3PRFNp7m2vIjI6Pg+Kgb+oyzhd9F2qdcoj67+y3SdxL5XWsg=="
},
"hast-to-hyperscript": {
"version": "10.0.1",
"resolved": "https://registry.npmjs.org/hast-to-hyperscript/-/hast-to-hyperscript-10.0.1.tgz",
"integrity": "sha512-dhIVGoKCQVewFi+vz3Vt567E4ejMppS1haBRL6TEmeLeJVB1i/FJIIg/e6s1Bwn0g5qtYojHEKvyGA+OZuyifw==",
"requires": {
"@types/unist": "^2.0.0",
"comma-separated-tokens": "^2.0.0",
"property-information": "^6.0.0",
"space-separated-tokens": "^2.0.0",
"style-to-object": "^0.3.0",
"unist-util-is": "^5.0.0",
"web-namespaces": "^2.0.0"
}
},
"is-buffer": {
"version": "2.0.5",
"resolved": "https://registry.npmjs.org/is-buffer/-/is-buffer-2.0.5.tgz",
"integrity": "sha512-i2R6zNFDwgEHJyQUtJEk0XFi1i0dPFn/oqjK3/vPCcDeJvW5NQ83V8QbicfF1SupOaB0h8ntgBC2YiE7dfyctQ=="
},
"is-plain-obj": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-4.0.0.tgz",
"integrity": "sha512-NXRbBtUdBioI73y/HmOhogw/U5msYPC9DAtGkJXeFcFWSFZw0mCUsPxk/snTuJHzNKA8kLBK4rH97RMB1BfCXw=="
},
"property-information": {
"version": "6.0.1",
"resolved": "https://registry.npmjs.org/property-information/-/property-information-6.0.1.tgz",
"integrity": "sha512-F4WUUAF7fMeF4/JUFHNBWDaKDXi2jbvqBW/y6o5wsf3j19wTZ7S60TmtB5HoBhtgw7NKQRMWuz5vk2PR0CygUg=="
},
"space-separated-tokens": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/space-separated-tokens/-/space-separated-tokens-2.0.1.tgz",
"integrity": "sha512-ekwEbFp5aqSPKaqeY1PGrlGQxPNaq+Cnx4+bE2D8sciBQrHpbwoBbawqTN2+6jPs9IdWxxiUcN0K2pkczD3zmw=="
},
"trough": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/trough/-/trough-2.0.2.tgz",
"integrity": "sha512-FnHq5sTMxC0sk957wHDzRnemFnNBvt/gSY99HzK8F7UP5WAbvP70yX5bd7CjEQkN+TjdxwI7g7lJ6podqrG2/w=="
},
"unified": {
"version": "10.1.0",
"resolved": "https://registry.npmjs.org/unified/-/unified-10.1.0.tgz",
"integrity": "sha512-4U3ru/BRXYYhKbwXV6lU6bufLikoAavTwev89H5UxY8enDFaAT2VXmIXYNm6hb5oHPng/EXr77PVyDFcptbk5g==",
"requires": {
"@types/unist": "^2.0.0",
"bail": "^2.0.0",
"extend": "^3.0.0",
"is-buffer": "^2.0.0",
"is-plain-obj": "^4.0.0",
"trough": "^2.0.0",
"vfile": "^5.0.0"
}
},
"unist-util-is": {
"version": "5.1.1",
"resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-5.1.1.tgz",
"integrity": "sha512-F5CZ68eYzuSvJjGhCLPL3cYx45IxkqXSetCcRgUXtbcm50X2L9oOWQlfUfDdAf+6Pd27YDblBfdtmsThXmwpbQ=="
},
"unist-util-stringify-position": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/unist-util-stringify-position/-/unist-util-stringify-position-3.0.0.tgz",
"integrity": "sha512-SdfAl8fsDclywZpfMDTVDxA2V7LjtRDTOFd44wUJamgl6OlVngsqWjxvermMYf60elWHbxhuRCZml7AnuXCaSA==",
"requires": {
"@types/unist": "^2.0.0"
}
},
"vfile": {
"version": "5.1.0",
"resolved": "https://registry.npmjs.org/vfile/-/vfile-5.1.0.tgz",
"integrity": "sha512-4o7/DJjEaFPYSh0ckv5kcYkJTHQgCKdL8ozMM1jLAxO9ox95IzveDPXCZp08HamdWq8JXTkClDvfAKaeLQeKtg==",
"requires": {
"@types/unist": "^2.0.0",
"is-buffer": "^2.0.0",
"unist-util-stringify-position": "^3.0.0",
"vfile-message": "^3.0.0"
}
},
"vfile-message": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/vfile-message/-/vfile-message-3.0.2.tgz",
"integrity": "sha512-UUjZYIOg9lDRwwiBAuezLIsu9KlXntdxwG+nXnjuQAHvBpcX3x0eN8h+I7TkY5nkCXj+cWVp4ZqebtGBvok8ww==",
"requires": {
"@types/unist": "^2.0.0",
"unist-util-stringify-position": "^3.0.0"
}
},
"web-namespaces": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/web-namespaces/-/web-namespaces-2.0.0.tgz",
"integrity": "sha512-dE7ELZRVWh0ceQsRgkjLgsAvwTuv3kcjSY/hLjqL0llleUlQBDjE9JkB9FCBY5F2mnFEwiyJoowl8+NVGHe8dw=="
}
}
},
"remark": {
@ -22175,6 +22328,32 @@
"is-promise": "^2.1.0"
}
},
"run-con": {
"version": "1.2.10",
"resolved": "https://registry.npmjs.org/run-con/-/run-con-1.2.10.tgz",
"integrity": "sha512-n7PZpYmMM26ZO21dd8y3Yw1TRtGABjRtgPSgFS/nhzfvbJMXFtJhJVyEgayMiP+w/23craJjsnfDvx4W4ue/HQ==",
"dev": true,
"requires": {
"deep-extend": "^0.6.0",
"ini": "~2.0.0",
"minimist": "^1.2.5",
"strip-json-comments": "~3.1.1"
},
"dependencies": {
"ini": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ini/-/ini-2.0.0.tgz",
"integrity": "sha512-7PnF4oN3CvZF23ADhA5wRaYEQpJ8qygSkbtTXWBeXWXmEVRXK+1ITciHWwHhsjv1TmW0MgacIv6hEi5pX5NQdA==",
"dev": true
},
"strip-json-comments": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-3.1.1.tgz",
"integrity": "sha512-6fPc+R4ihwqP6N/aIv2f1gMH8lOVtWQHoqC4yK6oSDVVocumAsfCqjkXnqiYMhmMwS/mEHLp7Vehlt3ql6lEig==",
"dev": true
}
}
},
"run-parallel": {
"version": "1.1.10",
"resolved": "https://registry.npmjs.org/run-parallel/-/run-parallel-1.1.10.tgz",
@ -22836,9 +23015,9 @@
}
},
"slugify": {
"version": "1.5.3",
"resolved": "https://registry.npmjs.org/slugify/-/slugify-1.5.3.tgz",
"integrity": "sha512-/HkjRdwPY3yHJReXu38NiusZw2+LLE2SrhkWJtmlPDB1fqFSvioYj62NkPcrKiNCgRLeGcGK7QBvr1iQwybeXw=="
"version": "1.6.0",
"resolved": "https://registry.npmjs.org/slugify/-/slugify-1.6.0.tgz",
"integrity": "sha512-FkMq+MQc5hzYgM86nLuHI98Acwi3p4wX+a5BO9Hhw4JdK4L7WueIiZ4tXEobImPqBz2sVcV0+Mu3GRB30IGang=="
},
"smoothscroll-polyfill": {
"version": "0.4.4",
@ -23783,19 +23962,20 @@
}
},
"swagger-client": {
"version": "3.13.3",
"resolved": "https://registry.npmjs.org/swagger-client/-/swagger-client-3.13.3.tgz",
"integrity": "sha512-8ZVm0NIhmAiHaBwDibkX76W3jvs3h1Okb41iyeSG8TPXwuZbeS5tEpOkqgUMdK48dKs0S8VMu5ldkak79MFVMw==",
"version": "3.16.0",
"resolved": "https://registry.npmjs.org/swagger-client/-/swagger-client-3.16.0.tgz",
"integrity": "sha512-fE+HPDla35+k9uXd9BmgZNvhUaM1oA8iCILOINYrziFK3+dkiSLG57h9Z4QOlcVMr/MjVHYy/1JbftlMt+sQ2A==",
"requires": {
"@babel/runtime-corejs3": "^7.11.2",
"btoa": "^1.2.1",
"buffer": "^6.0.3",
"cookie": "~0.4.1",
"cross-fetch": "^3.0.6",
"cross-fetch": "^3.1.4",
"deep-extend": "~0.6.0",
"fast-json-patch": "^3.0.0-1",
"isomorphic-form-data": "~2.0.0",
"js-yaml": "^3.14.0",
"form-data-encoder": "^1.4.3",
"formdata-node": "^4.0.0",
"js-yaml": "^4.1.0",
"lodash": "^4.17.19",
"qs": "^6.9.4",
"querystring-browser": "^1.0.4",
@ -23804,14 +23984,19 @@
},
"dependencies": {
"@babel/runtime-corejs3": {
"version": "7.14.0",
"resolved": "https://registry.npmjs.org/@babel/runtime-corejs3/-/runtime-corejs3-7.14.0.tgz",
"integrity": "sha512-0R0HTZWHLk6G8jIk0FtoX+AatCtKnswS98VhXwGImFc759PJRp4Tru0PQYZofyijTFUr+gT8Mu7sgXVJLQ0ceg==",
"version": "7.15.3",
"resolved": "https://registry.npmjs.org/@babel/runtime-corejs3/-/runtime-corejs3-7.15.3.tgz",
"integrity": "sha512-30A3lP+sRL6ml8uhoJSs+8jwpKzbw8CqBvDc1laeptxPm5FahumJxirigcbD2qTs71Sonvj1cyZB0OKGAmxQ+A==",
"requires": {
"core-js-pure": "^3.0.0",
"core-js-pure": "^3.16.0",
"regenerator-runtime": "^0.13.4"
}
},
"argparse": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/argparse/-/argparse-2.0.1.tgz",
"integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q=="
},
"buffer": {
"version": "6.0.3",
"resolved": "https://registry.npmjs.org/buffer/-/buffer-6.0.3.tgz",
@ -23826,19 +24011,31 @@
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.4.1.tgz",
"integrity": "sha512-ZwrFkGJxUR3EIoXtO+yVE69Eb7KlixbaeAWfBQB9vVsNn/o+Yw69gBWSSDK825hQNdN+wF8zELf3dFNl/kxkUA=="
},
"js-yaml": {
"version": "3.14.1",
"resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-3.14.1.tgz",
"integrity": "sha512-okMH7OXXJ7YrN9Ok3/SXrnu4iX9yOk+25nqX4imS2npuvTYDmo/QEZoqwZkYaIDk3jVvBOTOIEgEhaLOynBS9g==",
"core-js-pure": {
"version": "3.16.2",
"resolved": "https://registry.npmjs.org/core-js-pure/-/core-js-pure-3.16.2.tgz",
"integrity": "sha512-oxKe64UH049mJqrKkynWp6Vu0Rlm/BTXO/bJZuN2mmR3RtOFNepLlSWDd1eo16PzHpQAoNG97rLU1V/YxesJjw=="
},
"cross-fetch": {
"version": "3.1.4",
"resolved": "https://registry.npmjs.org/cross-fetch/-/cross-fetch-3.1.4.tgz",
"integrity": "sha512-1eAtFWdIubi6T4XPy6ei9iUFoKpUkIF971QLN8lIvvvwueI65+Nw5haMNKUwfJxabqlIIDODJKGrQ66gxC0PbQ==",
"requires": {
"argparse": "^1.0.7",
"esprima": "^4.0.0"
"node-fetch": "2.6.1"
}
},
"js-yaml": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.0.tgz",
"integrity": "sha512-wpxZs9NoxZaJESJGIZTyDEaYpl0FKSA+FB9aJiyemKhMwkxQg63h4T1KJgUGHpTqPDNRcmmYLugrRjJlBtWvRA==",
"requires": {
"argparse": "^2.0.1"
}
},
"object-inspect": {
"version": "1.10.3",
"resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.10.3.tgz",
"integrity": "sha512-e5mCJlSH7poANfC8z8S9s9S2IN5/4Zb3aZ33f5s8YqoazCFzNLloLU8r5VCG+G7WoqLvAAZoVMcy3tp/3X0Plw=="
"version": "1.11.0",
"resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.11.0.tgz",
"integrity": "sha512-jp7ikS6Sd3GxQfZJPyH3cjcbJF6GZPClgdV+EFygjFLQ5FmW/dRUnTd9PQ9k0JhoNDabWFbpF1yCdSWCC6gexg=="
},
"qs": {
"version": "6.10.1",
@ -26524,9 +26721,9 @@
"integrity": "sha512-fDlsI/kFEx7gLvbecc0/ohLG50fugQp8ryHzMTuW9vSa1GJ0XYWKnhsUx7oie3G98+r56aTQIUB4kht42R3JvA=="
},
"xmldom": {
"version": "0.5.0",
"resolved": "https://registry.npmjs.org/xmldom/-/xmldom-0.5.0.tgz",
"integrity": "sha512-Foaj5FXVzgn7xFzsKeNIde9g6aFBxTPi37iwsno8QvApmtg7KYrr+OPyRHcJF7dud2a5nGRBXK3n0dL62Gf7PA=="
"version": "0.6.0",
"resolved": "https://registry.npmjs.org/xmldom/-/xmldom-0.6.0.tgz",
"integrity": "sha512-iAcin401y58LckRZ0TkI4k0VSM1Qg0KGSc3i8rU+xrxe19A/BN1zHyVSJY7uoutVlaTSzYyk/v5AmkewAP7jtg=="
},
"xmlhttprequest-ssl": {
"version": "1.6.2",

View File

@ -16,11 +16,11 @@
"test": "npm run lint"
},
"dependencies": {
"@oceanprotocol/art": "^3.0.0",
"@oceanprotocol/art": "^3.2.0",
"axios": "^0.21.1",
"classnames": "^2.3.1",
"gatsby": "^2.32.13",
"gatsby-image": "^3.5.0",
"gatsby-image": "^3.11.0",
"gatsby-plugin-catch-links": "^2.10.0",
"gatsby-plugin-manifest": "^2.12.1",
"gatsby-plugin-offline": "^3.10.2",
@ -38,7 +38,7 @@
"gatsby-remark-images": "^3.11.1",
"gatsby-remark-responsive-iframe": "^2.11.0",
"gatsby-remark-smartypants": "^2.10.0",
"gatsby-remark-vscode": "^3.2.1",
"gatsby-remark-vscode": "^3.3.0",
"gatsby-source-filesystem": "^2.11.1",
"gatsby-source-git": "^1.1.0",
"gatsby-source-graphql": "^2.14.0",
@ -53,28 +53,28 @@
"react-helmet": "^6.1.0",
"react-json-view": "^1.21.3",
"react-scrollspy": "^3.4.3",
"rehype-react": "^6.2.0",
"rehype-react": "^7.0.1",
"remark": "^13.0.0",
"remark-github-plugin": "^1.4.0",
"remark-react": "^8.0.0",
"shortid": "^2.2.16",
"slugify": "^1.5.3",
"slugify": "^1.6.0",
"smoothscroll-polyfill": "^0.4.4",
"swagger-client": "^3.13.3"
"swagger-client": "^3.16.0"
},
"devDependencies": {
"@svgr/webpack": "^5.5.0",
"dotenv": "^10.0.0",
"eslint": "^7.31.0",
"eslint": "^7.32.0",
"eslint-config-oceanprotocol": "^1.5.0",
"eslint-config-prettier": "^8.3.0",
"eslint-plugin-prettier": "^3.4.0",
"git-format-staged": "^2.1.1",
"husky": "^6.0.0",
"markdownlint-cli": "^0.27.1",
"eslint-plugin-prettier": "^3.4.1",
"git-format-staged": "^2.1.2",
"husky": "^7.0.2",
"markdownlint-cli": "^0.28.1",
"node-sass": "^5.0.0",
"npm-run-all": "^4.1.5",
"prettier": "^2.3.0"
"prettier": "^2.3.2"
},
"repository": {
"type": "git",

View File

@ -0,0 +1,139 @@
import React, { useState, useEffect } from 'react'
import { Helmet } from 'react-helmet'
import Layout from '../components/Layout'
import Content from '../components/Content'
import HeaderSection from '../components/HeaderSection'
import Sidebar from '../components/Sidebar'
import stylesDoc from '../templates/Doc.module.scss'
import Seo from './Seo'
import PropTypes from 'prop-types'
import { graphql } from 'gatsby'
export default function Deployments({ data, location }) {
const [content, setContent] = useState(undefined)
const [loading, setLoading] = useState(true)
const networks = {
'Ethereum Mainnet': {
aquarius: 'https://aquarius.oceanprotocol.com',
provider: 'https://provider.mainnet.oceanprotocol.com'
},
'Polygon Mainnet': {
aquarius: 'https://aquarius.oceanprotocol.com',
provider: 'https://provider.polygon.oceanprotocol.com'
},
'Binance Smart Chain': {
aquarius: 'https://aquarius.oceanprotocol.com',
provider: 'https://provider.bsc.oceanprotocol.com'
},
Ropsten: {
aquarius: 'https://aquarius.oceanprotocol.com',
provider: 'https://provider.ropsten.oceanprotocol.com'
},
Rinkeby: {
aquarius: 'https://aquarius.oceanprotocol.com',
provider: 'https://provider.rinkeby.oceanprotocol.com'
},
Mumbai: {
aquarius: 'https://aquarius.oceanprotocol.com',
provider: 'https://provider.mumbai.oceanprotocol.com'
}
}
useEffect(() => {
// an effect callback must not be async itself (its return value is
// treated as a cleanup function), so wrap the awaits in an inner function
const fetchVersions = async () => {
const table = await getTable(networks)
setContent(table)
setLoading(false)
}
fetchVersions()
}, [])
const getVersion = async (url) => {
if (!url) return
try {
const data = await fetch(url)
const { version } = await data.json()
return version
} catch {
return '-'
}
}
const getTable = async (networks) => {
const objs = []
for (const key of Object.keys(networks)) {
const aquariusVersion = await getVersion(networks[key].aquarius)
const providerVersion = await getVersion(networks[key].provider)
objs.push(
<tr key={key}>
<td>{key}</td>
<td>{aquariusVersion}</td>
<td>{providerVersion}</td>
</tr>
)
}
return (
<div>
<table>
<thead>
<tr>
<th>Network</th>
<th>Aquarius</th>
<th>Provider</th>
</tr>
</thead>
<tbody>{objs}</tbody>
</table>
</div>
)
}
return (
<>
<Helmet>
<body className="concepts" />
</Helmet>
<Seo
title="Deployments"
description=""
slug="/concepts/deployments/"
article
location={location}
/>
<Layout location={location}>
<HeaderSection title="Core Concepts" />
<Content>
<main className={stylesDoc.wrapper}>
<aside className={stylesDoc.sidebar}>
<Sidebar location={location} sidebar="concepts" collapsed />
</aside>
<article className={stylesDoc.main}>
<div>{loading ? <>Fetching versions</> : content}</div>
</article>
</main>
</Content>
</Layout>
</>
)
}
Deployments.propTypes = {
data: PropTypes.object.isRequired,
location: PropTypes.object.isRequired
}
export const DeploymentsQuery = graphql`
query {
allSectionsYaml {
edges {
node {
title
description
link
}
}
}
}
`
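
The version lookup in the new `Deployments` component boils down to fetching each service's root endpoint and reading the `version` field of its JSON response, falling back to `-` on failure. A sketch of that logic with the fetch dependency injected so it runs without network access (the stub response shape is an assumption, not a real API call):

```javascript
// Same shape as getVersion in Deployments.jsx, but with fetch passed in
// so the fallback behaviour is testable offline.
const getVersion = async (url, fetchImpl) => {
  if (!url) return
  try {
    const res = await fetchImpl(url)
    const { version } = await res.json()
    return version
  } catch {
    return '-'
  }
}

// Stub: pretend the service root returns { "version": "3.0.0" }
const okFetch = async () => ({ json: async () => ({ version: '3.0.0' }) })
// Stub: simulate an unreachable service
const badFetch = async () => { throw new Error('network down') }

;(async () => {
  console.log(await getVersion('https://aquarius.oceanprotocol.com', okFetch))  // 3.0.0
  console.log(await getVersion('https://aquarius.oceanprotocol.com', badFetch)) // -
})()
```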

View File

@ -1,7 +1,6 @@
import React from 'react'
import PropTypes from 'prop-types'
import slugify from 'slugify'
import { cleanPathKey } from './utils'
import styles from './Paths.module.scss'
import stylesDoc from '../Doc.module.scss'
const ResponseExample = React.lazy(() => import('./ResponseExample'))
@ -153,9 +152,9 @@ Method.propTypes = {
const Paths = ({ paths }) =>
Object.entries(paths).map(([key, value]) => (
<div key={key} id={slugify(cleanPathKey(key))}>
<div key={key} id={slugify(key)}>
<h2 className={stylesDoc.pathName}>
<code>{cleanPathKey(key)}</code>
<code>{key}</code>
</h2>
{Object.entries(value).map(([key, value]) => (

View File

@ -9,29 +9,47 @@ import stylesSidebar from '../../components/Sidebar.module.scss'
const Toc = ({ data }) => {
const Ids = []
const items = Object.keys(data.paths).map((key) => {
Ids.push(slugify(cleanPathKey(key)))
const itemsV1 = Object.keys(data.paths)
.filter((key) => key.startsWith('/api/v1/aquarius'))
.map((key) => {
Ids.push(slugify(key))
return (
<li key={key}>
<Scroll
type="id"
element={`${slugify(cleanPathKey(key))}`}
offset={-20}
>
<Scroll type="id" element={`${slugify(key)}`} offset={-20}>
<code>{cleanPathKey(key)}</code>
</Scroll>
</li>
)
})
const itemsOther = Object.keys(data.paths)
.filter((key) => !key.startsWith('/api/v1/aquarius'))
.map((key) => {
Ids.push(slugify(key))
return (
<li key={key}>
<Scroll type="id" element={`${slugify(key)}`} offset={-20}>
<code>{key}</code>
</Scroll>
</li>
)
})
return (
<Scrollspy
items={Ids}
currentClassName={stylesSidebar.scrollspyActive}
offset={-100}
>
{items}
<code>/api/v1/aquarius</code>
<ul>{itemsV1}</ul>
{itemsOther.length ? (
<>
<code>Other REST endpoints</code>
<ul>{itemsOther}</ul>
</>
) : null}
</Scrollspy>
)
}
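
The grouping added to the Toc is just a prefix partition over the spec's path keys. A standalone sketch of that split (the sample keys are made up, standing in for `data.paths` from a resolved Swagger spec):

```javascript
// Partition endpoint paths the way the updated Toc does: Aquarius v1 routes
// grouped under their shared prefix, everything else under "Other REST endpoints".
const PREFIX = '/api/v1/aquarius'
const paths = [
  '/api/v1/aquarius/assets/ddo/{did}',
  '/api/v1/aquarius/assets/names',
  '/spec'
]
const itemsV1 = paths.filter((key) => key.startsWith(PREFIX))
const itemsOther = paths.filter((key) => !key.startsWith(PREFIX))
console.log(itemsV1)    // the two /api/v1/aquarius/... keys
console.log(itemsOther) // [ '/spec' ]
```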

View File

@ -4,10 +4,5 @@ export const cleanPathKey = (key) => {
if (key.includes('aquarius')) {
keyCleaned = key.replace(/\/api\/v1\/aquarius/gi, '')
}
if (key.includes('brizo')) {
keyCleaned = key.replace(/\/api\/v1\/brizo/gi, '')
}
return keyCleaned
}
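
After this cleanup, `cleanPathKey` only strips the Aquarius prefix, since the Brizo branch is gone. A runnable sketch of the remaining behaviour:

```javascript
// The helper as it stands after removing the Brizo branch: non-Aquarius
// keys pass through unchanged.
const cleanPathKey = (key) => {
  let keyCleaned = key
  if (key.includes('aquarius')) {
    keyCleaned = key.replace(/\/api\/v1\/aquarius/gi, '')
  }
  return keyCleaned
}

console.log(cleanPathKey('/api/v1/aquarius/assets/ddo/{did}')) // /assets/ddo/{did}
console.log(cleanPathKey('/spec')) // /spec
```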