
Merge branch 'main' of github.com:oceanprotocol/docs into issue-701-create-v4-docs

This commit is contained in:
Akshay 2021-09-28 12:01:58 +02:00
commit 500754c435
46 changed files with 1548 additions and 689 deletions

View File

@ -116,3 +116,4 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

View File

@ -67,6 +67,36 @@ module.exports = {
{
from: '/concepts/connect-to-networks/',
to: '/concepts/networks/'
},
{
from: '/concepts/oeps-did/',
to: '/concepts/did-ddo/'
},
{
from: '/concepts/oeps-asset-ddo/',
to: '/concepts/ddo-metadata/'
},
{
from: '/tutorials/azure-for-brizo/',
to: '/tutorials/azure-for-provider/'
},
{
from: '/tutorials/amazon-s3-for-brizo/',
to: '/tutorials/amazon-s3-for-provider/'
},
{
from: '/tutorials/on-premise-for-brizo/',
to: '/tutorials/on-premise-for-provider/'
}
],
swaggerComponents: [
{
name: 'aquarius',
url: 'https://aquarius.oceanprotocol.com/spec'
},
{
name: 'provider',
url: 'https://provider.mainnet.oceanprotocol.com/spec'
}
]
}

View File

@ -56,7 +56,8 @@ Complementary to Ocean Market, Ocean has reference code to ease building **third
## Metadata Tools
Metadata (name of dataset, date created etc.) is used by marketplaces for data asset discovery. Each data asset can have a [decentralized identifier](https://w3c-ccg.github.io/did-spec/) (DID) that resolves to a DID document (DDO) for associated metadata. The DDO is essentially [JSON](https://www.json.org/) filling in metadata fields. [OEP7](https://github.com/oceanprotocol/OEPs/tree/master/7) formalizes Ocean DID usage.
Metadata (name of dataset, date created etc.) is used by marketplaces for data asset discovery. Each data asset can have a [decentralized identifier](https://w3c-ccg.github.io/did-spec/) (DID) that resolves to a DID document (DDO) for associated metadata. The DDO is essentially [JSON](https://www.json.org/) filling in metadata fields. For more details on working with Ocean DIDs, check out the [DID concept documentation](https://docs.oceanprotocol.com/concepts/did-ddo/).
The [DDO Metadata documentation](https://docs.oceanprotocol.com/concepts/ddo-metadata/) goes into more depth regarding metadata structure.
[OEP8](https://github.com/oceanprotocol/OEPs/tree/master/8) specifies Ocean metadata schema, including fields that must be filled. It's based on the public [DataSet schema from schema.org](https://schema.org/Dataset).

View File

@ -15,34 +15,42 @@ The most basic scenario for a Publisher is to provide access to the datasets the
[This page](https://oceanprotocol.com/technology/compute-to-data) elaborates on the benefits.
## Data Sets & Algorithms
## Datasets & Algorithms
With Compute-to-Data, data sets are not allowed to leave the premises of the data holder, only algorithms can be permitted to run on them under certain conditions within an isolated and secure environment. Algorithms are an asset type just like data sets and they too can have a pool or a fixed price to determine their price whenever they are used.
With Compute-to-Data, datasets are not allowed to leave the premises of the data holder, only algorithms can be permitted to run on them under certain conditions within an isolated and secure environment. Algorithms are an asset type just like datasets. They too can have a pool or a fixed price that determines their price whenever they are used.
Algorithms can be either public or private by setting either an `access` or a `compute` service in their DDO. An algorithm set to public can be downloaded for its set price, while an algorithm set to private is only available as part of a compute job without any way to download it. If an algorithm is set to private, then the dataset must be published on the same Ocean Provider as the data set it should run on.
For each data set, publishers can choose to allow various permission levels for algorithms to run:
Algorithms can be public or private by setting `"attributes.main.type"` value as follows:
- `"access"` - public. The algorithm can be downloaded, given appropriate datatoken.
- `"compute"` - private. The algorithm is only available to use as part of a compute job without any way to download it. The dataset must be published on the same Ocean Provider as the dataset it's targeted to run on.
For each dataset, publishers can choose to allow various permission levels for algorithms to run:
- allow selected algorithms, referenced by their DID
- allow all algorithms published within a network or marketplace
- allow raw algorithms, for advanced use cases circumventing algorithm as an asset type, but most prone to data escape
All implementations should set permissions to private by default: upon publishing a compute data set, no algorithms should be allowed to run on it. This is to prevent data escape by a rogue algorithm being written in a way to extract all data from a data set.
All implementations should set permissions to private by default: upon publishing a compute dataset, no algorithms should be allowed to run on it. This is to prevent data escape by a rogue algorithm being written in a way to extract all data from a dataset.
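As a sketch of what these permissions look like, here is a `main.privacy` object for a dataset that permits only one selected algorithm (attribute names follow the compute-attributes tables later in this changeset; the DID and checksum values are illustrative placeholders):

```json
"privacy": {
  "allowRawAlgorithm": false,
  "allowNetworkAccess": false,
  "publisherTrustedAlgorithms": [
    {
      "did": "did:op:1234",
      "filesChecksum": "100",
      "containerSectionChecksum": "200"
    }
  ]
}
```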
## Architecture Overview
The architecture follows [OEP-12: Compute-to-Data](https://github.com/oceanprotocol/OEPs/tree/master/12) as a spec.
Here's the sequence diagram for starting a new compute job.
![Sequence Diagram for computing services](images/Starting New Compute Job.png)
In the above diagram you can see the initial integration supported. It involves the following components/actors:
The Consumer calls the Provider with `start(did, algorithm, additionalDIDs)`. It returns job id `XXXX`. The Provider oversees the rest of the work. At any point, the Consumer can query the Provider for the job status via `getJobDetails(XXXX)`.
Here's how Provider works. First, it ensures that the Consumer has sent the appropriate datatokens to get access. Then, it asks the Operator-Service (a microservice) to start the job, which passes on the request to Operator-Engine (the actual compute system). Operator-Engine runs Kubernetes compute jobs etc. as needed. Operator-Engine reports back to Operator-Service when the job has finished.
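A sketch of that consumer-side flow in JavaScript (the `startCompute` and `getJobDetails` wrappers and the `'finished'` status value are hypothetical names mirroring the calls described above):

```javascript
// Start a compute job via the Provider, then poll for status until it
// finishes. All API names here are illustrative, not a real client library.
async function runComputeJob(provider, datasetDid, algorithmDid) {
  // Provider checks datatokens, then asks Operator-Service to start the job
  const jobId = await provider.startCompute(datasetDid, algorithmDid, []);

  // The Consumer can query job status at any point
  let job = await provider.getJobDetails(jobId);
  while (job.status !== 'finished') {
    await new Promise((resolve) => setTimeout(resolve, 5000)); // poll every 5s
    job = await provider.getJobDetails(jobId);
  }
  return job;
}
```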
Here's the actors/components:
- Consumers - The end users who need to use some computing services offered by the same Publisher as the data Publisher.
- Operator-Service - Micro-service that is handling the compute requests.
- Operator-Engine - The computing systems where the compute will be executed.
- Kubernetes - a K8s cluster
Before the flow can begin, the following pre-conditions must be met:
Before the flow can begin, these pre-conditions must be met:
- The Asset DDO has a `compute` service.
- The Asset DDO compute service must permit algorithms to run on it.
@ -109,3 +117,4 @@ The Operator Engine is in charge of retrieving all the workflows registered in a
- [Tutorial: Writing Algorithms](/tutorials/compute-to-data-algorithms/)
- [Tutorial: Set Up a Compute-to-Data Environment](/tutorials/compute-to-data/)
- [Compute-to-Data in Ocean Market](https://blog.oceanprotocol.com)
- [(Old) Compute-to-Data specs](https://github.com/oceanprotocol-archive/OEPs/tree/master/12) (OEP12)

View File

@ -1,165 +1,88 @@
# OEP-8: Assets Metadata Ontology
```text
shortname: 8/ASSET-DDO
name: Assets Metadata Ontology
type: Standard
status: Draft
version: 0.5
editor: Alex Coseru <alex@oceanprotocol.com>
contributors: Aitor Argomaniz <aitor@oceanprotocol.com>
Enrique Ruiz <enrique@oceanprotocol.com>,
Matthias Kretschmann <matthias@oceanprotocol.com>,
Jose Pablo Fernandez <jose@oceanprotocol.com>,
Marcus Jones <marcus@oceanprotocol.com>,
Troy McConaghy <troy@oceanprotocol.com>
```
**Table of Contents**
- [Motivation](#motivation)
- [Life Cycle of Metadata](#life-cycle-of-metadata)
- [Local Metadata](#local-metadata)
- [Remote Metadata](#remote-metadata)
- [Metadata Attributes](#metadata-attributes)
- [Main Attributes](#main-attributes)
- [File Attributes](#file-attributes)
- [Additional Attributes](#additional-attributes)
- [Other Suggested Additional Attributes](#other-suggested-additional-attributes)
- [Status Attributes](#status-attributes)
- [Example of Local Metadata](#example-of-local-metadata)
- [Example of Remote Metadata](#example-of-remote-metadata)
- [Specific attributes per asset type](#specific-attributes-per-asset-type)
- [Algorithm attributes](#algorithm-attributes)
- [References](#references)
- [Change Process](#change-process)
- [Language](#language)
---
title: DDO Metadata
description: Specification of the DDO subset dedicated to asset metadata
slug: /concepts/ddo-metadata/
section: concepts
---
## Motivation
## Overview
Every asset (dataset, algorithm) in the Ocean Network has an associated Decentralized Identifier (DID) and DID document / DID Descriptor Object (DDO). This is because assets without proper descriptive metadata have poor visibility and discoverability.
This page defines the schema for asset _metadata_. Metadata is the subset of an Ocean DDO that holds information about the asset.
See [OEP 7/DID](../../7/) for information about the overall structure of Ocean DDOs and DIDs.
The schema is based on public schema.org [DataSet schema](https://schema.org/Dataset).
This OEP is about one particular part of Ocean DDOs: the asset metadata, a JSON object with information about the asset.
Standardizing labels is key to effective searching, sorting and filtering (discovery).
This OEP defines the assets metadata ontology, i.e. the schema for the asset metadata. It's based on the public schema.org [DataSet schema](https://schema.org/Dataset).
This page specifies metadata attributes that _must_ be included, and that _may_ be included. These attributes are organized hierarchically, from top-layer attributes like `"main"` to sub-level attributes like `"main.type"`. This page also provides DDO metadata examples.
This OEP doesn't detail the exact method of registering assets on-chain or storing DDOs.
## Rules for Metadata Storage and Control in Ocean
The main motivations of this OEP are to:
The publisher publishes an asset DDO (including metadata) onto the chain.
- Specify the common attributes that MUST be included in any asset metadata stored in the Ocean Network
- Normalize the attributes to use in any curation process, to provide a common structure to sort and filter the DDOs
- Identify the recommended additional attributes that SHOULD be included in a DDO to facilitate asset search
- Provide an example of an asset metadata object and additional links for reference
The publisher may be the asset owner, or a marketplace acting on behalf of the owner.
## Life Cycle of Metadata
Most metadata fields may be modified after creation. The blockchain records the provenance of changes.
### Local Metadata
DDOs (including metadata) are found in two places:
Metadata is first created by the publisher of the asset. The publisher has knowledge of the file URLs, and they are stored in plaintext in the `files` object. This initial metadata is the _local metadata_.
- _Remote_ - main storage, on-chain. File URLs are always encrypted. One may actually encrypt all metadata, at a severe cost to discoverability.
- _Local_ - local cache. All fields are in plaintext.
### Remote Metadata
Ocean Aquarius helps manage metadata. It can be used to write DDOs to the chain, read from the chain, and has a local cache of the DDO in plaintext with fast search.
A publisher publishes (registers) an asset using [Ocean-lib](https://docs.oceanprotocol.com/concepts/components/#squid-libraries), which might be running on their local machine or remotely. When they do, the local metadata is passed to Squid, which makes some changes and additions in the metadata, puts it into a DDO, and sends that DDO to a metadata store (Aquarius).
## Fields for Metadata
Aquarius may also make some changes and additions to the metadata, such as the `datePublished` or parts of the `curation` object. The metadata that finally gets stored by Aquarius is the _remote metadata_.
An asset represents a resource in Ocean, e.g. a dataset or an algorithm.
> A marketplace can and might also act as a publisher. [OEP-11](../../11) describes the publishing flow in more detail.
## Metadata Attributes
An asset is the representation of different types of resources in Ocean Protocol. Typically, an asset is one of the following types:
- _Dataset_. An asset representing a dataset or data resource. It could be, for example, a CSV file or multiple JPG files.
- _Algorithm_. An asset representing a piece of software. It could be a Python script using TensorFlow, a Spark job, etc.
Each kind of asset requires a different subset of metadata attributes. The distinction between the types of asset (dataset, algorithm) is given by the attribute `DDO.services["metadata"].main.type`
A `metadata` object has the following attributes, all of which are objects.
A `metadata` object has the following attributes, all of which are objects. Some are only required for local or remote, and are specified as such.
| Attribute | Required | Description |
| --------------------------- | -------- | ---------------------------------------------------------- |
| **`main`** | Yes | Main attributes used to calculate the service checksum |
| **`status`** | No. | Status attributes |
| **`main`** | **Yes** | Main attributes |
| **`encryptedFiles`** | Remote | Encrypted string of the `attributes.main.files` object. |
| **`encryptedServices`** | Remote | Encrypted string of the `attributes.main.services` object. |
| **`status`** | No | Status attributes |
| **`additionalInformation`** | No | Optional attributes |
| **`encryptedFiles`** | (remote) | Encrypted string of the `attributes.main.files` object. |
| **`encryptedServices`** | (remote) | Encrypted string of the `attributes.main.services` object. |
The `main`, `curation` and `additionalInformation` attributes are independent of the asset type, all assets have those metadata sections.
The `main` and `additionalInformation` attributes are independent of the asset type.
### Main Attributes
## Fields for `attributes.main`
**This list of attributes can't be modified after creation**, because they are considered the metadata essence of the asset. This information is used to calculate the unique checksum of the asset. If any of these attributes must change, a new asset has to be created, derived from the existing one.
The `main` object has the following attributes, not all are required. Some are required by only the metadata store (_remote_) and others are mandatory for _local_ metadata only. If required or not by both, they are marked with _Yes/No_ in the _Required_ column.
The `main` object has the following attributes.
| Attribute | Type | Required | Description |
| ------------------- | --------------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`name`** | Text | Yes | Descriptive name or title of the asset. |
| **`type`** | Text | Yes | Type of the asset. Helps to filter by the type of asset. It could be for example ("dataset", "algorithm"). |
| **`dateCreated`** | DateTime | Yes | The date on which the asset was created by the originator. ISO 8601 format, Coordinated Universal Time, e.g. `2019-01-31T08:38:32Z`. |
| **`datePublished`** | DateTime | (remote) | The date on which the asset DDO is registered into the metadata store (Aquarius) |
| **`author`** | Text | Yes | Name of the entity generating this data (e.g. Tfl, Disney Corp, etc.). |
| **`license`** | Text | Yes | Short name referencing the license of the asset (e.g. Public Domain, CC-0, CC-BY, No License Specified, etc. ). If it's not specified, the following value will be added: "No License Specified". |
| **`files`** | Array of files object | Yes | Array of `File` objects including the encrypted file urls. Further metadata about each file is stored, see [File Attributes](#file-attributes) |
| **`name`** | Text |**Yes** | Descriptive name or title of the asset. |
| **`type`** | Text |**Yes** | Asset type. Includes `"dataset"` (e.g. csv file), `"algorithm"` (e.g. Python script). Each type needs a different subset of metadata attributes. |
| **`author`** | Text |**Yes** | Name of the entity generating this data (e.g. Tfl, Disney Corp, etc.). |
| **`license`** | Text |**Yes** | Short name referencing the license of the asset (e.g. Public Domain, CC-0, CC-BY, No License Specified, etc. ). If it's not specified, the following value will be added: "No License Specified". |
| **`files`** | Array of files object |**Yes** | Array of `File` objects including the encrypted file urls. |
| **`dateCreated`** | DateTime |**Yes** | The date on which the asset was created by the originator. ISO 8601 format, Coordinated Universal Time, e.g. `2019-01-31T08:38:32Z`. |
| **`datePublished`** | DateTime | Remote | The date on which the asset DDO is registered into the metadata store (Aquarius) |
#### File Attributes
## Fields for `attributes.main.files`
File attributes are a subset of the `main` section.
The `files` object has a list of `file` objects.
A file object has the following attributes, with the details necessary to consume and validate the data.
Each `file` object has the following attributes, with the details necessary to consume and validate the data.
| Attribute | Required | Description |
| -------------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`url`** | (local) | Content URL. Omitted from the remote metadata. Supports `http(s)://` and `ipfs://` URLs. |
| **`name`** | no | File name. |
| **`index`** | yes | Index number starting from 0 of the file. |
| **`contentType`** | yes | File format. |
| **`checksum`** | no | Checksum of the file using your preferred format (i.e. MD5). Format specified in `checksumType`. If it's not provided can't be validated if the file was not modified after registering. |
| **`checksumType`** | no | Format of the provided checksum. Can vary according to server (i.e Amazon vs. Azure) |
| **`contentLength`** | no | Size of the file in bytes. |
| **`encoding`** | no | File encoding (e.g. UTF-8). |
| **`compression`** | no | File compression (e.g. no, gzip, bzip2, etc). |
| **`encrypted`** | no | Boolean. Is the file encrypted? If is not set is assumed the file is not encrypted |
| **`encryptionMode`** | no | Encryption mode used. Just valid if `encrypted=true` |
| **`resourceId`** | no | Remote identifier of the file in the external provider. It is typically the remote id in the cloud provider. |
| **`attributes`** | no | Key-Value hash map with additional attributes describing the asset file. It could include details like the Amazon S3 bucket, region, etc. |
| **`index`** |**Yes** | Index number starting from 0 of the file. |
| **`contentType`** |**Yes** | File format. |
| **`url`** | Local | Content URL. Omitted from the remote metadata. Supports `http(s)://` and `ipfs://` URLs. |
| **`name`** | No | File name. |
| **`checksum`**       | No       | Checksum of the file using your preferred format (i.e. MD5). Format specified in `checksumType`. If it's not provided, it can't be verified whether the file was modified after registering. |
| **`checksumType`**   | No       | Format of the provided checksum. Can vary according to server (e.g. Amazon vs. Azure) |
| **`contentLength`**  | No       | Size of the file in bytes. |
| **`encoding`**       | No       | File encoding (e.g. UTF-8). |
| **`compression`**    | No       | File compression (e.g. no, gzip, bzip2, etc). |
| **`encrypted`**      | No       | Boolean. Is the file encrypted? If not set, the file is assumed to be unencrypted. |
| **`encryptionMode`** | No       | Encryption mode used. Only valid if `encrypted=true` |
| **`resourceId`** | No | Remote identifier of the file in the external provider. It is typically the remote id in the cloud provider. |
| **`attributes`** | No | Key-Value hash map with additional attributes describing the asset file. It could include details like the Amazon S3 bucket, region, etc. |
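For instance, one entry in the `files` array might look like this in local form, where `url` is still present (a sketch; all values are illustrative):

```json
{
  "index": 0,
  "contentType": "text/csv",
  "url": "https://example.com/weather.csv",
  "name": "weather.csv",
  "contentLength": "4535431",
  "encoding": "UTF-8",
  "compression": "gzip",
  "encrypted": false
}
```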
### Additional Attributes
All the additional information will be stored as part of the `additionalInformation` section.
| Attribute | Type | Required |
| --------------------- | ------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`categories`** | Array of Text | No | Optional array of categories associated to the asset. |
| **`tags`** | Array of Text | No | Array of keywords or tags used to describe this content. Empty by default. |
| **`description`** | Text | No | Details of what the resource is. For a dataset, this attribute explains what the data represents and what it can be used for. |
| **`copyrightHolder`** | Text | No | The party holding the legal copyright. Empty by default. |
| **`workExample`** | Text | No | Example of the concept of this asset. This example is part of the metadata, not an external link. |
| **`links`** | Array of Link | No | Mapping of links for data samples, or links to find out more information. Links may be to either a URL or another Asset. We expect marketplaces to converge on agreements of typical formats for linked data: The Ocean Protocol itself does not mandate any specific formats as these requirements are likely to be domain-specific. The links array can be an empty array, but if there is a link object in it, then an "url" is required in that link object. |
| **`inLanguage`** | Text | No | The language of the content. Please use one of the language codes from the [IETF BCP 47 standard](https://tools.ietf.org/html/bcp47). |
#### Other Suggested Additional Attributes
These are examples of attributes that can enhance the discoverability of a resource:
| Attribute | Description |
| ---------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`sla`** | Service Level Agreement. |
| **`industry`** | |
| **`updateFrequency`** | An indication of update latency - i.e. How often are updates expected (seldom, annually, quarterly, etc.), or is the resource static that is never expected to get updated. |
| **`termsOfService`** | |
| **`privacy`** | |
| **`keyword`** | A list of keywords/tags describing a dataset. |
| **`structuredMarkup`** | A link to machine-readable structured markup (such as ttl/json-ld/rdf) describing the dataset. |
The publisher of a DDO MAY add additional attributes or change the above object definition.
### Status Attributes
## Fields for `attributes.status`
A `status` object has the following attributes.
@ -169,7 +92,32 @@ A `status` object has the following attributes.
| **`isRetired`** | Boolean | No | Flag retired content. False by default. If it's true, the content may either not be returned, or returned with a note about retirement. |
| **`isOrderDisabled`** | Boolean | No | For temporarily disabling ordering assets, e.g. when file host is in maintenance. False by default. If it's true, no ordering of assets for download or compute should be allowed. |
## Example of Local Metadata
## Fields for `attributes.additionalInformation`
All the additional information will be stored as part of the `additionalInformation` section.
| Attribute             | Type          | Required | Description |
| --------------------- | ------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`tags`** | Array of Text | No | Array of keywords or tags used to describe this content. Empty by default. |
| **`description`** | Text | No | Details of what the resource is. For a dataset, this attribute explains what the data represents and what it can be used for. |
| **`copyrightHolder`** | Text | No | The party holding the legal copyright. Empty by default. |
| **`workExample`** | Text | No | Example of the concept of this asset. This example is part of the metadata, not an external link. |
| **`links`** | Array of Link | No | Mapping of links for data samples, or links to find out more information. Links may be to either a URL or another Asset. We expect marketplaces to converge on agreements of typical formats for linked data: The Ocean Protocol itself does not mandate any specific formats as these requirements are likely to be domain-specific. The links array can be an empty array, but if there is a link object in it, then an "url" is required in that link object. |
| **`inLanguage`** | Text | No | The language of the content. Please use one of the language codes from the [IETF BCP 47 standard](https://tools.ietf.org/html/bcp47). |
| **`categories`** | Array of Text | No | Optional array of categories associated to the asset. Note: recommended to use `"tags"` instead of this. |
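A sketch of an `additionalInformation` object using the fields above (values are illustrative):

```json
"additionalInformation": {
  "description": "Weather information of UK including temperature and humidity",
  "tags": ["weather", "uk", "temperature", "humidity"],
  "copyrightHolder": "Met Office",
  "inLanguage": "en"
}
```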
## Fields - Other Suggestions
Here are example attributes to help an asset's discoverability.
| Attribute | Description |
| ---------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`updateFrequency`**  | An indication of update latency - i.e. how often updates are expected (seldom, annually, quarterly, etc.), or whether the resource is static and never expected to be updated.  |
| **`structuredMarkup`** | A link to machine-readable structured markup (such as ttl/json-ld/rdf) describing the dataset. |
## DDO Metadata Example - Local
This is what the DDO metadata looks like. All fields are in plaintext. This is before it's stored on-chain or when it's retrieved and decrypted into a local cache.
```json
{
@ -207,9 +155,16 @@ A `status` object has the following attributes.
}
```
## Example of Remote Metadata
## DDO Metadata Example - Remote
Similarly, this is how the metadata file would look as a response to querying Aquarius (remote metadata). Note that `url` is removed from all objects in the `files` array, and `encryptedFiles` & `curation` are added.
The previous example was for a local cache, with all fields in plaintext.
Here's the same example, for remote on-chain storage. That is, it's how metadata looks as a response to querying Aquarius (remote metadata).
Here's how the remote metadata differs from the local:
- `url` is removed from all objects in the `files` array
- `encryptedFiles` is added.
```json
{
@ -256,29 +211,25 @@ Similarly, this is how the metadata file would look as a response to querying Aq
}
```
### Specific attributes per asset type
Depending on the asset type (dataset, algorithm), there are different metadata attributes supported:
#### Algorithm attributes
## Fields when `attributes.main.type = algorithm`
An asset of type `algorithm` has the following additional attributes under `main.algorithm`:
| Attribute | Type | Required | Description |
| --------------- | -------- | -------- | --------------------------------------------- |
| **`language`** | `string` | no | Language used to implement the software |
| **`format`** | `string` | no | Packaging format of the software. |
| **`version`** | `string` | no | Version of the software. |
| **`container`** | `Object` | yes | Object describing the Docker container image. |
| **`container`** | `Object` |**Yes** | Object describing the Docker container image. |
| **`language`** | `string` | No | Language used to implement the software |
| **`format`** | `string` | No | Packaging format of the software. |
| **`version`** | `string` | No | Version of the software. |
The `container` object has the following attributes:
| Attribute | Type | Required | Description |
| ---------------- | -------- | -------- | ----------------------------------------------------------------- |
| **`entrypoint`** | `string` | yes | The command to execute, or script to run inside the Docker image. |
| **`image`** | `string` | yes | Name of the Docker image. |
| **`tag`** | `string` | yes | Tag of the Docker image. |
| **`checksum`** | `string` | yes | Checksum of the Docker image. |
| **`entrypoint`** | `string` |**Yes** | The command to execute, or script to run inside the Docker image. |
| **`image`** | `string` |**Yes** | Name of the Docker image. |
| **`tag`** | `string` |**Yes** | Tag of the Docker image. |
| **`checksum`** | `string` |**Yes** | Checksum of the Docker image. |
```json
{
@ -306,7 +257,7 @@ The `container` object has the following attributes:
"files": [
{
"name": "build_model",
"url": "https://raw.githubusercontent.com/oceanprotocol/test-algorithm/master/javascript/algo.js",
"url": "https://raw.gith ubusercontent.com/oceanprotocol/test-algorithm/master/javascript/algo.js",
"index": 0,
"checksum": "efb2c764274b745f5fc37f97c6b0e761",
"contentLength": "4535431",
@ -325,25 +276,25 @@ The `container` object has the following attributes:
}
```
#### Compute datasets attributes
## Fields when `attributes.main.type = compute`
An asset with a service of type `compute` has the following additional attributes under `main.privacy`:
| Attribute | Type | Required | Description |
| --------------------------------- | ------------------ | -------- | ---------------------------------------------------------- |
| **`allowRawAlgorithm`** | `boolean` | yes | If True, a drag & drop algo can be runned |
| **`allowNetworkAccess`** | `boolean` | yes | If True, the algo job will have network access (stil WIP) |
| **`publisherTrustedAlgorithms `** | Array of `Objects` | yes | If Empty , then any published algo is allowed. (see below) |
| **`allowRawAlgorithm`**           | `boolean`          |**Yes**   | If true, a drag & drop algorithm can be run                  |
| **`allowNetworkAccess`**          | `boolean`          |**Yes**   | If true, the algorithm job will have network access (still WIP) |
| **`publisherTrustedAlgorithms`**  | Array of `Objects` |**Yes**   | If empty, any published algorithm is allowed (see below)     |
`publisherTrustedAlgorithms` is an array of objects with the following structure:
| Attribute | Type | Required | Description |
| ------------------------------ | -------- | -------- | ------------------------------------------------------------------ |
| **`did`** | `string` | yes | The did of the algo which is trusted by the publisher. |
| **`filesChecksum`** | `string` | yes | Hash of ( algorithm's encryptedFiles + files section (as string) ) |
| **`containerSectionChecksum`** | `string` | yes | Hash of the algorithm container section (as string) |
| **`did`**                      | `string` |**Yes**   | The DID of the algorithm trusted by the publisher.                  |
| **`filesChecksum`**            | `string` |**Yes**   | Hash of the algorithm's `encryptedFiles` plus its `files` section (as string) |
| **`containerSectionChecksum`** | `string` |**Yes** | Hash of the algorithm container section (as string) |
To produce filesChecksum:
To produce `filesChecksum`:
```javascript
sha256(
@ -352,7 +303,7 @@ sha256(
)
```
To produce containerSectionChecksum:
To produce `containerSectionChecksum`:
```javascript
sha256(
@ -362,7 +313,7 @@ sha256(
)
```
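As a concrete sketch of the two hashes, assuming Node.js's built-in `crypto` module and assuming the inputs are serialized to strings as indicated above (the exact serialization and the sample values are illustrative):

```javascript
const { createHash } = require('crypto');

// SHA-256 over a pre-serialized string, hex-encoded
function sha256(s) {
  return createHash('sha256').update(s).digest('hex');
}

// Illustrative inputs -- in practice these come from the algorithm's DDO
const encryptedFiles = '0x04dd...'; // hypothetical encrypted files string
const files = [{ index: 0, contentType: 'application/javascript' }];
const container = { entrypoint: 'node $ALGO', image: 'node', tag: '10', checksum: 'sha256:...' };

const filesChecksum = sha256(encryptedFiles + JSON.stringify(files));
const containerSectionChecksum = sha256(JSON.stringify(container));
```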
Example of a compute service
### Example of a compute service
```json
{
@ -396,19 +347,3 @@ Example of a compute service
}
}
```
## References
[Schema.org](https://schema.org/) is a collaborative, community activity with a mission to create, maintain, and promote schemas for structured data on the Internet. Data types use the [Schema.org primitive data types](https://schema.org/DataType).
- [Schema.org: DataSet](https://schema.org/Dataset)
- [Schema.org: FileSize](https://schema.org/fileSize)
- [Common license types for datasets](https://help.data.world/hc/en-us/articles/115006114287-Common-license-types-for-datasets)
## Change Process
This document is governed by [OEP 2/COSS](../2/README.md).
## Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [BCP 14](https://tools.ietf.org/html/bcp14) \[[RFC2119](https://tools.ietf.org/html/rfc2119)\] \[[RFC8174](https://tools.ietf.org/html/rfc8174)\] when, and only when, they appear in all capitals, as shown here.

content/concepts/did-ddo.md Normal file
View File

@ -0,0 +1,173 @@
---
title: DIDs & DDOs - Asset Identifiers & Objects
description: Specification of Ocean asset identifiers and objects using DIDs & DDOs
slug: /concepts/did-ddo/
section: concepts
---
## Overview
This document describes how Ocean assets follow the DID/DDO spec, such that Ocean assets can inherit DID/DDO benefits and enhance interoperability.
Decentralized identifiers (DIDs) are a new type of identifier that enables verifiable, decentralized digital identity. Each DID is associated with a unique entity. DIDs may represent humans, objects, and more.
A DID Document (DDO) is a JSON blob that holds information about the DID. Given a DID, a _resolver_ will return the DDO of that DID.
If a DID is the index key in a key-value pair, then the DID Document is the value to which the index key points.
The combination of a DID and its associated DID Document forms the root record for a decentralized identifier.
DIDs and DDOs follow [this specification](https://w3c-ccg.github.io/did-spec/) defined by the World Wide Web Consortium (W3C).
## Rules for DIDs & DDOs in Ocean
- An _asset_ in Ocean represents a downloadable file, compute service, or similar. Each asset is a _resource_ under control of a _publisher_. The Ocean network itself does _not_ store the actual resource (e.g. files).
- An asset should have a DID and DDO. The DDO should include metadata about the asset.
- The DDO can only be modified by _owners_ or _delegated users_.
- There _must_ be at least one client library acting as _resolver_, to get a DDO from a DID.
- The DDO is stored on-chain. It's stored in plaintext, with two exceptions: (1) the field for the resource-access URL is encrypted; (2) the whole DDO may be encrypted, if the publisher is willing to lose 100% of discoverability.
- A metadata cache like Aquarius can help in reading and writing DDO data from the chain.
## DID Structure
In Ocean, a DID is a string that looks like:
```text
did:op:0ebed8226ada17fde24b6bf2b95d27f8f05fcce09139ff5cec31f6d81a7cd2ea
```
It follows [the generic DID scheme](https://w3c-ccg.github.io/did-spec/#the-generic-did-scheme).
The part after `did:op:` is the asset's on-chain Ethereum address (minus the "0x"). One can be computed from the other; therefore there is a 1:1 mapping between DID and Ethereum address.
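A minimal sketch of that 1:1 mapping (helper names are illustrative):

```javascript
// Strip the "0x" prefix and prepend "did:op:" -- and the inverse.
function didFromAddress(datatokenAddress) {
  return 'did:op:' + datatokenAddress.replace(/^0x/, '');
}

function addressFromDid(did) {
  return '0x' + did.replace(/^did:op:/, '');
}
```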
## DDO Attributes
![DDO Content](images/ddo-content.png)
A DDO has these standard attributes:
- `@context`
- `id`
- `created`
- `updated`
- `publicKey`
- `authentication`
- `proof`
- `verifiableCredential`
In Ocean, the DDO also has:
- `dataToken`
- `service`
- `credentials` - optional flag, which describes the credentials needed to access a dataset (see below)
Asset metadata must be included as one of the objects inside the `"service"` array, with type `"metadata"`.
## DDO Service Types
There are many possible service types for a DDO.
- `metadata` - describing the asset
- `access` - describing how the asset can be downloaded
- `compute` - describing how the asset can be computed upon
Each asset has a `metadata` service and at least one other service.
Each service is distinguished by the `DDO.service.type` attribute.
Each service has an `attributes` section holding the information related to the service. That section _must_ have a `main` sub-section, holding all the mandatory information that a service has to provide.
Apart from the `attributes.main` sub-section, other optional sub-sections like `attributes.extra` can be added, depending on the service type.
Each service has a `timeout` (in seconds) section describing how long the service can be used after consumption is initiated. A timeout of 0 represents no time limit.
The `cost` attribute is obsolete as of Ocean V3: to consume an asset, one sends exactly 1.0 datatokens of the asset, so a `cost` is not needed.
## DDO Service Example
Here is an example DDO service:
```json
"service": [
{
"index": 0,
"type": "metadata",
"serviceEndpoint": "https://service/api/v1/metadata/assets/ddo/did:op:0ebed8226ada17fde24b6bf2b95d27f8f05fcce09139ff5cec31f6d81a7cd2ea",
"attributes": {
"main": {},
"additionalInformation": {},
"curation": {}
}
},
{
"index": 1,
"type": "access",
"serviceEndpoint": "http://localhost:8030/api/v1/provider/services/consume",
"attributes": {
"main": {
"cost":"10",
"timeout":0
},
"additionalInformation": {}
}
},
{
"index": 2,
"type": "compute",
"serviceEndpoint": "http://localhost:8030/api/v1/provider/services/compute",
"attributes": {
"main": {
"cost":"10",
"timeout":3600
},
"additionalInformation": {}
}
}
]
```
## DDO Credentials for Fine-Grained Permissions
By default, a consumer can access a resource if they have 1.0 datatokens. _Credentials_ allow the publisher to optionally specify finer-grained permissions.
Consider a medical data use case, where only a credentialed EU researcher can legally access a given dataset. Ocean supports this as follows: a consumer can only access the resource if they have 1.0 datatokens _and_ one of the specified `"allow"` credentials.
This is like going to an R-rated movie, where you can only get in if you show both your movie ticket (datatoken) _and_ some ID showing you're old enough (credential).
Only credentials that can be proven are supported. This includes Ethereum public addresses, and (in the future) W3C Verifiable Credentials and more.
Ocean also supports `"deny"` credentials: if a consumer has any of these credentials, they cannot access the resource.
Here's an example object with both `"allow"` and `"deny"` entries.
```json
"credentials":{
"allow":[
{
"type":"address",
"values":[
"0x123",
"0x456"
]
}
]
},
"deny":[
{
"type":"address",
"values":[
"0x2222",
"0x333"
]
}
]
}
```
For future usage, we can extend that with different credential types. Example:
```json
{
"type": "credential3Box",
"values": ["profile1", "profile2"]
}
```
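A sketch of the resulting access rule (a hypothetical helper; actual enforcement lives in Ocean components such as Provider). It assumes, as in the earlier spec text, that an empty or absent `allow` list means anyone with a datatoken may consume:

```javascript
// Deny wins; otherwise the consumer needs a datatoken plus, if an
// allow list exists, a matching "allow" credential.
function mayAccess(consumerAddress, credentials, hasDatatoken) {
  const inList = (list) =>
    (list || []).some(
      (c) => c.type === 'address' && c.values.includes(consumerAddress)
    );

  if (inList(credentials.deny)) return false;
  const allow = credentials.allow || [];
  return hasDatatoken && (allow.length === 0 || inList(allow));
}
```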

Binary file not shown (new image, 30 KiB)

View File

@ -7,6 +7,8 @@ Ocean Protocol contracts are deployed on multiple public networks. You can alway
In each network, you'll need ETH to pay for gas, and OCEAN for certain Ocean actions. Because the Ethereum mainnet is a network for production settings, ETH and OCEAN tokens have real value there. The ETH and OCEAN tokens in each test network don't have real value and are used for testing purposes only. They can be obtained from _faucets_ that dole out ETH and OCEAN.
The universal Aquarius Endpoint is `https://aquarius.oceanprotocol.com`.
## Ethereum Mainnet
The Ethereum Mainnet is Ocean's production network.
@ -29,7 +31,6 @@ MetaMask and other ERC20 wallets default to Ethereum mainnet, therefore your wal
| Explorer | https://etherscan.io |
| Ocean Market | https://market.oceanprotocol.com |
| Provider | `https://provider.mainnet.oceanprotocol.com` |
| Aquarius | `https://aquarius.mainnet.oceanprotocol.com` |
| Subgraph | `https://subgraph.mainnet.oceanprotocol.com` |
## Polygon Mainnet
@ -39,7 +40,7 @@ Ocean is [deployed](https://blog.oceanprotocol.com/ocean-on-polygon-network-8aba
If you don't find Polygon as a predefined network in your wallet, you can connect to it manually via [this guide](/tutorials/metamask-setup/#set-up-custom-network) and the parameters below.
| What | Value |
|--------------------|------------------------------------------|
| ------------------ | ---------------------------------------- |
| Network Name | `Matic Mainnet` |
| RPC | `https://rpc.polygon.oceanprotocol.com/` |
| Chain Id | `137` |
@ -55,16 +56,14 @@ If you don't find Polygon as a predefined network in your wallet, you can connec
- Address: [0x282d8efCe846A88B159800bd4130ad77443Fa1A1](https://polygonscan.com/token/0x282d8efCe846A88B159800bd4130ad77443Fa1A1)
- [Exchanges to purchase](https://oceanprotocol.com/token#get)
**Additional Components**
| What | URL |
| ------------ | -------------------------------------------- |
| Explorer | https://polygonscan.com/ |
| What | URL |
| ------------ | -------------------------------------------------------------------- |
| Explorer | https://polygonscan.com/ |
| Ocean Market | Point wallet to Polygon network, at https://market.oceanprotocol.com |
| Provider | `https://provider.polygon.oceanprotocol.com` |
| Aquarius | `https://aquarius.polygon.oceanprotocol.com` |
| Subgraph | `https://subgraph.polygon.oceanprotocol.com` |
| Provider | `https://provider.polygon.oceanprotocol.com` |
| Subgraph | `https://subgraph.polygon.oceanprotocol.com` |
**Bridge**
@ -76,13 +75,13 @@ Ocean is deployed to [Binance Smart Chain (BSC)](https://academy.binance.com/en/
If you don't find BSC as a predefined network in your wallet, you can connect to it manually via [Binance's guide](https://academy.binance.com/en/articles/connecting-metamask-to-binance-smart-chain) or [Ocean's guide](/tutorials/metamask-setup/#set-up-custom-network) and the parameters below.
| What | Value |
|--------------------|------------------------------------------|
| Network Name | `Smart Chain` |
| RPC | `https://bsc-dataseed.binance.org/` |
| Chain Id | `56` |
| Currency Symbol | `BNB` |
| Block Explorer URL | `https://bscscan.com` |
| What | Value |
| ------------------ | ----------------------------------- |
| Network Name | `Smart Chain` |
| RPC | `https://bsc-dataseed.binance.org/` |
| Chain Id | `56` |
| Currency Symbol | `BNB` |
| Block Explorer URL | `https://bscscan.com` |
**Tokens**
@ -95,13 +94,12 @@ If you don't find BSC as a predefined network in your wallet, you can connect to
**Additional Components**
| What | URL |
|--------------|-----------------------------------------------------------------------|
| Explorer | https://bscscan.com/ |
| Ocean Market | Point wallet to BSC network, at https://market.oceanprotocol.com |
| Provider | `https://provider.bsc.oceanprotocol.com` |
| Aquarius | `https://aquarius.bsc.oceanprotocol.com` |
| Subgraph | `https://subgraph.bsc.oceanprotocol.com` |
| What | URL |
| ------------ | ---------------------------------------------------------------- |
| Explorer | https://bscscan.com/ |
| Ocean Market | Point wallet to BSC network, at https://market.oceanprotocol.com |
| Provider | `https://provider.bsc.oceanprotocol.com` |
| Subgraph | `https://subgraph.bsc.oceanprotocol.com` |
**Bridge**
@ -124,13 +122,12 @@ In MetaMask and other ERC20 wallets, click on the network name dropdown, then se
**Additional Components**
| What | URL |
| ------------ | ---------------------------------------------------------------------- |
| Explorer | https://ropsten.etherscan.io |
| Ocean Market | Point wallet to Ropsten network, at https://market.oceanprotocol.com |
| Provider | `https://provider.ropsten.oceanprotocol.com` |
| Aquarius | `https://aquarius.ropsten.oceanprotocol.com` |
| Subgraph | `https://subgraph.ropsten.oceanprotocol.com` |
| What | URL |
| ------------ | -------------------------------------------------------------------- |
| Explorer | https://ropsten.etherscan.io |
| Ocean Market | Point wallet to Ropsten network, at https://market.oceanprotocol.com |
| Provider | `https://provider.ropsten.oceanprotocol.com` |
| Subgraph | `https://subgraph.ropsten.oceanprotocol.com` |
## Rinkeby
@ -149,14 +146,12 @@ In MetaMask and other ERC20 wallets, click on the network name dropdown, then se
**Additional Components**
| What | URL |
| ------------ | ---------------------------------------------------------------------- |
| Explorer | https://rinkeby.etherscan.io |
| Ocean Market | Point wallet to Rinkeby network, at https://market.oceanprotocol.com |
| Provider | `https://provider.rinkeby.oceanprotocol.com` |
| Aquarius | `https://aquarius.rinkeby.oceanprotocol.com` |
| Subgraph | `https://subgraph.rinkeby.oceanprotocol.com` |
| What | URL |
| ------------ | -------------------------------------------------------------------- |
| Explorer | https://rinkeby.etherscan.io |
| Ocean Market | Point wallet to Rinkeby network, at https://market.oceanprotocol.com |
| Provider | `https://provider.rinkeby.oceanprotocol.com` |
| Subgraph | `https://subgraph.rinkeby.oceanprotocol.com` |
## Mumbai
@ -171,17 +166,16 @@ If you don't find Mumbai as a predefined network in your wallet, you can connect
- [Faucet](https://faucet.matic.network/). You may find others by [searching](https://www.google.com/search?q=mumbai+faucet).
- Mumbai OCEAN:
- Address: [0xd8992Ed72C445c35Cb4A2be468568Ed1079357c8](https://mumbai.polygonscan.com/token/0xd8992Ed72C445c35Cb4A2be468568Ed1079357c8)
- To acquire tokens, please reach out to the core team [via Discord](https://discord.com/invite/TnXjkR5)
- [Faucet](https://faucet.mumbai.oceanprotocol.com/)
**Additional Components**
| What | URL |
| ------------ | ---------------------------------------------------------------------- |
| Explorer | https://mumbai.polygonscan.com |
| Ocean Market | Point wallet to Mumbai network, at https://market.oceanprotocol.com |
| Provider | `https://provider.mumbai.oceanprotocol.com` |
| Aquarius | `https://aquarius.mumbai.oceanprotocol.com` |
| Subgraph | `https://subgraph.mumbai.oceanprotocol.com` |
| What | URL |
| ------------ | ------------------------------------------------------------------- |
| Explorer | https://mumbai.polygonscan.com |
| Ocean Market | Point wallet to Mumbai network, at https://market.oceanprotocol.com |
| Provider | `https://provider.mumbai.oceanprotocol.com` |
| Subgraph | `https://subgraph.mumbai.oceanprotocol.com` |
## Local / Ganache
@ -204,4 +198,3 @@ Alternatively, you can run Ganache independently. Install it according to [the G
## Other
Some apps may need `network_id` and `chain_id`. Here's a [list of values for major Ethereum networks](https://medium.com/@piyopiyo/list-of-ethereums-major-network-and-chain-ids-2bc58e928508).
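For example (a minimal sketch; values for the mainnets covered above, per their network tables and the linked list):

```javascript
// network_id and chain_id pairs some apps ask for
const networks = {
  ethereum: { network_id: 1, chain_id: 1 },
  polygon: { network_id: 137, chain_id: 137 },
  bsc: { network_id: 56, chain_id: 56 },
};
```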

View File

@ -1,243 +0,0 @@
# OEP-7: Decentralized Identifiers
```text
shortname: 7/DID
name: Decentralized Identifiers
type: Standard
status: Draft
version: 0.3
editor: Alex Coseru <alex@oceanprotocol.com>
contributors: Matthias Kretschmann <matthias@oceanprotocol.com>,
Ahmed Ali <ahmed@oceanprotocol.com>
```
**Table of Contents**
- [Motivation](#motivation)
- [Specification](#specification)
- [Proposed Solution](#proposed-solution)
- [Decentralized IDs (DIDs)](#decentralized-ids-dids)
- [DID Documents (DDOs)](#did-documents-ddos)
- [DDO Services](#ddo-services)
- [Credentials](#credentials)
- [Integrity](#integrity)
- [How to compute the integrity checksum](#how-to-compute-the-integrity-checksum)
- [DID Document Proof](#did-document-proof)
- [Length of a DID](#length-of-a-did)
- [How to compute a DID](#how-to-compute-a-did)
- [References](#references)
- [Change Process](#change-process)
- [Language](#language)
---
This specification is based on:
- the [W3C DID specification](https://w3c-ccg.github.io/did-spec/), which was at version 0.11 as of August 2018,
- the [Ocean Protocol technical whitepaper](https://github.com/oceanprotocol/whitepaper),
- [3/ARCH](../3/README.md), and
- [4/AGENT](../4/README.md).
## Motivation
The main motivations of this OEP are:
- Design a solution to extend the current architecture to use **Decentralized Identifiers (DIDs)** and **DID Documents (DDOs)**
- Understand how to resolve DIDs into DDOs
- Establishing the mechanism to know if the DDO associated with a DID was modified
- Defining the common mechanisms, interfaces and APIs to implemented the designed solution
- Define how Ocean assets, agents and domains can be modeled with a DID/DDO data model
- Understand how DID hubs are formed, and how they integrate a business and storage layer
## Specification
Requirements are:
- The DID resolving capabilities MUST be exposed in the client libraries, enabling to resolve a DDO directly in a totally transparent way
- ASSETS are DATA objects describing RESOURCES under control of a PUBLISHER
- PROVIDERS store the ASSET metadata off-chain
- OCEAN doesn't store ASSET contents (e.g. files)
- An ASSET is modeled in OCEAN as off-chain information stored in AQUARIUS
- ASSETS information only can be modified by OWNERS or DELEGATED USERS
- ASSETS can be resolved using a Decentralized ID (DID)
- A DID Document (DDO) should include the ASSET metadata
- Any kind of object registered in Ocean SHOULD have a DID allowing one to uniquely identify that object in the system
- ASSET DDO (and the metadata included as part of the DDO) is associated to the ASSET information stored using a common DID
- A DID can be resolved to get access to a DDO
- The function to calculate the HASH MUST BE standard
## Proposed Solution
### Decentralized IDs (DIDs)
A DID is a unique identifier that can be resolved or de-referenced to a standard resource describing the entity (a DID Document or DDO).
If we apply this to Ocean, the DID would be the unique identifier of an object represented in Ocean (i.e. the Asset ID of an ASSET or the Actor ID of a USER).
The DDO SHOULD include the METADATA information associated with this object.
The DDO is stored off-chain in Ocean.
In Ocean, a DID is a string that looks like:
```text
did:op:0ebed8226ada17fde24b6bf2b95d27f8f05fcce09139ff5cec31f6d81a7cd2ea
```
which follows [the generic DID scheme](https://w3c-ccg.github.io/did-spec/#the-generic-did-scheme).
Details about how to compute the DID are given below.
### DID Documents (DDOs)
If a DID is the index key in a key-value pair, then the DID Document is the value to which the index key points.
The combination of a DID and its associated DID Document forms the root record for a decentralized identifier.
![DDO Content](images/ddo-content.png)
A DDO document is composed of standard DDO attributes:
- `@context`
- `id`
- `created`
- `updated`
- `publicKey`
- `authentication`
- `proof`
- `verifiableCredential`
- `dataToken`
- `service`
- `credentials` - optional flag, which describes the credentials needed to access a dataset (see below)
Asset metadata must be included as one of the objects inside the `"service"` array, with type `"metadata"`.
#### DDO Services
Each type of asset (dataset, algorithm, workflow, etc.) will typically have different kinds of services associated with it. There are multiple types of services that are commonly added to all assets:
- metadata - describing the asset
- provenance - describing the asset provenance
- access - describing how the asset can be downloaded
- compute - describing how the asset can be computed upon
Each service is distinguished by the `DDO.service.type` attribute.
Each service has an `attributes` section where all the information related to the service is added. As mandatory content, the attributes section will have a `main` sub-section. This one is important because it must include all the mandatory information that a service has to provide.
Apart from the `attributes.main` sub-section, other optional sub-sections can be added (like `attributes.curation` or `attributes.extra`), depending on the service type.
Each service has a `cost` and `timeout` (in seconds) section describing the cost (how many datatokens need to be transferred) and how long the service can be used after payment. A timeout of 0 represents no time limit.
Example:
```json
"service": [
{
"index": 0,
"type": "metadata",
"serviceEndpoint": "https://service/api/v1/metadata/assets/ddo/did:op:0ebed8226ada17fde24b6bf2b95d27f8f05fcce09139ff5cec31f6d81a7cd2ea",
"attributes": {
"main": {},
"additionalInformation": {},
"curation": {}
}
},
{
"index": 1,
"type": "access",
"serviceEndpoint": "http://localhost:8030/api/v1/brizo/services/consume",
"attributes": {
"main": {
"cost":"10",
"timeout":0
},
"additionalInformation": {}
}
},
{
"index": 2,
"type": "compute",
"serviceEndpoint": "http://localhost:8030/api/v1/brizo/services/compute",
"attributes": {
"main": {
"cost":"10",
"timeout":3600
},
"additionalInformation": {}
}
}
]
```
- You can find a [complete example of a DDO](ddo-example.json).
- You can find a complete reference of the asset metadata in [OEP-8](8).
- You can find a complete [real world example of a DDO](https://w3c-ccg.github.io/did-spec/#real-world-example) with extended services added, as part of the W3C DID spec.
#### Credentials
In order to support credentials-based access, the following optional object is used:
```json
"credentials":{
"allow":[
{
"type":"address",
"values":[
"0x123",
"0x456"
]
}
]
},
"deny":[
{
"type":"address",
"values":[
"0x2222",
"0x333"
]
}
]
}
```
where:
- "allow" - will control who can consume this asset. If array it's empty, means anyone can consume
- "deny" - if there is a match, consumption is denied
For future usage, we can extend that with different credentials types. Example:
```json
{
"type": "credential3Box",
"values": ["profile1", "profile2"]
}
```
#### DID Document Proof
Since V3, the metadata is stored on chain, so we don't need additional proofs, because we already have the transaction sender.
#### Length of a DID
The length of a DID must be compliant with the underlying storage layer and function calls. Given that decentralized virtual machines make use of contract languages such as Solidity and WASM, it is advised to fit the DID in structures such as `bytes32`.
It would be nice to store the `did:op:` prefix in those 32 bytes, but that means fewer than 32 bytes would be left for storing the rest (25 bytes since "did:op:" takes 7 bytes if using UTF-8). If the rest is a secure hash, then we need a 25-byte secure hash, but secure hashes typically have 28, 32 or more bytes, so that won't work.
Only the hash value _needs_ to be stored, not the `did:op:` prefix, because it should be clear from context that the value is an Ocean DID.
#### How to compute a DID
The DID (`id`) string begins with `did:op:` and is followed by a string representation of a bytes32.
In V3, the DID is based on the datatoken address.
## References
- [DID Spec from the W3C Credentials Community Group](https://w3c-ccg.github.io/did-spec/)
- [DID Spec from _Rebooting the Web of Trust_](https://github.com/WebOfTrustInfo/rebooting-the-web-of-trust-fall2016/blob/master/topics-and-advance-readings/did-spec-working-draft-03.md)
## Change Process
This document is governed by [OEP 2/COSS](../2/README.md).
## Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [BCP 14](https://tools.ietf.org/html/bcp14) \[[RFC2119](https://tools.ietf.org/html/rfc2119)\] \[[RFC8174](https://tools.ietf.org/html/rfc8174)\] when, and only when, they appear in all capitals, as shown here.

View File

@ -5,6 +5,5 @@ description:
These drivers each have their own quickstart. Pick your favorite and have fun!
- [ocean.js](/references/ocean.js/)
- [ocean.js](https://github.com/oceanprotocol/ocean.js/blob/main/README.md)
- [ocean.py](https://github.com/oceanprotocol/ocean.py)
- [Ocean React](https://github.com/oceanprotocol/react)

View File

@ -1,16 +0,0 @@
---
title: Set Up Amazon S3 Storage
description: Tutorial about how to set up Amazon S3 storage for use with Ocean Protocol.
---
*Note: This needs updating for Ocean V3. As a workaround: Brizo has been renamed to provider-py; it should work similarly.*
To enable Brizo to use files stored in Amazon S3 (i.e. files with an URL containing `s3://`), you must:
1. have an Amazon AWS user account (IAM account) with permission to read those files from S3, and
1. set the AWS credentials on the machine where Brizo is running to those of the AWS user in question. Instructions are given below.
1. Note that you don't have to set any Brizo-specific configuration settings, e.g. in the `[osmosis]` section of the Brizo config file or in some special Brizo environment variables.
Under the hood, Brizo uses [boto3](https://aws.amazon.com/sdk-for-python/) (the Python library for interacting with AWS) to interact with AWS and boto3 has a whole process for determining AWS credentials. The easiest way to set the AWS credentials on the machine where Brizo is running is to install the [AWS CLI](https://aws.amazon.com/cli/) and then use the `aws configure` command.
For more details, see [the boto3 user guide about credentials](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html).

View File

@ -0,0 +1,15 @@
---
title: Set Up Amazon S3 Storage
description: Tutorial about how to set up Amazon S3 storage for use with Ocean Protocol.
---
*Note: This needs updating for Ocean V3.*
To enable Provider to use files stored in Amazon S3 (i.e. files with a URL containing `s3://`), you must:
1. have an Amazon AWS user account (IAM account) with permission to read those files from S3, and
1. set the AWS credentials on the machine where Provider is running to those of the AWS user in question. Instructions are given below.
1. Note that you don't have to set any Provider-specific configuration settings, e.g. in the `[osmosis]` section of the Provider config file or in some special Provider environment variables.
Under the hood, Provider uses [boto3](https://aws.amazon.com/sdk-for-python/) (the Python library for interacting with AWS) to interact with AWS and boto3 has a whole process for determining AWS credentials. The easiest way to set the AWS credentials on the machine where Provider is running is to install the [AWS CLI](https://aws.amazon.com/cli/) and then use the `aws configure` command.
For more details, see [the boto3 user guide about credentials](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html).
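As a sketch, configuring the credentials via the AWS CLI looks roughly like this (the values you enter at the prompts are your own; everything shown here is a placeholder):
```bash
# Install the AWS CLI first (https://aws.amazon.com/cli/), then run:
aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: eu-central-1
# Default output format [None]: json
```
This writes the credentials to `~/.aws/credentials`, one of the locations boto3 checks when resolving credentials.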

View File

@ -1,9 +1,16 @@
---
title: Allow and Deny Lists
title: Asset-Level Restrictions
description: Restrict access to individual assets
---
Allow and deny lists are advanced features that allow publishers to control access to individual data assets. Publishers can restrict assets so that they can only be accessed by approved users (allow lists) or they can restrict assets so that they can be accessed by anyone except certain users (deny lists).
## Introduction
For asset-level restrictions Ocean supports allow and deny lists. Allow and deny lists are advanced features that allow publishers to control access to individual data assets. Publishers can restrict assets so that they can only be accessed by approved users (allow lists) or they can restrict assets so that they can be accessed by anyone except certain users (deny lists).
When an allow-list is in place, a consumer can only access the resource if they have a datatoken and one of the credentials in the "allow" list of the DDO. Ocean also has complementary deny functionality: if a consumer is on the "deny" list, they will not be allowed to access the resource.
Initially, the only credential supported is Ethereum public addresses. To be fair, it's more a pointer to an individual than a credential; but it has a low-complexity implementation, so it makes a good starting point. For extensibility, the Ocean metadata schema enables specification of other types of credentials like W3C Verifiable Credentials and more. When this gets implemented, asset-level permissions will be properly RBAC too :)
Since asset-level permissions are in the DDO, and the DDO is controlled by the publisher, asset-level restrictions are controlled by the publisher.
## Setup
@ -34,7 +41,7 @@ Next you will need to sign the transaction in Metamask, or the wallet of your ch
![Sign Metamask transaction](images/allow-deny-lists/metamask-transaction.png)
When the process of updating the allow or deny lists is complete you will a success message.
When the process of updating the allow or deny lists is complete, you will receive a success message.
![Update allow or deny list success](images/allow-deny-lists/update-success.png)

View File

@ -0,0 +1,177 @@
---
title: Set Up Azure Storage
description: Tutorial about how to set up Azure storage for use with Ocean.
---
*Note: This needs updating for Ocean V3.*
This tutorial is for publishers who want to get started using Azure to store some of their data assets. (Some data assets could also be stored in other places.)
Publishers must run [Provider](https://github.com/oceanprotocol/provider) to mediate consumer access to data assets stored in Azure Storage. Provider needs the following Azure credentials from the publisher:
- `AZURE_ACCOUNT_NAME`: Azure Storage Account Name (for storing files)
- `AZURE_ACCOUNT_KEY`: Azure Storage Account key
- `AZURE_RESOURCE_GROUP`: Azure resource group
- `AZURE_LOCATION`: Azure Region
- `AZURE_CLIENT_ID`: Azure Application ID
- `AZURE_CLIENT_SECRET`: Azure Application Secret
- `AZURE_TENANT_ID`: Azure Tenant ID
- `AZURE_SUBSCRIPTION_ID`: Azure Subscription ID
If you go through this tutorial, then you will get all the Azure credentials listed above.
If you already have data assets stored in Azure, then you might already have, or be able to get, the above information. You could use this tutorial to get a sense of where to look (but don't create anything new).
To give the above Azure credentials to Provider, you either put them in a Provider config file or in environment variables with the above names. Environment variables should be used if you're running Provider inside a container. If you want to use the config file option, see [Provider README](https://github.com/oceanprotocol/provider).
If you're using [Barge](https://github.com/oceanprotocol/barge) to run Provider and other Ocean Protocol components, then the above Azure credentials should go in the file `barge/provider.env`. (That file gets used to set environment variables.)
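As a sketch, setting them as environment variables could look like this (all values below are placeholders or examples from this tutorial, not real credentials):
```bash
export AZURE_ACCOUNT_NAME="troystorageaccount1"
export AZURE_ACCOUNT_KEY="<storage-account-access-key>"
export AZURE_RESOURCE_GROUP="StorageCreatedNov2018ByTroy"
export AZURE_LOCATION="West Europe"
export AZURE_CLIENT_ID="<application-id>"
export AZURE_CLIENT_SECRET="<application-key>"
export AZURE_TENANT_ID="<tenant-id>"
export AZURE_SUBSCRIPTION_ID="<subscription-id>"
```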
This tutorial uses the [Microsoft Azure Portal](https://azure.microsoft.com/en-us/features/azure-portal/), but [there are many other ways to interact with Azure](https://docs.microsoft.com/en-us/azure/#pivot=sdkstools).
**Note: Azure is constantly changing. For that reason, we try to give links to official Azure documentation, since it _should_ stay up-to-date.**
## Sign in to Azure Portal
If you don't already have an Azure account, then you will have to create one. Go to the [Microsoft Azure website](https://azure.microsoft.com) and follow the links.
Once you have an Azure account, go to [https://portal.azure.com/](https://portal.azure.com/) and sign in.
## Get Your Subscription ID
The [Azure docs say](https://docs.microsoft.com/en-us/azure/guides/developer/azure-developer-guide), "A subscription is a logical grouping of Azure services that is linked to an Azure account. A single Azure account can contain multiple subscriptions."
If you see **Subscriptions** in the left sidebar of Azure Portal, then click that. If you don't see it, just type "Subscriptions" into the search bar at the top, then click on **Subscriptions** under the SERVICES heading.
You should see a list of one or more subscriptions. Click on the one you want to use for Azure storage. Remember to use that one for the rest of this tutorial (whenever you are asked for a subscription name).
Copy the `Subscription ID`. That's what Provider calls `AZURE_SUBSCRIPTION_ID`. You now have one of the Azure credentials!
```text
# Example AZURE_SUBSCRIPTION_ID (Azure Subscription ID)
479284be-0104-421a-8488-1aeac0caecaa
```
## Create an Azure Active Directory (AD) Application
See the Azure docs page:
[How to: Use the portal to create an Azure AD application and service principal that can access resources](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal)
The first step there is to **Create an Azure Active Directory application**. Do that.
The app `Name` and `Sign-on URL` can be totally made up. The URL doesn't need to be real.
Once the app is created, copy the `Application ID`: that's what Provider calls the `AZURE_CLIENT_ID`. It should look something like this:
```text
# Example AZURE_CLIENT_ID (Application ID)
5d25ee8a-da2c-4e6f-8fba-09b6dd091038
```
## Get Authentication Key for Your AD Application
On [the same Azure docs page](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal), find the section titled **Get application ID and authentication key** or similar. You already have your application ID, but you still need to generate an authentication key by following the instructions in that section.
You can make up whatever you like for the key's `Description`.
Once the application key is generated, copy its value: that's what Provider calls the `AZURE_CLIENT_SECRET`. It should look something like this:
```text
# Example AZURE_CLIENT_SECRET (Application key)
RVJ1H5gYOmnMitikmM5ehszqmgrY5BFkoalnjfWMuDM
```
## Get Tenant ID
On [the same Azure docs page](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal), find the section titled **Get tenant ID** or similar. Follow the instructions.
The tenant ID is what Provider calls `AZURE_TENANT_ID`.
```text
# Example AZURE_TENANT_ID (tenant ID, Directory ID)
2a4a3887-4e2e-4a31-8006-6e2b5877640e
```
## Create a Resource Group for Your Data Storage
See the Azure docs page:
[Manage Azure resources through portal](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-portal)
That page says how to create a new empty resource group. Do that.
You can make up whatever name you like, but it's good practice to avoid special characters and to include:
- some words to indicate what it's for, e.g. `Storage`
- your name
- the month and year it was created, e.g. `Nov2018`
to help you and others manage it. The Resource group name is what Provider calls the `AZURE_RESOURCE_GROUP` and the Resource group location is what Provider calls the `AZURE_LOCATION`. Here are examples of both:
```text
# Example AZURE_RESOURCE_GROUP (Resource group name)
StorageCreatedNov2018ByTroy
```
```text
# Example AZURE_LOCATION (Resource group location)
West Europe
```
## Give Your AD Application Access to Your Resource Group
Inside your new resource group:
- click **Access control (IAM)**
- click **+ Add role assignment**
- In the `Role` field, select `Contributor`. See the note below.
- Assign access to `Azure AD user, group, or service principal`
- In the `Select` field, begin entering the name of your AD application (created earlier). When it appears in the list, click on it there. It should now be listed as one of the "Selected members".
- Click **Save**
Note: You might want to give your application fewer permissions than what a `Contributor` role gets. The Azure docs have [a list of all the built-in roles for Azure resources](https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles).
## Create a Storage Account
Follow the instructions in the Azure docs page:
[Create a storage account](https://docs.microsoft.com/en-us/azure/storage/common/storage-quickstart-create-account?tabs=portal)
except you should use the _existing_ resource group you created earlier, i.e. don't create a new one.
The Storage account name you choose is what Provider calls the `AZURE_ACCOUNT_NAME`.
```text
# Example AZURE_ACCOUNT_NAME (Storage account name)
troystorageaccount1
```
Use the same `Location` as your resource group.
The other fields can be left with their default values unless you want to change them.
Wait for it to say, "Your deployment is complete."
## Get a Storage Account Access Key
See the Azure docs page:
[Manage storage account settings in the Azure portal](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-manage)
Go to the subsection about access keys and follow the instructions to view your new storage account's credentials.
Copy the value of one of the keys (e.g. key1, not the connection string). That's what Provider calls `AZURE_ACCOUNT_KEY`.
```text
# Example AZURE_ACCOUNT_KEY (Storage account access key)
93uKDkbjfnSUNPKw2tpe0LOM+3Wk+OSkNmgwhzjvzDw1d3sKVhMRTC5ikvN0r3zsx8eQrmT9Wgjz22iLPu3aGw==
```
You now have all the Azure credentials Provider needs. See the instructions near the top of this page about how to give those Azure credentials to Provider.
## Store Some Data in Azure Storage
You now have a storage account, but you don't have any data stored under it yet. To get some data stored in [Azure Storage](https://docs.microsoft.com/en-us/azure/storage/common/storage-introduction), the easiest option is to use [Azure Storage Explorer](https://azure.microsoft.com/en-us/features/storage-explorer/), a free desktop app that works on Windows, macOS and Linux.
[Get Azure Storage Explorer](https://azure.microsoft.com/en-us/features/storage-explorer/).
Azure Storage can store blobs, files, queues and tables. To work with Ocean Network, you should store your files in [Azure Blob storage (also called object storage)](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction), not Azure Files.
Besides Azure Storage Explorer, there are [many other Azure Storage APIs, libraries and tools](https://docs.microsoft.com/en-us/azure/storage/common/storage-introduction#storage-apis-libraries-and-tools).

View File

@ -33,6 +33,15 @@ When creating an algorithm asset in Ocean Protocol, the additional `algorithm` o
| `tag` | The Docker image tag that you are going to use. |
| `entrypoint` | The Docker entrypoint. `$ALGO` is a macro that gets replaced inside the compute job, depending on where your algorithm code is downloaded. |
Define your entrypoint according to your dependencies. For example, if you have multiple versions of Python installed, use the appropriate command, such as `python3.6 $ALGO`.
### What Docker container should I use?
There are plenty of Docker containers that work out-of-the-box. However, if you have custom dependencies, you may want to configure your own Docker Image.
To do so, create a Dockerfile with the appropriate instructions for dependency management and publish the container, e.g. using Dockerhub.
We also collect some [example images](https://github.com/oceanprotocol/algo_dockers) which you can also view in Dockerhub.
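For example, a typical build-and-publish flow might look like this (the image name and tag below are made up):
```bash
# Build the image from your Dockerfile
docker build -t myusername/my-algo-image:v1.0 .

# Push it to Docker Hub so the compute environment can pull it
docker push myusername/my-algo-image:v1.0
```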
When publishing an algorithm through the [Ocean Market](https://market.oceanprotocol.com), these properties can be set via the publish UI.
### Environment Examples
@ -65,19 +74,20 @@ Run an algorithm written in Python, based on Python v3.9:
}
```
Be aware that you might need a lot of dependencies, so building your own image and publishing your algorithm with that custom image is often much faster. We also collect some [example images](https://github.com/oceanprotocol/algo_dockers).
### Data Storage
As part of a compute job, every algorithm runs in a K8s pod with these volumes mounted:
| Path | Permissions | Usage |
| --------------- | ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `/data/inputs` | read | Storage for input data sets, accessible only to the algorithm running in the pod. |
| `/data/ddos` | read | Storage for all DDOs involved in compute job (input data set + algorithm). |
| `/data/inputs` | read | Storage for input data sets, accessible only to the algorithm running in the pod. Contents will be the files themselves, inside indexed folders e.g. `/data/inputs/{did}/{service_id}`. |
| `/data/ddos` | read | Storage for all DDOs involved in compute job (input data set + algorithm). Contents will be JSON files containing the DDO structure. |
| `/data/outputs` | read/write | Storage for all of the algorithm's output files. They are uploaded on some form of cloud storage, and URLs are sent back to the consumer. |
| `/data/logs/` | read/write | All algorithm output (such as `print`, `console.log`, etc.) is stored in a file located in this folder. They are stored and sent to the consumer as well. |
Please note that when using local Providers or Metadata Caches, the DDOs might not be correctly transferred into c2d, but inputs are still available.
If your algorithm relies on contents from the DDO json structure, make sure to use a public Provider and Metadata Cache (Aquarius instance).
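As an illustrative sketch (not an official example), a trivial shell-based algorithm could interact with these volumes like so:
```bash
#!/usr/bin/env bash

# Inspect the input files mounted for this compute job
ls -R /data/inputs

# Write results to /data/outputs so they get uploaded for the consumer
echo "number of input files:" > /data/outputs/result.txt
find /data/inputs -type f | wc -l >> /data/outputs/result.txt

# Anything printed to stdout ends up in /data/logs/ and is sent to the consumer
echo "algorithm finished"
```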
### Environment variables available to algorithms
For every algorithm pod, the Compute to Data environment provides the following environment variables:

View File

@ -0,0 +1,156 @@
---
title: Minikube Compute-to-Data Environment
description:
---
## Requirements
- functioning internet-accessible Provider service
- machine capable of running compute (e.g. we used a machine with 8 CPUs, 16 GB RAM, a 100 GB SSD and a fast internet connection)
- Ubuntu 20.04
## Install Docker and Git
```bash
sudo apt update
sudo apt install git docker.io
sudo usermod -aG docker $USER && newgrp docker
```
## Install Minikube
```bash
wget -q --show-progress https://github.com/kubernetes/minikube/releases/download/v1.22.0/minikube_1.22.0-0_amd64.deb
sudo dpkg -i minikube_1.22.0-0_amd64.deb
```
## Download and Configure Operator Service
```bash
git clone https://github.com/oceanprotocol/operator-service.git
```
Edit `operator-service/kubernetes/postgres-configmap.yaml`. Change `POSTGRES_PASSWORD` to a nice long random password.
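For example, one way to generate such a password (plain `openssl`, nothing Ocean-specific):
```bash
# Generate a random 32-byte, base64-encoded password
openssl rand -base64 32
```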
Edit `operator-service/kubernetes/deployment.yaml`. Optionally change:
- `ALGO_POD_TIMEOUT`
- add `requests_cpu`
- add `requests_memory`
- add `limits_cpu`
- add `limits_memory`
```yaml
...
spec:
containers:
- env:
- name: requests_cpu
value: "4"
- name: requests_memory
value: "8Gi"
- name: limits_cpu
value: "8"
- name: limits_memory
value: "15Gi"
- name: ALGO_POD_TIMEOUT
value: "3600"
...
```
## Download and Configure Operator Engine
```bash
git clone https://github.com/oceanprotocol/operator-engine.git
```
Check the [README](https://github.com/oceanprotocol/operator-engine#customize-your-operator-engine-deployment) section of operator engine to customize your deployment.
At a minimum you should add your IPFS URLs or AWS settings, and add (or remove) notification URLs.
## Install kubectl
```bash
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(<kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
```
## Start Minikube
The first command is important: it solves a [PersistentVolumeClaims problem](https://github.com/kubernetes/minikube/issues/7828).
```bash
minikube config set kubernetes-version v1.16.0
minikube start --cni=calico --driver=docker --container-runtime=docker
```
Wait until all the default pods are running (1/1).
```bash
watch kubectl get pods --all-namespaces
```
## Create namespaces
```bash
kubectl create ns ocean-operator
kubectl create ns ocean-compute
```
## Deploy Operator Service
```bash
kubectl config set-context --current --namespace ocean-operator
kubectl create -f operator-service/kubernetes/postgres-configmap.yaml
kubectl create -f operator-service/kubernetes/postgres-storage.yaml
kubectl create -f operator-service/kubernetes/postgres-deployment.yaml
kubectl create -f operator-service/kubernetes/postgresql-service.yaml
kubectl apply -f operator-service/kubernetes/deployment.yaml
```
## Deploy Operator Engine
```bash
kubectl config set-context --current --namespace ocean-compute
kubectl apply -f operator-engine/kubernetes/sa.yml
kubectl apply -f operator-engine/kubernetes/binding.yml
kubectl apply -f operator-engine/kubernetes/operator.yml
kubectl create -f operator-service/kubernetes/postgres-configmap.yaml
```
## Expose Operator Service
```bash
kubectl expose deployment operator-api --namespace=ocean-operator --port=8050
```
Run a port forward, or create your ingress service and set up DNS and certificates (not covered here):
```bash
kubectl -n ocean-operator port-forward svc/operator-api 8050
```
Alternatively you could use another method to communicate between the C2D Environment and the provider, such as an SSH tunnel.
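For example, a plain SSH local port forward might look like this (hostname and user are placeholders):
```bash
# Forward local port 8050 to the operator service on the minikube host
ssh -L 8050:localhost:8050 ubuntu@compute.example.com
```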
## Initialize database
If your minikube is running on compute.example.com:
```bash
curl -X POST "https://compute.example.com/api/v1/operator/pgsqlinit" -H "accept: application/json"
```
## Update Provider
Update your Provider service by setting the `operator_service.url` value in `config.ini`:
```ini
operator_service.url = https://compute.example.com/
```
Restart your provider service.
[Watch the explanatory video for more details](https://vimeo.com/580934725)

View File

@ -17,7 +17,7 @@ ocean/
Then you need the following parts:
- working [Barge](https://github.com/oceanprotocol/barge). For this setup, we will assume Barge is installed in /ocean/barge/
- a working Kubernetes (K8s) cluster (Minikube is a good start)
- a working Kubernetes (K8s) cluster ([Minikube](../compute-to-data-minikube/) is a good start)
- a working `kubectl` connected to the K8s cluster
- one folder (/ocean/operator-service/), in which we will download the following:
- [postgres-configmap.yaml](https://raw.githubusercontent.com/oceanprotocol/operator-service/main/kubernetes/postgres-configmap.yaml)
@ -45,9 +45,9 @@ Check the [README](https://github.com/oceanprotocol/operator-engine#customize-yo
## Storage class
For minikube, you can use 'standard' class.
For minikube, you can use the default 'standard' class.
For AWS , please make sure that your class allocates volumes in the same region and zone in which you are running your pods.
For AWS, please make sure that your class allocates volumes in the same region and zone in which you are running your pods.
We created our own 'standard' class in AWS:
@ -71,17 +71,6 @@ reclaimPolicy: Delete
volumeBindingMode: Immediate
```
Or we can use this for minikube:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: standard
provisioner: docker.io/hostpath
reclaimPolicy: Retain
```
For more information, please visit https://kubernetes.io/docs/concepts/storage/storage-classes/
## Create namespaces

View File

@ -1,34 +0,0 @@
---
title: Jupyter Notebooks
description: Try some online interactive squid-py tutorials.
---
You can go through interactive squid-py tutorials at [datascience.oceanprotocol.com](https://datascience.oceanprotocol.com/).
They're [Jupyter Notebooks](http://jupyter.org/) running on your own [JupyterLab](https://github.com/jupyterlab/jupyterlab) instance.
It's quite easy to figure out, but if you get stuck, here are some initial steps:
- Visit [datascience.oceanprotocol.com](https://datascience.oceanprotocol.com/).
- Click the **JupyterLab Instance** button.
- Login with your GitHub account.
- Once you're in the JupyterLab console, double-click on **mantaray_jupyter**.
- Double-click on **introdution.txt** and read it.
- Double-click on one of the **.ipynb** files. They're labelled to appear in a logical order.
- When asked to select a kernel, select Python 3.
- To make stuff happen (e.g. to run code cells), use the menus at the top of the JupyterLab console, or at the top of your current Jupyter notebook.
In you prefer a step-by-step tutorial, you can watch our thorough [Manta Ray tutorial mini series](https://www.youtube.com/playlist?list=PL_dn0wVs9kWqSO2iCXvrWuxKFSgVr0Jrw):
`youtube: N7HrWz35xIk`
`youtube: Ahbw4WDDFxI`
`youtube: FsDwOlOkIHc`
`youtube: VoBd1iwTvF8`
`youtube: MHxOOioYSbg`
For more info, see the blog posts:
- "[Project Manta RayData Science powered by Ocean Protocol](https://blog.oceanprotocol.com/project-manta-ray-data-science-powered-by-ocean-protocol-535c54089b0f)"
- "[The Data Science WorkflowPowered by Ocean Protocol](https://blog.oceanprotocol.com/dive-into-ocean-protocol-ai-ecosystem-60f64eddf74d)"

View File

@ -1,8 +1,13 @@
---
title: Role-Based Access Control Server
title: Market-Level Permissions
description: Control who can publish, consume or browse data
---
The primary mechanism for restricting your users ability to publish, consume, or browse is the role-based access (RBAC) control server.
## Introduction
For market-level permissions, Ocean implements a role-based access control server (RBAC server). It implements restrictions at the user level, based on the user's role (credentials). The RBAC server is run & controlled by the marketplace owner. Therefore permissions at this level are at the discretion of the marketplace owner.
The RBAC server is the primary mechanism for restricting your users' ability to publish, consume, or browse assets in the market.
## Roles
@ -109,7 +114,7 @@ npm run start:docker
## Setting up the RBAC in the Market
To use the RBAC server with the market you need to save your the URL of your RBAC server as an env within the market.
To use the RBAC server with the market you need to save the URL of your RBAC server as an env within the market.
- First setup and host the Ocean role based access control (RBAC) server. Follow the instructions in the [RBAC repository](https://github.com/oceanprotocol/RBAC-Server)
- In your .env file in your fork of Ocean Market, set the value of the `GATSBY_RBAC_URL` environmental variable to the URL of the Ocean RBAC server that you have hosted, e.g. `GATSBY_RBAC_URL= "http://localhost:3000"`

View File

@ -1,3 +1,8 @@
---
title: Consume data asset
description:
---
1. Go to Ocean Marketplace https://market.oceanprotocol.com/
2. Search for the data asset.
The Ocean Marketplace lets users search for data and algorithms by text, and sort the results by publication date.

View File

@ -1,4 +1,7 @@
# Ocean Market
---
title: Ocean Market
description:
---
https://market.oceanprotocol.com/

View File

@ -1,4 +1,7 @@
# Publish a Data asset on Ocean Market place.
---
title: Publish a Data Asset on Ocean Market
description:
---
## What can be published?

View File

@ -1,4 +1,7 @@
# Swap and/or Stake Tokens
---
title: Swap and/or Stake Tokens
description:
---
## Swap Ocean Tokens against Datatokens

View File

@ -1,10 +0,0 @@
---
title: Set Up On-Premise Storage
description: Tutorial about how to set up on-premise storage for use with Ocean.
---
*Note: This needs updating for Ocean V3. As a workaround: Brizo has been renamed to provider-py; it should work similarly.*
To enable Brizo to use files stored in on-premise storage (i.e. files with an URL not containing `core.windows.net` or `s3://`), there is _nothing to do, other than make sure Brizo can resolve the URLs_. In particular, you don't have to set any Brizo-specific configuration settings, e.g. in the `[osmosis]` section of the Brizo config file or in some special Brizo environment variables.
Local and private network URLs are fine so long as they can be resolved by Brizo. Potential examples include `http://localhost/helicopter_data.xls`, `http://192.168.12.34/almond_sales_2012.csv` and `http://10.12.34.56/duck_photos.zip`.

View File

@ -0,0 +1,9 @@
---
title: Set Up On-Premise Storage
description: Tutorial about how to set up on-premise storage for use with Ocean.
---
*Note: This needs updating for Ocean V3.*
To enable Provider to use files stored in on-premise storage (i.e. files with a URL not containing `core.windows.net` or `s3://`), there is _nothing to do, other than make sure Provider can resolve the URLs_. In particular, you don't have to set any Provider-specific configuration settings, e.g. in the `[osmosis]` section of the Provider config file or in some special Provider environment variables.
Local and private network URLs are fine so long as they can be resolved by Provider. Potential examples include `http://localhost/helicopter_data.xls`, `http://192.168.12.34/almond_sales_2012.csv` and `http://10.12.34.56/duck_photos.zip`.
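A quick way to sanity-check resolvability is to fetch the file headers from the machine where Provider runs (the URL below is one of the examples above):
```bash
# Run this on the Provider host
curl -I http://192.168.12.34/almond_sales_2012.csv
```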

View File

@ -4,12 +4,28 @@ description: Control who can publish, consume or browse data
---
Ocean Protocol supports fine-grained permissions across our technology stack which can be particularly useful for enterprise use-cases. There are two ways in which permissions are implemented:
A large part of Ocean is about access control, which is primarily handled by datatokens. Users can access a resource (e.g. a file) by redeeming datatokens for that resource. We recognize that enterprises and other users often need more precise ways to specify and manage access, and we have introduced fine-grained permissions for these use cases.
Fine-grained permissions mean that access can be controlled precisely at two levels:
- [Role based access control server.](./rbac)
- [Marketplace-level permissions](./market-level-permissions) for browsing, consuming or publishing within a marketplace frontend.
- [Allow & deny lists.](./allow-deny-lists)
- [Asset-level permissions](./asset-level-permissions) on consuming a specific asset.
Neither are enabled in [Ocean Market](market.oceanprotocol.com/) but you can enable them in your own market by following the guides above.
The fine-grained permissions features are designed to work in forks of Ocean Market. We have not enabled them in Ocean Market itself, to keep Ocean Market open for everyone to use. On the front end, the permissions features are easily enabled by setting environment variables.
### Introduction
Some datasets need to be restricted to appropriately credentialed users. In this situation there is tension:
1. Datatokens on their own aren't enough - the datatokens can be exchanged without any restrictions, which means anyone can acquire them and access the data.
2. We want to retain the datatokens approach, since it enables Ocean users to leverage existing crypto infrastructure, e.g. wallets and exchanges.
We can resolve this tension by drawing on the following analogy:
> Imagine going to an age 18+ rock concert. You can only get in if you show both (a) your concert ticket and (b) an ID showing that you're old enough.
We can port this model into Ocean, where (a) is a datatoken, and (b) is a credential. The datatoken is the baseline access control. It's fungible, and something that you've paid for or had shared with you. It's independent of your identity. The credential is something that's a function of your identity.
The credential-based restrictions are implemented in two ways: at the market level and at the asset level. Access to the market is restricted on a role basis; the user's identity is attached to a role via the role-based access control (RBAC) server. Access to individual assets is restricted via allow and deny lists, which list the Ethereum addresses of the users who can and cannot access the asset within the DDO.

View File

@ -18,12 +18,12 @@
- title: Compute-to-Data Overview
link: /concepts/compute-to-data/
- group: OEPs
- group: Specifying Assets
items:
- title: DID
link: /concepts/oeps-did/
- title: Asset DDO
link: /concepts/oeps-asset-ddo/
- title: DIDs & DDOs
link: /concepts/did-ddo/
- title: DDO Metadata
link: /concepts/ddo-metadata/
- group: NFTs
items:

View File

@ -41,21 +41,23 @@
link: /tutorials/compute-to-data-algorithms/
- title: Run a Compute-to-Data Environment
link: /tutorials/compute-to-data/
- title: Minikube Compute-to-Data Environment
link: /tutorials/compute-to-data-minikube/
- group: Storage Setup
items:
- title: Set Up Azure Storage
link: /tutorials/azure-for-brizo/
link: /tutorials/azure-for-provider/
- title: Set Up Amazon S3 Storage
link: /tutorials/amazon-s3-for-brizo/
link: /tutorials/amazon-s3-for-provider/
- title: Set Up On-Premise Storage
link: /tutorials/on-premise-for-brizo/
link: /tutorials/on-premise-for-provider/
- group: Fine-Grained Permissions
items:
- title: Overview
link: /tutorials/permissions
- title: Role-Based Access Control
link: /tutorials/rbac
- title: Allow & Deny Lists
link: /tutorials/allow-deny-lists
- title: Market-Level Permissions
link: /tutorials/market-level-permissions
- title: Asset-Level Permissions
link: /tutorials/asset-level-permissions

View File

@ -17,7 +17,7 @@ The sidebar for those generated reference pages will automatically switch to inc
Reference pages based on Swagger specs are sourced from remotely hosted Swagger specs:
- [`https://aquarius.test.oceanprotocol.com/spec`](https://aquarius.test.oceanprotocol.com/spec)
- [`https://brizo.test.oceanprotocol.com/spec`](https://brizo.test.oceanprotocol.com/spec)
- [`https://provider.test.oceanprotocol.com/spec`](https://provider.test.oceanprotocol.com/spec)
They are fetched and updated automatically upon every site build. For more information about stylistic issues, take a look at the section in the test page:

View File

@ -14,7 +14,7 @@ The documentation is split in multiple sections whose content lives in this repo
- **Core concepts**: high-level explanation of concepts, assumptions, and components
- **Setup**: getting started for various stakeholders and use cases
- **Tutorials**: detailed tutorials
- **API References**: docs for the Aquarius & Brizo REST APIs, and docs for various Squid libraries
- **API References**: docs for ocean.js, ocean.py, Aquarius REST API, and Provider REST API
Those sections are defined in the [`/data/sections.yml`](../data/sections.yml) file.

View File

@ -170,6 +170,7 @@ module.exports = {
path: `${__dirname}/markdowns/markdowns`,
name: 'markdowns'
}
}
},
`gatsby-transformer-remark-plaintext`
]
}

View File

@ -3,7 +3,7 @@
const path = require('path')
const { createFilePath } = require('gatsby-source-filesystem')
const Swagger = require('swagger-client')
const { redirects } = require('./config')
const { redirects, swaggerComponents } = require('./config')
exports.onCreateNode = ({ node, getNode, actions }) => {
const { createNodeField } = actions
@ -132,7 +132,6 @@ exports.createPages = ({ graphql, actions }) => {
await createSwaggerPages(createPage)
await createDeploymentsPage(createPage)
// API: ocean.js
const lastRelease =
result.data.oceanJs.repository.releases.edges.filter(
@ -172,6 +171,12 @@ exports.createPages = ({ graphql, actions }) => {
await createReadTheDocsPage(createPage, 'provider', providerList)
await createReadTheDocsPage(createPage, 'ocean-subgraph', subgraphList)
// Create search page
createPage({
path: `/search/`,
component: path.resolve('./src/components/Search/SearchComponent.jsx')
})
resolve()
})
)
@ -187,6 +192,7 @@ const createDeploymentsPage = async (createPage) => {
component: template
})
}
//
// Create pages from TypeDoc json files
//
@ -213,11 +219,9 @@ const createTypeDocPage = async (createPage, name, downloadUrl) => {
// Create pages from swagger json files
//
// https://github.com/swagger-api/swagger-js
const fetchSwaggerSpec = async (component) => {
const fetchSwaggerSpec = async (url) => {
try {
const client = await Swagger(
`https://${component}.mainnet.oceanprotocol.com/spec`
)
const client = await Swagger(url)
return client.spec // The resolved spec
// client.originalSpec // In case you need it
@ -234,21 +238,20 @@ const fetchSwaggerSpec = async (component) => {
}
const createSwaggerPages = async (createPage) => {
const swaggerComponents = ['aquarius', 'provider']
const apiSwaggerTemplate = path.resolve('./src/templates/Swagger/index.jsx')
const getSlug = (name) => `/references/${name}/`
for (const component of swaggerComponents) {
const slug = getSlug(component)
const slug = getSlug(component.name)
createPage({
path: slug,
component: apiSwaggerTemplate,
context: {
slug,
name: component,
api: await fetchSwaggerSpec(component)
name: component.name,
api: await fetchSwaggerSpec(component.url)
}
})
}

380
package-lock.json generated
View File

@ -3142,6 +3142,11 @@
"to-fast-properties": "^2.0.0"
}
},
"@emotion/hash": {
"version": "0.8.0",
"resolved": "https://registry.npmjs.org/@emotion/hash/-/hash-0.8.0.tgz",
"integrity": "sha512-kBJtf7PH6aWwZ6fka3zQ0p6SBYzx4fl1LoZXE2RrnYST9Xljm7WfKJrU4g/Xr3Beg72MLrp1AWNUmuYJTL7Cow=="
},
"@endemolshinegroup/cosmiconfig-typescript-loader": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/@endemolshinegroup/cosmiconfig-typescript-loader/-/cosmiconfig-typescript-loader-3.0.2.tgz",
@ -4132,6 +4137,108 @@
"unist-util-visit": "^1.3.0"
}
},
"@material-ui/core": {
"version": "4.12.3",
"resolved": "https://registry.npmjs.org/@material-ui/core/-/core-4.12.3.tgz",
"integrity": "sha512-sdpgI/PL56QVsEJldwEe4FFaFTLUqN+rd7sSZiRCdx2E/C7z5yK0y/khAWVBH24tXwto7I1hCzNWfJGZIYJKnw==",
"requires": {
"@babel/runtime": "^7.4.4",
"@material-ui/styles": "^4.11.4",
"@material-ui/system": "^4.12.1",
"@material-ui/types": "5.1.0",
"@material-ui/utils": "^4.11.2",
"@types/react-transition-group": "^4.2.0",
"clsx": "^1.0.4",
"hoist-non-react-statics": "^3.3.2",
"popper.js": "1.16.1-lts",
"prop-types": "^15.7.2",
"react-is": "^16.8.0 || ^17.0.0",
"react-transition-group": "^4.4.0"
}
},
"@material-ui/icons": {
"version": "4.11.2",
"resolved": "https://registry.npmjs.org/@material-ui/icons/-/icons-4.11.2.tgz",
"integrity": "sha512-fQNsKX2TxBmqIGJCSi3tGTO/gZ+eJgWmMJkgDiOfyNaunNaxcklJQFaFogYcFl0qFuaEz1qaXYXboa/bUXVSOQ==",
"requires": {
"@babel/runtime": "^7.4.4"
}
},
"@material-ui/lab": {
"version": "4.0.0-alpha.60",
"resolved": "https://registry.npmjs.org/@material-ui/lab/-/lab-4.0.0-alpha.60.tgz",
"integrity": "sha512-fadlYsPJF+0fx2lRuyqAuJj7hAS1tLDdIEEdov5jlrpb5pp4b+mRDUqQTUxi4inRZHS1bEXpU8QWUhO6xX88aA==",
"requires": {
"@babel/runtime": "^7.4.4",
"@material-ui/utils": "^4.11.2",
"clsx": "^1.0.4",
"prop-types": "^15.7.2",
"react-is": "^16.8.0 || ^17.0.0"
}
},
"@material-ui/styles": {
"version": "4.11.4",
"resolved": "https://registry.npmjs.org/@material-ui/styles/-/styles-4.11.4.tgz",
"integrity": "sha512-KNTIZcnj/zprG5LW0Sao7zw+yG3O35pviHzejMdcSGCdWbiO8qzRgOYL8JAxAsWBKOKYwVZxXtHWaB5T2Kvxew==",
"requires": {
"@babel/runtime": "^7.4.4",
"@emotion/hash": "^0.8.0",
"@material-ui/types": "5.1.0",
"@material-ui/utils": "^4.11.2",
"clsx": "^1.0.4",
"csstype": "^2.5.2",
"hoist-non-react-statics": "^3.3.2",
"jss": "^10.5.1",
"jss-plugin-camel-case": "^10.5.1",
"jss-plugin-default-unit": "^10.5.1",
"jss-plugin-global": "^10.5.1",
"jss-plugin-nested": "^10.5.1",
"jss-plugin-props-sort": "^10.5.1",
"jss-plugin-rule-value-function": "^10.5.1",
"jss-plugin-vendor-prefixer": "^10.5.1",
"prop-types": "^15.7.2"
},
"dependencies": {
"csstype": {
"version": "2.6.17",
"resolved": "https://registry.npmjs.org/csstype/-/csstype-2.6.17.tgz",
"integrity": "sha512-u1wmTI1jJGzCJzWndZo8mk4wnPTZd1eOIYTYvuEyOQGfmDl3TrabCCfKnOC86FZwW/9djqTl933UF/cS425i9A=="
}
}
},
"@material-ui/system": {
"version": "4.12.1",
"resolved": "https://registry.npmjs.org/@material-ui/system/-/system-4.12.1.tgz",
"integrity": "sha512-lUdzs4q9kEXZGhbN7BptyiS1rLNHe6kG9o8Y307HCvF4sQxbCgpL2qi+gUk+yI8a2DNk48gISEQxoxpgph0xIw==",
"requires": {
"@babel/runtime": "^7.4.4",
"@material-ui/utils": "^4.11.2",
"csstype": "^2.5.2",
"prop-types": "^15.7.2"
},
"dependencies": {
"csstype": {
"version": "2.6.17",
"resolved": "https://registry.npmjs.org/csstype/-/csstype-2.6.17.tgz",
"integrity": "sha512-u1wmTI1jJGzCJzWndZo8mk4wnPTZd1eOIYTYvuEyOQGfmDl3TrabCCfKnOC86FZwW/9djqTl933UF/cS425i9A=="
}
}
},
"@material-ui/types": {
"version": "5.1.0",
"resolved": "https://registry.npmjs.org/@material-ui/types/-/types-5.1.0.tgz",
"integrity": "sha512-7cqRjrY50b8QzRSYyhSpx4WRw2YuO0KKIGQEVk5J8uoz2BanawykgZGoWEqKm7pVIbzFDN0SpPcVV4IhOFkl8A=="
},
"@material-ui/utils": {
"version": "4.11.2",
"resolved": "https://registry.npmjs.org/@material-ui/utils/-/utils-4.11.2.tgz",
"integrity": "sha512-Uul8w38u+PICe2Fg2pDKCaIG7kOyhowZ9vjiC1FsVwPABTW8vPPKfF6OvxRq3IiBaI1faOJmgdvMG7rMJARBhA==",
"requires": {
"@babel/runtime": "^7.4.4",
"prop-types": "^15.7.2",
"react-is": "^16.8.0 || ^17.0.0"
}
},
"@mdx-js/util": {
"version": "2.0.0-next.8",
"resolved": "https://registry.npmjs.org/@mdx-js/util/-/util-2.0.0-next.8.tgz",
@ -4581,9 +4688,9 @@
}
},
"@types/hast": {
"version": "2.3.2",
"resolved": "https://registry.npmjs.org/@types/hast/-/hast-2.3.2.tgz",
"integrity": "sha512-Op5W7jYgZI7AWKY5wQ0/QNMzQM7dGQPyW1rXKNiymVCy5iTfdPuGu4HhYNOM2sIv8gUfIuIdcYlXmAepwaowow==",
"version": "2.3.4",
"resolved": "https://registry.npmjs.org/@types/hast/-/hast-2.3.4.tgz",
"integrity": "sha512-wLEm0QvaoawEDoTRwzTXp4b4jpwiJDvR5KMnFnVodm3scufTlBOWRD6N1OBf9TZMhjlNsSfcO5V+7AF4+Vy+9g==",
"requires": {
"@types/unist": "*"
}
@ -4731,6 +4838,14 @@
"csstype": "^3.0.2"
}
},
"@types/react-transition-group": {
"version": "4.4.2",
"resolved": "https://registry.npmjs.org/@types/react-transition-group/-/react-transition-group-4.4.2.tgz",
"integrity": "sha512-KibDWL6nshuOJ0fu8ll7QnV/LVTo3PzQ9aCPnRUYPfX7eZohHwLIdNHj7pftanREzHNP4/nJa8oeM73uSiavMQ==",
"requires": {
"@types/react": "*"
}
},
"@types/readable-stream": {
"version": "2.3.9",
"resolved": "https://registry.npmjs.org/@types/readable-stream/-/readable-stream-2.3.9.tgz",
@ -5798,11 +5913,18 @@
"integrity": "sha512-1uIESzroqpaTzt9uX48HO+6gfnKu3RwvWdCcWSrX4csMInJfCo1yvKPNXCwXFRpJqRW25tiASb6No0YH57PXqg=="
},
"axios": {
"version": "0.21.1",
"resolved": "https://registry.npmjs.org/axios/-/axios-0.21.1.tgz",
"integrity": "sha512-dKQiRHxGD9PPRIUNIWvZhPTPpl1rf/OxTYKsqKUDjBwYylTvV7SjSHJb9ratfyzM6wCdLCOYLzs73qpg5c4iGA==",
"version": "0.21.4",
"resolved": "https://registry.npmjs.org/axios/-/axios-0.21.4.tgz",
"integrity": "sha512-ut5vewkiu8jjGBdqpM44XxjuCjq9LAKeHVmoVfHVzy8eHgxxq8SbAVQNovDA8mVi05kP0Ea/n/UzcSHcTJQfNg==",
"requires": {
"follow-redirects": "^1.10.0"
"follow-redirects": "^1.14.0"
},
"dependencies": {
"follow-redirects": {
"version": "1.14.3",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.3.tgz",
"integrity": "sha512-3MkHxknWMUtb23apkgz/83fDoe+y+qr0TdgacGIA7bew+QLBo3vdgEN2xEsuXNivpFy4CyDhBBZnNZOtalmenw=="
}
}
},
"axobject-query": {
@ -7444,6 +7566,11 @@
"mimic-response": "^1.0.0"
}
},
"clsx": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/clsx/-/clsx-1.1.1.tgz",
"integrity": "sha512-6/bPho624p3S2pMyvP5kKBPXnI3ufHLObBFCfgx+LkeR5lg2XYy2hqZqUf45ypD8COn2bhgGJSUE+l5dhNBieA=="
},
"coa": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/coa/-/coa-2.0.2.tgz",
@ -8194,6 +8321,15 @@
}
}
},
"css-vendor": {
"version": "2.0.8",
"resolved": "https://registry.npmjs.org/css-vendor/-/css-vendor-2.0.8.tgz",
"integrity": "sha512-x9Aq0XTInxrkuFeHKbYC7zWY8ai7qJ04Kxd9MnvbC1uO5DagxoHQjm4JvG+vCdXOoFtCjbL2XSZfxmoYa9uQVQ==",
"requires": {
"@babel/runtime": "^7.8.3",
"is-in-browser": "^1.0.2"
}
},
"css-what": {
"version": "3.2.1",
"resolved": "https://registry.npmjs.org/css-what/-/css-what-3.2.1.tgz",
@ -8935,6 +9071,15 @@
"utila": "~0.4"
}
},
"dom-helpers": {
"version": "5.2.1",
"resolved": "https://registry.npmjs.org/dom-helpers/-/dom-helpers-5.2.1.tgz",
"integrity": "sha512-nRCa7CK3VTrM2NmGkIy4cbK7IZlgBE/PYMn55rrXefr5xXDP0LdtfPnblFDoVdcAfslJ7or6iqAUnx0CCGIWQA==",
"requires": {
"@babel/runtime": "^7.8.7",
"csstype": "^3.0.2"
}
},
"dom-serializer": {
"version": "0.2.2",
"resolved": "https://registry.npmjs.org/dom-serializer/-/dom-serializer-0.2.2.tgz",
@ -10473,9 +10618,9 @@
}
},
"eslint-plugin-prettier": {
"version": "3.4.0",
"resolved": "https://registry.npmjs.org/eslint-plugin-prettier/-/eslint-plugin-prettier-3.4.0.tgz",
"integrity": "sha512-UDK6rJT6INSfcOo545jiaOwB701uAIt2/dR7WnFQoGCVl1/EMqdANBmwUaqqQ45aXprsTGzSa39LI1PyuRBxxw==",
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/eslint-plugin-prettier/-/eslint-plugin-prettier-4.0.0.tgz",
"integrity": "sha512-98MqmCJ7vJodoQK359bqQWaxOE0CS8paAz/GgjaZLyex4TTk3g9HugoO89EqWCrFiOqn9EVvcoo7gZzONCWVwQ==",
"dev": true,
"requires": {
"prettier-linter-helpers": "^1.0.0"
@ -11149,11 +11294,6 @@
"pend": "~1.2.0"
}
},
"fetch-blob": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/fetch-blob/-/fetch-blob-2.1.2.tgz",
"integrity": "sha512-YKqtUDwqLyfyMnmbw8XD6Q8j9i/HggKtPEI+pZ1+8bvheBu78biSmNaXWusx1TauGqtUUGx/cBb1mKdq2rLYow=="
},
"figgy-pudding": {
"version": "3.5.2",
"resolved": "https://registry.npmjs.org/figgy-pudding/-/figgy-pudding-3.5.2.tgz",
@ -11377,18 +11517,17 @@
}
},
"form-data-encoder": {
"version": "1.4.3",
"resolved": "https://registry.npmjs.org/form-data-encoder/-/form-data-encoder-1.4.3.tgz",
"integrity": "sha512-ARLR/jJaj3+tlKkO7h1uvvjQcD6xCiKyg42hcG5Q4jv8uDa1IMPs81bM3BwI8BrqVEQxF9pX6tx0iLIzAvr31Q=="
"version": "1.5.3",
"resolved": "https://registry.npmjs.org/form-data-encoder/-/form-data-encoder-1.5.3.tgz",
"integrity": "sha512-TBXL4jWdTERP1oNLXCXEJYgBfA5dBbhGVvS6E9bvAl48gu4L1q+JQYnPfixEyemGewRUeCRRXLUOEdtRfE2FKQ=="
},
"formdata-node": {
"version": "3.7.0",
"resolved": "https://registry.npmjs.org/formdata-node/-/formdata-node-3.7.0.tgz",
"integrity": "sha512-O3y7XoWwE4zRvI5e1yVPDHyDGtEgGEI10KxTbQeMbEEt+imR7uxbL5Z4BCaHz5M09d1pkrFaqeYc8beAce4VSw==",
"version": "4.2.1",
"resolved": "https://registry.npmjs.org/formdata-node/-/formdata-node-4.2.1.tgz",
"integrity": "sha512-mYFfryf+E+r/zaYFWuouQEBbtjyJQql4hTDEVvUt9RexwCEzjj23pkVxAcwQDuFMftpf3MQhcbqp6FysWwN/tQ==",
"requires": {
"fetch-blob": "2.1.2",
"form-data-encoder": "1.4.3",
"node-domexception": "1.0.0"
"node-domexception": "1.0.0",
"web-streams-polyfill": "4.0.0-beta.1"
}
},
"forwarded": {
@ -14279,6 +14418,15 @@
}
}
},
"gatsby-transformer-remark-plaintext": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/gatsby-transformer-remark-plaintext/-/gatsby-transformer-remark-plaintext-1.0.3.tgz",
"integrity": "sha512-V6nU03WKW65Xy5DyQPeVSMf5KVNoXX2yHbq53H0u1a/MkakG3osNTDWiTDxVQ8kg9OZ/04Dblczg9QxRnLjrYA==",
"requires": {
"@babel/runtime": "^7.0.0",
"strip-markdown": "^3.0.2"
}
},
"gatsby-transformer-sharp": {
"version": "2.12.1",
"resolved": "https://registry.npmjs.org/gatsby-transformer-sharp/-/gatsby-transformer-sharp-2.12.1.tgz",
@ -15620,11 +15768,16 @@
"integrity": "sha512-SEQu7vl8KjNL2eoGBLF3+wAjpsNfA9XMlXAYj/3EdaNfAlxKthD1xjEQfGOUhllCGGJVNY34bRr6lPINhNjyZw=="
},
"husky": {
"version": "7.0.1",
"resolved": "https://registry.npmjs.org/husky/-/husky-7.0.1.tgz",
"integrity": "sha512-gceRaITVZ+cJH9sNHqx5tFwbzlLCVxtVZcusME8JYQ8Edy5mpGDOqD8QBCdMhpyo9a+JXddnujQ4rpY2Ff9SJA==",
"version": "7.0.2",
"resolved": "https://registry.npmjs.org/husky/-/husky-7.0.2.tgz",
"integrity": "sha512-8yKEWNX4z2YsofXAMT7KvA1g8p+GxtB1ffV8XtpAEGuXNAbCV5wdNKH+qTpw8SM9fh4aMPDR+yQuKfgnreyZlg==",
"dev": true
},
"hyphenate-style-name": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/hyphenate-style-name/-/hyphenate-style-name-1.0.4.tgz",
"integrity": "sha512-ygGZLjmXfPHj+ZWh6LwbC37l43MhfztxetbFCoYTM2VjkIUpeHgSNn7QIyVFj7YQ1Wl9Cbw5sholVJPzWvC2MQ=="
},
"iconv-lite": {
"version": "0.4.24",
"resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.24.tgz",
@ -16269,6 +16422,11 @@
"resolved": "https://registry.npmjs.org/is-hexadecimal/-/is-hexadecimal-1.0.4.tgz",
"integrity": "sha512-gyPJuv83bHMpocVYoqof5VDiZveEoGoFL8m3BXNb2VW8Xs+rz9kqO8LOQ5DH6EsuvilT1ApazU0pyl+ytbPtlw=="
},
"is-in-browser": {
"version": "1.1.3",
"resolved": "https://registry.npmjs.org/is-in-browser/-/is-in-browser-1.1.3.tgz",
"integrity": "sha1-Vv9NtoOgeMYILrldrX3GLh0E+DU="
},
"is-installed-globally": {
"version": "0.4.0",
"resolved": "https://registry.npmjs.org/is-installed-globally/-/is-installed-globally-0.4.0.tgz",
@ -16728,6 +16886,11 @@
"integrity": "sha512-pZe//GGmwJndub7ZghVHz7vjb2LgC1m8B07Au3eYqeqv9emhESByMXxaEgkUkEqJe87oBbSniGYoQNIBklc7IQ==",
"dev": true
},
"js-search": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/js-search/-/js-search-2.0.0.tgz",
"integrity": "sha512-lJ8KzjlwcelIWuAdKyzsXv45W6OIwRpayzc7XmU8mzgWadg5UVOKVmnq/tXudddEB9Ceic3tVaGu6QOK/eebhg=="
},
"js-tokens": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-4.0.0.tgz",
@ -16833,6 +16996,84 @@
"verror": "1.10.0"
}
},
"jss": {
"version": "10.7.1",
"resolved": "https://registry.npmjs.org/jss/-/jss-10.7.1.tgz",
"integrity": "sha512-5QN8JSVZR6cxpZNeGfzIjqPEP+ZJwJJfZbXmeABNdxiExyO+eJJDy6WDtqTf8SDKnbL5kZllEpAP71E/Lt7PXg==",
"requires": {
"@babel/runtime": "^7.3.1",
"csstype": "^3.0.2",
"is-in-browser": "^1.1.3",
"tiny-warning": "^1.0.2"
}
},
"jss-plugin-camel-case": {
"version": "10.7.1",
"resolved": "https://registry.npmjs.org/jss-plugin-camel-case/-/jss-plugin-camel-case-10.7.1.tgz",
"integrity": "sha512-+ioIyWvmAfgDCWXsQcW1NMnLBvRinOVFkSYJUgewQ6TynOcSj5F1bSU23B7z0p1iqK0PPHIU62xY1iNJD33WGA==",
"requires": {
"@babel/runtime": "^7.3.1",
"hyphenate-style-name": "^1.0.3",
"jss": "10.7.1"
}
},
"jss-plugin-default-unit": {
"version": "10.7.1",
"resolved": "https://registry.npmjs.org/jss-plugin-default-unit/-/jss-plugin-default-unit-10.7.1.tgz",
"integrity": "sha512-tW+dfYVNARBQb/ONzBwd8uyImigyzMiAEDai+AbH5rcHg5h3TtqhAkxx06iuZiT/dZUiFdSKlbe3q9jZGAPIwA==",
"requires": {
"@babel/runtime": "^7.3.1",
"jss": "10.7.1"
}
},
"jss-plugin-global": {
"version": "10.7.1",
"resolved": "https://registry.npmjs.org/jss-plugin-global/-/jss-plugin-global-10.7.1.tgz",
"integrity": "sha512-FbxCnu44IkK/bw8X3CwZKmcAnJqjAb9LujlAc/aP0bMSdVa3/MugKQRyeQSu00uGL44feJJDoeXXiHOakBr/Zw==",
"requires": {
"@babel/runtime": "^7.3.1",
"jss": "10.7.1"
}
},
"jss-plugin-nested": {
"version": "10.7.1",
"resolved": "https://registry.npmjs.org/jss-plugin-nested/-/jss-plugin-nested-10.7.1.tgz",
"integrity": "sha512-RNbICk7FlYKaJyv9tkMl7s6FFfeLA3ubNIFKvPqaWtADK0KUaPsPXVYBkAu4x1ItgsWx67xvReMrkcKA0jSXfA==",
"requires": {
"@babel/runtime": "^7.3.1",
"jss": "10.7.1",
"tiny-warning": "^1.0.2"
}
},
"jss-plugin-props-sort": {
"version": "10.7.1",
"resolved": "https://registry.npmjs.org/jss-plugin-props-sort/-/jss-plugin-props-sort-10.7.1.tgz",
"integrity": "sha512-eyd5FhA+J0QrpqXxO7YNF/HMSXXl4pB0EmUdY4vSJI4QG22F59vQ6AHtP6fSwhmBdQ98Qd9gjfO+RMxcE39P1A==",
"requires": {
"@babel/runtime": "^7.3.1",
"jss": "10.7.1"
}
},
"jss-plugin-rule-value-function": {
"version": "10.7.1",
"resolved": "https://registry.npmjs.org/jss-plugin-rule-value-function/-/jss-plugin-rule-value-function-10.7.1.tgz",
"integrity": "sha512-fGAAImlbaHD3fXAHI3ooX6aRESOl5iBt3LjpVjxs9II5u9tzam7pqFUmgTcrip9VpRqYHn8J3gA7kCtm8xKwHg==",
"requires": {
"@babel/runtime": "^7.3.1",
"jss": "10.7.1",
"tiny-warning": "^1.0.2"
}
},
"jss-plugin-vendor-prefixer": {
"version": "10.7.1",
"resolved": "https://registry.npmjs.org/jss-plugin-vendor-prefixer/-/jss-plugin-vendor-prefixer-10.7.1.tgz",
"integrity": "sha512-1UHFmBn7hZNsHXTkLLOL8abRl8vi+D1EVzWD4WmLFj55vawHZfnH1oEz6TUf5Y61XHv0smdHabdXds6BgOXe3A==",
"requires": {
"@babel/runtime": "^7.3.1",
"css-vendor": "^2.0.8",
"jss": "10.7.1"
}
},
"jsx-ast-utils": {
"version": "2.2.3",
"resolved": "https://registry.npmjs.org/jsx-ast-utils/-/jsx-ast-utils-2.2.3.tgz",
@ -19837,6 +20078,11 @@
"ts-pnp": "^1.1.6"
}
},
"popper.js": {
"version": "1.16.1-lts",
"resolved": "https://registry.npmjs.org/popper.js/-/popper.js-1.16.1-lts.tgz",
"integrity": "sha512-Kjw8nKRl1m+VrSFCoVGPph93W/qrSO7ZkqPpTf7F4bk/sqcfWK019dWBUpE/fBOsOQY1dks/Bmcbfn1heM/IsA=="
},
"portfinder": {
"version": "1.0.28",
"resolved": "https://registry.npmjs.org/portfinder/-/portfinder-1.0.28.tgz",
@ -20599,9 +20845,9 @@
"integrity": "sha1-6SQ0v6XqjBn0HN/UAddBo8gZ2Jc="
},
"prettier": {
"version": "2.3.2",
"resolved": "https://registry.npmjs.org/prettier/-/prettier-2.3.2.tgz",
"integrity": "sha512-lnJzDfJ66zkMy58OL5/NY5zp70S7Nz6KqcKkXYzn2tMVrNxvbqaBpg7H3qHaLxCJ5lNMsGuM8+ohS7cZrthdLQ=="
"version": "2.4.1",
"resolved": "https://registry.npmjs.org/prettier/-/prettier-2.4.1.tgz",
"integrity": "sha512-9fbDAXSBcc6Bs1mZrDYb3XKzDLm4EXXL9sC1LqKP5rZkT6KRr/rf9amVUcODVXgguK/isJz0d0hP72WeaKWsvA=="
},
"prettier-linter-helpers": {
"version": "1.0.0",
@ -21272,6 +21518,17 @@
"resolved": "https://registry.npmjs.org/react-side-effect/-/react-side-effect-2.1.0.tgz",
"integrity": "sha512-IgmcegOSi5SNX+2Snh1vqmF0Vg/CbkycU9XZbOHJlZ6kMzTmi3yc254oB1WCkgA7OQtIAoLmcSFuHTc/tlcqXg=="
},
"react-transition-group": {
"version": "4.4.2",
"resolved": "https://registry.npmjs.org/react-transition-group/-/react-transition-group-4.4.2.tgz",
"integrity": "sha512-/RNYfRAMlZwDSr6z4zNKV6xu53/e2BuaBbGhbyYIXTrmgu/bGHzmqOs7mJSJBHy9Ud+ApHx3QjrkKSp1pxvlFg==",
"requires": {
"@babel/runtime": "^7.5.5",
"dom-helpers": "^5.0.1",
"loose-envify": "^1.4.0",
"prop-types": "^15.6.2"
}
},
"read": {
"version": "1.0.7",
"resolved": "https://registry.npmjs.org/read/-/read-1.0.7.tgz",
@ -21547,14 +21804,15 @@
}
},
"rehype-react": {
"version": "7.0.0",
"resolved": "https://registry.npmjs.org/rehype-react/-/rehype-react-7.0.0.tgz",
"integrity": "sha512-x24M2jkvhDd9otFiHpAP2yt+zLLPAxgvSvhPaXCwHG2UfKT8LTOgfZN/5qcauQLaqsCDgcdTgUKQTbc1jFpfBA==",
"version": "7.0.2",
"resolved": "https://registry.npmjs.org/rehype-react/-/rehype-react-7.0.2.tgz",
"integrity": "sha512-jVndWMaGFrBOI8Z5B9B4sAJZFRaSt9IXrSC3m1QfJrxznud3834HxCgO0TmHi/8oFp0vgHw7aTZsYQ73+VI0kQ==",
"requires": {
"@mapbox/hast-util-table-cell-style": "^0.2.0",
"@types/hast": "^2.0.0",
"@types/react": "^17.0.0",
"hast-to-hyperscript": "^10.0.0",
"hast-util-whitespace": "^2.0.0",
"unified": "^10.0.0"
},
"dependencies": {
@ -21590,6 +21848,11 @@
"web-namespaces": "^2.0.0"
}
},
"hast-util-whitespace": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/hast-util-whitespace/-/hast-util-whitespace-2.0.0.tgz",
"integrity": "sha512-Pkw+xBHuV6xFeJprJe2BBEoDV+AvQySaz3pPDRUs5PNZEMQjpXJJueqrpcHIXxnWTcAGi/UOCgVShlkY6kLoqg=="
},
"is-buffer": {
"version": "2.0.5",
"resolved": "https://registry.npmjs.org/is-buffer/-/is-buffer-2.0.5.tgz",
@ -21643,9 +21906,9 @@
}
},
"vfile": {
"version": "5.0.2",
"resolved": "https://registry.npmjs.org/vfile/-/vfile-5.0.2.tgz",
"integrity": "sha512-5cV+K7tX83MT3bievROc+7AvHv0GXDB0zqbrTjbOe+HRbkzvY4EP+wS3IR77kUBCoWFMdG9py18t0sesPtQ1Rw==",
"version": "5.1.0",
"resolved": "https://registry.npmjs.org/vfile/-/vfile-5.1.0.tgz",
"integrity": "sha512-4o7/DJjEaFPYSh0ckv5kcYkJTHQgCKdL8ozMM1jLAxO9ox95IzveDPXCZp08HamdWq8JXTkClDvfAKaeLQeKtg==",
"requires": {
"@types/unist": "^2.0.0",
"is-buffer": "^2.0.0",
@ -22918,9 +23181,9 @@
}
},
"slugify": {
"version": "1.5.3",
"resolved": "https://registry.npmjs.org/slugify/-/slugify-1.5.3.tgz",
"integrity": "sha512-/HkjRdwPY3yHJReXu38NiusZw2+LLE2SrhkWJtmlPDB1fqFSvioYj62NkPcrKiNCgRLeGcGK7QBvr1iQwybeXw=="
"version": "1.6.0",
"resolved": "https://registry.npmjs.org/slugify/-/slugify-1.6.0.tgz",
"integrity": "sha512-FkMq+MQc5hzYgM86nLuHI98Acwi3p4wX+a5BO9Hhw4JdK4L7WueIiZ4tXEobImPqBz2sVcV0+Mu3GRB30IGang=="
},
"smoothscroll-polyfill": {
"version": "0.4.4",
@ -23709,6 +23972,11 @@
"resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-3.1.0.tgz",
"integrity": "sha512-e6/d0eBu7gHtdCqFt0xJr642LdToM5/cN4Qb9DbHjVx1CP5RyeM+zH7pbecEmDv/lBqb0QH+6Uqq75rxFPkM0w=="
},
"strip-markdown": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/strip-markdown/-/strip-markdown-3.1.2.tgz",
"integrity": "sha512-NjwW6CEefesmHQPs7lof/lgnSriqUnRNOWpnrNPq9A7/yOCdnhaB7DcxlhYuN7WiiRUe349aitAsTQ/ajM9Dmw=="
},
"strip-outer": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/strip-outer/-/strip-outer-1.0.1.tgz",
@ -23865,9 +24133,9 @@
}
},
"swagger-client": {
"version": "3.15.0",
"resolved": "https://registry.npmjs.org/swagger-client/-/swagger-client-3.15.0.tgz",
"integrity": "sha512-8Ki4bVbT+bl8hGmy3vR89wktZhVLXKNUk8CoUvfwn7Rq45bZ15c9UQXVr4VU6hP3f9diWJGJs0bKZR9pPOq4ZA==",
"version": "3.16.1",
"resolved": "https://registry.npmjs.org/swagger-client/-/swagger-client-3.16.1.tgz",
"integrity": "sha512-BcNRQzXHRGuXfhN0f80ptlr+bSaPvXwo8+gWbpmTnbKdAjcWOKAWwUx7rgGHjTKZh0qROr/GX9xOZIY8LrBuTg==",
"requires": {
"@babel/runtime-corejs3": "^7.11.2",
"btoa": "^1.2.1",
@ -23876,10 +24144,10 @@
"cross-fetch": "^3.1.4",
"deep-extend": "~0.6.0",
"fast-json-patch": "^3.0.0-1",
"form-data-encoder": "^1.0.1",
"formdata-node": "^3.6.2",
"form-data-encoder": "^1.4.3",
"formdata-node": "^4.0.0",
"js-yaml": "^4.1.0",
"lodash": "^4.17.19",
"lodash": "^4.17.21",
"qs": "^6.9.4",
"querystring-browser": "^1.0.4",
"traverse": "~0.6.6",
@ -23887,9 +24155,9 @@
},
"dependencies": {
"@babel/runtime-corejs3": {
"version": "7.15.3",
"resolved": "https://registry.npmjs.org/@babel/runtime-corejs3/-/runtime-corejs3-7.15.3.tgz",
"integrity": "sha512-30A3lP+sRL6ml8uhoJSs+8jwpKzbw8CqBvDc1laeptxPm5FahumJxirigcbD2qTs71Sonvj1cyZB0OKGAmxQ+A==",
"version": "7.15.4",
"resolved": "https://registry.npmjs.org/@babel/runtime-corejs3/-/runtime-corejs3-7.15.4.tgz",
"integrity": "sha512-lWcAqKeB624/twtTc3w6w/2o9RqJPaNBhPGK6DKLSiwuVWC7WFkypWyNg+CpZoyJH0jVzv1uMtXZ/5/lQOLtCg==",
"requires": {
"core-js-pure": "^3.16.0",
"regenerator-runtime": "^0.13.4"
@ -23915,9 +24183,9 @@
"integrity": "sha512-ZwrFkGJxUR3EIoXtO+yVE69Eb7KlixbaeAWfBQB9vVsNn/o+Yw69gBWSSDK825hQNdN+wF8zELf3dFNl/kxkUA=="
},
"core-js-pure": {
"version": "3.16.1",
"resolved": "https://registry.npmjs.org/core-js-pure/-/core-js-pure-3.16.1.tgz",
"integrity": "sha512-TyofCdMzx0KMhi84mVRS8rL1XsRk2SPUNz2azmth53iRN0/08Uim9fdhQTaZTG1LqaXHYVci4RDHka6WrXfnvg=="
"version": "3.17.3",
"resolved": "https://registry.npmjs.org/core-js-pure/-/core-js-pure-3.17.3.tgz",
"integrity": "sha512-YusrqwiOTTn8058JDa0cv9unbXdIiIgcgI9gXso0ey4WgkFLd3lYlV9rp9n7nDCsYxXsMDTjA4m1h3T348mdlQ=="
},
"cross-fetch": {
"version": "3.1.4",
@ -24226,6 +24494,11 @@
"resolved": "https://registry.npmjs.org/timsort/-/timsort-0.3.0.tgz",
"integrity": "sha1-QFQRqOfmM5/mTbmiNN4R3DHgK9Q="
},
"tiny-warning": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/tiny-warning/-/tiny-warning-1.0.3.tgz",
"integrity": "sha512-lBN9zLN/oAf68o3zNXYrdCt1kP8WsiGW8Oo2ka41b2IM5JL/S1CTyX1rW0mb/zSuJun0ZUrDxx4sqvYS2FWzPA=="
},
"tinycolor2": {
"version": "1.4.2",
"resolved": "https://registry.npmjs.org/tinycolor2/-/tinycolor2-1.4.2.tgz",
@ -25445,6 +25718,11 @@
"resolved": "https://registry.npmjs.org/web-namespaces/-/web-namespaces-1.1.4.tgz",
"integrity": "sha512-wYxSGajtmoP4WxfejAPIr4l0fVh+jeMXZb08wNc0tMg6xsfZXj3cECqIK0G7ZAqUq0PP8WlMDtaOGVBTAWztNw=="
},
"web-streams-polyfill": {
"version": "4.0.0-beta.1",
"resolved": "https://registry.npmjs.org/web-streams-polyfill/-/web-streams-polyfill-4.0.0-beta.1.tgz",
"integrity": "sha512-3ux37gEX670UUphBF9AMCq8XM6iQ8Ac6A+DSRRjDoRBm1ufCkaCDdNVbaqq60PsEkdNlLKrGtv/YBP4EJXqNtQ=="
},
"webpack": {
"version": "4.46.0",
"resolved": "https://registry.npmjs.org/webpack/-/webpack-4.46.0.tgz",

View File

@ -16,8 +16,11 @@
"test": "npm run lint"
},
"dependencies": {
"@material-ui/core": "^4.12.3",
"@material-ui/icons": "^4.11.2",
"@material-ui/lab": "^4.0.0-alpha.60",
"@oceanprotocol/art": "^3.2.0",
"axios": "^0.21.1",
"axios": "^0.21.4",
"classnames": "^2.3.1",
"gatsby": "^2.32.13",
"gatsby-image": "^3.11.0",
@ -43,23 +46,25 @@
"gatsby-source-git": "^1.1.0",
"gatsby-source-graphql": "^2.14.0",
"gatsby-transformer-remark": "^2.16.1",
"gatsby-transformer-remark-plaintext": "^1.0.3",
"gatsby-transformer-sharp": "^2.12.1",
"gatsby-transformer-xml": "^2.10.0",
"gatsby-transformer-yaml": "^2.11.0",
"giphy-js-sdk-core": "^1.0.6",
"intersection-observer": "^0.12.0",
"js-search": "^2.0.0",
"react": "^17.0.2",
"react-dom": "^17.0.2",
"react-helmet": "^6.1.0",
"react-scrollspy": "^3.4.3",
"rehype-react": "^7.0.0",
"rehype-react": "^7.0.2",
"remark": "^13.0.0",
"remark-github-plugin": "^1.4.0",
"remark-react": "^8.0.0",
"shortid": "^2.2.16",
"slugify": "^1.5.3",
"slugify": "^1.6.0",
"smoothscroll-polyfill": "^0.4.4",
"swagger-client": "^3.15.0"
"swagger-client": "^3.16.1"
},
"devDependencies": {
"@svgr/webpack": "^5.5.0",
@ -67,13 +72,13 @@
"eslint": "^7.32.0",
"eslint-config-oceanprotocol": "^1.5.0",
"eslint-config-prettier": "^8.3.0",
"eslint-plugin-prettier": "^3.4.0",
"eslint-plugin-prettier": "^4.0.0",
"git-format-staged": "^2.1.2",
"husky": "^7.0.1",
"husky": "^7.0.2",
"markdownlint-cli": "^0.28.1",
"node-sass": "^5.0.0",
"npm-run-all": "^4.1.5",
"prettier": "^2.3.2"
"prettier": "^2.4.1"
},
"repository": {
"type": "git",

View File

@ -15,27 +15,27 @@ export default function Deployments({ data, location }) {
const networks = {
'Ethereum Mainnet': {
aquarius: 'https://aquarius.mainnet.oceanprotocol.com',
aquarius: 'https://aquarius.oceanprotocol.com',
provider: 'https://provider.mainnet.oceanprotocol.com'
},
'Polygon Mainnet': {
aquarius: 'https://aquarius.polygon.oceanprotocol.com',
aquarius: 'https://aquarius.oceanprotocol.com',
provider: 'https://provider.polygon.oceanprotocol.com'
},
'Binance Smart Chain': {
aquarius: 'https://aquarius.bsc.oceanprotocol.com',
aquarius: 'https://aquarius.oceanprotocol.com',
provider: 'https://provider.bsc.oceanprotocol.com'
},
Ropsten: {
aquarius: 'https://aquarius.ropsten.oceanprotocol.com',
aquarius: 'https://aquarius.oceanprotocol.com',
provider: 'https://provider.ropsten.oceanprotocol.com'
},
Rinkeby: {
aquarius: 'https://aquarius.rinkeby.oceanprotocol.com',
aquarius: 'https://aquarius.oceanprotocol.com',
provider: 'https://provider.rinkeby.oceanprotocol.com'
},
Mumbai: {
aquarius: 'https://aquarius.mumbai.oceanprotocol.com',
aquarius: 'https://aquarius.oceanprotocol.com',
provider: 'https://provider.mumbai.oceanprotocol.com'
}
}
@ -46,18 +46,7 @@ export default function Deployments({ data, location }) {
setLoading(false)
}, [])
const getAquariusVersion = async (url) => {
if (!url) return
try {
const data = await fetch(url)
const { version } = await data.json()
return version
} catch {
return '-'
}
}
const getProviderVersion = async (url) => {
const getVersion = async (url) => {
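// Aquarius and Provider both answer with a JSON body containing a top-level
// "version" field, so a single helper replaces the two per-service functions.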
if (!url) return
try {
const data = await fetch(url)
@ -72,8 +61,8 @@ export default function Deployments({ data, location }) {
const objs = []
for (const key of Object.keys(networks)) {
const aquariusVersion = await getAquariusVersion(networks[key].aquarius)
const providerVersion = await getProviderVersion(networks[key].provider)
const aquariusVersion = await getVersion(networks[key].aquarius)
const providerVersion = await getVersion(networks[key].provider)
objs.push(
<tr key={key}>
<td>{key}</td>
@ -85,7 +74,6 @@ export default function Deployments({ data, location }) {
return (
<div>
{' '}
<table>
<thead>
<tr>

View File

@ -2,6 +2,7 @@ import React from 'react'
import { Link, StaticQuery, graphql } from 'gatsby'
import { ReactComponent as Logo } from '@oceanprotocol/art/logo/logo.svg'
import styles from './Header.module.scss'
import SearchButton from './Search/SearchButton'
const query = graphql`
query {
@ -37,7 +38,6 @@ const Header = () => (
<Logo className={styles.headerLogoImage} />
<h1 className={styles.headerTitle}>{siteTitle}</h1>
</Link>
<nav className={styles.headerMenu}>
{sections.map(({ node }) => (
<Link
@ -48,6 +48,7 @@ const Header = () => (
{node.title}
</Link>
))}
<SearchButton />
</nav>
</div>
</header>

View File

@ -3,6 +3,7 @@ import { StaticQuery, graphql } from 'gatsby'
import { ReactComponent as Logo } from '@oceanprotocol/art/logo/logo.svg'
import Content from '../components/Content'
import styles from './HeaderHome.module.scss'
import SearchButton from '../components/Search/SearchButton'
const HeaderHome = () => (
<StaticQuery
@ -24,7 +25,14 @@ const HeaderHome = () => (
<Content>
<Logo className={styles.headerLogo} />
<h1 className={styles.headerTitle}>{siteTitle}</h1>
<p className={styles.headerDescription}>{siteDescription}</p>
<div className={styles.headerDescription}>
<div style={{ display: 'flex', flexDirection: 'column' }}>
{siteDescription}
<div>
<SearchButton />
</div>
</div>
</div>
</Content>
</header>
)

View File

@ -26,11 +26,12 @@ const queryGithub = graphql`
totalCount
}
releases(
first: 1
first: 2
orderBy: { field: CREATED_AT, direction: DESC }
) {
edges {
node {
isDraft
tag {
name
}
@ -80,12 +81,20 @@ const Repository = ({ name, links, readme }) => (
})
.filter((n) => n)
const repo = repoFilteredArray[0]
let repo = repoFilteredArray[0]
// safeguard against more empty items,
// e.g. when private repos are referenced in repositories.yml
if (repo === undefined) return null
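// The query fetches two releases and keeps the first non-draft one,
// so a draft release cannot mask the latest published tag.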
const releasesFilteredArray = repo.releases.edges
.filter(({ node }) => {
return !node.isDraft
})
.slice(0, 1)
repo = {
...repo,
releases: { edges: releasesFilteredArray }
}
const {
url,
description,
@ -110,7 +119,6 @@ const Repository = ({ name, links, readme }) => (
})
const moreLinks = links || linksFilteredArray.filter((n) => n)[0]
return (
<article className={styles.repository}>
<Title

View File

@ -0,0 +1,13 @@
import React from 'react'
import { navigate } from 'gatsby'
import { IconButton } from '@material-ui/core'
import SearchIcon from '@material-ui/icons/Search'
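// Icon-only button that navigates to the dedicated /search page.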
const SearchButton = () => {
return (
<IconButton onClick={() => navigate('/search')}>
<SearchIcon />
</IconButton>
)
}
export default SearchButton

View File

@ -0,0 +1,150 @@
import React, { useState, useEffect } from 'react'
import * as JsSearch from 'js-search'
import PropTypes from 'prop-types'
import { makeStyles } from '@material-ui/core/styles'
import List from '@material-ui/core/List'
import ListItem from '@material-ui/core/ListItem'
import TextField from '@material-ui/core/TextField'
import InputAdornment from '@material-ui/core/InputAdornment'
import SearchIcon from '@material-ui/icons/Search'
import SearchResultElement from './SearchResultElement'
const useStyles = makeStyles(() => ({
parent: {
overflow: 'hidden',
position: 'relative',
width: '100%'
},
child: {
background: 'green',
height: '100%',
width: '50%',
position: 'absolute',
right: 0,
top: 0
},
root: {
margin: 'auto',
width: '50%'
}
}))
const SearchClient = ({ searchableData }) => {
const [searchState, setSearchState] = useState({
isLoading: true,
searchResults: [],
search: null,
isError: false,
termFrequency: true,
removeStopWords: false,
searchQuery: '',
selectedStrategy: '',
selectedSanitizer: '',
touched: false
})
const classes = useStyles()
useEffect(() => {
rebuildIndex(searchableData)
}, [])
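// Build the js-search index over title, description and plain text; runs once on mount.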
const rebuildIndex = (searchableData) => {
// Index options held in searchState (removeStopWords, selectedStrategy,
// selectedSanitizer, termFrequency) are not wired into the index yet.
const dataToSearch = new JsSearch.Search('title')
dataToSearch.addIndex('title')
dataToSearch.addIndex('description')
dataToSearch.addIndex('text')
dataToSearch.addDocuments(searchableData)
setSearchState({
...searchState,
isLoading: false,
search: dataToSearch
})
}
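// Query the index on every keystroke and store both the query and its results.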
const searchData = (e) => {
const { search } = searchState
const queryResult = search.search(e.target.value)
setSearchState({
...searchState,
touched: true,
searchQuery: e.target.value,
searchResults: queryResult
})
}
const handleSubmit = (e) => {
e.preventDefault()
}
return (
<div style={{ height: '100%' }}>
<form onSubmit={handleSubmit}>
<TextField
variant="outlined"
placeholder="Search"
style={{
margin: '10px auto',
width: '100%'
}}
autoFocus
value={searchState.searchQuery}
onChange={searchData}
InputProps={{
startAdornment: (
<InputAdornment position="start">
<SearchIcon />
</InputAdornment>
)
}}
/>
</form>
<div
id="result-list-conatiner"
style={{ overflowY: 'auto', height: '100%' }}
className={classes.parent}
>
{searchState.touched ? (
<div>
<ResultList searchResults={searchState.searchResults} />
</div>
) : null}
</div>
</div>
)
}
SearchClient.propTypes = {
searchableData: PropTypes.array.isRequired
}
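// Plain list of result cards preceded by a running total.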
const ResultList = ({ searchResults }) => {
return (
<div style={{ maxHeight: '100%' }}>
<div>Total results found: {searchResults.length}</div>
<div>
<List style={{ maxHeight: '100%' }}>
{searchResults.map((element) => (
<ListItem key={element.id}>
<SearchResultElement element={element} />
</ListItem>
))}
</List>
</div>
</div>
)
}
ResultList.propTypes = {
searchResults: PropTypes.array.isRequired
}
export default SearchClient

View File

@ -0,0 +1,89 @@
import React from 'react'
import { useStaticQuery, graphql } from 'gatsby'
import SearchClient from './SearchClient'
import Layout from '../../components/Layout'
import HeaderSection from '../../components/HeaderSection'
import PropTypes from 'prop-types'
const SearchComponent = ({ location }) => {
const data = useStaticQuery(graphql`
query {
allMarkdownRemark(
filter: { fileAbsolutePath: { regex: "/content/|/markdowns/" } }
) {
edges {
node {
fields {
slug
section
}
frontmatter {
title
description
app
slug
module
}
id
plainText
}
}
}
}
`)
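// Flatten each Markdown node into the minimal shape js-search indexes;
// section labels come from the slug or, for Python module docs, from frontmatter.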
const searchableData = data.allMarkdownRemark.edges.map(({ node }) => {
let { slug } = node.fields
let section = null
if (node.fields.slug.startsWith('/tutorials')) {
section = 'Tutorials'
} else if (node.fields.slug.startsWith('/concepts')) {
section = 'Core concepts'
} else if (node.frontmatter.module) {
// This is for adding py module docs to index
slug = `/references/read-the-docs/${node.frontmatter.app.replace(
'.',
'-'
)}/${node.frontmatter.slug}`
section = `API References [${node.frontmatter.app}]`
}
return {
title: node.frontmatter.title,
description: node.frontmatter.description,
id: node.id,
text: node.plainText,
slug,
section
}
})
return (
<Layout location={location}>
<HeaderSection title="Search" />
<main>
<article style={{ height: '700px' }}>
<div
id="search-client-container"
style={{
margin: 'auto',
width: '50%',
height: '100%',
paddingBottom: '50px'
}}
>
<SearchClient searchableData={searchableData} />
</div>
</article>
</main>
</Layout>
)
}
SearchComponent.propTypes = {
location: PropTypes.object.isRequired
}
export default SearchComponent

View File

@ -0,0 +1,10 @@
@import 'variables';
.searchform input[type='text'] {
float: right;
padding: 6px;
border: none;
margin-top: 8px;
margin-right: 16px;
font-size: 17px;
}

View File

@ -0,0 +1,54 @@
import React from 'react'
import { Link } from 'gatsby'
import PropTypes from 'prop-types'
import Card from '@material-ui/core/Card'
import CardContent from '@material-ui/core/CardContent'
import Typography from '@material-ui/core/Typography'
import { makeStyles } from '@material-ui/core/styles'
const useStyles = makeStyles({
root: {
minWidth: 275
},
bullet: {
display: 'inline-block',
margin: '0 2px',
transform: 'scale(0.8)'
},
title: {
fontSize: 14
},
pos: {
marginBottom: 12
}
})
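// Render a single search hit as a card linking through to the matched page.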
const SearchResultElement = ({ element }) => {
const classes = useStyles()
const { slug, title, section, description } = element
return (
<Card style={{ width: '100%' }}>
<CardContent>
<Typography
className={classes.title}
color="textSecondary"
gutterBottom
>
{section}
</Typography>
<Typography variant="h6" component="h2">
<Link to={slug}>{title}</Link>
</Typography>
<Typography className={classes.pos} color="textSecondary">
{description ? description.substring(0, 100) + '...' : null}
</Typography>
</CardContent>
</Card>
)
}
SearchResultElement.propTypes = {
element: PropTypes.object.isRequired
}
export default SearchResultElement

View File

@ -66,7 +66,6 @@ const IndexPage = ({ data, location }) => (
</li>
))}
</ul>
<Repositories />
</Content>
</Layout>

View File

@ -112,3 +112,8 @@
transition: transform 0.2s ease-out;
}
}
.searchButton {
margin-top: 20px;
text-align: center;
}

View File

@ -8,7 +8,6 @@ import stylesSidebar from '../../components/Sidebar.module.scss'
const Toc = ({ data }) => {
const Ids = []
const itemsV1 = Object.keys(data.paths)
.filter((key) => key.startsWith('/api/v1/aquarius'))
.map((key) => {
@ -36,20 +35,30 @@ const Toc = ({ data }) => {
</li>
)
})
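// Aquarius groups its endpoints under /api/v1/aquarius; other specs
// (e.g. Provider) are rendered as a flat list of endpoints.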
const getRestEndpoints = () => {
if (data.info.title === 'Aquarius') {
return (
<>
<code>/api/v1/aquarius</code>
<ul>{itemsV1}</ul>
{itemsOther.length ? (
<>
<code>Other REST endpoints</code>
<ul>{itemsOther}</ul>
</>
) : null}
</>
)
} else return <>{itemsOther}</>
}
return (
<Scrollspy
items={Ids}
currentClassName={stylesSidebar.scrollspyActive}
offset={-100}
>
<code>/api/v1/aquarius</code>
<ul>{itemsV1}</ul>
{itemsOther.length ? (
<>
<code>Other REST endpoints</code>
<ul>{itemsOther}</ul>
</>
) : null}
{getRestEndpoints()}
</Scrollspy>
)
}