GitBook: [#2] No subject

This commit is contained in:
Akshay Patel 2022-07-03 10:47:26 +00:00 committed by gitbook-bot
parent feca89db1f
commit cc6d326bee
No known key found for this signature in database
GPG Key ID: 07D2180C7B12D0FF
23 changed files with 515 additions and 568 deletions

README (1).md Normal file

@ -0,0 +1,119 @@
# README
[![banner](https://raw.githubusercontent.com/oceanprotocol/art/master/github/repo-banner%402x.png)](https://docs.oceanprotocol.com)
## docs
> 🐬 Ocean Protocol documentation. https://docs.oceanprotocol.com
[![Build Status](https://github.com/oceanprotocol/docs/workflows/CI/badge.svg)](https://github.com/oceanprotocol/docs/actions) [![Netlify Status](https://api.netlify.com/api/v1/badges/218e617e-45da-47ab-8f2a-bcfedf80550f/deploy-status)](https://app.netlify.com/sites/docs-oceanprotocol/deploys) [![Maintainability](https://api.codeclimate.com/v1/badges/d39837421591f0bc2550/maintainability)](https://codeclimate.com/github/oceanprotocol/docs/maintainability) [![js oceanprotocol](https://img.shields.io/badge/js-oceanprotocol-7b1173.svg)](https://github.com/oceanprotocol/eslint-config-oceanprotocol) [![css bigchaindb](https://img.shields.io/badge/css-bigchaindb-39BA91.svg)](https://github.com/bigchaindb/stylelint-config-bigchaindb)
***
**These docs are meant to be viewed on** [**docs.oceanprotocol.com**](https://docs.oceanprotocol.com)**. You can still browse them here but links or images might not work in some places.**
**If you want to contribute to these docs, then keep reading.**
***
* [Content](<README (1).md#content>)
* [Development](<README (1).md#development>)
* [Linting & Formatting](<README (1).md#linting--formatting>)
* [Editor Setup: VS Code](<README (1).md#editor-setup-vs-code>)
* [⬆️ Deployment](<README (1).md#-deployment>)
* [License](<README (1).md#license>)
### Content
To write or update content, refer to the documentation of the documentation:
* [**Documentation: Content →**](broken-reference)
* [**Documentation: API References →**](broken-reference)
* [**Documentation: GitHub Data Fetching →**](broken-reference)
* [**Documentation: Repository Component →**](broken-reference)
### Development
The site is a React app built with [Gatsby](https://www.gatsbyjs.org), pulling its content from local and external Markdown files, and from various APIs.
To start, clone this repo and set your `GITHUB_TOKEN` (see [GitHub GraphQL API](broken-reference)):
```bash
git clone git@github.com:oceanprotocol/docs.git
cd docs/
# add GITHUB_TOKEN
cp .env.sample .env
vi .env
```
Then install dependencies and start up the development server:
```bash
# use Node.js/npm version defined in .nvmrc
nvm use
npm i
npm start
```
Alternatively, you can use [Docker Compose](https://docs.docker.com/compose/) to do the same, but without using your local system:
```bash
docker-compose up
```
Either one of these commands will expose a hot-reloading server under:
* [localhost:8000](http://localhost:8000)
* [localhost:8000/\_\_\_graphql](http://localhost:8000/\_\_\_graphql)
### Linting & Formatting
To enforce a consistent code style, linting is set up for pretty much every file. Linting is part of the test suite, meaning builds on Travis will fail in case of linting errors.
In this repo, the following tools are set up for that:
* ESLint with [eslint-config-oceanprotocol](https://github.com/oceanprotocol/eslint-config-oceanprotocol)
* [markdownlint](https://github.com/DavidAnson/markdownlint)
* [Prettier](https://prettier.io)
```bash
# only run linting checks
npm run lint
# auto-formatting of all js, css, md, yml files
npm run format
```
#### Editor Setup: VS Code
If you use VS Code as your editor, you can install these extensions to get linting as you type and auto-formatting as you save:
* [ESLint](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint)
* [Prettier](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode)
* [markdownlint](https://marketplace.visualstudio.com/items?itemName=DavidAnson.vscode-markdownlint)
### ⬆️ Deployment
Every branch or Pull Request is automatically deployed by [Netlify](https://netlify.com) with their GitHub integration. A link to a preview deployment will appear under each Pull Request.
The latest deployment of the `main` branch is automatically aliased to `docs.oceanprotocol.com`.
### License
```
Copyright ((C)) 2022 Ocean Protocol Foundation Ltd.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

README.md

@ -1,121 +1,2 @@
[![banner](https://raw.githubusercontent.com/oceanprotocol/art/master/github/repo-banner%402x.png)](https://docs.oceanprotocol.com)
# Orientation
<h1 align="center">docs</h1>
> 🐬 Ocean Protocol documentation. https://docs.oceanprotocol.com
[![Build Status](https://github.com/oceanprotocol/docs/workflows/CI/badge.svg)](https://github.com/oceanprotocol/docs/actions)
[![Netlify Status](https://api.netlify.com/api/v1/badges/218e617e-45da-47ab-8f2a-bcfedf80550f/deploy-status)](https://app.netlify.com/sites/docs-oceanprotocol/deploys)
[![Maintainability](https://api.codeclimate.com/v1/badges/d39837421591f0bc2550/maintainability)](https://codeclimate.com/github/oceanprotocol/docs/maintainability)
[![js oceanprotocol](https://img.shields.io/badge/js-oceanprotocol-7b1173.svg)](https://github.com/oceanprotocol/eslint-config-oceanprotocol)
[![css bigchaindb](https://img.shields.io/badge/css-bigchaindb-39BA91.svg)](https://github.com/bigchaindb/stylelint-config-bigchaindb)
---
**These docs are meant to be viewed on [docs.oceanprotocol.com](https://docs.oceanprotocol.com). You can still browse them here but links or images might not work in some places.**
**If you want to contribute to these docs, then keep reading.**
---
- [Content](#content)
- [Development](#development)
- [Linting & Formatting](#linting--formatting)
- [Editor Setup: VS Code](#editor-setup-vs-code)
- [⬆️ Deployment](#-deployment)
- [License](#license)
## Content
To write or update content, refer to the documentation of the documentation:
- [**Documentation: Content →**](docs/content.md)
- [**Documentation: API References →**](docs/apis.md)
- [**Documentation: GitHub Data Fetching →**](docs/github.md)
- [**Documentation: Repository Component →**](docs/repositories.md)
## Development
The site is a React app built with [Gatsby](https://www.gatsbyjs.org), pulling its content from local and external Markdown files, and from various APIs.
To start, clone this repo and set your `GITHUB_TOKEN` (see [GitHub GraphQL API](docs/github.md#GitHub-GraphQL-API)):
```bash
git clone git@github.com:oceanprotocol/docs.git
cd docs/
# add GITHUB_TOKEN
cp .env.sample .env
vi .env
```
Then install dependencies and start up the development server:
```bash
# use Node.js/npm version defined in .nvmrc
nvm use
npm i
npm start
```
Alternatively, you can use [Docker Compose](https://docs.docker.com/compose/) to do the same, but without using your local system:
```bash
docker-compose up
```
Either one of these commands will expose a hot-reloading server under:
- [localhost:8000](http://localhost:8000)
- [localhost:8000/\_\_\_graphql](http://localhost:8000/___graphql)
## Linting & Formatting
To enforce a consistent code style, linting is set up for pretty much every file. Linting is part of the test suite, meaning builds on Travis will fail in case of linting errors.
In this repo, the following tools are set up for that:
- ESLint with [eslint-config-oceanprotocol](https://github.com/oceanprotocol/eslint-config-oceanprotocol)
- [markdownlint](https://github.com/DavidAnson/markdownlint)
- [Prettier](https://prettier.io)
```bash
# only run linting checks
npm run lint
# auto-formatting of all js, css, md, yml files
npm run format
```
### Editor Setup: VS Code
If you use VS Code as your editor, you can install these extensions to get linting as you type and auto-formatting as you save:
- [ESLint](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint)
- [Prettier](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode)
- [markdownlint](https://marketplace.visualstudio.com/items?itemName=DavidAnson.vscode-markdownlint)
## ⬆️ Deployment
Every branch or Pull Request is automatically deployed by [Netlify](https://netlify.com) with their GitHub integration. A link to a preview deployment will appear under each Pull Request.
The latest deployment of the `main` branch is automatically aliased to `docs.oceanprotocol.com`.
## License
```text
Copyright ((C)) 2022 Ocean Protocol Foundation Ltd.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

SUMMARY.md Normal file

@ -0,0 +1,41 @@
# Table of contents
* [Orientation](README.md)
* [README](<README (1).md>)
* [Building with ocean](building-with-ocean/README.md)
* [Publish assets using hosting services](building-with-ocean/asset-hosting.md)
* [Binance Smart Chain (BSC)](building-with-ocean/bsc-bridge.md)
* [Writing Algorithms for Compute to Data](building-with-ocean/compute-to-data-algorithms.md)
* [Compute-to-Data](building-with-ocean/compute-to-data-architecture.md)
* [Compute-to-Data](building-with-ocean/compute-to-data-datasets-algorithms.md)
* [Setting up private docker registry for Compute-to-Data environment](building-with-ocean/compute-to-data-docker-registry.md)
* [Minikube Compute-to-Data Environment](building-with-ocean/compute-to-data-minikube.md)
* [Overview of Tutorials](building-with-ocean/introduction.md)
* [Add liquidity to liquidity pools](building-with-ocean/marketplace-add-liquidity.md)
* [Download a data asset](building-with-ocean/marketplace-download-data-asset.md)
* [Ocean Market](building-with-ocean/marketplace-introduction.md)
* [Publish a data asset](building-with-ocean/marketplace-publish-data-asset.md)
* [Swap datatokens](building-with-ocean/marketplace-swap.md)
* [Set Up a Marketplace](building-with-ocean/marketplace.md)
* [Set Up MetaMask Wallet](building-with-ocean/metamask-setup.md)
* [Polygon (ex Matic)](building-with-ocean/polygon-bridge.md)
* [Use Your Wallet to Manage OCEAN Tokens](building-with-ocean/wallets-and-ocean-tokens.md)
* [Wallet Basics](building-with-ocean/wallets.md)
* [Core Concepts](core-concepts/README.md)
* [Architecture Overview](core-concepts/architecture.md)
* [Asset pricing](core-concepts/asset-pricing.md)
* [Contributor Code of Conduct](core-concepts/code-of-conduct.md)
* [Compute-to-Data](core-concepts/compute-to-data.md)
* [Ways to Contribute](core-concepts/contributing.md)
* [Data NFTs and Datatokens](core-concepts/datanft-and-datatoken.md)
* [DID & DDO](core-concepts/did-ddo.md)
* [Fees](core-concepts/fees.md)
* [Funding](core-concepts/get-funding.md)
* [Introduction](core-concepts/introduction.md)
* [Legal Requirements when Contributing Code](core-concepts/legal-reqs.md)
* [Supported Networks](core-concepts/networks.md)
* [Projects using Ocean Protocol](core-concepts/projects-using-ocean.md)
* [Quickstart](core-concepts/quickstart.md)
* [Data NFTs and datatoken roles](core-concepts/roles.md)
* [Reporting Vulnerabilities](core-concepts/vulnerabilities.md)
* [Using Ocean Marketplace](using-ocean-marketplace.md)


@ -0,0 +1,2 @@
# Building with ocean


@ -3,32 +3,33 @@ title: Publish assets using hosting services
description: Tutorial to publish assets using hosting services like Google Drive and Azure.
---
## Overview
# Publish assets using hosting services
### Overview
To publish assets on the Ocean Marketplace, publishers must provide a link (a URL) to the file. It is up to the asset publisher to decide where to host the asset. For example, a publisher can store the content on their Google Drive, an AWS server, a private cloud server, or another third-party hosting service. During publishing, the URL of the asset is encrypted and stored as part of the DDO on the blockchain. Buyers don't have direct access to the URL; instead, they interact with the Provider, which decrypts the URL and acts as a proxy to serve the asset. The DDO only stores the location of the file, which is accessed on demand by the Provider. It is recommended to implement a security policy that allows only the Provider to access the URL and blocks requests from other, unauthorized actors. One possible way to achieve this is to allow only the Provider's IP address to access the URL. However, not all hosting services provide this feature, so publishers must consider the security features when choosing a hosting service.
On Ocean Marketplace, a publisher must provide the link to the asset during the publish step. Once the asset is published, this link cannot be changed. So, it is essential that the publisher sets this field correctly (shown in the image below).
![Publish - File URL field](./images/marketplace/publish/marketplace-publish-file-field.png)
![Publish - File URL field](<images/marketplace/publish/marketplace-publish-file-field (1).png>)
## Hosting services
### Hosting services
Publishers can choose any hosting service of their choice. The below section explains how to use commonly used hosting services with Ocean Marketplace.
### Google Drive
#### Google Drive
Google Drive allows users to share files/folders with various access policies. Publishers must set the access policy such that anyone with the link can download the file when using Ocean Marketplace with Ocean Protocol's default [Provider](https://v4.provider.rinkeby.oceanprotocol.com).
#### Step 1 - Get link
**Step 1 - Get link**
Open https://drive.google.com and upload the file you want to publish on the Ocean Marketplace.
Right-click on the uploaded file and click the `Share` option. Set the file access policy correctly and click the `Copy link` button.
Open https://drive.google.com and upload the file you want to publish on the Ocean Marketplace. Right-click on the uploaded file and click the `Share` option. Set the file access policy correctly and click the `Copy link` button.
The file URL will be of the form `https://drive.google.com/file/d/<FILE-ID>/view?usp=sharing`, where `<FILE-ID>` is a unique alphanumeric string. Verify that the URL is correct by entering it in a browser and checking that the file downloads.
![Google Drive link](./images/marketplace/publish/publish-google-drive.png)
![Google Drive link](<images/marketplace/publish/publish-google-drive (1).png>)
#### Step 2 - Create a downloadable link
**Step 2 - Create a downloadable link**
If you paste the copied URL into the browser, it will load an HTML page. Directly pasting the link on the publish page will publish the HTML page instead of a downloadable file URL. So, let's make a downloadable file URL.
@ -36,91 +37,91 @@ Note the `<FILE-ID>` from step 1 and create a URL as below.
`https://drive.google.com/uc?export=download&id=<FILE-ID>`
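As a quick sketch, the transformation from the sharing link to the direct-download form can be scripted with plain shell parameter expansion; the file ID below is a placeholder, not a real Drive file:

```shell
# Extract the <FILE-ID> from a Google Drive sharing link and build the
# direct-download form of the URL. The ID here is a placeholder.
share_url="https://drive.google.com/file/d/1AbCdEfGhIjK/view?usp=sharing"

# Strip the fixed prefix, then the trailing /view?... suffix.
file_id="${share_url#https://drive.google.com/file/d/}"
file_id="${file_id%/view*}"

download_url="https://drive.google.com/uc?export=download&id=${file_id}"
echo "$download_url"
```

You can then paste the printed URL into a browser to confirm it triggers a download rather than loading a preview page.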
#### Step 3 - Publish the asset using the generated link
**Step 3 - Publish the asset using the generated link**
After creating a downloadable file URL, fill the `File*` field with the downloadable URL created in step 2.
![Publish - Google Drive file](./images/marketplace/publish/publish-google-drive-2.png)
![Publish - Google Drive file](<images/marketplace/publish/publish-google-drive-2 (1).png>)
_Note: Google Drive allows only shared files to be downloaded, as shown in the above steps. The above method does not work with the shared folder. As a workaround, publishers can upload a zip of a folder and upload it as a file._
---
***
### Azure storage
#### Azure storage
Azure provides various options to host data and multiple configuration possibilities. Publishers are required to do their research and decide what would be the right choice. The below steps provide one of the possible ways to host data using Azure storage and publish it on Ocean Marketplace.
#### Prerequisite
**Prerequisite**
Create an account on [Azure](https://azure.microsoft.com/en-us/). Users might also be asked to provide payment details and billing addresses, which are outside the scope of this tutorial.
#### Step 1 - Create a storage account
**Step 1 - Create a storage account**
##### Go to Azure portal
**Go to Azure portal**
Go to the Azure portal: https://portal.azure.com/#home and select `Storage accounts` as shown below.
![Create a storage account - 1](/images/marketplace/publish/azure-1.png)
![Create a storage account - 1](../images/marketplace/publish/azure-1.png)
##### Create a new storage account
**Create a new storage account**
![Create a storage account - 2](/images/marketplace/publish/azure-2.png)
![Create a storage account - 2](../images/marketplace/publish/azure-2.png)
##### Fill in the details
**Fill in the details**
![Add details](/images/marketplace/publish/azure-3.png)
![Add details](../images/marketplace/publish/azure-3.png)
##### Storage account created
**Storage account created**
![Storage account created](/images/marketplace/publish/azure-4.png)
![Storage account created](../images/marketplace/publish/azure-4.png)
#### Step 2 - Create a blob container
**Step 2 - Create a blob container**
![Create a blob container](/images/marketplace/publish/azure-5.png)
![Create a blob container](../images/marketplace/publish/azure-5.png)
#### Step 3 - Upload a file
**Step 3 - Upload a file**
![Upload a file](/images/marketplace/publish/azure-6.png)
![Upload a file](../images/marketplace/publish/azure-6.png)
#### Step 4 - Share the file
**Step 4 - Share the file**
##### Select the file to be published and click Generate SAS
**Select the file to be published and click Generate SAS**
![Click generate SAS](/images/marketplace/publish/azure-7.png)
![Click generate SAS](../images/marketplace/publish/azure-7.png)
##### Configure the SAS details and click `Generate SAS token and URL`
**Configure the SAS details and click `Generate SAS token and URL`**
![Generate link to file](/images/marketplace/publish/azure-8.png)
![Generate link to file](../images/marketplace/publish/azure-8.png)
##### Copy the generated link
**Copy the generated link**
![Copy the link](/images/marketplace/publish/azure-9.png)
![Copy the link](../images/marketplace/publish/azure-9.png)
#### Step 5 - Publish the asset using the generated link
**Step 5 - Publish the asset using the generated link**
Now, copy and paste the link into the Publish page on the Ocean Marketplace.
![Publish the file as an asset](/images/marketplace/publish/azure-10.png)
![Publish the file as an asset](../images/marketplace/publish/azure-10.png)
### OneDrive
#### OneDrive
Create an account on [Microsoft](https://www.microsoft.com/en-us/microsoft-365/onedrive/online-cloud-storage).
#### Step 1 - Upload a file
**Step 1 - Upload a file**
Go to [OneDrive](https://onedrive.live.com/) and upload the file to be published.
Go to [OneDrive](https://onedrive.live.com/) and upload the file to be published.
![Upload a file](/images/marketplace/publish/one-drive-1.png)
![Upload a file](../images/marketplace/publish/one-drive-1.png)
#### Step 2 - Get link
**Step 2 - Get link**
After the file is uploaded, right-click on the file, click `Embed`, and copy the link.
![Get an embeddable link](/images/marketplace/publish/one-drive-2.png)
![Get an embeddable link](../images/marketplace/publish/one-drive-2.png)
Copy the highlighted content as shown in the image below:
![Copy the iframe](/images/marketplace/publish/one-drive-3.png)
![Copy the iframe](../images/marketplace/publish/one-drive-3.png)
The copied content has the following format:
@ -132,20 +133,18 @@ The copied content has the following format:
</iframe>
```
#### Step 3 - Generate downloadable link
**Step 3 - Generate downloadable link**
Copy the content of the `src` field from the `iframe`. The link has the following format:
`https://onedrive.live.com/embed?cid=<CID>&resid=<RES_ID>%<NUMBER>&authkey=<AUTH_KEY>`
Copy the content of the `src` field from the `iframe`. The link has the following format: `https://onedrive.live.com/embed?cid=<CID>&resid=<RES_ID>%<NUMBER>&authkey=<AUTH_KEY>`
Replace the `https://onedrive.live.com/embed` with `https://onedrive.live.com/download` from the above URL.
The downloadable file URL has the following format:
`https://onedrive.live.com/download?cid=<CID>&resid=<RES_ID>%<NUMBER>&authkey=<AUTH_KEY>`
The downloadable file URL has the following format: `https://onedrive.live.com/download?cid=<CID>&resid=<RES_ID>%<NUMBER>&authkey=<AUTH_KEY>`
Enter the URL in the browser and verify that the file downloads correctly.
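The embed-to-download rewrite described above can also be scripted; the query values below are placeholders, not real OneDrive identifiers:

```shell
# Rewrite a OneDrive embed URL into its direct-download form by swapping
# the /embed path segment for /download. Query values are placeholders.
embed_url="https://onedrive.live.com/embed?cid=ABC123&resid=ABC123%21105&authkey=XYZ"

download_url="$(printf '%s' "$embed_url" | sed 's|/embed?|/download?|')"
echo "$download_url"
```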
#### Step 4 - Publish the asset using the generated link
**Step 4 - Publish the asset using the generated link**
Copy and paste the link into the Publish page on the Ocean Marketplace.
![Publish the file as an asset](/images/marketplace/publish/one-drive-4.png)
![Publish the file as an asset](../images/marketplace/publish/one-drive-4.png)


@ -1,30 +1,33 @@
---
title: Setting up private docker registry for Compute-to-Data environment
description: Learn how to set up your own docker registry and push images for running algorithms in a C2D environment.
description: >-
  Learn how to set up your own docker registry and push images for running
  algorithms in a C2D environment.
---
# Setting up private docker registry for Compute-to-Data environment
This document is intended for a production setup. The tutorial provides the steps to set up a private docker registry on the server for the following scenarios:
- Allow registry access only to the C2D environment.
- Anyone can pull images from the registry, but only authenticated users can push images to the registry.
## Setup 1: Allow registry access only to the C2D environment
* Allow registry access only to the C2D environment.
* Anyone can pull images from the registry, but only authenticated users can push images to the registry.
### Setup 1: Allow registry access only to the C2D environment
To implement this use case, one domain is required:
- **example.com**: This domain will allow only image pull operations
* **example.com**: This domain will allow only image pull operations
_Note: Please change the domain names to your application-specific domain names._
### 1.1 Prerequisites
#### 1.1 Prerequisites
- Running docker environment on the linux server.
- Docker compose is installed.
- C2D environment is running.
- The domain name is mapped to the server hosting the registry.
* Running docker environment on the linux server.
* Docker compose is installed.
* C2D environment is running.
* The domain name is mapped to the server hosting the registry.
### 1.2 Generate certificates
#### 1.2 Generate certificates
```bash
# install certbot: https://certbot.eff.org/
@ -33,7 +36,7 @@ sudo certbot certonly --standalone --cert-name example.com -d example.com
_Note: Do check the access rights of the files/directories where certificates are stored. Usually, they are at `/etc/letsencrypt/`._
### 1.3 Generate password file
#### 1.3 Generate password file
Replace content in `<>` with appropriate content.
@ -43,7 +46,7 @@ docker run \
httpd:2 -Bbn <username> <password> > <path>/auth/htpasswd
```
### 1.4 Docker compose template file for registry
#### 1.4 Docker compose template file for registry
Copy the below yml content to `docker-compose.yml` file and replace content in `<>`.
@ -79,11 +82,11 @@ services:
- registry
```
### 1.5 Nginx configuration
#### 1.5 Nginx configuration
Copy the below nginx configuration to a `nginx.conf` file.
```conf
```
events {}
http {
access_log /app/logs/access.log;
@ -109,10 +112,9 @@ http {
}
}
}
```
### 1.6 Create kubernetes secret in C2D server
#### 1.6 Create kubernetes secret in C2D server
Log in to the Compute-to-Data environment and run the following command with appropriate credentials:
@ -120,10 +122,9 @@ Login into Compute-to-data enviroment and run the following command with appropr
kubectl create secret docker-registry regcred --docker-server=example.com --docker-username=<username> --docker-password=<password> --docker-email=<email_id> -n ocean-compute
```
### 1.7 Update operator-engine configuration
#### 1.7 Update operator-engine configuration
Add the `PULL_SECRET` property with value `regcred` in the [operator.yml](https://github.com/oceanprotocol/operator-engine/blob/main/kubernetes/operator.yml) file of the operator-engine configuration.
For more details on operator-engine properties, refer to this [link](https://github.com/oceanprotocol/operator-engine/blob/177ca7185c34aa2a503afbe026abb19c62c69e6d/README.md?plain=1#L106).
Add the `PULL_SECRET` property with value `regcred` in the [operator.yml](https://github.com/oceanprotocol/operator-engine/blob/main/kubernetes/operator.yml) file of the operator-engine configuration. For more details on operator-engine properties, refer to this [link](https://github.com/oceanprotocol/operator-engine/blob/177ca7185c34aa2a503afbe026abb19c62c69e6d/README.md?plain=1#L106).
Apply the updated operator-engine configuration.
@ -132,22 +133,22 @@ kubectl config set-context --current --namespace ocean-compute
kubectl apply -f operator-engine/kubernetes/operator.yml
```
## Setup 2: Allow anonymous `pull` operations
### Setup 2: Allow anonymous `pull` operations
To implement this use case, two domains are required:
- **example.com**: This domain will allow image push/pull operations only to the authenticated users.
- **readonly.example.com**: This domain will allow only image pull operations
* **example.com**: This domain will allow image push/pull operations only to the authenticated users.
* **readonly.example.com**: This domain will allow only image pull operations
_Note: Please change the domain names to your application-specific domain names._
### 2.1 Prerequisites
#### 2.1 Prerequisites
- Running docker environment on the linux server.
- Docker compose is installed.
- Two domain names are mapped to the same server IP address.
* Running docker environment on the linux server.
* Docker compose is installed.
* Two domain names are mapped to the same server IP address.
### 2.2 Generate certificates
#### 2.2 Generate certificates
```bash
# install certbot: https://certbot.eff.org/
@ -157,7 +158,7 @@ sudo certbot certonly --standalone --cert-name readonly.example.com -d readonly.
_Note: Do check the access rights of the files/directories where certificates are stored. Usually, they are at `/etc/letsencrypt/`._
### 2.3 Generate password file
#### 2.3 Generate password file
Replace content in `<>` with appropriate content.
@ -167,10 +168,9 @@ docker run \
httpd:2 -Bbn <username> <password> > <path>/auth/htpasswd
```
### 2.4 Docker compose template file for registry
#### 2.4 Docker compose template file for registry
Copy the below yml content to `docker-compose.yml` file and replace content in `<>`.
Here, we will create two docker registry services so that anyone can `pull` images from the registry, but only authenticated users can `push` images.
Copy the below yml content to the `docker-compose.yml` file and replace content in `<>`. Here, we will create two docker registry services so that anyone can `pull` images from the registry, but only authenticated users can `push` images.
```yml
version: '3'
@ -217,11 +217,11 @@ services:
- registry-read-only
```
### 2.5 Nginx configuration
#### 2.5 Nginx configuration
Copy the below nginx configuration to a `nginx.conf` file.
```conf
```
events {}
http {
access_log /app/logs/access.log;
@ -260,24 +260,23 @@ http {
}
}
}
```
## Start the registry
### Start the registry
```bash
docker-compose -f docker-compose.yml up
```
## Working with registry
### Working with registry
### Login to registry
#### Login to registry
```bash
docker login example.com -u <username> -p <password>
```
### Build and push an image to the registry
#### Build and push an image to the registry
Use the commands below to build an image from a `Dockerfile` and push it to your private registry.
@ -286,13 +285,13 @@ docker build . -t example.com/my-algo:latest
docker image push example.com/my-algo:latest
```
### List images in the registry
#### List images in the registry
```bash
curl -X GET -u <username>:<password> https://example.com/v2/_catalog
```
### Pull an image from the registry
#### Pull an image from the registry
Use the commands below to pull an image from your private registry.
@ -303,16 +302,14 @@ docker image pull example.com/my-algo:latest
# allows anonymous pull if 2nd setup scenario is implemented
docker image pull readonly.example.com/my-algo:latest
```
### Next step
#### Next step
You can publish an algorithm asset with metadata containing the registry URL, image, and tag information to enable users to run C2D jobs.
### Further references
## Further references
- [Setup Compute-to-Data environment](/tutorials/compute-to-data-minikube/)
- [Writing algorithms](/tutorials/compute-to-data-algorithms/)
- [C2D example](/references/read-the-docs/ocean-py/READMEs/c2d-flow.md)
* [Setup Compute-to-Data environment](../tutorials/compute-to-data-minikube/)
* [Writing algorithms](../tutorials/compute-to-data-algorithms/)
* [C2D example](../references/read-the-docs/ocean-py/READMEs/c2d-flow.md)

Binary file not shown. (added, 87 KiB)

Binary file not shown. (added, 36 KiB)

Binary file not shown. (added, 38 KiB)


@ -3,85 +3,77 @@ title: Publish a data asset
description: Tutorial to publish assets using the Ocean Market
---
## What can be published?
# Publish a data asset
### What can be published?
Ocean Market provides a convenient interface for individuals and organizations to publish their data. Datasets can be images, location information, audio, video, sales data, or combinations of all of these! There is no exhaustive list of what types of data can be published on the Market. Please note that the Ocean Protocol team maintains a purgatory list [here](https://github.com/oceanprotocol/list-purgatory) to block addresses and remove assets for any violations.
## Tutorial
### Tutorial
### Connect wallet and navigate to the publish page
#### Connect wallet and navigate to the publish page
1. Go to <a href="https://v4.market.oceanprotocol.com " target="_blank">Ocean Market</a>
1. Go to [Ocean Market](https://v4.market.oceanprotocol.com)
2. Connect wallet.
2. Connect wallet.
<img src="images/marketplace/connect-wallet.png" alt="connect wallet" data-size="original">
![connect wallet](images/marketplace/connect-wallet.png 'Connect wallet')
In this tutorial, we will be using the Rinkeby test network.
3. Go to the publish page.
In this tutorial, we will be using the Rinkeby test network.
<img src="images/marketplace/publish.png" alt="publish page" data-size="original">
3. Go to the publish page.
![publish page](images/marketplace/publish.png 'Publish page')
### Step 1 - Metadata
#### Step 1 - Metadata
Fill in the metadata.
_Mandatory fields are marked with <span style="color: red;">\*</span>_
_Mandatory fields are marked with \*_
- **Asset type**<span style="color: red;">\*</span>
* **Asset type**\*
An asset can be a _dataset_ or an _algorithm_. The asset type cannot be changed after publication.
An asset can be a _dataset_ or an _algorithm_. The asset type cannot be changed after publication.
* **Title**\*

  The descriptive name of the asset. This field is editable after the asset publication.

* **Description**\*

  Description of the asset. Ocean Marketplace supports plain text and Markdown format for the description field. This field is editable after the asset publication.

* **Author**\*

  The author of the asset. The author can be an individual or an organization. This field is editable after the asset publication.

* **Tags**

  Tags help the asset to be discoverable. If not provided, the list of tags is empty by default.

![publish part-1](images/marketplace/publish-1.png 'Asset metadata')

#### Step 2 - Access details

_Mandatory fields are marked with \*_

* **Access Type**\*

  An asset can be a downloadable file or a compute service on which buyers can run their algorithm. Through **download**, buyers will be able to download the dataset. Through **compute**, buyers will be able to use the dataset in a compute-to-data environment.

* **Provider URL**\*

  The Provider facilitates the asset download for buyers, provisions compute jobs, and more.

* **File**\*

  The direct URL of the dataset to be published. The file needs to be publicly accessible to be downloadable by buyers. If the file is hosted on a service like Google Drive, the URL provided needs to point directly to the data asset file, and the file needs to have the proper permissions to be downloaded by anybody.

  **Provider** encrypts this field before publishing the asset on-chain.

* **Sample file**

  An optional field through which publishers provide a sample file of the dataset they want to publish. Buyers can access it before buying the dataset. This field is editable after the asset publication.

  **Provider** encrypts this field before publishing the asset on-chain.

* **Timeout**\*

  This field specifies how long the buyer can access the dataset after it is purchased. This field is editable after the asset publication.

![publish part-2](images/marketplace/publish-2.png 'Access details')
#### Step 3 - Pricing
The publisher needs to choose a pricing option for the asset before publishing the data asset. The pricing schema is not editable after the asset publication.
With the _free pricing_ schema, the publisher provides an asset that is free to access.
With the _dynamic pricing_ schema, the publisher sets the asset price and creates a datatoken liquidity pool with an initial amount of OCEAN tokens.
For more information on the pricing models, please refer to this [document](../concepts/asset-pricing/).
The publisher can also change the **Swap Fee** of the liquidity pool.
For a deep dive into the fee structure, please refer to this [document](../concepts/fees/).
![publish part-3](images/marketplace/publish-3.png 'Dynamic pricing')
#### Step 4 - Preview
![publish part-4](images/marketplace/publish-4.png 'Preview')
#### Step 5 - Blockchain transactions
![publish part-5](images/marketplace/publish-5.png 'Transaction 1 - Allow access to Ocean tokens')

![publish part-6](images/marketplace/publish-6.png 'Transaction 2 - Deploy data NFT and datatoken')

![publish part-7](images/marketplace/publish-7.png 'Transaction 3 - Publish DDO')

#### Confirmation
Now, the asset is successfully published and available in the Ocean Market.
![publish success](images/marketplace/publish-8.png 'Successful publish')
On the [profile page](https://v4.market.oceanprotocol.com/profile), the publisher has access to all their published assets.
### Other Articles
* [On selling data in Ocean Market](https://blog.oceanprotocol.com/on-selling-data-in-ocean-market-9afcfa1e6e43)

# Core Concepts

---
title: Architecture Overview
description: Data NFTs and datatokens architecture
---
# Architecture Overview
### Overview
Here is the Ocean architecture.
![Ocean Protocol tools architecture](<images/architecture (1).png>)
Here's an overview of the figure.
* The top layer is **applications** like Ocean Market. With these apps, users can onboard services like data, algorithms, and compute-to-data into crypto (publish and mint data NFTs and datatokens), hold datatokens as assets (data wallets), discover assets, buy/sell datatokens for a fixed or auto-determined price (data marketplaces), and use data services (spend datatokens).
* Below are **libraries** used by the applications: Ocean.js (JavaScript library) and Ocean.py (Python library). This also includes middleware to assist discovery:
  * **Aquarius**: Provides a metadata cache for faster search by caching on-chain data into Elasticsearch.
  * **Provider**: Facilitates downloading assets, DDO encryption, and communicating with `operator-service` for Compute-to-Data jobs.
  * **The Graph**: A third-party indexing tool that developers can use, together with the libraries, to build custom applications and marketplaces.
* The lowest level has the **smart contracts**. The smart contracts are deployed on the Ethereum mainnet and other compatible networks. Libraries encapsulate the calls to these smart contracts and provide features like publishing new assets, facilitating consumption, managing pricing, and much more. To see the supported networks, click [here](../concepts/networks/).
### Data NFTs, Datatokens and Access Control Tools
Data NFTs are based on the [ERC721](https://eips.ethereum.org/EIPS/eip-721) standard. The publisher can use the Marketplace or the client libraries to deploy a new data NFT contract. To save gas fees, it uses the [ERC1167](https://eips.ethereum.org/EIPS/eip-1167) proxy approach on the **ERC721 template**. The publisher can then assign the manager role to other Ethereum addresses, which can deploy new datatoken contracts and even mint them. Each datatoken contract is associated with one data NFT contract. Click [here](../concepts/datanft-and-datatoken/) to read further about data NFTs and datatokens.
ERC721 data NFTs represent holding copyright/base IP of a data asset, and ERC20 datatokens represent licenses to access the asset by downloading the content or running Compute-to-Data jobs.
**Ocean JavaScript and Python libraries** act as drivers for the lower-level contracts. Each library integrates with Ocean Provider to provision & access data services, and Ocean Aquarius for metadata.
### Market Tools
Once someone has generated datatokens, they can be used in any ERC20 exchange, centralized or decentralized. In addition, Ocean provides a convenient default marketplace that is tuned for data: **Ocean Market**. It's a vendor-neutral reference data marketplace for use by the Ocean community.
The marketplaces are decentralized (no single owner or controller) and non-custodial.
Ocean Market supports fixed pricing and automatic price discovery.
* For **fixed pricing**, there's a simple contract for users to buy/sell datatokens for OCEAN while avoiding custodianship during value transfer.
* For **automatic price discovery**, Ocean Market uses automated market makers (AMMs) powered by [Balancer](https://www.balancer.fi). Each pool is a datatoken-OCEAN pair. In the Ocean Market GUI, the user adds liquidity and then invokes pool creation; the GUI's React code calls the Ocean JavaScript library, which calls the **Pool Factory** to deploy a **Pool** contract. (The Python library also does this.) Deploying a datatoken pool can be viewed as an "Initial Data Offering" (IDO).
Complementary to Ocean Market, Ocean has reference code to ease building **third-party data marketplaces**, such as for logistics ([dexFreight data marketplace](https://blog.oceanprotocol.com/dexfreight-ocean-protocol-partner-to-enable-transportation-logistics-companies-to-monetize-data-7aa839195ac)) or mobility ([Daimler](https://blog.oceanprotocol.com/ocean-protocol-delivers-proof-of-concept-for-daimler-ag-in-collaboration-with-daimler-south-east-564aa7d959ca)).
[This post](https://blog.oceanprotocol.com/ocean-market-an-open-source-community-marketplace-for-data-4b99bedacdc3) elaborates on Ocean marketplace tools.
### Metadata Tools
Marketplaces use the Metadata of the asset for discovery. Metadata consists of information like the type of asset, name of the asset, creation date, license, etc. Each data asset can have a [decentralized identifier](https://w3c-ccg.github.io/did-spec/) (DID) that resolves to a DID document (DDO) for associated metadata. The DDO is essentially [JSON](https://www.json.org/) filling in metadata fields. For more details on working with OCEAN DIDs check out the [DID concept documentation](https://docs.oceanprotocol.com/concepts/did-ddo/).
The [DDO Metadata documentation](https://docs.oceanprotocol.com/concepts/ddo-metadata/) goes into more depth regarding metadata structure.
[OEP8](../concepts/did-ddo/) specifies the Ocean metadata schema, including fields that must be filled. It's based on the public [DataSet schema from schema.org](https://schema.org/Dataset).
Ocean uses the Ethereum mainnet and other compatible networks as an **on-chain metadata store**, i.e. to store both DID and DDO. This means that once the transaction fee is paid, there are no further expenses or devops work needed to ensure metadata availability into the future, aiding in the discoverability of data assets. It also simplifies integration with the rest of the Ocean system, which is Ethereum-based. Storage cost on Ethereum mainnet is not negligible, but not prohibitive and the other benefits are currently worth the trade-off compared to alternatives.
Due to the permissionless, decentralized nature of data on the Ethereum mainnet, any last mile tool can access metadata. **Ocean Aquarius** supports different metadata fields for each different Ocean-based marketplace. Developers could also use [The Graph](https://www.thegraph.com) to see metadata fields that are common across all marketplaces.
### Third-Party ERC20 Apps & Tools
The ERC20 nature of datatokens eases composability with other Ethereum tools and apps, including **MetaMask** and **Trezor** as data wallets, DEXes as data exchanges, and more. [This post](https://blog.oceanprotocol.com/ocean-datatokens-from-money-legos-to-data-legos-4f867cec1837) has details.
### Actor Identities
Actors like data providers and buyers have Ethereum addresses, aka web3 accounts. These are managed by crypto wallets, as one would expect. For most use cases, this is all that's needed. There are cases where the Ocean community could layer on protocols like [Verifiable Credentials](https://www.w3.org/TR/vc-data-model/) or tools like [3Box](https://3box.io/).

---
title: Asset pricing
description: Choose the revenue model during asset publishing
---
# Asset pricing
Ocean Protocol offers 3 types of pricing options for asset monetization. The publisher can choose a pricing model which best suits their needs while publishing an asset. The pricing model selected cannot be changed once the asset is published.
The price of an asset is determined by the number of Ocean tokens a buyer must pay to access the asset. When users pay the right amount of Ocean tokens, they get a _datatoken_ in their wallets, a tokenized representation of the access right stored on the blockchain. To read more about datatokens and data NFTs, click [here](../concepts/datanft-and-datatoken/).
### Fixed pricing
With the fixed price model, publishers set the price for the data in OCEAN. Ocean Market creates a datatoken in the background with a value equal to the dataset price in OCEAN so that buyers do not have to know about the datatoken. Buyers pay the amount specified in OCEAN for access. The publisher can update the price of the dataset at any time.
Publishers can choose this fixed pricing model when they do not want an Automated Market Maker (AMM) to determine the asset price.
The image below shows how to set the fixed pricing of an asset in the Ocean's Marketplace. Here, the price of the asset is set to 10 Ocean tokens.
![fixed-asset-pricing](<images/fixed-asset-pricing (1).png>)
### Dynamic pricing
With the dynamic pricing model, the market defines the price with a mechanism derived from Decentralized Finance (DeFi): liquidity pools. While the publisher sets a base price for the token in OCEAN, the market will organically discover the right price for the data. This can be extremely handy when the value of the data is not known.
The Ocean Market helps create an Automated Market Maker (AMM) pool of datatokens and Ocean tokens for each asset with dynamic pricing. _AMM_ enables unstoppable, decentralized trading of assets in the liquidity pool.
AMM uses a constant product formula to price tokens, which states: **x \* y = k**, where **x** and **y** represent the quantities of the two different tokens in the pool and **k** is a constant.
A _liquidity pool_ is a reserve of tokens locked in the smart contract for market making. A buyer or a seller of an asset exchanges token **x** for token **y** or vice versa. AMM calculates the exchange ratio between the tokens based on the mathematical formula above.
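To illustrate the constant product rule, the sketch below simulates a single swap against a hypothetical OCEAN-datatoken pool. This is plain JavaScript with made-up reserve numbers, not the actual Balancer contract math — real pools also apply token weights and swap fees, which are omitted here.

```javascript
// Hypothetical pool: x OCEAN and y datatokens locked, so k = x * y.
function createPool(oceanReserve, datatokenReserve) {
  return { ocean: oceanReserve, datatoken: datatokenReserve, k: oceanReserve * datatokenReserve };
}

// Buy `amount` datatokens: the pool keeps x * y = k, so the buyer must
// deposit enough OCEAN that (x + deltaX) * (y - amount) = k.
function buyDatatokens(pool, amount) {
  const newDatatoken = pool.datatoken - amount;
  const newOcean = pool.k / newDatatoken;
  const oceanPaid = newOcean - pool.ocean;
  pool.ocean = newOcean;
  pool.datatoken = newDatatoken;
  return oceanPaid;
}

// Spot price of 1 datatoken in OCEAN is the reserve ratio x / y.
const spotPrice = (pool) => pool.ocean / pool.datatoken;

const pool = createPool(5000, 100); // 50 OCEAN per datatoken initially
const paid = buyDatatokens(pool, 1);
console.log(paid.toFixed(2));       // ~50.51 OCEAN for the first datatoken
console.log(spotPrice(pool) > 50);  // true — the price rises after a buy
```

Note how the buyer pays slightly more than the 50 OCEAN spot price: draining datatokens from the pool moves the ratio, which is exactly the price-impact behavior described in the actions below.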
Publishers can set the pricing model of an asset to dynamic pricing if they want the market to discover the asset price organically.
The image below shows how to set the Dynamic pricing of an asset in the Ocean's Marketplace. Here, the asset price is initially set to 50 Ocean tokens.
![dynamic-asset-pricing](<images/dynamic-asset-pricing (1).png>)
Ocean Protocol also allows publishers to set the pricing using the ocean.js and ocean.py libraries.
#### Asset price
**Action: Add liquidity**
With one-sided staking, when liquidity is added to the pool, the Ocean tokens are added to the liquidity pool. To protect funds from impermanent loss due to changes in the ratio of tokens in the liquidity pool, Ocean Protocol's bot mints new datatokens and adds them to the pool. Thus, when liquidity is added to the pool, the ratio of tokens remains constant, and there is no price impact on the datatoken.
**Action: Remove liquidity**
When the liquidity is removed from the pool, the Ocean tokens are returned to the liquidity provider who initiated the action. Ocean Protocol's bot burns the datatokens from the liquidity pool to protect funds from impermanent loss due to changes in the ratio of tokens in the liquidity pool. Thus, even in this case, there is no price impact on the datatoken.
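A toy model of this one-sided staking behavior is sketched below, with hypothetical reserve numbers. The actual minting/burning logic lives in Ocean's contracts and bot and is more involved; this only shows why keeping the reserve ratio constant means no price impact.

```javascript
// Pool state; the spot price ocean / datatoken must be unchanged by liquidity events.
const pool = { ocean: 5000, datatoken: 100 };
const price = (p) => p.ocean / p.datatoken;

// One-sided add: the staker deposits OCEAN only; the bot mints matching
// datatokens so the reserve ratio (and thus the spot price) is preserved.
function addOceanLiquidity(p, oceanIn) {
  const minted = oceanIn / price(p); // datatokens the bot mints
  p.ocean += oceanIn;
  p.datatoken += minted;
  return minted;
}

// One-sided remove: OCEAN is returned to the staker; the bot burns the
// proportional amount of datatokens, again leaving the price untouched.
function removeOceanLiquidity(p, oceanOut) {
  const burned = oceanOut / price(p);
  p.ocean -= oceanOut;
  p.datatoken -= burned;
  return burned;
}

const p0 = price(pool);            // 50 OCEAN per datatoken
addOceanLiquidity(pool, 1000);     // bot mints 20 datatokens
console.log(price(pool) === p0);   // true — no price impact on add
removeOceanLiquidity(pool, 1000);  // bot burns 20 datatokens
console.log(price(pool) === p0);   // true — no price impact on remove
```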
**Action: Buy datatoken**
When a datatoken is bought by paying Ocean tokens to the pool, the ratio of Ocean tokens and datatokens changes: there are more Ocean tokens and fewer datatokens in the liquidity pool. Therefore, as the ratio of datatokens to Ocean tokens changes, the liquidity pool increases the amount of Ocean tokens required to buy a datatoken in the following transactions (to maintain the constant product). Thus, the price of the datatoken increases whenever a datatoken is bought.
**Action: Buy dataset**
Buying a dataset involves swapping a datatoken from the liquidity pool by paying Ocean tokens. Thus, if users buy datatokens, the price of datatokens will increase. However, if users already have the datatokens, they can use them to buy the asset or the service without requiring interaction with the pool. In such a case, the price of the datatoken doesn't change.
**Action: Sell datatoken**
When a datatoken is sold, Ocean tokens are removed from the liquidity pool in exchange for the datatoken. Thus, the ratio of Ocean tokens and datatokens changes: there are fewer Ocean tokens and more datatokens in the liquidity pool. As there are more datatokens, the liquidity pool decreases the amount of Ocean tokens required to buy a datatoken in the following transactions (to maintain the constant product). Thus, the price of the datatoken decreases whenever a datatoken is sold.
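The buy/sell asymmetry described in these actions can be checked with a small simulation (hypothetical numbers; the real pools use Balancer's weighted math and charge swap fees, both ignored here):

```javascript
// Constant-product pool state: k = ocean * datatoken stays fixed across swaps.
let ocean = 5000, datatoken = 100;
const k = ocean * datatoken;
const price = () => ocean / datatoken; // spot price of 1 datatoken in OCEAN

const before = price(); // 50 OCEAN

// Buy 10 datatokens: OCEAN flows in, datatokens flow out -> price rises.
datatoken -= 10;
ocean = k / datatoken;
const afterBuy = price();

// Sell them back: datatokens flow in, OCEAN flows out -> price falls again.
datatoken += 10;
ocean = k / datatoken;
const afterSell = price();

console.log(before < afterBuy);    // true — buying pushes the price up
console.log(afterSell === before); // true — the round trip restores the ratio (no fees modeled)
```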
### Free pricing
With the free pricing model, buyers can access an asset without paying for it, except for the transaction fees.
Free pricing is suitable for individuals and organizations working in the public domain.
The image below shows how to set free access to an asset in the Ocean's Marketplace.
![free-asset-pricing](<images/free-asset-pricing (1).png>)

---
title: Data NFTs and Datatokens
description: In Ocean Protocol, ERC721 data NFTs represent holding copyright/base IP of a data asset, and ERC20 datatokens represent licenses to access the assets.
---
# Data NFTs and Datatokens
A non-fungible token stored on the blockchain represents a unique asset. NFTs can represent images, videos, digital art, or any piece of information. NFTs can be traded and allow the transfer of copyright/base IP. [EIP-721](https://eips.ethereum.org/EIPS/eip-721) defines an interface for handling NFTs on EVM-compatible blockchains. The creator of the NFT can deploy a new contract on Ethereum or any blockchain supporting NFT-related interfaces, and can also transfer the ownership of copyright/base IP through transfer transactions.
Fungible tokens represent fungible assets. If you have 5 ETH and Alice has 5 ETH, you and Alice could swap your ETH and your final holdings remain the same. They're apples-to-apples. Licenses (contracts) to access a copyrighted asset are naturally fungible - they can be swapped with each other.
![Data NFT and datatoken](<images/datanft-and-datatoken (1).png>)
### High-Level Architecture
The image above describes how ERC721 data NFTs, ERC20 datatokens, and AMMs relate.
* Bottom: The publisher deploys an ERC721 data NFT contract representing the base IP for the data asset. They are now the manager of the data NFT.
* Middle: The manager then deploys an ERC20 datatoken contract against the data NFT. The ERC20 represents a license with specific terms like "can download for the next 3 days". They could even publish further ERC20 datatoken contracts, to represent different license terms or for compute-to-data.
* Top: The manager then deploys a pool of the datatoken and OCEAN (or H2O), adds initial liquidity, and receives ERC20 pool tokens in return. Others may also add liquidity to receive pool tokens, i.e. become liquidity providers (LPs).
### Terminology
* **Base IP** means the artifact being copyrighted. Represented by the {ERC721 address, tokenId} from the publish transactions.
* **Base IP holder** means the holder of the Base IP. Represented as the actor that did the initial "publish" action.
* **Sub-licensee** is the holder of the sub-license. Represented as the entity that controls address ERC721.\_owners\[tokenId=x].
* **To Publish**: Claim copyright or exclusive base license.
* **To Sub-license**: Transfer one (of many) sub-licenses to new licensee: ERC20.transfer(to=licensee, value=1.0).
### Implementation in Ocean Protocol
Ocean Protocol defines the [ERC721Factory](https://github.com/oceanprotocol/contracts/blob/v4main/contracts/ERC721Factory.sol) contract, allowing **Base IP holders** to create their ERC721 contract instances on any supported network. The deployed contract stores metadata, ownership, sub-license information, and permissions. The contract creator can also create and mint ERC20 token instances for sub-licensing the **Base IP**.
ERC721 tokens are non-fungible, and thus cannot be used for automatic price discovery like ERC20 tokens. ERC721 and ERC20 combined can be used for sub-licensing. Ocean Protocol's [ERC721Template](https://github.com/oceanprotocol/contracts/blob/v4main/contracts/templates/ERC721Template.sol) solves this problem by using ERC721 to tokenize the **Base IP** and ERC20 to tokenize sub-licenses. Thus, sub-licenses can be traded on any AMM, as the underlying contract is ERC20 compliant.
### High-Level Behavior
![Flow](<images/use-case (1).png>)
Here's an example.
* In step 1, Alice **publishes** her dataset with Ocean: this means deploying an ERC721 data NFT contract (claiming copyright/base IP), then an ERC20 datatoken contract (license against base IP).
* In step 2, she **mints** some ERC20 datatokens and **transfers** 1.0 of them to Bob's wallet; now he has a license to be able to download that dataset.
### Other References
* [Data & NFTs 1: Practical Connections of ERC721 with Intellectual Property](https://blog.oceanprotocol.com/nfts-ip-1-practical-connections-of-erc721-with-intellectual-property-dc216aaf005d)
* [Data & NFTs 2: Leveraging ERC20 Fungibility](https://blog.oceanprotocol.com/nfts-ip-2-leveraging-erc20-fungibility-bcee162290e3)
* [Data & NFTs 3: Combining ERC721 & ERC20](https://blog.oceanprotocol.com/nfts-ip-3-combining-erc721-erc20-b69ea659115e)
* [Fungibility sightings in NFTs](https://blog.oceanprotocol.com/on-difficult-to-explain-fungibility-sightings-in-nfts-26bc18620f70)

---
title: DID & DDO
description: Specification of decentralized identifiers for assets in Ocean Protocol using the DID & DDO standards.
---
# DID & DDO
**v4.1.0**
### Overview
This document describes how Ocean assets follow the DID/DDO specification, such that Ocean assets can inherit DID/DDO benefits and enhance interoperability. DIDs and DDOs follow the [specification defined by the World Wide Web Consortium (W3C)](https://w3c-ccg.github.io/did-spec/).
Decentralized identifiers (DIDs) are a type of identifier that enable verifiable, decentralized digital identity.
A DID Document (DDO) is a JSON blob that holds information about the DID. Given a DID, a _resolver_ will return the DDO of that DID.
### Rules for DID & DDO
An _asset_ in Ocean represents a downloadable file, compute service, or similar. Each asset is a _resource_ under the control of a _publisher_. The Ocean network itself does _not_ store the actual resource (e.g. files).
An _asset_ has a DID and DDO. The DDO should include [metadata](did-ddo.md#metadata) about the asset, and define access in at least one [service](did-ddo.md#services). Only _owners_ or _delegated users_ can modify the DDO.
All DDOs are stored on-chain in encrypted form to be fully GDPR-compatible. A metadata cache like _Aquarius_ can help in reading, decrypting, and searching through encrypted DDO data from the chain. Because the file URLs are encrypted on top of the full DDO encryption, returning unencrypted DDOs e.g. via an API is safe to do as the file URLs will still stay encrypted.
### Publishing & Retrieving DDOs
The DDO is stored on-chain as part of the NFT contract and stored in encrypted form using the private key of the _Provider_. To resolve it, a metadata cache like _Aquarius_ must query the provider to decrypt the DDO.
Here is the flow:
![DDO flow](<images/ddo-flow (1).png>)
<details>
<summary>UML source</summary>

```
title DDO flow
User(Ocean library) -> User(Ocean library): Prepare DDO
Aquarius -> Aquarius : enhance cached DDO in response with additional infos like
```

</details>
### DID
In Ocean, a DID is a string that looks like this:
```
did:op:0ebed8226ada17fde24b6bf2b95d27f8f05fcce09139ff5cec31f6d81a7cd2ea
```
It follows [the generic DID scheme](https://w3c-ccg.github.io/did-spec/#the-generic-did-scheme).
### DDO
A DDO in Ocean has these required attributes:
| Attribute | Type | Description |
| ----------------- | ------------------------------------- | -------------------------------------------------------------------------------------------------------------- |
| **`@context`** | Array of `string` | Contexts used for validation. |
| **`id`** | `string` | Computed as `sha256(address of ERC721 contract + chainId)`. |
| **`version`** | `string` | Version information in [SemVer](https://semver.org) notation referring to this DDO spec version, like `4.1.0`. |
| **`chainId`** | `number` | Stores chainId of the network the DDO was published to. |
| **`nftAddress`**  | `string`                              | NFT contract linked to this asset.                                                                             |
| **`metadata`** | [Metadata](did-ddo.md#metadata) | Stores an object describing the asset. |
| **`services`** | [Services](did-ddo.md#services) | Stores an array of services defining access to the asset. |
| **`credentials`** | [Credentials](did-ddo.md#credentials) | Describes the credentials needed to access a dataset in addition to the `services` definition. |
#### Metadata
This object holds information describing the actual asset.
| Attribute | Type | Required | Description |
| --------------------------- | --------------------------------------------------- | --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`created`** | `ISO date/time string` | | Contains the date of the creation of the dataset content in ISO 8601 format preferably with timezone designators, e.g. `2000-10-31T01:30:00Z`. |
| **`updated`** | `ISO date/time string` | | Contains the date of last update of the dataset content in ISO 8601 format preferably with timezone designators, e.g. `2000-10-31T01:30:00Z`. |
| **`description`** | `string` | **✓** | Details of what the resource is. For a dataset, this attribute explains what the data represents and what it can be used for. |
| **`copyrightHolder`** | `string` | | The party holding the legal copyright. Empty by default. |
| **`name`** | `string` | **✓** | Descriptive name or title of the asset. |
| **`type`** | `string` | **✓** | Asset type. Includes `"dataset"` (e.g. csv file), `"algorithm"` (e.g. Python script). Each type needs a different subset of metadata attributes. |
| **`author`** | `string` | **✓** | Name of the entity generating this data (e.g. Tfl, Disney Corp, etc.). |
| **`license`** | `string` | **✓** | Short name referencing the license of the asset (e.g. Public Domain, CC-0, CC-BY, No License Specified, etc. ). If it's not specified, the following value will be added: "No License Specified". |
| **`links`** | Array of `string` | | Mapping of URL strings for data samples, or links to find out more information. Links may be to either a URL or another asset. |
| **`contentLanguage`** | `string` | | The language of the content. Use one of the language codes from the [IETF BCP 47 standard](https://tools.ietf.org/html/bcp47) |
| **`tags`** | Array of `string` | | Array of keywords or tags used to describe this content. Empty by default. |
| **`categories`** | Array of `string` | | Array of categories associated to the asset. Note: recommended to use `tags` instead of this. |
| **`additionalInformation`** | Object                                              |                                   | Stores additional information; customizable by the publisher.                                                                                                                                      |
| **`algorithm`** | [Algorithm Metadata](did-ddo.md#algorithm-metadata) | **✓** (for algorithm assets only) | Information about asset of `type` `algorithm` |
**Algorithm Metadata**
An asset of type `algorithm` has additional attributes under `metadata.algorithm`, describing the algorithm and the Docker environment it is supposed to be run under.
| Attribute | Type | Required | Description |
| ------------------------ | ----------------------------------------------------- | -------- | ------------------------------------------------------------------------------------------ |
| **`language`** | `string` | | Language used to implement the software. |
| **`version`** | `string` | | Version of the software preferably in [SemVer](https://semver.org) notation. E.g. `1.0.0`. |
| **`consumerParameters`** | [Consumer Parameters](did-ddo.md#consumer-parameters) | | An object that defines required consumer input before running the algorithm |
| **`container`** | `container` | **✓** | Object describing the Docker container image. See below |
The `container` object has the following attributes defining the Docker image for running the algorithm:
| Attribute        | Type     | Required | Description                                                       |
| ---------------- | -------- | -------- | ----------------------------------------------------------------- |
| **`entrypoint`** | `string` | **✓**    | The command to execute, or script to run inside the Docker image. |
| **`image`**      | `string` | **✓**    | Name of the Docker image.                                         |
| **`tag`**        | `string` | **✓**    | Tag of the Docker image.                                          |
| **`checksum`**   | `string` | **✓**    | Digest of the Docker image (e.g. `sha256:xxxxx`).                 |
```json
{
}
```
#### Services
Services define the access for an asset, and each service is represented by its respective datatoken.
An asset should have at least one service to be actually accessible, and can have as many services as make sense for a specific use case.
| Attribute | Type | Required | Description |
| --------------------------- | ----------------------------------------------------- | ------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
| **`id`** | `string` | **✓** | Unique ID |
| **`type`**                  | `string`                                              | **✓**                           | Type of service (`access`, `compute`, `wss`, etc.).                                                                                          |
| **`name`** | `string` | | Service friendly name |
| **`description`** | `string` | | Service description |
| **`datatokenAddress`** | `string` | **✓** | Datatoken address |
| **`serviceEndpoint`** | `string` | **✓** | Provider URL (schema + host) |
| **`files`** | [Files](did-ddo.md#files) | **✓** | Encrypted file URLs. |
| **`timeout`** | `number` | **✓** | Describing how long the service can be used after consumption is initiated. A timeout of `0` represents no time limit. Expressed in seconds. |
| **`compute`** | [Compute](did-ddo.md#compute-options) | **✓** (for compute assets only) | If service is of `type` `compute`, holds information about the compute-related privacy settings & resources. |
| **`consumerParameters`**    | [Consumer Parameters](did-ddo.md#consumer-parameters) |                                 | An object that defines required consumer input before consuming the asset.                                                                   |
| **`additionalInformation`** | Object                                                |                                 | Stores additional information; customizable by the publisher.                                                                                |
**Files**
The `files` field is returned as a `string` which holds the encrypted file URLs.
During the publish process, file URLs must be encrypted with a respective _Provider_ API call before storing the DDO on-chain. For this, you need to send the following object to Provider:
```json
{
  "datatokenAddress": "0x1",
  "files": [
    ...
  ]
}
```
where "files" contains one or more storage objects.
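The encryption call itself can be sketched as follows. This is a hedged sketch, not the canonical client: the Provider base URL is illustrative, and the `/api/services/encrypt` path is an assumption about the Provider API — the payload shape follows the object shown above.

```javascript
// Sketch: encrypting file URLs via a Provider call before publishing.
// Endpoint path and base URL are illustrative assumptions; the payload
// mirrors the object described above (a "files" array of storage objects).
async function encryptFiles(providerUrl, payload) {
  const response = await fetch(`${providerUrl}/api/services/encrypt`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  })
  // The Provider returns the encrypted `files` string to store in the DDO.
  return response.text()
}
```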
Types of objects supported:
| Type  | Description                              | Example |
| ----- | ---------------------------------------- | ------- |
| `url` | Static URL. Contains url and HTTP method | <pre class="language-json"><code class="lang-json">[
  {
    "type": "url",
    "url": "https://url.com/file1.csv",
    "method": "GET",
    "headers": [
      {"APIKEY": "124"}
    ]
  }
]</code></pre> |
First-class integrations supported in the future:
| Type       | Description                                     | Example |
| ---------- | ----------------------------------------------- | ------- |
| `ipfs`     | IPFS files                                      | <pre class="language-json"><code class="lang-json">[
  {
    "type": "ipfs",
    "hash": "XXX"
  }
]</code></pre> |
| `filecoin` | Filecoin storage                                |         |
| `arwave`   | Arweave storage                                 |         |
| `storj`    | Storj storage                                   |         |
| `sql`      | SQL connection, dataset is generated by a query |         |
A service can contain multiple files, using multiple storage types.
To get information about the files after encryption, the `/fileinfo` endpoint of the _Provider_ can be used.
This only concerns metadata about a file, but never the file URLs. The only way to decrypt them is to exchange at least 1 datatoken based on the respective service pricing scheme.
**Compute Options**
An asset with a service of `type` `compute` has the following additional attributes under the `compute` object. This object is required if the asset is of `type` `compute`, but can be omitted for `type` of `access`.
| Attribute                                 | Type                                  | Required | Description                                                                                                                                                                                                              |
| ----------------------------------------- | ------------------------------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **`allowRawAlgorithm`**                   | `boolean`                             | **✓**    | If `true`, any passed raw text will be allowed to run. Useful for an algorithm drag & drop use case, but increases risk of data escape through malicious user input. Should be `false` by default in all implementations. |
| **`allowNetworkAccess`**                  | `boolean`                             | **✓**    | If `true`, the algorithm job will have network access.                                                                                                                                                                    |
| **`publisherTrustedAlgorithmPublishers`** | Array of `string`                     | **✓**    | If not defined, then any published algorithm is allowed. If empty array, then no algorithm is allowed. If not empty, any algorithm published by the defined publishers is allowed.                                         |
| **`publisherTrustedAlgorithms`**          | Array of `publisherTrustedAlgorithms` | **✓**    | If not defined, then any published algorithm is allowed. If empty array, then no algorithm is allowed. Otherwise only the algorithms defined in the array are allowed (see below).                                         |
The `publisherTrustedAlgorithms` is an array of objects with the following structure:
| Attribute | Type | Required | Description |
| ------------------------------ | -------- | -------- | ----------------------------------------------------------- |
| **`did`** | `string` | **✓** | The DID of the algorithm which is trusted by the publisher. |
| **`filesChecksum`** | `string` | **✓** | Hash of algorithm's files (as `string`). |
| **`containerSectionChecksum`** | `string` | **✓** | Hash of algorithm's image details (as `string`). |
To produce `filesChecksum`, call the Provider `FileInfoEndpoint` with the parameter `withChecksum = True`. If the algorithm has multiple files, `filesChecksum` is the concatenation of all the files' checksums (i.e. checksumFile1 + checksumFile2, etc.).
To produce `containerSectionChecksum`:
**Consumer Parameters**
Sometimes, the asset needs additional input data before downloading or running a Compute-to-Data job. Examples:
* The publisher needs to know the sampling interval before the buyer downloads it. Suppose the dataset URL is `https://example.com/mydata`. The publisher defines a field called `sampling` and asks the buyer to enter a value. This parameter is then added to the URL of the published dataset as query parameters: `https://example.com/mydata?sampling=10`.
* An algorithm that needs to know the number of iterations it should perform. In this case, the algorithm publisher defines a field called `iterations`. The buyer needs to enter a value for the `iterations` parameter. Later, this value is stored in a specific location in the Compute-to-Data pod for the algorithm to read and use it.
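The first example above — appending entered values to the dataset URL as query parameters — can be sketched as follows (the URL and parameter names are illustrative):

```javascript
// Sketch: append entered consumerParameters values to a dataset URL
// as query parameters, as in the `sampling` example above.
function withConsumerParams(baseUrl, params) {
  const url = new URL(baseUrl)
  for (const [name, value] of Object.entries(params)) {
    url.searchParams.set(name, String(value))
  }
  return url.toString()
}

console.log(withConsumerParams('https://example.com/mydata', { sampling: 10 }))
// → https://example.com/mydata?sampling=10
```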
The `consumerParameters` is an array of objects. Each object defines a field and has the following structure:
| Attribute | Type | Required | Description |
| ----------------- | -------------------------------- | -------- | -------------------------------------------------------------------------- |
| **`name`** | `string` | **✓** | The parameter name (this is sent as HTTP param or key towards algo) |
| **`type`** | `string` | **✓** | The field type (text, number, boolean, select) |
| **`label`** | `string` | **✓** | The field label which is displayed |
| **`required`** | `boolean` | **✓** | If customer input for this field is mandatory. |
| **`description`** | `string` | **✓** | The field description. |
| **`default`** | `string`, `number`, or `boolean` | **✓** | The field default value. For select types, `string` key of default option. |
| **`options`** | Array of `option` | | For select types, a list of options. |
Each `option` is an `object` containing a single key:value pair where the key is the option name, and the value is the option value.
Algorithms will have access to a JSON file located at `/data/inputs/algoCustomData.json`, which contains the values entered by the consumer.
#### Credentials
By default, a consumer can access a resource if they have 1 datatoken. _Credentials_ allow the publisher to optionally specify more fine-grained permissions.
Here's an example object with both `"allow"` and `"deny"` entries:
#### DDO Checksum
In order to ensure the integrity of the DDO, a checksum is computed for each DDO and emitted with the on-chain metadata events.
_Aquarius_ should always verify the checksum after data is decrypted via a _Provider_ API call.
#### State
Each asset has a state, which is held by the NFT contract. The possible states are:
| State   | Description                       |
| ------- | --------------------------------- |
| **`0`** | Active.                           |
| **`1`** | End-of-life.                      |
| **`2`** | Deprecated (by 3rd party).        |
| **`3`** | Revoked by publisher.             |
| **`4`** | Ordering is temporarily disabled. |
### Aquarius Enhanced DDO Response
The following fields are added by _Aquarius_ in its DDO response for convenience reasons, where an asset returned by _Aquarius_ inherits the DDO fields stored on-chain.
These additional fields are never stored on-chain, and are never taken into consideration when [hashing the DDO](did-ddo.md#ddo-checksum).
#### NFT
The `nft` object contains information about the ERC721 NFT contract which represents the intellectual property of the publisher.
| Attribute | Type | Description |
| -------------- | ---------------------- | ----------------------------------------------------------------------------------- |
| **`address`** | `string` | Contract address of the deployed ERC721 NFT contract. |
| **`name`** | `string` | Name of NFT set in contract. |
| **`symbol`** | `string` | Symbol of NFT set in contract. |
| **`owner`** | `string` | ETH account address of the NFT owner. |
| **`state`** | `number` | State of the asset reflecting the NFT contract value. See [State](did-ddo.md#state) |
| **`created`** | `ISO date/time string` | Contains the date of NFT creation. |
| **`tokenURI`** | `string`               | The `tokenURI` set in the NFT contract.                                             |
#### Datatokens
The `datatokens` array contains information about the ERC20 datatokens attached to [asset services](did-ddo.md#services).
#### Event
The `event` section contains information about the last transaction that created or updated the DDO.
#### Purgatory
Contains information about an asset's purgatory status defined in [`list-purgatory`](https://github.com/oceanprotocol/list-purgatory). Marketplace interfaces are encouraged to prevent certain user actions like adding liquidity on assets in purgatory.
#### Statistics
The `stats` section contains different statistics fields.
### Full Enhanced DDO Example
# Using Ocean Marketplace