diff --git a/.gitbook/assets/Enter-Metadata (1).png b/.gitbook/assets/Enter-Metadata (1).png
index 26afa886..8dd1d3fa 100644
Binary files a/.gitbook/assets/Enter-Metadata (1).png and b/.gitbook/assets/Enter-Metadata (1).png differ
diff --git a/.gitbook/assets/Enter-Metadata.png b/.gitbook/assets/Enter-Metadata.png
index 8dd1d3fa..26afa886 100644
Binary files a/.gitbook/assets/Enter-Metadata.png and b/.gitbook/assets/Enter-Metadata.png differ
diff --git a/data-science/README.md b/data-science/README.md
index 823ed81e..6ba09b12 100644
--- a/data-science/README.md
+++ b/data-science/README.md
@@ -6,7 +6,7 @@ coverY: 0

# 📊 Data Science

-<figure><img src="…" alt=""><figcaption><p>Ocean Protocol - Built to protect your precious.</p></figcaption></figure>
+<figure><img src="…" alt=""><figcaption><p>Ocean Protocol - Built to protect your precious.</p></figcaption></figure>
### Why should data scientists use Ocean Protocol?
diff --git a/developers/compute-to-data/compute-to-data-architecture.md b/developers/compute-to-data/compute-to-data-architecture.md
index bd29b189..539293ea 100644
--- a/developers/compute-to-data/compute-to-data-architecture.md
+++ b/developers/compute-to-data/compute-to-data-architecture.md
@@ -34,9 +34,7 @@ As with [the `access` service](broken-reference), the `compute` service requires
Ocean Provider includes the credentials to interact with the infrastructure (initially in cloud providers, but it could be on-premise).
-### Compute-to-Data Environment
-
-#### Operator Service
+### Operator Service
The **Operator Service** is a micro-service in charge of managing the workflow executing requests.
@@ -58,7 +56,7 @@ The Operator Service is in charge of establishing the communication with the K8s
The Operator Service doesn't provide any storage capability, all the state is stored directly in the K8s cluster.
-#### Operator Engine
+### Operator Engine
The **Operator Engine** is in charge of orchestrating the compute infrastructure using Kubernetes as backend where each compute job runs in an isolated [Kubernetes Pod](https://kubernetes.io/docs/concepts/workloads/pods/). Typically the Operator Engine retrieves the workflows created by the Operator Service in Kubernetes, and manage the infrastructure necessary to complete the execution of the compute workflows.
@@ -70,6 +68,28 @@ The Operator Engine is in charge of retrieving all the workflows registered in a
* Start the publishing pod that publish the new assets created in the Ocean Protocol network.
* The Operator Engine doesn't provide any storage capability, all the state is stored directly in the K8s cluster.
-#### Pod: Configuration
+### Pod Configuration
-#### Pod: Publishing
+The Pod-Configuration repository operates in conjunction with the Operator Engine and runs at the beginning of each job. It performs the crucial functions that set up the environment for job execution.
+
+The Pod-Configuration is a Node.js script that dynamically manages the environment setup at the start of a job launched by the Operator Engine. Its role involves fetching and preparing the necessary assets and files to ensure seamless job execution.
+
+1. **Fetching Dataset Assets**: It fetches the files corresponding to each dataset and saves them under `/data/inputs/DID/`. The files are named by their array index, ranging from 0 to X depending on the total number of files associated with the dataset.
+2. **Fetching Algorithm Files**: The script then retrieves the algorithm files and stores them in the `/data/transformations/` directory. The first file is named `algorithm`, and subsequent files are indexed from 1 to X, based on the number of files present for the algorithm.
+3. **Fetching DDOs**: Additionally, the Pod-Configuration fetches the DDOs (the assets' metadata documents) and saves them to disk at `/data/ddos/`.
+4. **Error Handling**: In case of any provisioning failure, whether during data fetching or algorithm processing, the script updates the job status in a PostgreSQL database and logs the relevant error messages.
+
+Once the Pod-Configuration successfully completes these tasks, it exits and signals the Operator Engine to start the algorithm pod for the next steps. This repository provides the basis for smooth job processing by effectively managing dataset and algorithm files and handling potential provisioning errors.
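+
+To make the resulting layout concrete, here is a minimal sketch (a hypothetical Node.js snippet, not part of the operator-engine or pod-configuration code) that an algorithm could run inside its pod to list what Pod-Configuration has staged, assuming the `/data/...` paths described above:
+
+```javascript
+// Minimal sketch: list the files Pod-Configuration is described as staging
+// before the algorithm starts. Paths follow the documentation above; adjust
+// them if your environment differs.
+const fs = require('fs')
+const path = require('path')
+
+function listDir(label, dir) {
+  if (!fs.existsSync(dir)) return console.log(`${label}: ${dir} (not found)`)
+  for (const name of fs.readdirSync(dir)) {
+    console.log(`${label}: ${path.join(dir, name)}`)
+  }
+}
+
+// Dataset files: /data/inputs/<DID>/0 ... /data/inputs/<DID>/X
+if (fs.existsSync('/data/inputs')) {
+  for (const did of fs.readdirSync('/data/inputs')) {
+    listDir('input', path.join('/data/inputs', did))
+  }
+}
+
+// Algorithm files: /data/transformations/algorithm, then 1 ... X
+listDir('algorithm', '/data/transformations')
+
+// Asset metadata documents (DDOs): /data/ddos/
+listDir('ddo', '/data/ddos')
+```
+
+Anything the algorithm writes as output (conventionally under `/data/outputs/`) is what the publishing step described next uploads.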
+
+### Pod Publishing
+
+Pod Publishing is a command-line utility for processing, logging, and uploading workflow outputs, functioning in conjunction with the Operator Service and Operator Engine within a Kubernetes-based compute infrastructure. Its primary functionality is divided across three areas:
+
+1. **Interaction with Operator Service**: Pod Publishing uploads the outputs of compute workflows initiated by the Operator Service to a designated AWS S3 bucket or the InterPlanetary File System (IPFS). It logs all processing steps and updates a PostgreSQL database.
+2. **Role in Publishing Pod**: Within the compute infrastructure orchestrated by the Operator Engine on Kubernetes (K8s), Pod Publishing is integral to the Publishing Pod. The Publishing Pod handles the creation of new assets in the Ocean Protocol network after a workflow execution.
+3. **Workflow Outputs Management**: Pod Publishing manages the storage of workflow outputs. Depending on configuration, it interacts with IPFS or AWS S3 and logs the processing steps.
+
+Please note:
+
+* Pod Publishing does not provide storage capabilities; all state information is stored directly in the K8s cluster or the respective data storage solution (AWS S3 or IPFS).
+* The utility works in close coordination with the Operator Service and Operator Engine, but does not have standalone functionality.
diff --git a/developers/list-datatoken-buyers.md b/developers/list-datatoken-buyers.md
index 27c3dde6..d23dd85b 100644
--- a/developers/list-datatoken-buyers.md
+++ b/developers/list-datatoken-buyers.md
@@ -4,15 +4,13 @@ description: >-
  list the buyers of a datatoken
---
-# List datatoken buyers
+# Find datatoken/data NFT addresses & Chain ID
-## Step 1: Find the Network and Datatoken address
-
-### How to find the network and datatoken address from an Ocean Market link?
+### How to find the network, datatoken address, and data NFT address from an Ocean Market link?
If you are given an Ocean Market link, then the network and datatoken address for the asset is visible on the Ocean Market webpage. For example, given this asset's Ocean Market link: [https://odc.oceanprotocol.com/asset/did:op:1b26eda361c6b6d307c8a139c4aaf36aa74411215c31b751cad42e59881f92c1](https://odc.oceanprotocol.com/asset/did:op:1b26eda361c6b6d307c8a139c4aaf36aa74411215c31b751cad42e59881f92c1) the webpage shows that this asset is hosted on the Mumbai network, and one simply clicks the datatoken's hyperlink to reveal the datatoken's address as shown in the screenshot below:
-<figure><img src="…" alt=""><figcaption><p>See the Network and Datatoken Address for an Ocean Market asset by visiting the asset's Ocean Market page.</p></figcaption></figure>
+<figure><img src="…" alt=""><figcaption><p>See the Network and Datatoken Address for an Ocean Market asset by visiting the asset's Ocean Market page.</p></figcaption></figure>
#### More Detailed Info:
@@ -20,23 +18,23 @@ You can access all the information for the Ocean Market asset also by **enabling
**Step 1** - Click the Settings button in the top right corner of the Ocean Market
-<figure><img src="…" alt=""><figcaption><p>Click the Settings button</p></figcaption></figure>
+<figure><img src="…" alt=""><figcaption><p>Click the Settings button</p></figcaption></figure>
**Step 2** - Check the Activate Debug Mode box in the dropdown menu
-<figure><img src="…" alt=""><figcaption><p>Check 'Active Debug Mode'</p></figcaption></figure>
+<figure><img src="…" alt=""><figcaption><p>Check 'Active Debug Mode'</p></figcaption></figure>
**Step 3** - Go to the page for the asset you would like to examine, and scroll through the DDO information to find the NFT address, datatoken address, chain ID, and other information.
-<figure><img src="…" alt=""><figcaption></figcaption></figure>
+<figure><img src="…" alt=""><figcaption></figcaption></figure>
-### How to find the network and datatoken address from a DID?
+### How to use Aquarius to find the chainID and datatoken address from a DID?
If you know the DID:op but you don't know the source link, then you can use Ocean Aquarius to resolve the metadata for the DID:op to find the `chainId` + `datatoken address` of the asset. Simply enter in your browser "[https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/](https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/did:op:1b26eda361c6b6d307c8a139c4aaf36aa74411215c31b751cad42e59881f92c1)" to fetch the metadata. For example, for the following DID:op: "did:op:1b26eda361c6b6d307c8a139c4aaf36aa74411215c31b751cad42e59881f92c1" the Ocean Aquarius URL can be modified to add the DID:op and resolve its metadata. Simply add "[https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/](https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/did:op:1b26eda361c6b6d307c8a139c4aaf36aa74411215c31b751cad42e59881f92c1)" to the beginning of the DID:op and enter the link in your browser like this: [https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/did:op:1b26eda361c6b6d307c8a139c4aaf36aa74411215c31b751cad42e59881f92c1](https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/did:op:1b26eda361c6b6d307c8a139c4aaf36aa74411215c31b751cad42e59881f92c1)
-<figure><img src="…" alt=""><figcaption><p>The metadata printout for this DID:op with the network's Chain ID and datatoken address circled in red</p></figcaption></figure>
+<figure><img src="…" alt=""><figcaption><p>The metadata printout for this DID:op with the network's Chain ID and datatoken address circled in red</p></figcaption></figure>
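+
+If you prefer to resolve this programmatically, here is a minimal sketch (Node.js 18+ with the global `fetch`; the `nftAddress` and `datatokens` field names are assumed from the standard Aquarius DDO response and may need adjusting) that fetches the same DDO and prints the chain ID, data NFT address, and datatoken address:
+
+```javascript
+// Minimal sketch: resolve a DID via Aquarius and print the chain ID,
+// data NFT address, and datatoken address(es) from the returned DDO.
+const did = 'did:op:1b26eda361c6b6d307c8a139c4aaf36aa74411215c31b751cad42e59881f92c1'
+const url = `https://v4.aquarius.oceanprotocol.com/api/aquarius/assets/ddo/${did}`
+
+fetch(url)
+  .then((res) => res.json())
+  .then((ddo) => {
+    console.log('chainId:', ddo.chainId)
+    console.log('data NFT:', ddo.nftAddress)
+    for (const dt of ddo.datatokens || []) {
+      console.log('datatoken:', dt.address, `(${dt.symbol})`)
+    }
+  })
+  .catch(console.error)
+```
+
+The printed `chainId` can then be matched against the list of networks below.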
Here are the networks and their corresponding chain IDs: @@ -57,75 +55,3 @@ Here are the networks and their corresponding chain IDs: "development: 8996" ``` -\ -Step 2: Query the Subgraph to see all buyers of the datatoken -------------------------------------------------------------- - -Select the corresponding subgraph URL for the asset's network. Below are some of the popular subgraph URLs, to show you the subgraph URL format. - -``` -https://v4.subgraph.mainnet.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql? -https://v4.subgraph.polygon.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql? -https://v4.subgraph.bsc.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql? -https://v4.subgraph.moonriver.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql? -https://v4.subgraph.energyweb.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql? -https://v4.subgraph.goerli.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql? -https://v4.subgraph.mumbai.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql? -``` - -You can then use the following example Javascript query to list the buyers of the datatoken. - -Note, that you can also copy and paste the contents of the query function below to fetch the same info from the Ocean Subgraph [GraphiQL interface](https://v4.subgraph.mainnet.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph/graphql). - -```runkit nodeVersion="18.x.x" -const axios = require('axios') - -const datatoken = "0xc22bfd40f81c4a28c809f80d05070b95a11829d9".toLowerCase() - -const query = `{ - token(id : "${datatoken}") { - id, - orders( - orderBy: createdTimestamp - orderDirection: desc - first: 1000 - ) { - id - consumer { - id - } - payer { - id - } - reuses { - id - } - block - createdTimestamp - amount - } - } -}` - -const network = "mumbai" -var config = { - method: 'post', - url: `https://v4.subgraph.${network}.oceanprotocol.com/subgraphs/name/oceanprotocol/ocean-subgraph`, - headers: { "Content-Type": "application/json" }, - data: JSON.stringify({ "query": query }) -}; - -axios(config) - .then(function (response) { - const orders = response.data.data.token.orders - console.log(orders) - for (let order of orders) { - console.log('id:' + order.id + ' consumer: ' + order.consumer.id + ' payer: ' + order.payer.id) - } - console.log(response.data.data.token.orders) - }) - .catch(function (error) { - console.log(error); -}); - -``` diff --git a/user-guides/publish-data-nfts.md b/user-guides/publish-data-nfts.md index 7efd6091..93ed6544 100644 --- a/user-guides/publish-data-nfts.md +++ b/user-guides/publish-data-nfts.md @@ -63,7 +63,7 @@ _Mandatory fields are marked with \*_ Tags help the asset to be searchable. If not provided, the list of tags is empty by default. -

<figure><img src="…" alt=""><figcaption><p>Enter the metadata</p></figcaption></figure>
+<figure><img src="…" alt=""><figcaption><p>Enter the metadata</p></figcaption></figure>
Click Continue.