diff --git a/content/concepts/compute-to-data.md b/content/concepts/compute-to-data.md
index 8983898d..eec7f514 100644
--- a/content/concepts/compute-to-data.md
+++ b/content/concepts/compute-to-data.md
@@ -9,16 +9,12 @@ section: concepts
 ## Motivation
 
 The most basic scenario for a Publisher is to provide access to the datasets they own or manage.
-In addition to that, a Publisher could offer other data-related services.
-Some possibilities are:
-
-1. A service to execute some computation on top of their data. This has some benefits:
+In addition to that, a Publisher could offer a service to execute some computation on top of their data. This has some benefits:
 
 - The data **never** leaves the Publisher enclave.
 - It's not necessary to move the data; the algorithm is sent to the data.
 - Having only one copy of the data and not moving it makes it easier to be compliant with data protection regulations.
 
-2. A service to store newly-derived datasets. As a result of the computation on existing datasets, a new dataset could be created. Publishers could offer a storage service to make use of their existing storage capabilities. This is optional; users could also download the newly-derived datasets.
 
 ## Architecture
@@ -53,10 +49,10 @@
-but can be called independently if it.
+but can be called independently of it.
 
-The Operator Service is in charge of stablishing the communication with the K8s cluster, allowing to:
+The Operator Service is in charge of establishing the communication with the K8s cluster, which allows it to:
 
-* Register workflows as K8s objects
-* List the workflows registered in K8s
-* Stop a running workflow execution
-* Get information about the state of execution of a workflow
+- Register workflows as K8s objects
+- List the workflows registered in K8s
+- Stop a running workflow execution
+- Get information about the state of execution of a workflow
 
-The Operator Service doesn't provide any storage capability, all the state is stored directly in the K8s cluster.
+The Operator Service doesn't provide any storage capability; all the state is stored directly in the K8s cluster.
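+
+As an illustration, a client could drive these four operations over plain HTTP. The sketch below is hypothetical: the endpoint paths, the payload shape, and `OPERATOR_SERVICE_URL` are assumptions made for this example, not the service's documented API.
+
+```js
+// Hypothetical client for the four Operator Service operations above.
+// Endpoint paths and payload fields are illustrative assumptions.
+const OPERATOR_SERVICE_URL = 'https://operator.example.com' // placeholder
+
+// Register a workflow as a K8s object
+async function startWorkflow(workflow) {
+  const response = await fetch(`${OPERATOR_SERVICE_URL}/api/v1/operator/compute`, {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: JSON.stringify({ workflow })
+  })
+  return response.json()
+}
+
+// List the workflows registered in K8s
+const listWorkflows = () =>
+  fetch(`${OPERATOR_SERVICE_URL}/api/v1/operator/compute`).then(res => res.json())
+
+// Stop a running workflow execution
+const stopWorkflow = workflowId =>
+  fetch(`${OPERATOR_SERVICE_URL}/api/v1/operator/compute/${workflowId}`, { method: 'DELETE' })
+
+// Get information about the state of execution of a workflow
+const getWorkflowStatus = workflowId =>
+  fetch(`${OPERATOR_SERVICE_URL}/api/v1/operator/compute/${workflowId}/status`).then(res => res.json())
+```
+
+In this sketch one resource path covers registration and listing, mirroring the list above; the real service may organize its routes differently.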
@@ -67,12 +63,12 @@
 The main responsibilities are:
 
-* Expose an HTTP API allowing for the execution of data access and compute endpoints.
-* Authorize the user on-chain using the proper Service Agreement. That is, validate that the user requesting the service is allowed to use that service.
-* Interact with the infrastructure (cloud/on-premise) using the Publisher's credentials.
-* Start/stop/execute computing instances with the algorithms provided by users.
-* Retrieve the logs generated during executions.
-* Register newly-derived assets arising from the executions (i.e. as new Ocean assets) (if required by the consumer).
+- Expose an HTTP API offering data access and compute endpoints.
+- Authorize the user on-chain using the proper Service Agreement. That is, validate that the user requesting the service is allowed to use that service.
+- Interact with the infrastructure (cloud/on-premise) using the Publisher's credentials.
+- Start/stop/execute computing instances with the algorithms provided by users.
+- Retrieve the logs generated during executions.
+- Register newly-derived assets arising from the executions as new Ocean assets, if required by the consumer.
 
 ### Flow
@@ -81,14 +77,14 @@
 
-In the above diagram you can see the initial integration supported. It involves the following components/actors:
+The diagram above shows the initially supported integration. It involves the following components/actors:
 
-* Data Scientists/Consumers - The end users who need to use some computing services offered by the same Publisher as the data Publisher.
-* Ocean Keeper - In charge of enforcing the Service Agreement by tracing conditions.
-* Operator-Service - Micro-service that is handling the compute requests.
-* Operator-Engine - The computing systems where the compute will be executed.
+- Data Scientists/Consumers - The end users who need computing services offered by the same Publisher that publishes the data.
+- Ocean Keeper - In charge of enforcing the Service Agreement by tracing conditions.
+- Operator-Service - Micro-service that handles the compute requests.
+- Operator-Engine - The computing system where the compute jobs are executed.
 
 Before the flow can begin, the following pre-conditions must be met:
 
-* The Asset DDO has a compute service.
-* The Asset DDO must specify the Brizo endpoint exposed by the Publisher.
-* The Service Agreement template must already be predefined and whitelisted `on-chain`.
+- The Asset DDO has a compute service.
+- The Asset DDO must specify the Brizo endpoint exposed by the Publisher.
+- The Service Agreement template must already be predefined and whitelisted `on-chain`.
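+
+To make these pre-conditions concrete, the compute service entry inside an Asset DDO could look roughly like the sketch below. This is illustrative only: the field names and both values are assumptions, not the normative DDO schema.
+
+```js
+// Illustrative sketch of a compute service entry in an Asset DDO.
+// Field names and values are assumptions, not the normative schema.
+const computeService = {
+  type: 'compute', // the DDO has a compute service
+  // the Brizo endpoint exposed by the Publisher
+  serviceEndpoint: 'https://brizo.example.com/api/v1/brizo/services/compute',
+  // references a Service Agreement template that is already
+  // predefined and whitelisted on-chain
+  templateId: '0x1234...abcd'
+}
+```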
diff --git a/content/concepts/testnets.md b/content/concepts/testnets.md
index 7d05e893..65ed9147 100644
--- a/content/concepts/testnets.md
+++ b/content/concepts/testnets.md
@@ -64,11 +64,6 @@
 A local testnet similar to Spree but launched by using the `--local-ganache-node` option.
 
 > You shouldn't use a Ganache-Based Testnet unless you know why you're doing so. For example, a Ganache-based testnet can be used to test some smart contracts, but it can't be used with a Secret Store.
 
-## The Duero Testnet
-
-The Duero Testnet is similar to the Nile Testnet, but it's only for internal use by the Ocean Protocol dev team. They test new things in the Duero Testnet before deploying them in the Nile Testnet (which is for use by anyone). That is, the testing order is Spree (local), Duero (private), Nile (public).
-
-If you need to know something technical about the Duero Testnet, such as the RPC URL, please contact the Ocean Protocol dev team.
 
 [^1]: Formerly called Ocean Protocol Testnet v0.1, it was announced as part of the Plankton milestone.
 [^2]: Also known as the Nile Beta Network. Formerly called the Ocean POA Testnet.
diff --git a/content/tutorials/react-compute-published-algorithm.md b/content/tutorials/react-compute-published-algorithm.md
index abf06863..db6d3260 100644
--- a/content/tutorials/react-compute-published-algorithm.md
+++ b/content/tutorials/react-compute-published-algorithm.md
@@ -9,7 +9,7 @@
 This is a continuation of the [React App Setup](/tutorials/react-setup/) tutorial:
 
 1. [React App Setup](/tutorials/react-setup/)
 
-Open `src/index.js` from your `marketplace/` folder.
+Open `src/Compute.js` from your `marketplace/` folder.
 
 ## Define Compute Output
diff --git a/content/tutorials/react-compute-raw.md b/content/tutorials/react-compute-raw.md
index 0ef9603f..990b586e 100644
--- a/content/tutorials/react-compute-raw.md
+++ b/content/tutorials/react-compute-raw.md
@@ -9,7 +9,7 @@
 This is a continuation of the [React App Setup](/tutorials/react-setup/) tutorial:
 
 1. [React App Setup](/tutorials/react-setup/)
 
-Open `src/index.js` from your `marketplace/` folder.
+Open `src/Compute.js` from your `marketplace/` folder.
 
 ## Define Raw Code
diff --git a/content/tutorials/react-publish-algorithm.md b/content/tutorials/react-publish-algorithm.md
index caf1775b..dbc4f1c7 100644
--- a/content/tutorials/react-publish-algorithm.md
+++ b/content/tutorials/react-publish-algorithm.md
@@ -9,7 +9,7 @@
 This is a continuation of the [React App Setup](/tutorials/react-setup/) tutorial:
 
 1. [React App Setup](/tutorials/react-setup/)
 
-Open `src/index.js` from your `marketplace/` folder.
+Open `src/Compute.js` from your `marketplace/` folder.
 
 ## Define Asset
diff --git a/content/tutorials/react-publish-data-set-compute.md b/content/tutorials/react-publish-data-set-compute.md
index 35c8b06e..c1662cba 100644
--- a/content/tutorials/react-publish-data-set-compute.md
+++ b/content/tutorials/react-publish-data-set-compute.md
@@ -9,7 +9,7 @@
 This is a continuation of the [React App Setup](/tutorials/react-setup/) tutorial:
 
 1. [React App Setup](/tutorials/react-setup/)
 
-Open `src/index.js` from your `marketplace/` folder.
+Open `src/Compute.js` from your `marketplace/` folder.
 
 ## Define Asset
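+
+If you want to check that you're editing the right file before continuing, the asset definition will live inside this component. The skeletal sketch below is illustrative only; the field names and values are assumptions, and the tutorial's own code is the source of truth.
+
+```js
+// Illustrative sketch only: the real asset definition is given in this tutorial.
+// All field names and values here are assumptions.
+const asset = {
+  main: {
+    name: 'Example data set',
+    dateCreated: '2019-01-01T00:00:00Z',
+    author: 'Example Author',
+    license: 'CC0: Public Domain',
+    files: [{ url: 'https://example.com/data.csv' }]
+  }
+}
+```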