diff --git a/.gitbook/assets/c2d/c2d_detailed_flow.png b/.gitbook/assets/c2d/c2d_detailed_flow.png new file mode 100644 index 00000000..60e0c79d Binary files /dev/null and b/.gitbook/assets/c2d/c2d_detailed_flow.png differ diff --git a/SUMMARY.md b/SUMMARY.md index a47356a9..55a4d67a 100644 --- a/SUMMARY.md +++ b/SUMMARY.md @@ -79,6 +79,7 @@ - [Compute to data](developers/compute-to-data/README.md) - [Architecture](developers/compute-to-data/compute-to-data-architecture.md) - [Datasets & Algorithms](developers/compute-to-data/compute-to-data-datasets-algorithms.md) + - [Workflow](developers/compute-to-data/compute-workflow.md) - [Writing Algorithms](developers/compute-to-data/compute-to-data-algorithms.md) - [Compute Options](developers/compute-to-data/compute-options.md) - [Aquarius](developers/aquarius/README.md) diff --git a/developers/aquarius/README.md b/developers/aquarius/README.md index 9691b788..748d545e 100644 --- a/developers/aquarius/README.md +++ b/developers/aquarius/README.md @@ -8,8 +8,6 @@ The core job of Aquarius is to continually look out for new metadata being creat Aquarius has its own interface (API) that allows you to easily query this metadata. With Aquarius, you don't need to do the time-consuming task of scanning the data chains yourself. It offers you a convenient shortcut to the information you need. It's ideal for when you need a search feature within your dApp. - - ### What does Aquarius do? diff --git a/developers/compute-to-data/README.md b/developers/compute-to-data/README.md index e2d1e526..fd18c1b1 100644 --- a/developers/compute-to-data/README.md +++ b/developers/compute-to-data/README.md @@ -15,7 +15,7 @@ Private data holds immense value as it can significantly enhance research and bu Private data has the potential to drive groundbreaking discoveries in science and technology, with increased data improving the predictive accuracy of modern AI models. 
Due to its scarcity and the challenges associated with accessing it, private data is often regarded as the most valuable. By utilizing private data through Compute-to-Data, significant rewards can be reaped, leading to transformative advancements and innovative breakthroughs. {% hint style="info" %} -The Ocean Protocol provides a compute environment that you can access at the following address: [https://stagev4.c2d.oceanprotocol.com/](https://stagev4.c2d.oceanprotocol.com/). Feel free to explore and utilize this platform for your needs. +The Ocean Protocol provides a compute environment that you can access at the following [address](https://stagev4.c2d.oceanprotocol.com/). Feel free to explore and utilize this platform for your needs. {% endhint %} We suggest reading these guides to get an understanding of how compute-to-data works: diff --git a/developers/compute-to-data/compute-to-data.md b/developers/compute-to-data/compute-to-data.md deleted file mode 100644 index d3bfff50..00000000 --- a/developers/compute-to-data/compute-to-data.md +++ /dev/null @@ -1,32 +0,0 @@ ---- -title: Compute-to-Data -slug: /concepts/compute-to-data/ -section: concepts -description: Providing access to data in a privacy-preserving fashion ---- - -# Compute-to-Data - -### Quick Start - -* [Compute-to-Data example](https://github.com/oceanprotocol/ocean.py/blob/main/READMEs/c2d-flow.md) - -### Motivation - -The most basic scenario for a Publisher is to provide access to the datasets they own or manage. However, a Publisher may offer a service to execute some computation on top of their data. This has some benefits: - -* The data **never** leaves the Publisher enclave. -* It's not necessary to move the data; the algorithm is sent to the data. -* Having only one copy of the data and not moving it makes it easier to be compliant with data protection regulations. - -[This page](https://oceanprotocol.com/technology/compute-to-data) elaborates on the benefits. 
-
-### Further Reading
-
-* [Compute-to-Data architecture](compute-to-data-architecture.md)
-* [Tutorial: Writing Algorithms](compute-to-data-algorithms.md)
-* [Tutorial: Set Up a Compute-to-Data Environment](../../infrastructure/compute-to-data-minikube.md)
-* [Use Compute-to-Data in Ocean Market](https://blog.oceanprotocol.com/compute-to-data-is-now-available-in-ocean-market-58868be52ef7)
-* [Build ML models via Ocean Market or Python](https://medium.com/ravenprotocol/machine-learning-series-using-logistic-regression-for-classification-in-oceans-compute-to-data-18df49b6b165)
-* [Compute-to-Data Python Quickstart](https://github.com/oceanprotocol/ocean.py/blob/main/READMEs/c2d-flow.md)
-* [(Old) Compute-to-Data specs](https://github.com/oceanprotocol-archive/OEPs/tree/master/12) (OEP12)
diff --git a/developers/compute-to-data/compute-workflow.md b/developers/compute-to-data/compute-workflow.md
new file mode 100644
index 00000000..d97dcf67
--- /dev/null
+++ b/developers/compute-to-data/compute-workflow.md
@@ -0,0 +1,64 @@
+---
+title: Compute Workflow
+section: developers
+description: Understanding the Compute-to-Data (C2D) Workflow
+---
+
+🚀 Now that we've introduced the key actors and provided an overview of the process, it's time to delve into the nitty-gritty of the compute workflow. We'll dissect each step, examining the inner workings of Compute-to-Data (C2D). From data selection to secure computations, we'll leave no stone unturned in this exploration.
+
+For visual clarity, here's an image of the workflow in action! 🖼️✨
+
+![C2D detailed flow](../../.gitbook/assets/c2d/c2d_detailed_flow.png)
+
+Below, we'll outline each step in detail. 📝
+
+## Starting a C2D Job
+1. The consumer selects a preferred environment from the provider's list and initiates a compute-to-data job by choosing a dataset-algorithm pair.
+2. The provider checks the orders on the blockchain.
+3. If the orders for the dataset, algorithm, and compute environment fees are valid, the provider can start the compute flow.
+4. 
The provider informs the consumer that the job was created successfully and returns the job ID.
+5. With the job ID and confirmation of the orders, the provider starts the job by calling the operator service.
+6. The operator service adds the new job to its local job queue.
+7. The operator engine periodically polls the operator service for pending jobs. If resources are available for a new job, it requests the job list from the operator service to decide whether to initiate a new job.
+8. The operator service provides the list of jobs, and the operator engine is then ready to start a new job.
+
+## Creating the K8s Cluster and Allocating Job Volumes
+9. As a new job begins, the operator engine creates the job volumes on the Kubernetes cluster.
+10. The cluster creates and allocates the requested volumes for the job.
+11. The volumes are attached to the pod.
+12. After volume creation and allocation, the operator engine starts "pod-configuration" as a new pod in the cluster.
+
+## Loading Datasets and Algorithms
+13. Pod-configuration requests the necessary dataset(s) and algorithm from their respective providers.
+14. The files are downloaded by the pod configuration via the provider.
+15. The pod configuration writes the datasets to the job volume.
+16. The pod configuration informs the operator engine that it is ready to start the job.
+
+## Running the Algorithm on Dataset(s)
+17. The operator engine launches the algorithm pod on the Kubernetes cluster, with the volume containing the dataset(s) and algorithm mounted.
+18. Kubernetes runs the algorithm pod.
+19. The operator engine monitors the algorithm, stopping it if it exceeds the time limit of the chosen environment.
+20. Once the results are available, the operator engine starts "pod-publishing".
+21. The pod publishing uploads the results, logs, and admin logs to the output volume.
+22. 
Upon successful upload, the pod publishing notifies the operator engine, allowing it to clean up the job volumes.
+
+## Cleaning Up Volumes and Allocated Space
+23. The operator engine deletes the K8s volumes.
+24. The Kubernetes cluster removes all used volumes.
+25. Once the volumes are deleted, the operator engine finalizes the job.
+26. The operator engine informs the operator service that the job is completed and the results are now accessible.
+
+## Retrieving Job Details
+27. The consumer retrieves job details by calling the provider's `get job details` endpoint.
+28. The provider communicates with the operator service to fetch the job details.
+29. The operator service returns the job details to the provider.
+30. The provider then shares the job details with the dataset consumer.
+
+## Retrieving Job Results
+31. Equipped with the job details, the dataset consumer can retrieve the results of the recently executed job.
+32. The provider engages the operator service to access the job results.
+33. As the operator service lacks direct access to this information, it uses the output volume to fetch the results.
+34. The output volume provides the stored job results to the operator service.
+35. The operator service shares the results with the provider.
+36. The provider then delivers the results to the dataset consumer. 
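From the consumer's side, steps 27-36 boil down to polling for job details until the run finishes and then fetching the results. The loop below is a minimal, self-contained sketch of that pattern; the `fetch_status` callable and the status strings are hypothetical stand-ins, not the operator service's actual status codes:

```python
import time
from typing import Callable

# Hypothetical terminal states; the real operator service defines its own codes.
TERMINAL_STATES = {"succeeded", "failed"}

def wait_for_job(fetch_status: Callable[[], str],
                 poll_interval: float = 1.0,
                 timeout: float = 60.0) -> str:
    """Poll fetch_status until the job reaches a terminal state or we time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in TERMINAL_STATES:
            return status
        time.sleep(poll_interval)
    raise TimeoutError("compute job did not finish in time")

# Example with a stubbed provider that succeeds on the third poll
statuses = iter(["queued", "running", "succeeded"])
print(wait_for_job(lambda: next(statuses), poll_interval=0.01))
```

In practice, `fetch_status` would wrap the provider's job-details endpoint, and a client such as ocean.py performs this polling for you.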
+
diff --git a/developers/compute-to-data/user-defined-parameters.md b/developers/compute-to-data/user-defined-parameters.md
deleted file mode 100644
index 4ae86f47..00000000
--- a/developers/compute-to-data/user-defined-parameters.md
+++ /dev/null
@@ -1,186 +0,0 @@
----
-description: >-
-  Learn how to define and use custom parameters while downloading assets or
-  using a dataset in a Compute-to-Data environment
----
-
-# User defined parameters
-
-### Overview
-
-Ocean Protocol allows dataset buyers to provide custom parameters that can be used to fetch the downloaded data in a specific format, download a different type of data, or pass additional input to the algorithms in a Compute-to-Data job.
-
-There are 2 types of parameters that asset publishers can support:
-
-- User defined parameters
-- Algorithm custom parameters
-
-### Publish a dataset that uses custom parameters
-
-The dataset publisher can support these parameters to allow filtering or querying of the published dataset. The additional parameters that facilitate this are called `User defined parameters`. The Provider combines the original asset URL and the entered parameter values into a new URL and then streams the response from the modified URL back to the buyer.
-
-#### Use case for user defined parameters
-
-For example, if the publisher has published a URL `https://example.com` that serves large historical weather datasets from all over the world, the publisher could allow buyers to filter the data based on location, type of data, etc. This is possible using user defined parameters.
-
-Suppose the publisher defines the following 2 parameters:
-
-- `location`: A string indicating the region code
-- `type`: A string indicating the type of weather data. It can be temperature/humidity/pressure.
-
-Suppose the buyer wants to download the temperature data for region code `XYZ`. While downloading the data, the buyer enters the desired parameter values using ocean.py. 
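The Provider combines the decrypted base URL with these values. A minimal sketch of that URL construction using only the standard library (the base URL and parameter names come from the example above):

```python
from urllib.parse import urlencode

# Parameter values entered by the buyer (from the example above)
userdata = {"location": "XYZ", "type": "temperature"}

# The Provider decrypts the original URL from the DDO, then appends the parameters
base_url = "https://example.com"
request_url = f"{base_url}/?{urlencode(userdata)}"
print(request_url)  # https://example.com/?location=XYZ&type=temperature
```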
-
-The provider will decrypt the URL from the DDO published on-chain, construct a new URL with the additional parameters, and finally stream the data to the buyer.
-
-Internally, the new URL will be of the format `https://example.com/?location=XYZ&type=temperature`. The server hosting the data has to read these parameters and serve the appropriate data.
-
-The following steps specify how the publisher can support additional parameters.
-
-#### Step 1: Create a service
-
-The Python script below exposes a REST endpoint that takes two parameters: `location` and `type`. Let's assume that the dataset publisher hosts the service at the domain `example.com` with HTTPS support. The publisher must ensure that the URL is accessible to the Provider.
-
-The code snippet is only for demo purposes and not for production use.
-
-```python
-from flask import Flask, request
-
-app = Flask(__name__)
-
-def get_data(data_type: str, location: str):
-    '''
-    Add some business logic here to get
-    the required data with given parameters
-    '''
-    return {}
-
-@app.route('/', methods=['GET'])
-def serve_content():
-    args = request.args
-    data_type = args.get('type')
-    location = args.get('location')
-    result = get_data(data_type, location)
-    return result
-```
-
-#### Step 2: Publish dataset asset with compute service
-
-The publisher now must provide the file URL as `https://example.com` while publishing the asset, as shown in the image below.
-
-![Compute to data parameters](../../.gitbook/assets/c2d/compute-to-data-parameters-publish-dataset.png)
-
-For a complete tutorial on publishing an asset using Ocean Marketplace, read [our guide on publishing with Ocean Market](../../user-guides/publish-data-nfts.md).
-
-### Publish an algorithm that uses custom parameters
-
-#### Use case for algorithm custom parameters
-
-For example, suppose the algorithm publisher has published a URL `https://example.org` that serves a Python script to analyze the historical weather data published in the previous section. 
If the algorithm publisher wants buyers to specify the number of iterations the algorithm must perform over the data, it is possible to do so using algorithm custom parameters.
-
-Suppose the algorithm publisher defines a parameter called `iterations` and expects the buyer to provide this input before running the algorithm in a Compute-to-Data environment. The buyer can enter the desired parameter value using ocean.py or ocean.js.
-
-The provider passes the entered parameters to the Compute-to-Data environment, where they are saved at a specific path. The algorithm can later read this value and perform the required computations.
-
-The following steps specify how the algorithm publisher can support additional algorithm custom parameters.
-
-#### Step 1: Create an algorithm
-
-The code snippet is only for demo purposes and not for production use.
-
-```python
-import json
-import os
-
-def run_algorithm(i: int):
-    # Add the algorithm logic here
-    return f"result of iteration {i}"
-
-def read_algorithm_custom_input():
-    parameters_file = os.path.join(os.sep, "data", "inputs", "algoCustomData.json")
-    with open(parameters_file, "r") as file:
-        return json.load(file)
-
-algorithm_inputs = read_algorithm_custom_input()
-iterations = algorithm_inputs["iterations"]
-results = []
-for i in range(iterations):
-    # Run some machine learning algorithm
-    print(f"Running iteration {i}")
-    results.append(run_algorithm(i))
-
-output_dir = os.path.join(os.sep, "data", "outputs")
-with open(os.path.join(output_dir, "result"), "w") as f:
-    f.write("\n".join(results))
-```
-
-#### Step 2: Publish algorithm asset
-
-The publisher now must provide the file URL as `https://example.org` while publishing the algorithm asset, as shown in the image below.
-
-![Publish algorithm asset](../../.gitbook/assets/c2d/compute-to-data-parameters-publish-algorithm.png)
-
-For a complete tutorial on publishing an asset using Ocean Marketplace, read [our guide on publishing with Ocean Market](../../user-guides/publish-data-nfts.md). 
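To see the parameter hand-off end to end, the sketch below writes `algoCustomData.json` the way the environment provisions it and reads it back the way the algorithm above does; a temporary directory stands in for the real `/data` mount:

```python
import json
import os
import tempfile

with tempfile.TemporaryDirectory() as workdir:
    # The C2D environment saves the buyer's custom parameters under /data/inputs
    inputs_dir = os.path.join(workdir, "data", "inputs")
    os.makedirs(inputs_dir)
    with open(os.path.join(inputs_dir, "algoCustomData.json"), "w") as f:
        json.dump({"iterations": 3}, f)

    # The algorithm reads the parameters back before running
    with open(os.path.join(inputs_dir, "algoCustomData.json")) as f:
        iterations = json.load(f)["iterations"]

print(iterations)  # 3
```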
-
-### Starting compute job with custom parameters
-
-In this example, the buyer wants to run the algorithm with certain parameters on a selected dataset. The code snippet below shows how the buyer can start the compute job with custom parameter values. Before embarking on this tutorial, you should familiarize yourself with how to:
-
-- Search for a dataset using [Ocean market](https://market.oceanprotocol.com/) or [Aquarius](../aquarius/README.md)
-- [Allow an algorithm to run on the dataset](https://github.com/oceanprotocol/ocean.py/blob/6eb068df338abc7376430cc5ba7fe2d381508328/READMEs/c2d-flow.md#5-alice-allows-the-algorithm-for-c2d-for-that-data-asset)
-- Buy datatokens using [Ocean market](https://market.oceanprotocol.com/) or [ocean.py](https://github.com/oceanprotocol/ocean.py)
-- [Set up ocean.py](../ocean.py/install.md)
-
-{% tabs %}
-{% tab title="Python" %}
-
-```python
-# Import dependencies
-from datetime import datetime, timedelta
-
-from config import web3_wallet, ocean, config
-from ocean_lib.models.compute_input import ComputeInput
-
-# Replace these variables with the appropriate DID values
-dataset_did = "did:op:<>"
-algorithm_did = "did:op:<>"
-
-# Define algorithm input
-algorithm_input = {
-    "iterations": 1000
-}
-
-# Define dataset parameters
-dataset_input = {
-    "type": "temperature",
-    "location": "XYZ"
-}
-
-# Resolve assets using Aquarius
-aquarius = Aquarius.get_instance(config.metadata_cache_uri)
-DATA_asset = aquarius.wait_for_asset(dataset_did)
-ALGO_asset = aquarius.wait_for_asset(algorithm_did)
-
-compute_service = DATA_asset.services[0]
-algo_service = ALGO_asset.services[0]
-free_c2d_env = ocean.compute.get_free_c2d_environment(compute_service.service_endpoint)
-
-DATA_compute_input = ComputeInput(DATA_asset, compute_service, userdata=dataset_input)
-ALGO_compute_input = ComputeInput(ALGO_asset, algo_service)
-
-# Pay for the compute job
-datasets, algorithm = ocean.assets.pay_for_compute_service(
-    datasets=[DATA_compute_input],
-    algorithm_data=ALGO_compute_input,
-    consume_market_order_fee_address=web3_wallet.address,
-    wallet=web3_wallet,
-    compute_environment=free_c2d_env["id"],
-    valid_until=int((datetime.utcnow() + timedelta(days=1)).timestamp()),
-    consumer_address=free_c2d_env["consumerAddress"],
-)
-
-assert datasets, "pay for dataset unsuccessful"
-assert algorithm, "pay for algorithm unsuccessful"
-
-# Start compute job
-job_id = ocean.compute.start(
-    consumer_wallet=web3_wallet,
-    dataset=datasets[0],
-    compute_environment=free_c2d_env["id"],
-    algorithm=algorithm,
-    algorithm_algocustomdata=algorithm_input,
-)
-
-# Print the job ID. Use this job_id to retrieve the result of the compute job.
-print("job_id", job_id)
-```
-
-Execute the script:
-
-```bash
-python start_compute.py
-```
-
-{% endtab %}
-{% endtabs %}