
Update the data asset(s) label to dataset

commit 1aad928d59 (parent f2c513d956)
Author: Ana Loznianu
Date: 2023-11-06 17:34:13 +02:00

@@ -13,9 +13,9 @@ For visual clarity, here's an image of the workflow in action! 🖼️✨
 Below, we'll outline each step in detail 📝
 ## Starting a C2D Job
-1. The consumer selects a preferred environment from the provider's list and initiates a compute-to-data job by choosing a data asset-algorithm pair.
+1. The consumer selects a preferred environment from the provider's list and initiates a compute-to-data job by choosing a dataset-algorithm pair.
 2. The provider checks the orders on the blockchain.
-3. If the orders for data asset, algorithm and compute environment fees are valid, the provider can start the compute flow.
+3. If the orders for dataset, algorithm and compute environment fees are valid, the provider can start the compute flow.
 4. The provider informs the consumer of the job id's successful creation.
 5. With the job ID and confirmation of the orders, the provider starts the job by calling the operator service.
 6. The operator service adds the new job in its local jobs queue.
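
Steps 1-6 in this hunk describe the consumer → provider → operator-service handshake that kicks off a job. As a rough, non-authoritative sketch of the consumer's side (the base URL, endpoint paths, and payload fields below are illustrative assumptions, not the documented Provider API):

```python
import requests

# Hypothetical Provider base URL, for illustration only.
PROVIDER_URL = "https://provider.example.com"

# Step 1: fetch the provider's list of compute environments and pick one.
environments = requests.get(f"{PROVIDER_URL}/computeEnvironments", timeout=30).json()
chosen_env = environments[0]["id"]

# Step 1 (cont.): ask the provider to start a C2D job for a dataset-algorithm pair.
# The provider validates the on-chain orders (steps 2-3) before starting the flow.
start_payload = {
    "environment": chosen_env,
    "dataset": {"documentId": "did:op:<dataset-did>", "transferTxId": "0x<order-tx>"},
    "algorithm": {"documentId": "did:op:<algorithm-did>", "transferTxId": "0x<order-tx>"},
}
response = requests.post(f"{PROVIDER_URL}/compute", json=start_payload, timeout=30)
response.raise_for_status()

# Step 4: on success the provider responds with the newly created job id
# (assumed here to arrive as a one-element list).
job_id = response.json()[0]["jobId"]
print("C2D job started:", job_id)
```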
@@ -28,14 +28,14 @@ Below, we'll outline each step in detail 📝
 11. The volumes are created and allocated to the pod.
 12. After volume creation and allocation, the operator engine starts "pod-configuration" as a new pod in the cluster.
-## Loading Assets and Algorithms
+## Loading Datasets and Algorithms
-13. Pod-configuration requests the necessary data asset(s) and algorithm from their respective providers.
+13. Pod-configuration requests the necessary dataset(s) and algorithm from their respective providers.
 14. The files are downloaded by the pod configuration via the provider.
-15. The pod configuration writes the assets in the job volume.
+15. The pod configuration writes the datasets in the job volume.
 16. The pod configuration informs the operator engine that it's ready to start the job.
-## Running the Algorithm on Data Asset(s)
+## Running the Algorithm on Dataset(s)
-17. The operator engine launches the algorithm pod on the Kubernetes cluster, with volume containing data asset(s) and algorithm mounted.
+17. The operator engine launches the algorithm pod on the Kubernetes cluster, with volume containing dataset(s) and algorithm mounted.
 18. Kubernetes runs the algorithm pod.
 19. The Operator engine monitors the algorithm, stopping it if it exceeds the specified time limit based on the chosen environment.
 20. Now that the results are available, the operator engine starts "pod-publishing".
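
Steps 13-16 in this hunk run inside the cluster, in the "pod-configuration" pod, rather than on the consumer's machine. Below is a minimal sketch of that idea only, assuming illustrative download URLs and assumed /data/inputs and /data/transformations mount points inside the job volume (none of these names are taken from the actual pod-configuration implementation):

```python
import os
import requests

# Illustrative inputs; in the real flow these come from the job specification.
DATASET_URLS = {"did:op:<dataset-did>": "https://provider.example.com/download?fileIndex=0"}
ALGO_URL = "https://provider.example.com/download?algorithm=true"

# Assumed mount points for the job volume.
INPUT_DIR = "/data/inputs"
ALGO_DIR = "/data/transformations"

def fetch(url: str, dest: str) -> None:
    """Download one file via the provider and write it into the job volume (steps 13-15)."""
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as out:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                out.write(chunk)

for did, url in DATASET_URLS.items():
    fetch(url, os.path.join(INPUT_DIR, did, "0"))
fetch(ALGO_URL, os.path.join(ALGO_DIR, "algorithm"))

# Step 16: signal the operator engine that the volume is ready (mechanism omitted here).
print("Job volume populated; algorithm pod can be started.")
```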
@@ -52,13 +52,13 @@ Below, we'll outline each step in detail 📝
 27. The consumer retrieves job details by calling the provider's `get job details`.
 28. The provider communicates with the operator service to fetch job details.
 29. The operator service returns the job details to the provider.
-30. With the job details, the provider can share them with the asset consumer.
+30. With the job details, the provider can share them with the dataset consumer.
 ## Retrieving Job Results
-31. Equipped with job details, the asset consumer can retrieve the results from the recently executed job.
+31. Equipped with job details, the dataset consumer can retrieve the results from the recently executed job.
 32. The provider engages the operator engine to access the job results.
 33. As the operator service lacks access to this information, it uses the output volume to fetch the results.
 34. The output volume provides the stored job results to the operator service.
 35. The operator service shares the results with the provider.
-36. The provider then delivers the results to the asset consumer.
+36. The provider then delivers the results to the dataset consumer.
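
From the consumer's point of view, steps 27-36 reduce to two calls against the provider: one for the job details and one for the results; the provider and operator service handle the rest behind the scenes. Here is a sketch under the same assumptions as above (endpoint paths, parameter names, and response fields are illustrative, not the documented API):

```python
import requests

PROVIDER_URL = "https://provider.example.com"  # hypothetical provider, as above
job_id = "<job-id-returned-at-start>"          # illustrative placeholder

# Steps 27-30: ask the provider for the job details; it relays the request to
# the operator service and passes the answer back.
details = requests.get(f"{PROVIDER_URL}/compute", params={"jobId": job_id}, timeout=30)
details.raise_for_status()
print("Job details:", details.json())

# Steps 31-36: once the job has finished, fetch the stored results; behind this
# call the results are read from the output volume and relayed back via the provider.
result = requests.get(
    f"{PROVIDER_URL}/computeResult",
    params={"jobId": job_id, "index": 0},
    timeout=60,
)
result.raise_for_status()
with open("result_0", "wb") as out:
    out.write(result.content)
```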