diff --git a/developers/compute-to-data/compute-workflow.md b/developers/compute-to-data/compute-workflow.md
index af140581..8b8f7eb8 100644
--- a/developers/compute-to-data/compute-workflow.md
+++ b/developers/compute-to-data/compute-workflow.md
@@ -13,9 +13,9 @@
 For visual clarity, here's an image of the workflow in action! 🖼️✨
 
 Below, we'll outline each step in detail 📝
 
 ## Starting a C2D Job
 
-1. The consumer selects a preferred environment from the provider's list and initiates a compute-to-data job by choosing a data asset-algorithm pair.
+1. The consumer selects a preferred environment from the provider's list and initiates a compute-to-data job by choosing a dataset-algorithm pair.
 2. The provider checks the orders on the blockchain.
-3. If the orders for data asset, algorithm and compute environment fees are valid, the provider can start the compute flow.
+3. If the orders for dataset, algorithm and compute environment fees are valid, the provider can start the compute flow.
 4. The provider informs the consumer of the job id's successful creation.
 5. With the job ID and confirmation of the orders, the provider starts the job by calling the operator service.
 6. The operator service adds the new job in its local jobs queue.
@@ -28,14 +28,14 @@
 11. The volumes are created and allocated to the pod.
 12. After volume creation and allocation, the operator engine starts "pod-configuration" as a new pod in the cluster.
 
-## Loading Assets and Algorithms
+## Loading Datasets and Algorithms
 
-13. Pod-configuration requests the necessary data asset(s) and algorithm from their respective providers.
+13. Pod-configuration requests the necessary dataset(s) and algorithm from their respective providers.
 14. The files are downloaded by the pod configuration via the provider.
-15. The pod configuration writes the assets in the job volume.
+15. The pod configuration writes the datasets in the job volume.
 16. The pod configuration informs the operator engine that it's ready to start the job.
 
-## Running the Algorithm on Data Asset(s)
+## Running the Algorithm on Dataset(s)
 
-17. The operator engine launches the algorithm pod on the Kubernetes cluster, with volume containing data asset(s) and algorithm mounted.
+17. The operator engine launches the algorithm pod on the Kubernetes cluster, with volume containing dataset(s) and algorithm mounted.
 18. Kubernetes runs the algorithm pod.
 19. The Operator engine monitors the algorithm, stopping it if it exceeds the specified time limit based on the chosen environment.
 20. Now that the results are available, the operator engine starts "pod-publishing".
@@ -52,13 +52,13 @@
 27. The consumer retrieves job details by calling the provider's `get job details`.
 28. The provider communicates with the operator service to fetch job details.
 29. The operator service returns the job details to the provider.
-30. With the job details, the provider can share them with the asset consumer.
+30. With the job details, the provider can share them with the dataset consumer.
 
 ## Retrieving Job Results
 
-31. Equipped with job details, the asset consumer can retrieve the results from the recently executed job.
+31. Equipped with job details, the dataset consumer can retrieve the results from the recently executed job.
 32. The provider engages the operator engine to access the job results.
 33. As the operator service lacks access to this information, it uses the output volume to fetch the results.
 34. The output volume provides the stored job results to the operator service.
 35. The operator service shares the results with the provider.
-36. The provider then delivers the results to the asset consumer.
+36. The provider then delivers the results to the dataset consumer.
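For reviewers unfamiliar with the workflow this page documents, the consumer → provider → operator-service hand-off (steps 1-6 and 27-36) can be sketched as a toy in-memory simulation. This is purely illustrative: the class and method names (`OperatorService`, `Provider.start_compute`, `Provider.get_job_results`) are hypothetical and do not correspond to the real Ocean Provider or Operator Service HTTP APIs.

```python
class OperatorService:
    """Minimal stand-in for the Operator Service's local jobs queue (step 6)."""

    def __init__(self):
        self.queue = []    # job ids waiting to be picked up by the operator engine
        self.results = {}  # job id -> published results (filled in steps 21-26)

    def add_job(self, job_id):
        self.queue.append(job_id)

    def store_results(self, job_id, payload):
        self.results[job_id] = payload

    def get_results(self, job_id):
        return self.results.get(job_id)


class Provider:
    """Minimal stand-in for the provider's role in steps 2-5 and 31-36."""

    def __init__(self, operator):
        self.operator = operator
        self._counter = 0

    def start_compute(self, dataset_order_valid, algo_order_valid, env_order_valid):
        # Steps 2-3: check the orders before starting the compute flow.
        if not (dataset_order_valid and algo_order_valid and env_order_valid):
            raise ValueError("invalid orders: compute flow not started")
        # Steps 4-5: create a job id and hand the job to the operator service.
        self._counter += 1
        job_id = f"job-{self._counter}"
        self.operator.add_job(job_id)
        return job_id

    def get_job_results(self, job_id):
        # Steps 32-36: fetch the results via the operator service and return them.
        return self.operator.get_results(job_id)


operator = OperatorService()
provider = Provider(operator)
job_id = provider.start_compute(True, True, True)  # step 1: consumer starts a job
operator.store_results(job_id, b"trained-model")   # steps 7-26 happen in the cluster
print(job_id, provider.get_job_results(job_id))    # job-1 b'trained-model'
```

The real flow replaces the direct method calls with HTTP requests and blockchain order checks, but the message ordering is the same as in the numbered steps above.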