mirror of https://github.com/oceanprotocol/docs.git synced 2024-11-26 19:49:26 +01:00

WIP: Updated the CLI docs. Added the images in the folder structure, removed some of the pages and placed the content in 1

Ana Loznianu 2023-10-03 14:10:54 +03:00
parent 19475ebf22
commit 1bc398da9f
21 changed files with 158 additions and 123 deletions


Updated navigation (SUMMARY):

- [Run C2D Jobs](developers/ocean.js/cod-asset.md)
- [Ocean CLI](developers/ocean-cli/README.md)
  - [Installation](developers/ocean-cli/installation.md)
  - [Publish](developers/ocean-cli/publish.md)
  - [Edit](developers/ocean-cli/edit.md)
  - [Consume](developers/ocean-cli/consume.md)
  - [Run C2D Jobs](developers/ocean-cli/run-c2d.md)
- [Compute to data](developers/compute-to-data/README.md)
  - [Architecture](developers/compute-to-data/compute-to-data-architecture.md)
  - [Datasets & Algorithms](developers/compute-to-data/compute-to-data-datasets-algorithms.md)

---
description: >-
  CLI tool to interact with the oceanprotocol's JavaScript library to privately & securely publish, consume and run compute on data.
---
# Ocean CLI 🌊
Welcome to the Ocean CLI, your powerful command-line tool for seamless interaction with Ocean Protocol's data-sharing capabilities. 🚀
The Ocean CLI offers a wide range of functionalities, enabling you to:
- [**Publish**](./publish.md) 📤 data services: downloadable files or compute-to-data.
- [**Edit**](./edit.md) ✏️ existing assets.
- [**Consume**](./consume.md) 📥 data services, ordering datatokens and downloading data.
- [**Compute to Data**](./run-c2d.md) 💻 on publicly available datasets using a published algorithm.
## Key Information
The Ocean CLI is powered by the [ocean.js](../ocean.js/README.md) JavaScript library, an integral part of the [Ocean Protocol](https://oceanprotocol.com) toolset. 🌐
Let's dive into the CLI's capabilities and unlock the full potential of Ocean Protocol together! To explore each functionality in detail, simply go through the next pages.

# Consume a Dataset 📥
The process of consuming an asset is remarkably straightforward. To achieve this, you only need to execute a single command:
```bash
npm run cli download 'assetDID' 'download-location-path'
```
In this command, replace `assetDID` with the specific DID of the asset you want to consume, and `download-location-path` with the path where you wish to store the downloaded asset content.
Once executed, this command orchestrates both the **ordering** of a [datatoken](../contracts/datatokens.md) and the subsequent download operation. The asset's content will be automatically retrieved and saved at the specified location, simplifying the consumption process for users.
<figure><img src="../../.gitbook/assets/cli/download.png" alt=""><figcaption>Consume</figcaption></figure>
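A quick sanity check on the DID before invoking the CLI can catch copy-paste mistakes early. The snippet below is only a sketch: the DID is a made-up placeholder, and the `did:op:` plus 64 hex characters shape is the format Ocean Protocol v4 asset DIDs follow:

```shell
# Placeholder DID for illustration only; substitute a real asset DID.
ASSET_DID='did:op:1f0a2b3c4d5e6f701f0a2b3c4d5e6f701f0a2b3c4d5e6f701f0a2b3c4d5e6f70'

# Ocean v4 asset DIDs are "did:op:" followed by 64 hex characters.
if printf '%s' "$ASSET_DID" | grep -Eq '^did:op:[0-9a-fA-F]{64}$'; then
  echo "DID looks well-formed"
else
  echo "unexpected DID format" >&2
fi
```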


# Edit a Dataset ✏️
To make changes to a dataset, you'll need to start by retrieving the asset's [Decentralized Data Object](../ddo-specification.md) (DDO).
## Retrieve DDO
Obtaining the DDO of an asset is a straightforward process. You can accomplish this task by executing the following command:
```bash
npm run cli getDDO 'assetDID'
```
<figure><img src="../../.gitbook/assets/cli/getAsset.png" alt=""><figcaption>Get DDO</figcaption></figure>
After retrieving the asset's DDO and saving it as a JSON file, you can proceed to edit the metadata as needed. Once you've made the necessary changes, you can utilize the following command to apply the updated metadata:
```bash
npm run cli editAsset 'DATASET_DID' 'PATH_TO_UPDATED_FILE'
```
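Hand-editing the saved DDO makes it easy to break the JSON, so a quick validity check before running `editAsset` can save a failed update. This is a sketch only: `/tmp/ddo.json` is a hypothetical stand-in for your saved file, and Node.js is assumed to be available (the CLI already requires it):

```shell
# Hypothetical stand-in for the DDO you saved; point the path at your real file.
cat > /tmp/ddo.json <<'EOF'
{ "id": "did:op:placeholder", "metadata": { "name": "My dataset" } }
EOF

# Fail loudly if the edited file is no longer valid JSON.
node -e "
const fs = require('fs');
try {
  JSON.parse(fs.readFileSync('/tmp/ddo.json', 'utf8'));
  console.log('valid JSON');
} catch (e) {
  console.error('invalid JSON: ' + e.message);
  process.exit(1);
}
"
```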


# Installation and Configuration 🛠️
To get started with the Ocean CLI, follow these steps for a seamless setup:
## Clone the Repository
Begin by cloning the repository. You can achieve this by executing the following command in your terminal:
```bash
$ git clone https://github.com/oceanprotocol/ocean.js-cli.git
```
Cloning the repository will create a local copy on your machine, allowing you to access and work with its contents.
## Install NPM Dependencies
After successfully cloning the repository, install the necessary npm dependencies to ensure that the project functions correctly. This can be done with the following command:
```bash
npm install
```
## Build the TypeScript code
To compile the TypeScript code and prepare the CLI for use, execute the following command:
```bash
npm run build
```
Now, let's configure the environment variables required for the CLI to function effectively. 🚀
## Setting Environment Variables 🌐
Configuring the CLI tool takes two essential steps: setting the account's private key and defining the desired RPC endpoint. Both are required for the CLI tool to function.
### Private Key Configuration
The CLI tool requires your account's private key, supplied via a mnemonic phrase. This is what the CLI uses to connect to the associated wallet, and it authenticates and authorizes every operation the tool performs.
```bash
export MNEMONIC="XXXX"
```
### RPC Endpoint Specification
Additionally, specify the RPC endpoint of the network you want to operate on. The CLI tool relies on this endpoint to connect to the blockchain, so all operations run against the network you choose.
```bash
export RPC='XXXX'
```
Additional environment variables can be configured for extra flexibility. These include the metadata cache (Aquarius) URL and the Provider URL, which you can set if you prefer a custom deployment of Aquarius or Provider instead of the defaults. You can also provide a custom address file path if you use customized smart contracts or deployments for your specific use case. Remember, setting the following environment variables is optional.
```bash
export AQUARIUS_URL='XXXX'
export PROVIDER_URL='XXXX'
export ADDRESS_FILE='../path/to/your/address-file'
```
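Putting it together, a complete session setup might look like the following. Every value here is a placeholder: the mnemonic is the well-known public Hardhat test phrase (never use it with real funds) and the RPC URL is just an example public endpoint; substitute your own values:

```shell
# Placeholder configuration; substitute real values before running any command.
export MNEMONIC="test test test test test test test test test test test junk"
export RPC='https://polygon-rpc.com'   # any EVM RPC endpoint for your target network

# Optional, only for custom Aquarius/Provider/contract deployments:
# export AQUARIUS_URL='XXXX'
# export PROVIDER_URL='XXXX'
# export ADDRESS_FILE='../path/to/your/address-file'
```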
## Usage
To explore the commands and option flags available in the Ocean CLI, simply run the following command:
```bash
npm run cli h
```
<figure><img src="../../.gitbook/assets/cli/usage.png" alt=""><figcaption>Available CLI commands & options</figcaption></figure>
With the Ocean CLI successfully installed and configured, you're ready to explore its capabilities. If you encounter any issues during setup or have questions, feel free to ask for help in our [Discord](https://discord.com/invite/TnXjkR5). 🌊

# Publish a Dataset 📤
Once you've configured the RPC environment variable, you're ready to publish a new dataset on the connected network. Our flexible setup allows you to switch to a different network simply by substituting the RPC endpoint with one corresponding to another network. 🌐
To initiate the dataset publishing process, we'll start by updating the helper [DDO](../ddo-specification.md) (Decentralized Data Object) example named "SimpleDownloadDataset.json". This example can be found in the "./metadata" folder, located at the root directory of the cloned Ocean CLI project.
```json
{
  ...
}
```
{% hint style="info" %}
The provided example creates a consumable asset with a predetermined price of 2 OCEAN tokens. If you wish to create a freely accessible asset instead, replace the value of "stats.price.value" with 0 in the JSON example above.
{% endhint %}
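If you'd rather script that change than edit the file by hand, the sketch below shows the idea using Node.js (already required by the CLI). The file written to `/tmp` is a minimal stand-in containing only the field the hint mentions; in your clone you would run the same edit against `metadata/simpleDownloadDataset.json`:

```shell
# Minimal stand-in DDO with just the price field; your real file has many more keys.
cat > /tmp/simpleDownloadDataset.json <<'EOF'
{ "stats": { "price": { "value": 2 } } }
EOF

# Set stats.price.value to 0 to make the asset freely accessible.
node -e "
const fs = require('fs');
const p = '/tmp/simpleDownloadDataset.json';
const ddo = JSON.parse(fs.readFileSync(p, 'utf8'));
ddo.stats.price.value = 0;
fs.writeFileSync(p, JSON.stringify(ddo, null, 2));
console.log('stats.price.value is now', ddo.stats.price.value);
"
```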
Now, let's run the command to publish the dataset:
```bash
npm run cli publish metadata/simpleDownloadDataset.json
```
<figure><img src="../../.gitbook/assets/cli/publish.png" alt=""><figcaption>Publish dataset</figcaption></figure>
Executing this command will initiate the dataset publishing process, making your dataset accessible and discoverable on the Ocean Protocol network. 🌊

# Run C2D Jobs 🚀
## Start a Compute Job 🎯
Initiating a compute job can be accomplished through two primary methods.
1. The first approach involves publishing both the dataset and the algorithm, as explained in the previous section, [Publish a Dataset](./publish.md). Once that's completed, you can proceed to initiate the compute job.
2. Alternatively, you have the option to explore available datasets and algorithms and kickstart a compute-to-data job by combining your preferred choices.
To illustrate the latter option, you can use the following command:
```bash
npm run cli startCompute 'DATASET_DID' 'ALGO_DID'
```
In this command, replace `DATASET_DID` with the specific DID of the dataset you intend to utilize and `ALGO_DID` with the DID of the algorithm you want to apply. By executing this command, you'll trigger the initiation of a compute-to-data job that harnesses the selected dataset and algorithm for processing.
<figure><img src="../../.gitbook/assets/cli/c2dstart.png" alt=""><figcaption>Start a compute job</figcaption></figure>
## Download Compute Results 🧮
To obtain the compute results, we'll follow a two-step process. First, we'll use the `getJobStatus` method, monitoring the job's status until it signals completion. Afterward, we'll use `downloadJobResults` to retrieve the actual results.
### Monitor Job Status
To track the status of a job, you'll require both the dataset DID and the compute job DID. You can initiate this process by executing the following command:
```bash
npm run cli getJobStatus 'DATASET_DID' 'JOB_ID'
```
Executing this command will allow you to observe the job's status and verify its successful completion.
<figure><img src="../../.gitbook/assets/cli/jobstatus.png" alt=""><figcaption>Get Job Status</figcaption></figure>
### Download the Results
For this second step, the dataset DID is no longer required. Instead, you'll need to specify the job ID, the index of the result you wish to download from the available results for that job, and the destination folder where you want to save the downloaded content. The corresponding command is as follows:
```bash
npm run cli downloadJobResults 'JOB_ID' 'RESULT_INDEX' 'DESTINATION_FOLDER'
```
<figure><img src="../../.gitbook/assets/cli/jobResults.png" alt=""><figcaption>Download C2D Job Results</figcaption></figure>

