Mirror of https://github.com/oceanprotocol/docs.git, synced 2024-11-26 19:49:26 +01:00
GITBOOK-641: change request with no subject merged in GitBook
This commit is contained in commit a7d3c5b08c (parent d13e25165e).
@@ -46,7 +46,7 @@ Before the flow can begin, these pre-conditions must be met:
### Access Control using Ocean Provider
Similar to the `access service`, the `compute service` within Ocean Protocol relies on the [Ocean Provider](../provider/), which is a crucial component managed by the asset Publishers. The role of the Ocean Provider is to facilitate interactions with users and handle the fundamental aspects of a Publisher's infrastructure, enabling seamless integration into the Ocean Protocol ecosystem. It serves as the primary interface for direct interaction with the infrastructure where the data is located.
The [Ocean Provider](../provider/) encompasses the necessary credentials to establish secure, authorized interactions with the underlying infrastructure. Initially, this infrastructure may be hosted with cloud providers, though it can also extend to on-premise environments if required. By holding these credentials, the Ocean Provider ensures smooth, controlled access to the infrastructure, allowing Publishers to effectively leverage the compute service within Ocean Protocol.
@@ -99,7 +99,7 @@ Upon the successful completion of its tasks, the Pod-Configuration gracefully co
### Pod Publishing
Pod Publishing is a command-line utility that seamlessly integrates with the Operator Service and Operator Engine within a Kubernetes-based compute infrastructure. It serves as a versatile tool for efficient processing, logging, and uploading workflow outputs. By working in tandem with the Operator Service and Operator Engine, Pod Publishing streamlines the workflow management process, enabling easy and reliable handling of output data generated during computation tasks. Whether it's processing complex datasets or logging crucial information, Pod Publishing simplifies these tasks and enhances the overall efficiency of the compute infrastructure.
The primary functionality of Pod Publishing can be divided into three key areas:
@@ -107,7 +107,7 @@ The primary functionality of Pod Publishing can be divided into three key areas:
2. **Role in Publishing Pod**: Within the compute infrastructure orchestrated by the Operator Engine on Kubernetes (K8s), Pod Publishing is integral to the Publishing Pod. The Publishing Pod handles the creation of new assets in the Ocean Protocol network after a workflow execution.
3. **Workflow Outputs Management**: Pod Publishing manages the storage of workflow outputs. Depending on configuration, it interacts with IPFS or AWS S3, and logs the processing steps.
Please note:
{% hint style="info" %}
* Pod Publishing does not provide storage capabilities; all state information is stored directly in the K8s cluster or the respective data storage solution (AWS S3 or IPFS).
* The utility works in close coordination with the Operator Service and Operator Engine, but does not have standalone functionality.
{% endhint %}
@@ -6,8 +6,8 @@ title: Minikube Compute-to-Data Environment
### Requirements
* functioning internet-accessible provider service
* a machine capable of running compute (e.g. we used a machine with 8 CPUs, 16 GB RAM, a 100 GB SSD, and a fast internet connection)
* Ubuntu 22.04 LTS
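A quick way to check a machine against the suggested specs (these commands assume a Linux host):

```bash
# Compare against the suggested 8 CPUs / 16 GB RAM / 100 GB disk.
nproc                                                                 # CPU count
awk '/MemTotal/ {printf "%.1f GB RAM\n", $2/1024/1024}' /proc/meminfo # total RAM
df -h /                                                               # free space on the root filesystem
```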
### Install Docker and Git
@@ -27,7 +27,7 @@ sudo dpkg -i minikube_1.22.0-0_amd64.deb
### Start Minikube
The first command is important and solves a [PersistentVolumeClaims problem](https://github.com/kubernetes/minikube/issues/7828).
```bash
minikube config set kubernetes-version v1.16.0
@@ -48,7 +48,7 @@ echo "$(<kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
```
Wait until all the defaults are running (1/1).
```bash
watch kubectl get pods --all-namespaces
@@ -68,7 +68,7 @@ sudo /bin/sh -c 'echo "127.0.0.1 youripfsserver" >> /etc/hosts'
### Storage class (Optional)
For Minikube, you can use the default 'standard' class.
For AWS, please make sure that your class allocates volumes in the same region and zone in which you are running your pods.
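As a sketch only (the class name and zone below are placeholders; check which provisioner your cluster actually uses), an AWS StorageClass pinned to a single zone could look like:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ocean-compute-storage      # placeholder name
provisioner: kubernetes.io/aws-ebs # in-tree EBS provisioner used by this era of Kubernetes
parameters:
  type: gp2
  zone: us-east-1a                 # must match the zone where your pods run
```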
@@ -98,13 +98,13 @@ For more information, please visit https://kubernetes.io/docs/concepts/storage/s
### Download and Configure Operator Service
Open a new terminal and run the command below.
```bash
git clone https://github.com/oceanprotocol/operator-service.git
```
Edit `operator-service/kubernetes/postgres-configmap.yaml`. Change `POSTGRES_PASSWORD` to a nice long random password.
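One way to generate such a password (this assumes `openssl` is installed; any other generator works just as well):

```bash
# 32 random bytes, base64-encoded: a 44-character password for POSTGRES_PASSWORD.
openssl rand -base64 32
```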
Edit `operator-service/kubernetes/deployment.yaml`. Optionally change:
@@ -138,9 +138,9 @@ spec:
git clone https://github.com/oceanprotocol/operator-engine.git
```
Check the [README](https://github.com/oceanprotocol/operator-engine#customize-your-operator-engine-deployment) section of the operator engine to customize your deployment.
At a minimum, you should add your IPFS URLs or AWS settings, and add (or remove) notification URLs.
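Purely as an illustration of the kind of edit involved (the authoritative variable names are in the operator-engine README linked above; the values here are placeholders):

```yaml
# Illustrative fragment only — confirm variable names against the operator-engine README.
env:
  - name: AWS_ACCESS_KEY_ID          # standard AWS credential variables, for S3 output
    value: "<your-access-key-id>"
  - name: AWS_SECRET_ACCESS_KEY
    value: "<your-secret-access-key>"
  - name: IPFS_OUTPUT                # hypothetical: IPFS API endpoint for job outputs
    value: "http://youripfsserver:5001"
```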
### Create namespaces
@@ -192,7 +192,7 @@ Alternatively you could use another method to communicate between the C2D Enviro
### Initialize database
If your Minikube is running on compute.example.com:
```bash
curl -X POST "https://compute.example.com/api/v1/operator/pgsqlinit" -H "accept: application/json"
```