---
title: Set Up a Compute-to-Data Environment
description: Set Up a Compute-to-Data environment.
---

## Requirements

First, create a folder with the following structure:

```
ocean/
  barge/
  operator-service/
  operator-engine/
```

Then you need the following parts: a running Kubernetes cluster with `kubectl` configured against it, plus the Barge, Operator Service, and Operator Engine repositories cloned into the folders above (see the sketch below).
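As a minimal sketch of preparing that layout, assuming the manifests come from the `oceanprotocol/barge`, `oceanprotocol/operator-service`, and `oceanprotocol/operator-engine` GitHub repositories (the `/ocean` location matches the paths used later in this guide; adjust ownership and paths to your environment):

```bash
# Create the working directory layout (using /ocean so the paths below resolve)
sudo mkdir -p /ocean && sudo chown "$USER" /ocean && cd /ocean

# Clone the repositories that provide Barge and the Kubernetes manifests
# (repository URLs assumed from the Ocean Protocol GitHub organization)
git clone https://github.com/oceanprotocol/barge.git barge
git clone https://github.com/oceanprotocol/operator-service.git operator-service
git clone https://github.com/oceanprotocol/operator-engine.git operator-engine

# The YAML manifests referenced later may live in a subdirectory of each repository;
# if so, copy them so that e.g. /ocean/operator-service/postgres-configmap.yaml exists.
```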

## Customize your Operator Service deployment

The following resources need attention:

| Resource                  | Variable           | Description                                                                                |
| ------------------------- | ------------------ | ------------------------------------------------------------------------------------------ |
| `postgres-configmap.yaml` |                    | Contains secrets for the PostgreSQL deployment.                                            |
| `deployment.yaml`         | `ALGO_POD_TIMEOUT` | Allowed time for an algorithm to run. If it exceeds this value (in minutes), it is killed. |
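As an illustration, the pieces you would typically edit might look like the excerpts below. The key names and values are assumptions, so check them against the manifests in your copy of `operator-service`:

```yaml
# postgres-configmap.yaml (excerpt) -- key names and values are illustrative
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  namespace: ocean-operator
data:
  POSTGRES_DB: oceandb
  POSTGRES_USER: oceanuser
  POSTGRES_PASSWORD: changeme   # replace with a strong secret
---
# deployment.yaml (excerpt) -- only the relevant env entry is shown
env:
  - name: ALGO_POD_TIMEOUT
    value: "60"                 # minutes before a running algorithm is killed
```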

## Customize your Operator Engine deployment

The following resources need attention:

| Resource        | Variable                                                   | Description                                                                    |
| --------------- | ---------------------------------------------------------- | ------------------------------------------------------------------------------ |
| `operator.yaml` | `ACCOUNT_JSON`, `ACCOUNT_PASSWORD`                         | Defines the account used when publishing results back to Ocean Protocol.      |
|                 | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION` | S3 credentials for the logs and output buckets.                                |
|                 | `AWS_BUCKET_OUTPUT`                                        | Bucket that will hold the output data (algorithm logs & algorithm output).     |
|                 | `AWS_BUCKET_ADMINLOGS`                                     | Bucket that will hold the admin logs (logs from pod-configure & pod-publish).  |
|                 | `STORAGE_CLASS`                                            | Storage class to use (see next section).                                       |
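A rough sketch of how these variables might appear in the container's `env` section of `operator.yaml`; every value shown is a placeholder, and the exact structure may differ in your copy of `operator-engine`:

```yaml
# operator.yaml (excerpt) -- variable names from the table above, values are placeholders
env:
  - name: ACCOUNT_JSON
    value: '<keystore JSON of the publishing account>'
  - name: ACCOUNT_PASSWORD
    value: "changeme"
  - name: AWS_ACCESS_KEY_ID
    value: "<access key id>"
  - name: AWS_SECRET_ACCESS_KEY
    value: "<secret access key>"
  - name: AWS_REGION
    value: "us-east-1"
  - name: AWS_BUCKET_OUTPUT
    value: "my-c2d-output"
  - name: AWS_BUCKET_ADMINLOGS
    value: "my-c2d-adminlogs"
  - name: STORAGE_CLASS
    value: "standard"
```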

## Storage class

For Minikube, you can use the 'standard' class.

For AWS, please make sure that your class allocates volumes in the same region and availability zone in which your pods are running.

We created our own 'standard' class in AWS:

```bash
kubectl get storageclass standard -o yaml
```

```yaml
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-east-1a
apiVersion: storage.k8s.io/v1
kind: StorageClass
parameters:
  fsType: ext4
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

Or you can use this one for Minikube:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: docker.io/hostpath
reclaimPolicy: Retain
```
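If you define your own class, apply it before deploying anything that requests volumes. A minimal sketch (the manifest file name is arbitrary):

```bash
# Apply the StorageClass manifest and confirm it is registered
kubectl apply -f storageclass.yaml
kubectl get storageclass
```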

For more information, please visit https://kubernetes.io/docs/concepts/storage/storage-classes/

## Create namespaces

```bash
kubectl create ns ocean-operator
kubectl create ns ocean-compute
```
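To confirm both namespaces exist before continuing:

```bash
kubectl get ns ocean-operator ocean-compute
```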

## Deploy Operator Service

```bash
kubectl config set-context --current --namespace ocean-operator
kubectl create -f /ocean/operator-service/postgres-configmap.yaml
kubectl create -f /ocean/operator-service/postgres-storage.yaml
kubectl create -f /ocean/operator-service/postgres-deployment.yaml
kubectl create -f /ocean/operator-service/postgresql-service.yaml
kubectl apply -f /ocean/operator-service/deployment.yaml
kubectl apply -f /ocean/operator-service/role_binding.yaml
kubectl apply -f /ocean/operator-service/service_account.yaml
```
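You can check that the pods come up before moving on; `operator-api` is the deployment name used later in this guide:

```bash
# Wait for the Operator Service and PostgreSQL pods to become ready
kubectl get pods --namespace ocean-operator
kubectl rollout status deployment/operator-api --namespace ocean-operator
```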

## Deploy Operator Engine

```bash
kubectl config set-context --current --namespace ocean-compute
kubectl apply -f /ocean/operator-engine/sa.yml
kubectl apply -f /ocean/operator-engine/binding.yml
kubectl apply -f /ocean/operator-engine/operator.yml
kubectl apply -f /ocean/operator-engine/computejob-crd.yaml
kubectl apply -f /ocean/operator-engine/workflow-crd.yaml
kubectl create -f /ocean/operator-service/postgres-configmap.yaml
```
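Note that the PostgreSQL ConfigMap is created a second time here because ConfigMaps are namespaced, so `ocean-compute` needs its own copy. As before, a quick check that everything is in place:

```bash
# Verify the Operator Engine pods and the custom resource definitions
kubectl get pods --namespace ocean-compute
kubectl get crd | grep -i -e workflow -e computejob   # CRD names assumed to contain these words
```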

## Expose Operator Service

```bash
kubectl expose deployment operator-api --namespace=ocean-operator --port=8050
```
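To confirm the Service created by the `kubectl expose` command above exists:

```bash
kubectl get svc operator-api --namespace ocean-operator
```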

Run a port forward or create your ingress service (not covered here):

```bash
kubectl -n ocean-operator port-forward svc/operator-api 8050
```

## Initialize database

If your cluster is running on example.com:

```bash
curl -X POST "http://example.com:8050/api/v1/operator/pgsqlinit" -H "accept: application/json"
```

## Update Brizo

Update Brizo by adding or updating the `OPERATOR_SERVICE_URL` environment variable in `/ocean/barge/compose-files/brizo.yaml`:

```yaml
OPERATOR_SERVICE_URL: http://example.com:8050/
```
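In context, the variable sits in the `environment` section of the Brizo service; the surrounding structure in this excerpt is illustrative and may differ in your version of Barge:

```yaml
# /ocean/barge/compose-files/brizo.yaml (excerpt) -- surrounding keys are illustrative
services:
  brizo:
    environment:
      OPERATOR_SERVICE_URL: http://example.com:8050/
```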

Finally, restart Barge so that the updated Brizo configuration takes effect.
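A minimal sketch of restarting Barge, assuming the standard `start_ocean.sh` script; the script name and any flags you normally pass are assumptions about your Barge setup:

```bash
# Restart the Barge stack with your usual options (script name and flags assumed)
cd /ocean/barge
./start_ocean.sh
```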