mirror of https://github.com/oceanprotocol/docs.git synced 2024-11-26 19:49:26 +01:00

remove unnecessary compute service params

Alex Coseru 2022-01-06 18:10:46 +02:00 committed by GitHub
parent 913d117a30
commit f219f2ac4f


@@ -299,12 +299,6 @@ An asset with a service of `type` `compute` has the following additional attributes
 | Attribute                                  | Type              | Required | Description |
 | ------------------------------------------ | ----------------- | -------- | ----------- |
-| **`namespace`**                            | `string`          | **✓**    | Namespace used for the compute job. Defaults to 'ocean-compute'. |
-| **`cpus`**                                 | `number`          |          | Maximum number of CPUs allocated for a job. |
-| **`gpus`**                                 | `number`          |          | Maximum number of GPUs allocated for a job. |
-| **`gpuType`**                              | `string`          |          | Type of GPU (if any). |
-| **`memory`**                               | `string`          |          | Maximum amount of memory allocated for a job. You can express memory as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value: 128974848, 129e6, 129M, 123Mi. |
-| **`volumeSize`**                           | `string`          |          | Amount of disk space allocated. You can express it as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
 | **`allowRawAlgorithm`**                    | `boolean`         | **✓**    | If `true`, any passed raw text will be allowed to run. Useful for an algorithm drag & drop use case, but increases risk of data escape through malicious user input. Should be `false` by default in all implementations. |
 | **`allowNetworkAccess`**                   | `boolean`         | **✓**    | If `true`, the algorithm job will have network access. |
 | **`publisherTrustedAlgorithmPublishers `** | Array of `string` | **✓**    | If empty, then any published algorithm is allowed. Otherwise, only published algorithms by some publishers are allowed. |
@@ -355,12 +349,6 @@ Example:
       "serviceEndpoint": "https://myprovider.com",
       "timeout": 0,
       "compute": {
-        "namespace": "ocean-compute",
-        "cpus": 2,
-        "gpus": 4,
-        "gpuType": "NVIDIA Tesla V100 GPU",
-        "memory": "128M",
-        "volumeSize": "2G",
         "allowRawAlgorithm": false,
         "allowNetworkAccess": true,
         "publisherTrustedAlgorithmPublishers": ["0x234", "0x235"],
@@ -632,12 +620,6 @@ Example:
       "serviceEndpoint": "https://myprovider.com",
       "timeout": 3600,
       "compute": {
-        "namespace": "ocean-compute",
-        "cpus": 2,
-        "gpus": 4,
-        "gpuType": "NVIDIA Tesla V100 GPU",
-        "memory": "128M",
-        "volumeSize": "2G",
         "allowRawAlgorithm": false,
         "allowNetworkAccess": true,
         "publisherTrustedAlgorithmPublishers": ["0x234", "0x235"],