.. _kubernetes-template-deploy-a-single-node-bigchaindb:

Kubernetes Template: Deploy a Single BigchainDB Node
====================================================

This page describes how to deploy the first BigchainDB node
in a BigchainDB cluster, or a stand-alone BigchainDB node,
using `Kubernetes <https://kubernetes.io/>`_.
It assumes you already have a running Kubernetes cluster.
If you want to add a new BigchainDB node to an existing BigchainDB cluster,
refer to :doc:`the page about that <add-node-on-kubernetes>`.

Below, we refer to many files by their directory and filename,
such as ``configuration/config-map.yaml``. Those files are in the
`bigchaindb/bigchaindb repository on GitHub
<https://github.com/bigchaindb/bigchaindb/>`_ in the ``k8s/`` directory.
Make sure you're getting those files from the appropriate Git branch on
GitHub, i.e. the branch for the version of BigchainDB that your BigchainDB
cluster is using.

Step 1: Install and Configure kubectl
-------------------------------------

kubectl is the Kubernetes CLI.
If you don't already have it installed,
then see the `Kubernetes docs to install it
<https://kubernetes.io/docs/user-guide/prereqs/>`_.

The default location of the kubectl configuration file is ``~/.kube/config``.
If you don't have that file, then you need to get it.

**Azure.** If you deployed your Kubernetes cluster on Azure
using the Azure CLI 2.0 (as per :doc:`our template <template-kubernetes-azure>`),
then you can get the ``~/.kube/config`` file using:

.. code:: bash

   $ az acs kubernetes get-credentials \
   --resource-group <name of resource group containing the cluster> \
   --name <ACS cluster name>

If it asks for a password (to unlock the SSH key)
and you enter the correct password,
but you get an error message,
then try adding ``--ssh-key-file ~/.ssh/<name>``
to the above command (i.e. the path to the private key).
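
For example, here is what the full command might look like with that extra
option. This is only an illustrative command: the resource group, the cluster
name and the key filename ``id_rsa`` are placeholders you must replace with
your own values.

.. code:: bash

   $ az acs kubernetes get-credentials \
   --resource-group <name of resource group containing the cluster> \
   --name <ACS cluster name> \
   --ssh-key-file ~/.ssh/id_rsa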

.. note::

   **About kubectl contexts.** You might manage several
   Kubernetes clusters. To make it easy to switch from one to another,
   kubectl has a notion of "contexts," e.g. the context for cluster 1 or
   the context for cluster 2. To find out the current context, do:

   .. code:: bash

      $ kubectl config view

   and then look for the ``current-context`` in the output.
   The output also lists all clusters, contexts and users.
   (You might have only one of each.)
   You can switch to a different context using:

   .. code:: bash

      $ kubectl config use-context <new-context-name>

   You can also switch to a different context for just one command
   by inserting ``--context <context-name>`` into any kubectl command.
   For example:

   .. code:: bash

      $ kubectl --context k8s-bdb-test-cluster-0 get pods

   will get a list of the pods in the Kubernetes cluster associated
   with the context named ``k8s-bdb-test-cluster-0``.

Step 2: Connect to Your Cluster's Web UI (Optional)
---------------------------------------------------

You can connect to your cluster's
`Kubernetes Dashboard <https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/>`_
(also called the Web UI) using:

.. code:: bash

   $ kubectl proxy -p 8001

or

.. code:: bash

   $ az acs kubernetes browse -g [Resource Group] -n [Container service instance name] --ssh-key-file /path/to/privateKey

or, if you prefer to be explicit about the context (explained above):

.. code:: bash

   $ kubectl --context k8s-bdb-test-cluster-0 proxy -p 8001

The output should be something like ``Starting to serve on 127.0.0.1:8001``.
That means you can visit the dashboard in your web browser at
`http://127.0.0.1:8001/ui <http://127.0.0.1:8001/ui>`_.

Step 3: Configure Your BigchainDB Node
--------------------------------------

See the page titled :ref:`how-to-configure-a-bigchaindb-node`.

.. _start-the-nginx-service:

Step 4: Start the NGINX Service
-------------------------------

* This will give us a public IP for the cluster.
* Once you complete this step, you might need to wait up to 10 mins for the
  public IP to be assigned.
* You have the option to use vanilla NGINX without HTTPS support or
  NGINX with HTTPS support.

Step 4.1: Vanilla NGINX
^^^^^^^^^^^^^^^^^^^^^^^

* This configuration is located in the file ``nginx-http/nginx-http-svc.yaml``.

* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
  set in ``ngx-instance-name`` in the ConfigMap above.

* Set the ``spec.selector.app`` to the value set in ``ngx-instance-name`` in
  the ConfigMap followed by ``-dep``. For example, if the value set in the
  ``ngx-instance-name`` is ``ngx-http-instance-0``, set the
  ``spec.selector.app`` to ``ngx-http-instance-0-dep``.

* Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
  ``cluster-frontend-port`` in the ConfigMap above. This is the
  ``public-cluster-port`` in the file, which is the ingress into the cluster.

* Start the Kubernetes Service:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-svc.yaml
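
To verify that the Service was created with the settings you expect, you can
describe it. The Service name below assumes you used the example value
``ngx-http-instance-0``; adjust it to whatever you set in the ConfigMap.

.. code:: bash

   $ kubectl --context k8s-bdb-test-cluster-0 describe svc ngx-http-instance-0

The ``Selector`` and ``Port`` fields in the output should match the values you
set above.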

Step 4.2: NGINX with HTTPS
^^^^^^^^^^^^^^^^^^^^^^^^^^

* You have to enable HTTPS for this one and will need an HTTPS certificate
  for your domain.

* You should have already created the necessary Kubernetes Secrets in the previous
  step (i.e. ``https-certs``).

* This configuration is located in the file ``nginx-https/nginx-https-svc.yaml``.

* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
  set in ``ngx-instance-name`` in the ConfigMap above.

* Set the ``spec.selector.app`` to the value set in ``ngx-instance-name`` in
  the ConfigMap followed by ``-dep``. For example, if the value set in the
  ``ngx-instance-name`` is ``ngx-https-instance-0``, set the
  ``spec.selector.app`` to ``ngx-https-instance-0-dep``.

* Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
  ``cluster-frontend-port`` in the ConfigMap above. This is the
  ``public-secure-cluster-port`` in the file, which is the ingress into the cluster.

* Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the
  ``mongodb-frontend-port`` in the ConfigMap above. This is the
  ``public-mdb-port`` in the file, which specifies where MongoDB is
  available.

* Start the Kubernetes Service:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-svc.yaml

.. _assign-dns-name-to-the-nginx-public-ip:

Step 5: Assign DNS Name to the NGINX Public IP
----------------------------------------------

* This step is required only if you are planning to set up multiple
  `BigchainDB nodes
  <https://docs.bigchaindb.com/en/latest/terminology.html>`_ or are using
  HTTPS certificates tied to a domain.

* The following command can help you find out if the NGINX service started
  above has been assigned a public IP or external IP address:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 get svc -w

* Once a public IP is assigned, you can map it to
  a DNS name.
  We usually assign ``bdb-test-cluster-0``, ``bdb-test-cluster-1`` and
  so on in our documentation.
  Let's assume that we assign the unique name ``bdb-test-cluster-0`` here.

**Set up DNS mapping in Azure.**
Select the current Azure resource group and look for the ``Public IP``
resource. You should see at least 2 entries there - one for the Kubernetes
master and the other for the NGINX instance. You may have to ``Refresh`` the
Azure web page listing the resources in a resource group for the latest
changes to be reflected.
Select the ``Public IP`` resource that is attached to your service (it should
have the Azure DNS prefix name along with a long random string, without the
``master-ip`` string), select ``Configuration``, add the DNS name assigned above
(for example, ``bdb-test-cluster-0``), click ``Save``, and wait for the
changes to be applied.

To verify the DNS setting is operational, you can run ``nslookup <DNS
name added in Azure configuration>`` from your local Linux shell.
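
For example, if you added the DNS name ``bdb-test-cluster-0`` and your cluster
is in the West Europe location, the check might look like the following. The
FQDN shown here is illustrative; the actual one depends on your Azure location.

.. code:: bash

   $ nslookup bdb-test-cluster-0.westeurope.cloudapp.azure.com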

This will ensure that when you scale the replica set later, other MongoDB
members in the replica set can reach this instance.

.. _start-the-mongodb-kubernetes-service:

Step 6: Start the MongoDB Kubernetes Service
--------------------------------------------

* This configuration is located in the file ``mongodb/mongo-svc.yaml``.

* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
  set in ``mdb-instance-name`` in the ConfigMap above.

* Set the ``spec.selector.app`` to the value set in ``mdb-instance-name`` in
  the ConfigMap followed by ``-ss``. For example, if the value set in the
  ``mdb-instance-name`` is ``mdb-instance-0``, set the
  ``spec.selector.app`` to ``mdb-instance-0-ss``.

* Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
  ``mongodb-backend-port`` in the ConfigMap above.
  This is the ``mdb-port`` in the file, which specifies where MongoDB listens
  for API requests.

* Start the Kubernetes Service:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-svc.yaml

.. _start-the-bigchaindb-kubernetes-service:

Step 7: Start the BigchainDB Kubernetes Service
-----------------------------------------------

* This configuration is located in the file ``bigchaindb/bigchaindb-svc.yaml``.

* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
  set in ``bdb-instance-name`` in the ConfigMap above.

* Set the ``spec.selector.app`` to the value set in ``bdb-instance-name`` in
  the ConfigMap followed by ``-dep``. For example, if the value set in the
  ``bdb-instance-name`` is ``bdb-instance-0``, set the
  ``spec.selector.app`` to ``bdb-instance-0-dep``.

* Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
  ``bigchaindb-api-port`` in the ConfigMap above.
  This is the ``bdb-api-port`` in the file, which specifies where BigchainDB
  listens for HTTP API requests.

* Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the
  ``bigchaindb-ws-port`` in the ConfigMap above.
  This is the ``bdb-ws-port`` in the file, which specifies where BigchainDB
  listens for Websocket connections.

* Start the Kubernetes Service:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-svc.yaml

.. _start-the-openresty-kubernetes-service:

Step 8: Start the OpenResty Kubernetes Service
----------------------------------------------

* This configuration is located in the file ``nginx-openresty/nginx-openresty-svc.yaml``.

* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
  set in ``openresty-instance-name`` in the ConfigMap above.

* Set the ``spec.selector.app`` to the value set in ``openresty-instance-name`` in
  the ConfigMap followed by ``-dep``. For example, if the value set in the
  ``openresty-instance-name`` is ``openresty-instance-0``, set the
  ``spec.selector.app`` to ``openresty-instance-0-dep``.

* Start the Kubernetes Service:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-svc.yaml

.. _start-the-nginx-kubernetes-deployment:

Step 9: Start the NGINX Kubernetes Deployment
---------------------------------------------

* NGINX is used as a proxy to the OpenResty, BigchainDB and MongoDB instances in
  the node. It proxies HTTP/HTTPS requests on the ``cluster-frontend-port``
  to the corresponding OpenResty or BigchainDB backend, and TCP connections
  on ``mongodb-frontend-port`` to the MongoDB backend.

* As in step 4, you have the option to use vanilla NGINX without HTTPS or
  NGINX with HTTPS support.

Step 9.1: Vanilla NGINX
^^^^^^^^^^^^^^^^^^^^^^^

* This configuration is located in the file ``nginx-http/nginx-http-dep.yaml``.

* Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
  to the value set in ``ngx-instance-name`` in the ConfigMap followed by
  ``-dep``. For example, if the value set in the ``ngx-instance-name`` is
  ``ngx-http-instance-0``, set the fields to ``ngx-http-instance-0-dep``.

* Set the ports to be exposed from the pod in the
  ``spec.containers[0].ports`` section. We currently expose 3 ports -
  ``mongodb-frontend-port``, ``cluster-frontend-port`` and
  ``cluster-health-check-port``. Set them to the values specified in the
  ConfigMap.

* The configuration uses the following values set in the ConfigMap:

  - ``cluster-frontend-port``
  - ``cluster-health-check-port``
  - ``cluster-dns-server-ip``
  - ``mongodb-frontend-port``
  - ``ngx-mdb-instance-name``
  - ``mongodb-backend-port``
  - ``ngx-bdb-instance-name``
  - ``bigchaindb-api-port``
  - ``bigchaindb-ws-port``

* Start the Kubernetes Deployment:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-dep.yaml
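
You can check that the Deployment and its pod came up using a command like the
one below; the context name is the example used throughout this page.

.. code:: bash

   $ kubectl --context k8s-bdb-test-cluster-0 get deployments,pods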

Step 9.2: NGINX with HTTPS
^^^^^^^^^^^^^^^^^^^^^^^^^^

* This configuration is located in the file
  ``nginx-https/nginx-https-dep.yaml``.

* Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
  to the value set in ``ngx-instance-name`` in the ConfigMap followed by
  ``-dep``. For example, if the value set in the ``ngx-instance-name`` is
  ``ngx-https-instance-0``, set the fields to ``ngx-https-instance-0-dep``.

* Set the ports to be exposed from the pod in the
  ``spec.containers[0].ports`` section. We currently expose 3 ports -
  ``mongodb-frontend-port``, ``cluster-frontend-port`` and
  ``cluster-health-check-port``. Set them to the values specified in the
  ConfigMap.

* The configuration uses the following values set in the ConfigMap:

  - ``cluster-frontend-port``
  - ``cluster-health-check-port``
  - ``cluster-fqdn``
  - ``cluster-dns-server-ip``
  - ``mongodb-frontend-port``
  - ``ngx-mdb-instance-name``
  - ``mongodb-backend-port``
  - ``openresty-backend-port``
  - ``ngx-openresty-instance-name``
  - ``ngx-bdb-instance-name``
  - ``bigchaindb-api-port``
  - ``bigchaindb-ws-port``

* The configuration uses the following values set in the Secret:

  - ``https-certs``

* Start the Kubernetes Deployment:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-dep.yaml

.. _create-kubernetes-storage-classes-for-mongodb:

Step 10: Create Kubernetes Storage Classes for MongoDB
------------------------------------------------------

MongoDB needs somewhere to store its data persistently,
outside the container where MongoDB is running.
Our MongoDB Docker container
(based on the official MongoDB Docker container)
exports two volume mounts with correct
permissions from inside the container:

* The directory where the mongod instance stores its data: ``/data/db``.
  There's more explanation in the MongoDB docs about `storage.dbpath <https://docs.mongodb.com/manual/reference/configuration-options/#storage.dbPath>`_.

* The directory where the mongodb instance stores the metadata for a sharded
  cluster: ``/data/configdb/``.
  There's more explanation in the MongoDB docs about `sharding.configDB <https://docs.mongodb.com/manual/reference/configuration-options/#sharding.configDB>`_.

Explaining how Kubernetes handles persistent volumes,
and the associated terminology,
is beyond the scope of this documentation;
see `the Kubernetes docs about persistent volumes
<https://kubernetes.io/docs/user-guide/persistent-volumes>`_.

The first thing to do is create the Kubernetes storage classes.

**Set up Storage Classes in Azure.**
First, you need an Azure storage account.
If you deployed your Kubernetes cluster on Azure
using the Azure CLI 2.0
(as per :doc:`our template <template-kubernetes-azure>`),
then the ``az acs create`` command already created a
storage account in the same location and resource group
as your Kubernetes cluster.
Both should have the same "storage account SKU": ``Standard_LRS``.
Standard storage is lower-cost and lower-performance.
It uses hard disk drives (HDD).
LRS means locally-redundant storage: three replicas
in the same data center.
Premium storage is higher-cost and higher-performance.
It uses solid state drives (SSD).
You can create a `storage account <https://docs.microsoft.com/en-us/azure/storage/common/storage-create-storage-account>`_
for Premium storage and associate it with your Azure resource group.
For future reference, the command to create a storage account is
`az storage account create <https://docs.microsoft.com/en-us/cli/azure/storage/account#create>`_.

.. note::

   Please refer to the `Azure documentation <https://docs.microsoft.com/en-us/azure/virtual-machines/windows/premium-storage>`_
   for the list of VMs that are supported by Premium Storage.

The Kubernetes template for the Storage Class configuration is located in the
file ``mongodb/mongo-sc.yaml``.

You may have to update the ``parameters.location`` field in the file to
specify the location you are using in Azure.

If you want to use a custom storage account with the Storage Class, you
can also update ``parameters.storageAccount`` and provide the Azure storage
account name.

Create the required storage classes using:

.. code:: bash

   $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-sc.yaml

You can check if it worked using ``kubectl get storageclasses``.
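
For example, using the same kubectl context as in the previous commands:

.. code:: bash

   $ kubectl --context k8s-bdb-test-cluster-0 get storageclasses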

.. _create-kubernetes-persistent-volume-claims:

Step 11: Create Kubernetes Persistent Volume Claims
---------------------------------------------------

Next, you will create two PersistentVolumeClaim objects, ``mongo-db-claim`` and
``mongo-configdb-claim``.

This configuration is located in the file ``mongodb/mongo-pvc.yaml``.

Note how there's no explicit mention of Azure, AWS or any other cloud provider.

``ReadWriteOnce`` (RWO) means the volume can be mounted as
read-write by a single Kubernetes node.
(``ReadWriteOnce`` is the *only* access mode supported
by AzureDisk.)
``storage: 20Gi`` means the volume has a size of 20
`gibibytes <https://en.wikipedia.org/wiki/Gibibyte>`_.

You may want to update the ``spec.resources.requests.storage`` field in both
PersistentVolumeClaims to specify a different disk size.

Create the required Persistent Volume Claims using:

.. code:: bash

   $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-pvc.yaml

You can check their status using ``kubectl get pvc -w``.
Initially, the status of persistent volume claims might be "Pending"
but it should become "Bound" fairly quickly.
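
For example:

.. code:: bash

   $ kubectl --context k8s-bdb-test-cluster-0 get pvc -w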

.. note::

   The default Reclaim Policy for dynamically created persistent volumes is ``Delete``,
   which means the PV and its associated Azure storage resource will be automatically
   deleted on deletion of the PVC or PV. To prevent this from happening, do
   the following steps to change the default reclaim policy of dynamically created PVs
   from ``Delete`` to ``Retain``:

   * Run the following command to list existing PVs:

     .. code:: bash

        $ kubectl --context k8s-bdb-test-cluster-0 get pv

   * Run the following command to update a PV's reclaim policy to ``Retain``:

     .. code:: bash

        $ kubectl --context k8s-bdb-test-cluster-0 patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

   For notes on recreating a persistent volume from a released Azure disk resource, consult
   :ref:`cluster-troubleshooting`.

.. _start-a-kubernetes-statefulset-for-mongodb:

Step 12: Start a Kubernetes StatefulSet for MongoDB
---------------------------------------------------

* This configuration is located in the file ``mongodb/mongo-ss.yaml``.

* Set the ``spec.serviceName`` to the value set in ``mdb-instance-name`` in
  the ConfigMap.
  For example, if the value set in the ``mdb-instance-name``
  is ``mdb-instance-0``, set the field to ``mdb-instance-0``.

* Set ``metadata.name``, ``spec.template.metadata.name`` and
  ``spec.template.metadata.labels.app`` to the value set in
  ``mdb-instance-name`` in the ConfigMap, followed by
  ``-ss``.
  For example, if the value set in the
  ``mdb-instance-name`` is ``mdb-instance-0``, set the fields to the value
  ``mdb-instance-0-ss``.

* Note how the MongoDB container uses the ``mongo-db-claim`` and the
  ``mongo-configdb-claim`` PersistentVolumeClaims for its ``/data/db`` and
  ``/data/configdb`` directories (mount paths).

* Note also that we use the pod's ``securityContext.capabilities.add``
  specification to add the ``FOWNER`` capability to the container. That is
  because the MongoDB container has the user ``mongodb``, with uid ``999`` and
  group ``mongodb``, with gid ``999``.
  When this container runs on a host with a mounted disk, the writes fail
  when there is no user with uid ``999``. To avoid this, we use the Docker
  feature of ``--cap-add=FOWNER``. This bypasses the uid and gid permission
  checks during writes and allows data to be persisted to disk.
  Refer to the `Docker docs
  <https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities>`_
  for details.

* As we gain more experience running MongoDB in testing and production, we
  will tweak the ``resources.limits.cpu`` and ``resources.limits.memory``.

* Set the ports to be exposed from the pod in the
  ``spec.containers[0].ports`` section. We currently only expose the MongoDB
  backend port. Set it to the value specified for ``mongodb-backend-port``
  in the ConfigMap.

* The configuration uses the following values set in the ConfigMap:

  - ``mdb-instance-name``
  - ``mongodb-replicaset-name``
  - ``mongodb-backend-port``

* The configuration uses the following values set in the Secret:

  - ``mdb-certs``
  - ``ca-auth``

* **Optional**: You can change the value for ``STORAGE_ENGINE_CACHE_SIZE`` in the
  ConfigMap (``storage-engine-cache-size``). For more information regarding this
  configuration, please consult the `MongoDB Official
  Documentation <https://docs.mongodb.com/manual/reference/configuration-options/#storage.wiredTiger.engineConfig.cacheSizeGB>`_.

* **Optional**: If you are not using the **Standard_D2_v2** virtual machines for Kubernetes agents as per the guide,
  please update the ``resources`` for ``mongo-ss``. We suggest allocating ``memory`` using the following scheme
  for a MongoDB StatefulSet:

  .. code:: bash

     memory = (Total_Memory_Agent_VM_GB - 2GB)
     STORAGE_ENGINE_CACHE_SIZE = memory / 2
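     # For example (illustrative), if each Kubernetes agent VM has 8 GB of RAM:
     #   memory = 8GB - 2GB = 6GB
     #   STORAGE_ENGINE_CACHE_SIZE = 6GB / 2 = 3GB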

* Create the MongoDB StatefulSet using:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss.yaml

* It might take up to 10 minutes for the disks, specified in the Persistent
  Volume Claims above, to be created and attached to the pod.
  The UI might show that the pod has errored with the message
  "timeout expired waiting for volumes to attach/mount". Use the CLI below
  to check the status of the pod in this case, instead of the UI.
  This happens due to a bug in Azure ACS.

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 get pods -w
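
Once the disks are attached and the pod is up, the output of that command
should include a line resembling the following (the values shown here are
illustrative):

.. code:: bash

   NAME                  READY     STATUS    RESTARTS   AGE
   mdb-instance-0-ss-0   1/1       Running   0          5m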
.. _configure-users-and-access-control-for-mongodb:
Step 13: Configure Users and Access Control for MongoDB
-------------------------------------------------------
* In this step, you will create a user on MongoDB with authorization
to create more users and assign
roles to them.
Note: You need to do this only when setting up the first MongoDB node of
the cluster.
* Find out the name of your MongoDB pod by reading the output
of the ``kubectl ... get pods`` command at the end of the last step.
It should be something like ``mdb-instance-0-ss-0``.
* Log in to the MongoDB pod using:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 exec -it <name of your MongoDB pod> bash
* Open a mongo shell using the certificates
already present at ``/etc/mongod/ssl/``
.. code:: bash
$ mongo --host localhost --port 27017 --verbose --ssl \
--sslCAFile /etc/mongod/ca/ca.pem \
--sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem
* Initialize the replica set using:
.. code:: bash
> rs.initiate( {
_id : "bigchain-rs",
members: [ {
_id : 0,
host :"<hostname>:27017"
} ]
} )
The ``hostname`` in this case will be the value set in
``mdb-instance-name`` in the ConfigMap.
For example, if the value set in the ``mdb-instance-name`` is
``mdb-instance-0``, set the ``hostname`` above to the value ``mdb-instance-0``.
* The instance should be voted as the ``PRIMARY`` in the replica set (since
this is the only instance in the replica set till now).
This can be observed from the mongo shell prompt,
which will read ``PRIMARY>``.
* Create a user ``adminUser`` on the ``admin`` database with the
authorization to create other users. This will only work the first time you
log in to the mongo shell. For further details, see `localhost
exception <https://docs.mongodb.com/manual/core/security-users/#localhost-exception>`_
in MongoDB.
.. code:: bash
PRIMARY> use admin
PRIMARY> db.createUser( {
user: "adminUser",
pwd: "superstrongpassword",
roles: [ { role: "userAdminAnyDatabase", db: "admin" },
{ role: "clusterManager", db: "admin"} ]
} )
* Exit and restart the mongo shell using the above command.
Authenticate as the ``adminUser`` we created earlier:
.. code:: bash
PRIMARY> use admin
PRIMARY> db.auth("adminUser", "superstrongpassword")

  ``db.auth()`` returns 0 when authentication is not successful,
  and 1 when successful.
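
  For example, a successful login looks like this:

  .. code:: bash

     PRIMARY> db.auth("adminUser", "superstrongpassword")
     1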

* We need to specify the user name *as seen in the certificate* issued to
  the BigchainDB instance in order to authenticate correctly. Use
  the following ``openssl`` command to extract the user name from the
  certificate:

  .. code:: bash

     $ openssl x509 -in <path to the bigchaindb certificate> \
         -inform PEM -subject -nameopt RFC2253

  You should see an output line that resembles:

  .. code:: bash

     subject= emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE

  The ``subject`` line contains the complete user name we need to use for
  creating the user in the mongo shell, as follows:
.. code:: bash
PRIMARY> db.getSiblingDB("$external").runCommand( {
createUser: 'emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE',
writeConcern: { w: 'majority' , wtimeout: 5000 },
roles: [
{ role: 'clusterAdmin', db: 'admin' },
{ role: 'readWriteAnyDatabase', db: 'admin' }
]
} )

* You can similarly create users for the MongoDB Monitoring Agent and the
  MongoDB Backup Agent. For example:

  .. code:: bash

     PRIMARY> db.getSiblingDB("$external").runCommand( {
         createUser: 'emailAddress=dev@bigchaindb.com,CN=test-mdb-mon-ssl,OU=MongoDB-Mon-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE',
         writeConcern: { w: 'majority' , wtimeout: 5000 },
         roles: [
           { role: 'clusterMonitor', db: 'admin' }
         ]
       } )

     PRIMARY> db.getSiblingDB("$external").runCommand( {
         createUser: 'emailAddress=dev@bigchaindb.com,CN=test-mdb-bak-ssl,OU=MongoDB-Bak-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE',
         writeConcern: { w: 'majority' , wtimeout: 5000 },
         roles: [
           { role: 'backup', db: 'admin' }
         ]
       } )
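
  If you want to double-check which users now exist on the ``$external``
  database (an optional sanity check, not part of the procedure above), you
  can run:

  .. code:: bash

     PRIMARY> db.getSiblingDB("$external").getUsers()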

.. _start-a-kubernetes-deployment-for-mongodb-monitoring-agent:

Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent
-------------------------------------------------------------------

* This configuration is located in the file
  ``mongodb-monitoring-agent/mongo-mon-dep.yaml``.

* Set ``metadata.name``, ``spec.template.metadata.name`` and
  ``spec.template.metadata.labels.app`` to the value set in
  ``mdb-mon-instance-name`` in the ConfigMap, followed by
  ``-dep``.
  For example, if the value set in the
  ``mdb-mon-instance-name`` is ``mdb-mon-instance-0``, set the fields to the
  value ``mdb-mon-instance-0-dep``.
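
  A quick way to sanity-check your edits (optional; ``mdb-mon-instance-0-dep``
  is just the example value from above) is:

  .. code:: bash

     $ grep -n "mdb-mon-instance-0-dep" mongodb-monitoring-agent/mongo-mon-dep.yaml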

* The configuration uses the following values set in the Secret:

  - ``mdb-mon-certs``
  - ``ca-auth``
  - ``cloud-manager-credentials``

* Start the Kubernetes Deployment using:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-monitoring-agent/mongo-mon-dep.yaml
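
  You can watch the Deployment come up using, for example:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 get deployments -w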

.. _start-a-kubernetes-deployment-for-mongodb-backup-agent:

Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent
---------------------------------------------------------------

* This configuration is located in the file
  ``mongodb-backup-agent/mongo-backup-dep.yaml``.

* Set ``metadata.name``, ``spec.template.metadata.name`` and
  ``spec.template.metadata.labels.app`` to the value set in
  ``mdb-bak-instance-name`` in the ConfigMap, followed by
  ``-dep``.
  For example, if the value set in the
  ``mdb-bak-instance-name`` is ``mdb-bak-instance-0``, set the fields to the
  value ``mdb-bak-instance-0-dep``.

* The configuration uses the following values set in the Secret:

  - ``mdb-bak-certs``
  - ``ca-auth``
  - ``cloud-manager-credentials``

* Start the Kubernetes Deployment using:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-backup-agent/mongo-backup-dep.yaml

Step 16: Start a Kubernetes Deployment for BigchainDB
------------------------------------------------------

* This configuration is located in the file
  ``bigchaindb/bigchaindb-dep.yaml``.

* Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
  value set in ``bdb-instance-name`` in the ConfigMap, followed by
  ``-dep``.
  For example, if the value set in the
  ``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the
  value ``bdb-instance-0-dep``.
* Set the value of ``BIGCHAINDB_KEYPAIR_PRIVATE`` (not base64-encoded).
(In the future, we'd like to pull the BigchainDB private key from
the Secret named ``bdb-private-key``,
but a Secret can only be mounted as a file,
so BigchainDB Server would have to be modified to look for it
in a file.)
* As we gain more experience running BigchainDB in testing and production,
we will tweak the ``resources.limits`` values for CPU and memory, and as
richer monitoring and probing becomes available in BigchainDB, we will
tweak the ``livenessProbe`` and ``readinessProbe`` parameters.
* Set the ports to be exposed from the pod in the
``spec.containers[0].ports`` section. We currently expose 2 ports -
``bigchaindb-api-port`` and ``bigchaindb-ws-port``. Set them to the
values specified in the ConfigMap.

* The configuration uses the following values set in the ConfigMap:

  - ``mdb-instance-name``
  - ``mongodb-backend-port``
  - ``mongodb-replicaset-name``
  - ``bigchaindb-database-name``
  - ``bigchaindb-server-bind``
  - ``bigchaindb-ws-interface``
  - ``cluster-fqdn``
  - ``bigchaindb-ws-port``
  - ``cluster-frontend-port``
  - ``bigchaindb-wsserver-advertised-scheme``
  - ``bdb-public-key``
  - ``bigchaindb-backlog-reassign-delay``
  - ``bigchaindb-database-maxtries``
  - ``bigchaindb-database-connection-timeout``
  - ``bigchaindb-log-level``
  - ``bdb-user``

* The configuration uses the following values set in the Secret:

  - ``bdb-certs``
  - ``ca-auth``

* Create the BigchainDB Deployment using:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-dep.yaml

* You can check its status using the command ``kubectl get deployments -w``.
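
  For example, with the kubectl context used throughout this page:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 get deployments -w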

.. _start-a-kubernetes-deployment-for-openresty:

Step 17: Start a Kubernetes Deployment for OpenResty
----------------------------------------------------

* This configuration is located in the file
  ``nginx-openresty/nginx-openresty-dep.yaml``.
* Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
value set in ``openresty-instance-name`` in the ConfigMap, followed by
``-dep``.
For example, if the value set in the
``openresty-instance-name`` is ``openresty-instance-0``, set the fields to
the value ``openresty-instance-0-dep``.
* Set the port to be exposed from the pod in the
``spec.containers[0].ports`` section. We currently expose the port at
which OpenResty is listening for requests, ``openresty-backend-port`` in
the above ConfigMap.

* The configuration uses the following values set in the Secret:

  - ``threescale-credentials``

* The configuration uses the following values set in the ConfigMap:

  - ``cluster-dns-server-ip``
  - ``openresty-backend-port``
  - ``ngx-bdb-instance-name``
  - ``bigchaindb-api-port``

* Create the OpenResty Deployment using:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-dep.yaml

* You can check its status using the command ``kubectl get deployments -w``.

Step 18: Configure the MongoDB Cloud Manager
--------------------------------------------

Refer to the
:ref:`documentation <configure-mongodb-cloud-manager-for-monitoring-and-backup>`
for details on how to configure the MongoDB Cloud Manager to enable
monitoring and backup.

.. _verify-the-bigchaindb-node-setup:

Step 19: Verify the BigchainDB Node Setup
-----------------------------------------

Step 19.1: Testing Internally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To test the setup of your BigchainDB node, you could use a Docker container
that provides utilities like ``nslookup``, ``curl`` and ``dig``.
For example, you could use a container based on our
`bigchaindb/toolbox <https://hub.docker.com/r/bigchaindb/toolbox/>`_ image.
(The corresponding
`Dockerfile <https://github.com/bigchaindb/bigchaindb/blob/master/k8s/toolbox/Dockerfile>`_
is in the ``bigchaindb/bigchaindb`` repository on GitHub.)

You can use it as below to get started immediately:

.. code:: bash

   $ kubectl --context k8s-bdb-test-cluster-0 \
     run -it toolbox \
     --image bigchaindb/toolbox \
     --image-pull-policy=Always \
     --restart=Never --rm

It will drop you to the shell prompt.

To test the MongoDB instance:

.. code:: bash

   $ nslookup mdb-instance-0

   $ dig +noall +answer _mdb-port._tcp.mdb-instance-0.default.svc.cluster.local SRV

   $ curl -X GET http://mdb-instance-0:27017

The ``nslookup`` command should output the configured IP address of the service
(in the cluster).
The ``dig`` command should return the configured port numbers.
The ``curl`` command tests the availability of the service.

To test the BigchainDB instance:

.. code:: bash

   $ nslookup bdb-instance-0

   $ dig +noall +answer _bdb-api-port._tcp.bdb-instance-0.default.svc.cluster.local SRV

   $ dig +noall +answer _bdb-ws-port._tcp.bdb-instance-0.default.svc.cluster.local SRV

   $ curl -X GET http://bdb-instance-0:9984

   $ wsc -er ws://bdb-instance-0:9985/api/v1/streams/valid_transactions

To test the OpenResty instance:

.. code:: bash

   $ nslookup openresty-instance-0

   $ dig +noall +answer _openresty-svc-port._tcp.openresty-instance-0.default.svc.cluster.local SRV

To verify that the OpenResty instance forwards requests properly, send a ``POST``
transaction to OpenResty at port ``80`` and check the response from the backend
BigchainDB instance.
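
For example, assuming you already have a valid, signed BigchainDB transaction
saved in a file named ``tx.json`` (the filename is only an illustration; you
could create such a transaction with the BigchainDB Python Driver), a sketch of
such a check might look like:

.. code:: bash

   $ curl -X POST http://openresty-instance-0:80/api/v1/transactions \
       -H "Content-Type: application/json" \
       -d @tx.json

Depending on your 3scale configuration, the request may also need valid 3scale
credentials (e.g. an ``app_id`` and ``app_key``) before OpenResty forwards it
to the BigchainDB backend.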

To test the vanilla NGINX instance:

.. code:: bash

   $ nslookup ngx-http-instance-0

   $ dig +noall +answer _public-cluster-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV

   $ dig +noall +answer _public-health-check-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV

   $ wsc -er ws://ngx-http-instance-0/api/v1/streams/valid_transactions

   $ curl -X GET http://ngx-http-instance-0:27017

The above curl command should result in the response
``It looks like you are trying to access MongoDB over HTTP on the native driver port.``

To test the NGINX instance with HTTPS and 3scale integration:

.. code:: bash

   $ nslookup ngx-instance-0

   $ dig +noall +answer _public-secure-cluster-port._tcp.ngx-instance-0.default.svc.cluster.local SRV

   $ dig +noall +answer _public-mdb-port._tcp.ngx-instance-0.default.svc.cluster.local SRV

   $ dig +noall +answer _public-insecure-cluster-port._tcp.ngx-instance-0.default.svc.cluster.local SRV

   $ wsc -er wss://<cluster-fqdn>/api/v1/streams/valid_transactions

   $ curl -X GET http://<cluster-fqdn>:27017

The above curl command should result in the response
``It looks like you are trying to access MongoDB over HTTP on the native driver port.``

Step 19.2: Testing Externally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Check the MongoDB monitoring and backup agents on the MongoDB Cloud Manager
portal to verify that they are working fine.

If you are using NGINX with HTTP support, accessing the URL
``http://<DNS/IP of your exposed BigchainDB service endpoint>:cluster-frontend-port``
in your browser should result in a JSON response that shows the BigchainDB
server version, among other things.
If you are using NGINX with HTTPS support, use ``https`` instead of
``http`` above.
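
For example, instead of a browser you could use curl, substituting your own
values for the DNS name/IP and the ``cluster-frontend-port``:

.. code:: bash

   $ curl -X GET http://<DNS/IP of your exposed BigchainDB service endpoint>:<cluster-frontend-port>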

Use the Python Driver to send some transactions to the BigchainDB node and
verify that your node or cluster works as expected.