Update docs to reflect TLS setup

This commit is contained in:
krish7919 (Krish) 2017-05-29 13:42:49 +02:00 committed by Krish
parent 54189ba418
commit 8fb1c0be8b
13 changed files with 643 additions and 312 deletions

View File

@@ -71,10 +71,10 @@ Step 2: Prepare the New Kubernetes Cluster
Follow the steps in the sections to set up Storage Classes and Persistent Volume
Claims, and to run MongoDB in the new cluster:
1. :ref:`Add Storage Classes <Step 3: Create Storage Classes>`
2. :ref:`Add Persistent Volume Claims <Step 4: Create Persistent Volume Claims>`
3. :ref:`Create the Config Map <Step 5: Create the Config Map - Optional>`
4. :ref:`Run MongoDB instance <Step 6: Run MongoDB as a StatefulSet>`
1. :ref:`Add Storage Classes <Step 9: Create Kubernetes Storage Classes for MongoDB>`.
2. :ref:`Add Persistent Volume Claims <Step 10: Create Kubernetes Persistent Volume Claims>`.
3. :ref:`Create the Config Map <Step 4: Configure the Node>`.
4. :ref:`Run MongoDB instance <Step 11: Start a Kubernetes StatefulSet for MongoDB>`.
Step 3: Add the New MongoDB Instance to the Existing Replica Set
@@ -166,13 +166,13 @@ show-config`` command to check that the keyring is updated.
Step 7: Run NGINX as a Deployment
---------------------------------
Please refer to :ref:`this <Step 10: Run NGINX as a Deployment>` to
Please refer to :ref:`this <Step 8: Start the NGINX Kubernetes Deployment>` to
set up NGINX on your new node.
Step 8: Test Your New BigchainDB Node
-------------------------------------
Please refer to the testing steps :ref:`here <Step 11: Verify the BigchainDB
Please refer to the testing steps :ref:`here <Step 18: Verify the BigchainDB
Node Setup>` to verify that your new BigchainDB node is working as expected.

View File

@@ -53,7 +53,7 @@ by using the subcommand ``./easyrsa help``
Step 3: Create an Intermediate CA
---------------------------------
TODO(Krish)
TODO
Step 4: Generate a Certificate Revocation List
----------------------------------------------
@@ -64,9 +64,9 @@ You can generate a Certificate Revocation List (CRL) using:
./easyrsa gen-crl
You will need to run this command every time you revoke a certificate and the
generated ``crl.pem`` needs to be uploaded to your infrastructure to prevent
the revoked certificate from being used again.
You will need to run this command every time you revoke a certificate.
The generated ``crl.pem`` needs to be uploaded to your infrastructure to
prevent the revoked certificate from being used again.
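If you want to inspect the contents of the generated CRL before distributing
it, you can use a standard OpenSSL command (an optional check; adjust the
path if your ``crl.pem`` lives elsewhere, e.g. under ``pki/``):
.. code:: bash
openssl crl -in crl.pem -noout -text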
Step 5: Secure the CA

View File

@@ -1,9 +1,8 @@
How to Generate a Client Certificate for MongoDB
================================================
This page enumerates the steps *we* use
to generate a client certificate
to be used by clients who want to connect to a TLS-secured MongoDB cluster.
This page enumerates the steps *we* use to generate a client certificate to be
used by clients who want to connect to a TLS-secured MongoDB cluster.
We use Easy-RSA.
@@ -34,8 +33,8 @@ and using:
./easyrsa gen-req bdb-instance-0 nopass
You should change ``bdb-instance-0`` to a value based on the client
the certificate is for.
You should change ``bdb-instance-0`` to a value that reflects what the
client certificate is being used for.
Tip: You can get help with the ``easyrsa`` command (and its subcommands)
by using the subcommand ``./easyrsa help``
@@ -44,7 +43,7 @@ by using the subcommand ``./easyrsa help``
Step 3: Get the Client Certificate Signed
-----------------------------------------
The CSR file (created in the last step)
The CSR file (created in the previous step)
should be located in ``pki/reqs/bdb-instance-0.req``.
You need to send it to the organization managing the cluster
so that they can use their CA

View File

@@ -0,0 +1,68 @@
Configure MongoDB Cloud Manager for Monitoring and Backup
=========================================================
This document details the steps required to configure MongoDB Cloud Manager to
enable monitoring and backup of data in a MongoDB Replica Set.
Configure MongoDB Cloud Manager for Monitoring
----------------------------------------------
* Once the Monitoring Agent is up and running, open
`MongoDB Cloud Manager <https://cloud.mongodb.com>`_.
* Click ``Login`` under ``MongoDB Cloud Manager`` and log in to the Cloud
Manager.
* Select the group from the dropdown box on the page.
* Go to Settings, Group Settings and add a Preferred Hostnames regexp as
``^mdb-instance-[0-9]{1,2}$``. It may take up to 5 minutes for this setting
to take effect. You can refresh the browser window to verify whether the
changes have been saved.
* Next, click the ``Deployment`` tab, and then the ``Manage Existing``
button.
* On the ``Import your deployment for monitoring`` page, enter the hostname
to be the same as the one set for ``mdb-instance-name`` in the global
ConfigMap for a node.
For example, if the ``mdb-instance-name`` is set to ``mdb-instance-0``,
enter ``mdb-instance-0`` as the value in this field.
* Enter the port number as ``27017``, with no authentication.
* If you have TLS enabled, select the option to enable TLS/SSL for MongoDB
connections.
* Once the deployment is found, click the ``Continue`` button.
This may take about a minute or two.
* Do not add the ``Automation Agent`` when given the option to add it.
* Verify on the UI that data is being sent by the monitoring agent to the
Cloud Manager.
Configure MongoDB Cloud Manager for Backup
------------------------------------------
* Once the Backup Agent is up and running, open
`MongoDB Cloud Manager <https://cloud.mongodb.com>`_.
* Click ``Login`` under ``MongoDB Cloud Manager`` and log in to the Cloud
Manager.
* Select the group from the dropdown box on the page.
* Click the ``Backup`` tab.
* Click ``Begin Setup``.
* Click ``Next`` and select the replica set from the dropdown menu.
* Verify the details of your MongoDB instance and click ``Start``.
* It might take up to 5 minutes to start the backup process.
* Verify that data is being backed up on the UI.

View File

@@ -48,10 +48,10 @@ by copying the existing ``vars.example`` file
and then editing it.
You should change the
country, province, city, org and email
to the correct values for you.
to the correct values for your organisation.
(Note: The country, province, city, org and email are part of
the `Distinguished Name <https://en.wikipedia.org/wiki/X.509#Certificates>`_ (DN).)
The comments in the file explain what the variables mean.
The comments in the file explain what each of the variables means.
.. code:: bash
@@ -61,7 +61,7 @@ The comments in the file explain what the variables mean.
echo 'set_var EASYRSA_DN "org"' >> vars
echo 'set_var EASYRSA_KEY_SIZE 4096' >> vars
echo 'set_var EASYRSA_REQ_COUNTRY "DE"' >> vars
echo 'set_var EASYRSA_REQ_PROVINCE "Berlin"' >> vars
echo 'set_var EASYRSA_REQ_CITY "Berlin"' >> vars

View File

@@ -22,6 +22,7 @@ Feel free to change things to suit your needs or preferences.
node-on-kubernetes
add-node-on-kubernetes
upgrade-on-kubernetes
first-node
log-analytics
easy-rsa
cloud-manager
node-config-map-and-secrets

View File

@@ -0,0 +1,72 @@
Configure the Node
==================
Use the ConfigMap template in the ``configuration/config-map.yaml`` file to
configure the node. Update all the values for the keys in the
ConfigMaps ``vars``, ``mdb-fqdn``, ``bdb-public-key``, ``bdb-keyring`` and
``mongodb-whitelist``.
Use the Secret template in the ``configuration/secret.yaml`` file to configure
the secrets for this node. Update all the values for the keys in the Secrets
``mdb-agent-api-key``, ``https-certs``, ``bdb-private-key``,
``threescale-credentials`` and ``mdb-certs``.
You might not need all the keys during the deployment.
For example, if you plan to access the BigchainDB API over HTTP, you might
not need the ``https-certs`` Secret.
Ensure that all the Secret values are base64-encoded and that the unused ones
are set to an empty string.
For example, assuming that the public key chain is named ``cert.pem`` and the
private key ``cert.key``, run the following commands to encode the
certificates into a single continuous string that can be embedded in YAML.
Then copy the contents of ``cert.pem.b64`` into the ``cert.pem`` field,
and the contents of ``cert.key.b64`` into the ``cert.key`` field.
.. code:: bash
cat cert.pem | base64 -w 0 > cert.pem.b64
cat cert.key | base64 -w 0 > cert.key.b64
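To double-check that an encoded value round-trips correctly (an optional
sanity check), decode it and compare it with the original file:
.. code:: bash
base64 -d cert.pem.b64 | diff - cert.pem && echo "cert.pem OK"
base64 -d cert.key.b64 | diff - cert.key && echo "cert.key OK"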
Create the ConfigMap and Secret using the commands:
.. code:: bash
kubectl --context k8s-bdb-test-cluster-0 apply -f configuration/config-map.yaml
kubectl --context k8s-bdb-test-cluster-0 apply -f configuration/secret.yaml
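You can verify that both objects were created:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 get configmaps
$ kubectl --context k8s-bdb-test-cluster-0 get secrets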
Some of the Node Configuration Options
--------------------------------------
1. ConfigMap vars.mdb-instance-name
* MongoDB reads the local ``/etc/hosts`` file while bootstrapping a replica
set to resolve the hostname provided to the ``rs.initiate()`` command.
It needs to ensure that the replica set is being initialized in the same
instance where the MongoDB instance is running.
* We use the value in the ``mdb-instance-name`` field to achieve this.
* This field will be the DNS name of your MongoDB instance, and Kubernetes
maps this name to its internal DNS.
* This field will also be used by other MongoDB instances when forming a
MongoDB replica set.
* We use ``mdb-instance-0``, ``mdb-instance-1`` and so on in our
documentation.
2. ConfigMap bdb-keyring.bdb-keyring
* This value specifies the public keys of all the nodes in a BigchainDB
cluster.
* It is a colon-separated (``:``) list, similar to the ``PATH`` variable on
Unix systems; see the placeholder example after this list.
3. ConfigMap bdb-public-key.bdb-public-key
* This value specifies the public key of the current BigchainDB node.
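For illustration, a ``bdb-keyring`` value for a hypothetical three-node
cluster would be assembled like this (placeholder keys, not real ones):
.. code:: bash
# Placeholder example: the keyring is the nodes' public keys joined with ':'
BDB_KEYRING="<node-0-public-key>:<node-1-public-key>:<node-2-public-key>"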

View File

@@ -25,6 +25,19 @@ Step 2: Configure kubectl
The default location of the kubectl configuration file is ``~/.kube/config``.
If you don't have that file, then you need to get it.
Find out the ``kubectl context`` of your Kubernetes cluster using the command:
.. code:: bash
$ kubectl config view
The context will be one of the ``name`` entries under the ``contexts`` list
in the output.
Assuming that the current context for your cluster is
``k8s-bdb-test-cluster-0``, you will always specify the context in the
following commands as ``kubectl --context k8s-bdb-test-cluster-0``.
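For illustration, a trimmed ``kubectl config view`` output might look like
the following (hypothetical cluster and user names; yours will differ):
.. code:: bash
$ kubectl config view
# apiVersion: v1
# contexts:
# - context:
#     cluster: k8s-bdb-test-cluster-0
#     user: k8s-bdb-test-cluster-0-admin
#   name: k8s-bdb-test-cluster-0
# current-context: k8s-bdb-test-cluster-0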
**Azure.** If you deployed your Kubernetes cluster on Azure
using the Azure CLI 2.0 (as per :doc:`our template <template-kubernetes-azure>`),
then you can get the ``~/.kube/config`` file using:
@@ -42,8 +55,254 @@ then try adding ``--ssh-key-file ~/.ssh/<name>``
to the above command (i.e. the path to the private key).
Step 3: Create Storage Classes
------------------------------
Step 3: Connect to the Cluster UI (Optional)
--------------------------------------------
* Get the kubectl context for this cluster using ``kubectl config view``.
* For the above commands, the context would be ``k8s-bdb-test-cluster-0``.
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 proxy -p 8001
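Once the proxy is running, the cluster dashboard is typically reachable from
your local machine (the exact path can vary with the Kubernetes version):
.. code:: bash
$ curl -L http://localhost:8001/ui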
Step 4: Configure the Node
--------------------------
* You need to have all the information :ref:`listed here <Things Each Node Operator Must Do>`.
* The information needs to be populated in ``configuration/config-map.yaml``
and ``configuration/secret.yaml``.
* For more details, refer to the document on how to :ref:`configure a node <Configure the Node>`.
Step 4: Start the NGINX Service
-------------------------------
* This will give us a public IP for the cluster.
* Once you complete this step, you might need to wait up to 10 minutes for
the public IP to be assigned.
* You have the option to use vanilla NGINX without HTTPS support or an
OpenResty NGINX integrated with 3scale API Gateway.
Step 4.1: Vanilla NGINX
^^^^^^^^^^^^^^^^^^^^^^^
* This configuration is located in the file ``nginx/nginx-svc.yaml``.
* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
set in ``ngx-instance-name`` in the ConfigMap above.
* Set the ``spec.selector.app`` to the value set in ``ngx-instance-name`` in
the ConfigMap followed by ``-dep``. For example, if the value set in the
``ngx-instance-name`` is ``ngx-instance-0``, set the
``spec.selector.app`` to ``ngx-instance-0-dep``.
* Start the Kubernetes Service:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx/nginx-svc.yaml
Step 4.2: OpenResty NGINX + 3scale
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* This configuration is located in the file ``nginx/nginx-3scale-svc.yaml``.
* You have to enable HTTPS for this one and will need an HTTPS certificate
for your domain.
* You should have already created the Kubernetes Secret in the previous
step.
* Set the ``spec.selector.app`` to the value set in ``ngx-instance-name`` in
the ConfigMap followed by ``-dep``. For example, if the value set in the
``ngx-instance-name`` is ``ngx-instance-0``, set the
``spec.selector.app`` to ``ngx-instance-0-dep``.
* Start the Kubernetes Service:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-3scale/nginx-3scale-svc.yaml
Step 5: Assign DNS Name to the NGINX Public IP
----------------------------------------------
* This step is required only if you are planning to set up multiple
`BigchainDB nodes
<https://docs.bigchaindb.com/en/latest/terminology.html>`_ or are using
HTTPS certificates tied to a domain.
* The following command can help you find out if the nginx service started
above has been assigned a public IP or external IP address:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 get svc -w
* Once a public IP is assigned, you can log in to the Azure portal and map it to
a DNS name.
* We usually assign ``bdb-test-cluster-0``, ``bdb-test-cluster-1`` and
so on in our documentation.
* Let us assume that we assigned the unique name of ``bdb-test-cluster-0`` here.
**Set up DNS mapping in Azure.**
Select the current Azure resource group and look for the ``Public IP``
resource. You should see at least 2 entries there - one for the Kubernetes
master and the other for the MongoDB instance. You may have to ``Refresh`` the
Azure web page listing the resources in a resource group for the latest
changes to be reflected.
Select the ``Public IP`` resource that is attached to your service (it should
have the Kubernetes cluster name along with a random string),
select ``Configuration``, add the DNS name that was added in the
ConfigMap earlier, click ``Save``, and wait for the changes to be applied.
To verify the DNS setting is operational, you can run ``nslookup <dns
name added in ConfigMap>`` from your local Linux shell.
This will ensure that when you scale the replica set later, other MongoDB
members in the replica set can reach this instance.
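For example, if you assigned the name ``bdb-test-cluster-0`` in the
``westeurope`` Azure location (both hypothetical), the check would be:
.. code:: bash
$ nslookup bdb-test-cluster-0.westeurope.cloudapp.azure.com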
Step 6: Start the MongoDB Kubernetes Service
--------------------------------------------
* This configuration is located in the file ``mongodb/mongo-svc.yaml``.
* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
set in ``mdb-instance-name`` in the ConfigMap above.
* Set the ``spec.selector.app`` to the value set in ``mdb-instance-name`` in
the ConfigMap followed by ``-ss``. For example, if the value set in the
``mdb-instance-name`` is ``mdb-instance-0``, set the
``spec.selector.app`` to ``mdb-instance-0-ss``.
* Start the Kubernetes Service:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-svc.yaml
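You can confirm that the Service is up (assuming the instance name
``mdb-instance-0`` used in the examples above):
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 get svc mdb-instance-0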
Step 7: Start the BigchainDB Kubernetes Service
-----------------------------------------------
* This configuration is located in the file ``bigchaindb/bigchaindb-svc.yaml``.
* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
set in ``bdb-instance-name`` in the ConfigMap above.
* Set the ``spec.selector.app`` to the value set in ``bdb-instance-name`` in
the ConfigMap followed by ``-dep``. For example, if the value set in the
``bdb-instance-name`` is ``bdb-instance-0``, set the
``spec.selector.app`` to ``bdb-instance-0-dep``.
* Start the Kubernetes Service:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-svc.yaml
Step 8: Start the NGINX Kubernetes Deployment
---------------------------------------------
* NGINX is used as a proxy to both the BigchainDB and MongoDB instances in
the node. It proxies HTTP requests on port 80 to the BigchainDB backend,
and TCP connections on port 27017 to the MongoDB backend.
* As in step 4, you have the option to use vanilla NGINX or an OpenResty
NGINX integrated with 3scale API Gateway.
Step 8.1: Vanilla NGINX
^^^^^^^^^^^^^^^^^^^^^^^
* This configuration is located in the file ``nginx/nginx-dep.yaml``.
* Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
to the value set in ``ngx-instance-name`` in the ConfigMap followed by a
``-dep``. For example, if the value set in the ``ngx-instance-name`` is
``ngx-instance-0``, set the fields to ``ngx-instance-0-dep``.
* Set ``MONGODB_BACKEND_HOST`` env var to
the value set in ``mdb-instance-name`` in the ConfigMap, followed by
``.default.svc.cluster.local``. For example, if the value set in the
``mdb-instance-name`` is ``mdb-instance-0``, set the
``MONGODB_BACKEND_HOST`` env var to
``mdb-instance-0.default.svc.cluster.local``.
* Set ``BIGCHAINDB_BACKEND_HOST`` env var to
the value set in ``bdb-instance-name`` in the ConfigMap, followed by
``.default.svc.cluster.local``. For example, if the value set in the
``bdb-instance-name`` is ``bdb-instance-0``, set the
``BIGCHAINDB_BACKEND_HOST`` env var to
``bdb-instance-0.default.svc.cluster.local``.
* Set ``MONGODB_FRONTEND_PORT`` to 27017, or the port number on which you
want to expose MongoDB service.
* Set ``BIGCHAINDB_FRONTEND_PORT`` to 80, or the port number on which you
want to expose BigchainDB service.
* Start the Kubernetes Deployment:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx/nginx-dep.yaml
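To confirm that the environment variables were set as intended, you can
inspect the running container; this is a sketch, and you would substitute the
actual pod name reported by ``kubectl get po``:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 exec -it <nginx-pod-name> -- env | grep -E 'MONGODB|BIGCHAINDB'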
Step 8.2: OpenResty NGINX + 3scale
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* This configuration is located in the file
``nginx-3scale/nginx-3scale-dep.yaml``.
* Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
to the value set in ``ngx-instance-name`` in the ConfigMap followed by a
``-dep``. For example, if the value set in the ``ngx-instance-name`` is
``ngx-instance-0``, set the fields to ``ngx-instance-0-dep``.
* Set ``MONGODB_BACKEND_HOST`` env var to
the value set in ``mdb-instance-name`` in the ConfigMap, followed by
``.default.svc.cluster.local``. For example, if the value set in the
``mdb-instance-name`` is ``mdb-instance-0``, set the
``MONGODB_BACKEND_HOST`` env var to
``mdb-instance-0.default.svc.cluster.local``.
* Set ``BIGCHAINDB_BACKEND_HOST`` env var to
the value set in ``bdb-instance-name`` in the ConfigMap, followed by
``.default.svc.cluster.local``. For example, if the value set in the
``bdb-instance-name`` is ``bdb-instance-0``, set the
``BIGCHAINDB_BACKEND_HOST`` env var to
``bdb-instance-0.default.svc.cluster.local``.
* Set ``MONGODB_FRONTEND_PORT`` to 27017, or the port number on which you
want to expose the MongoDB service.
* Set ``BIGCHAINDB_FRONTEND_PORT`` to 443, or the port number on which you
want to expose the BigchainDB service over HTTPS.
* Start the Kubernetes Deployment:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-3scale/nginx-3scale-dep.yaml
Step 9: Create Kubernetes Storage Classes for MongoDB
-----------------------------------------------------
MongoDB needs somewhere to store its data persistently,
outside the container where MongoDB is running.
@@ -67,7 +326,9 @@ see `the Kubernetes docs about persistent volumes
The first thing to do is create the Kubernetes storage classes.
**Azure.** First, you need an Azure storage account.
**Set up Storage Classes in Azure.**
First, you need an Azure storage account.
If you deployed your Kubernetes cluster on Azure
using the Azure CLI 2.0
(as per :doc:`our template <template-kubernetes-azure>`),
@@ -89,20 +350,17 @@ For future reference, the command to create a storage account is
`az storage account create <https://docs.microsoft.com/en-us/cli/azure/storage/account#create>`_.
Get the file ``mongo-sc.yaml`` from GitHub using:
The Kubernetes template for configuration of Storage Class is located in the
file ``mongodb/mongo-sc.yaml``.
.. code:: bash
$ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/mongodb/mongo-sc.yaml
You may have to update the ``parameters.location`` field in both the files to
You may have to update the ``parameters.location`` field in the file to
specify the location you are using in Azure.
Create the required storage classes using:
.. code:: bash
$ kubectl apply -f mongo-sc.yaml
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-sc.yaml
You can check if it worked using ``kubectl get storageclasses``.
@@ -117,16 +375,13 @@ Kubernetes just looks for a storageAccount
with the specified skuName and location.
Step 4: Create Persistent Volume Claims
---------------------------------------
Step 10: Create Kubernetes Persistent Volume Claims
---------------------------------------------------
Next, you will create two PersistentVolumeClaim objects ``mongo-db-claim`` and
``mongo-configdb-claim``.
Get the file ``mongo-pvc.yaml`` from GitHub using:
.. code:: bash
$ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/mongodb/mongo-pvc.yaml
This configuration is located in the file ``mongodb/mongo-pvc.yaml``.
Note how there's no explicit mention of Azure, AWS or whatever.
``ReadWriteOnce`` (RWO) means the volume can be mounted as
@@ -143,7 +398,7 @@ Create the required Persistent Volume Claims using:
.. code:: bash
$ kubectl apply -f mongo-pvc.yaml
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-pvc.yaml
You can check its status using: ``kubectl get pvc -w``
@@ -152,270 +407,155 @@ Initially, the status of persistent volume claims might be "Pending"
but it should become "Bound" fairly quickly.
Step 5: Create the Config Map - Optional
----------------------------------------
This step is required only if you are planning to set up multiple
`BigchainDB nodes
<https://docs.bigchaindb.com/en/latest/terminology.html>`_.
MongoDB reads the local ``/etc/hosts`` file while bootstrapping a replica set
to resolve the hostname provided to the ``rs.initiate()`` command. It needs to
ensure that the replica set is being initialized in the same instance where
the MongoDB instance is running.
To achieve this, you will create a ConfigMap with the FQDN of the MongoDB instance
and populate the ``/etc/hosts`` file with this value so that a replica set can
be created seamlessly.
Get the file ``mongo-cm.yaml`` from GitHub using:
.. code:: bash
$ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/mongodb/mongo-cm.yaml
You may want to update the ``data.fqdn`` field in the file before creating the
ConfigMap. ``data.fqdn`` field will be the DNS name of your MongoDB instance.
This will be used by other MongoDB instances when forming a MongoDB
replica set. It should resolve to the MongoDB instance in your cluster when
you are done with the setup. This will help when you are adding more MongoDB
instances to the replica set in the future.
**Azure.**
In Kubernetes on ACS, the name you populate in the ``data.fqdn`` field
will be used to configure a DNS name for the public IP assigned to the
Kubernetes Service that is the frontend for the MongoDB instance.
We suggest using a name that will already be available in Azure.
We use ``mdb-instance-0``, ``mdb-instance-1`` and so on in this document,
which gives us ``mdb-instance-0.<azure location>.cloudapp.azure.com``,
``mdb-instance-1.<azure location>.cloudapp.azure.com``, etc. as the FQDNs.
The ``<azure location>`` is the Azure datacenter location you are using,
which can also be obtained using the ``az account list-locations`` command.
You can also try to assign a name to a Public IP in Azure before starting
the process, or use ``nslookup`` with the name you have in mind to check
if it's available for use.
You should ensure that the name specified in the ``data.fqdn`` field is
a unique one.
**Kubernetes on bare-metal or other cloud providers.**
You need to provide the name resolution function
by other means (using DNS providers like GoDaddy, CloudFlare or your own
private DNS server). The DNS set up for other environments is currently
beyond the scope of this document.
Create the required ConfigMap using:
.. code:: bash
$ kubectl apply -f mongo-cm.yaml
You can check its status using: ``kubectl get cm``
Now you are ready to run MongoDB and BigchainDB on our Kubernetes cluster.
Step 6: Run MongoDB as a StatefulSet
------------------------------------
Get the file ``mongo-ss.yaml`` from GitHub using:
.. code:: bash
$ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/mongodb/mongo-ss.yaml
Note how the MongoDB container uses the ``mongo-db-claim`` and the
``mongo-configdb-claim`` PersistentVolumeClaims for its ``/data/db`` and
``/data/configdb`` directories (mount path). Note also that we use the pod's
``securityContext.capabilities.add`` specification to add the ``FOWNER``
capability to the container.
That is because the MongoDB container has the user ``mongodb``, with uid ``999``
and group ``mongodb``, with gid ``999``.
When this container runs on a host with a mounted disk, the writes fail when
there is no user with uid ``999``.
To avoid this, we use the Docker feature of ``--cap-add=FOWNER``.
This bypasses the uid and gid permission checks during writes and allows data
to be persisted to disk.
Refer to the
`Docker docs <https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities>`_
for details.
As we gain more experience running MongoDB in testing and production, we will
tweak the ``resources.limits.cpu`` and ``resources.limits.memory``.
We will also stop exposing port ``27017`` globally and/or allow only certain
hosts to connect to the MongoDB instance in the future.
Create the required StatefulSet using:
.. code:: bash
$ kubectl apply -f mongo-ss.yaml
You can check its status using the commands ``kubectl get statefulsets -w``
and ``kubectl get svc -w``
You may have to wait for up to 10 minutes for the disk to be created
and attached on the first run. The pod can fail several times with the message
saying that the timeout for mounting the disk was exceeded.
Step 7: Initialize a MongoDB Replica Set - Optional
Step 11: Start a Kubernetes StatefulSet for MongoDB
---------------------------------------------------
This step is required only if you are planning to set up multiple
`BigchainDB nodes
<https://docs.bigchaindb.com/en/latest/terminology.html>`_.
* This configuration is located in the file ``mongodb/mongo-ss.yaml``.
* Set the ``spec.serviceName`` to the value set in ``mdb-instance-name`` in
the ConfigMap.
For example, if the value set in the ``mdb-instance-name``
is ``mdb-instance-0``, set the field to ``mdb-instance-0``.
* Set ``metadata.name``, ``spec.template.metadata.name`` and
``spec.template.metadata.labels.app`` to the value set in
``mdb-instance-name`` in the ConfigMap, followed by
``-ss``.
For example, if the value set in the
``mdb-instance-name`` is ``mdb-instance-0``, set the fields to the value
``mdb-instance-0-ss``.
Log in to the running MongoDB instance and access the mongo shell using:
* Note how the MongoDB container uses the ``mongo-db-claim`` and the
``mongo-configdb-claim`` PersistentVolumeClaims for its ``/data/db`` and
``/data/configdb`` directories (mount path).
* Note also that we use the pod's ``securityContext.capabilities.add``
specification to add the ``FOWNER`` capability to the container. That is
because the MongoDB container has the user ``mongodb``, with uid ``999`` and
group ``mongodb``, with gid ``999``.
When this container runs on a host with a mounted disk, the writes fail
when there is no user with uid ``999``. To avoid this, we use the Docker
feature of ``--cap-add=FOWNER``. This bypasses the uid and gid permission
checks during writes and allows data to be persisted to disk.
Refer to the `Docker docs
<https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities>`_
for details.
.. code:: bash
* As we gain more experience running MongoDB in testing and production, we
will tweak the ``resources.limits.cpu`` and ``resources.limits.memory``.
* It might take up to 10 minutes for the disks to be created and attached to
the pod. The UI might show that the pod has errored with the
message "timeout expired waiting for volumes to attach/mount". Use the CLI
below to check the status of the pod in this case, instead of the UI. This
happens due to a bug in Azure ACS.
$ kubectl exec -it mdb-0 -c mongodb -- /bin/bash
root@mdb-0:/# mongo --port 27017
.. code:: bash
You will initiate the replica set by using the ``rs.initiate()`` command from the
mongo shell. Its syntax is:
$ kubectl --context k8s-bdb-test-cluster-0 get po -w
* Create the MongoDB StatefulSet using:
.. code:: bash
.. code:: bash
rs.initiate({
_id : "<replica-set-name",
members: [ {
_id : 0,
host : "<fqdn of this instance>:<port number>"
} ]
})
An example command might look like:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss.yaml
> rs.initiate({ _id : "bigchain-rs", members: [ { _id : 0, host :"mdb-instance-0.westeurope.cloudapp.azure.com:27017" } ] })
Step 13: Start a Kubernetes Deployment for MongoDB Monitoring Agent
-------------------------------------------------------------------
* This configuration is located in the file
``mongodb-monitoring-agent/mongo-mon-dep.yaml``.
* Set ``metadata.name``, ``spec.template.metadata.name`` and
``spec.template.metadata.labels.app`` to the value set in
``mdb-mon-instance-name`` in the ConfigMap, followed by
``-dep``.
For example, if the value set in the
``mdb-mon-instance-name`` is ``mdb-mon-instance-0``, set the fields to the
value ``mdb-mon-instance-0-dep``.
* Start the Kubernetes Deployment using:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-monitoring-agent/mongo-mon-dep.yaml
where ``mdb-instance-0.westeurope.cloudapp.azure.com`` is the value stored in
the ``data.fqdn`` field in the ConfigMap created using ``mongo-cm.yaml``.
Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent
---------------------------------------------------------------
* This configuration is located in the file
``mongodb-backup-agent/mongo-backup-dep.yaml``.
* Set ``metadata.name``, ``spec.template.metadata.name`` and
``spec.template.metadata.labels.app`` to the value set in
``mdb-bak-instance-name`` in the ConfigMap, followed by
``-dep``.
For example, if the value set in the
``mdb-bak-instance-name`` is ``mdb-bak-instance-0``, set the fields to the
value ``mdb-bak-instance-0-dep``.
* Start the Kubernetes Deployment using:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-backup-agent/mongo-backup-dep.yaml
You should see changes in the mongo shell prompt from ``>``
to ``bigchain-rs:OTHER>`` to ``bigchain-rs:SECONDARY>`` and finally
to ``bigchain-rs:PRIMARY>``.
Step 16: Configure the MongoDB Cloud Manager
--------------------------------------------
You can use the ``rs.conf()`` and the ``rs.status()`` commands to check the
detailed replica set configuration now.
* Refer to the
:ref:`documentation <Configure MongoDB Cloud Manager for Monitoring and Backup>`
for details on how to configure the MongoDB Cloud Manager to enable
monitoring and backup.
Step 8: Create a DNS record - Optional
--------------------------------------
Step 17: Start a Kubernetes Deployment for BigchainDB
-----------------------------------------------------
This step is required only if you are planning to set up multiple
`BigchainDB nodes
<https://docs.bigchaindb.com/en/latest/terminology.html>`_.
* This configuration is located in the file
``bigchaindb/bigchaindb-dep.yaml``.
**Azure.** Select the current Azure resource group and look for the ``Public IP``
resource. You should see at least 2 entries there - one for the Kubernetes
master and the other for the MongoDB instance. You may have to ``Refresh`` the
Azure web page listing the resources in a resource group for the latest
changes to be reflected.
Select the ``Public IP`` resource that is attached to your service (it should
have the Kubernetes cluster name along with a random string),
select ``Configuration``, add the DNS name that was added in the
ConfigMap earlier, click ``Save``, and wait for the changes to be applied.
* Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
value set in ``bdb-instance-name`` in the ConfigMap, followed by
``-dep``.
For example, if the value set in the
``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the
value ``bdb-instance-0-dep``.
To verify the DNS setting is operational, you can run ``nslookup <dns
name added in ConfigMap>`` from your local Linux shell.
This will ensure that when you scale the replica set later, other MongoDB
members in the replica set can reach this instance.
Step 9: Run BigchainDB as a Deployment
--------------------------------------
Get the file ``bigchaindb-dep.yaml`` from GitHub using:
.. code:: bash
$ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/bigchaindb/bigchaindb-dep.yaml
Note that we set the ``BIGCHAINDB_DATABASE_HOST`` to ``mdb-svc`` which is the
name of the MongoDB service defined earlier.
We also hardcode the ``BIGCHAINDB_KEYPAIR_PUBLIC``,
``BIGCHAINDB_KEYPAIR_PRIVATE`` and ``BIGCHAINDB_KEYRING`` for now.
As we gain more experience running BigchainDB in testing and production, we
will tweak the ``resources.limits`` values for CPU and memory, and as richer
monitoring and probing becomes available in BigchainDB, we will tweak the
``livenessProbe`` and ``readinessProbe`` parameters.
We also plan to specify scheduling policies for the BigchainDB deployment so
that we ensure that BigchainDB and MongoDB are running in separate nodes, and
build security around the globally exposed port ``9984``.
Create the required Deployment using:
.. code:: bash
$ kubectl apply -f bigchaindb-dep.yaml
You can check its status using the command ``kubectl get deploy -w``
Step 10: Run NGINX as a Deployment
----------------------------------
NGINX is used as a proxy to both the BigchainDB and MongoDB instances in the
node.
It proxies HTTP requests on port 80 to the BigchainDB backend, and TCP
connections on port 27017 to the MongoDB backend.
You can also configure a whitelist in NGINX to allow only connections from
other instances in the MongoDB replica set to access the backend MongoDB
instance.
Get the file ``nginx-cm.yaml`` from GitHub using:
.. code:: bash
* Set ``BIGCHAINDB_DATABASE_HOST`` to the value set in ``mdb-instance-name``
in the ConfigMap.
For example, if the value set in the ``mdb-instance-name`` is
``mdb-instance-0``, set the field to the value ``mdb-instance-0``.
$ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/nginx/nginx-cm.yaml
The IP address whitelist can be explicitly configured in the ``nginx-cm.yaml``
file. You will need a list of the IP addresses of all the other MongoDB
instances in the cluster. If the MongoDB instances are specified by hostname,
these need to be resolved to the corresponding IP addresses. If the IP address
of any MongoDB instance changes, we can start a 'rolling upgrade' of NGINX
after updating the corresponding ConfigMap, without affecting availability.
Create the ConfigMap for the whitelist using:
.. code:: bash
* Set the appropriate ``BIGCHAINDB_KEYPAIR_PUBLIC``,
``BIGCHAINDB_KEYPAIR_PRIVATE`` values.
$ kubectl apply -f nginx-cm.yaml
Get the file ``nginx-dep.yaml`` from GitHub using:
.. code:: bash
* One way to generate a BigchainDB keypair is to run a Python shell with
the command
``from bigchaindb_driver import crypto; crypto.generate_keypair()``;
a shell one-liner sketch follows at the end of this step.
$ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/nginx/nginx-dep.yaml
* As we gain more experience running BigchainDB in testing and production,
we will tweak the ``resources.limits`` values for CPU and memory, and as
richer monitoring and probing becomes available in BigchainDB, we will
tweak the ``livenessProbe`` and ``readinessProbe`` parameters.
* Create the BigchainDB Deployment using:
Create the NGINX deployment using:
.. code:: bash
.. code:: bash
$ kubectl apply -f nginx-dep.yaml
$ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-dep.yaml
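As mentioned above, one way to generate a keypair from the shell; this is a
sketch, assuming the ``bigchaindb_driver`` Python package is installed:
.. code:: bash
$ python -c "from bigchaindb_driver import crypto; kp = crypto.generate_keypair(); print(kp.public_key); print(kp.private_key)"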
Step 11: Verify the BigchainDB Node Setup
* You can check its status using the command ``kubectl get deploy -w``
Step 18: Verify the BigchainDB Node Setup
-----------------------------------------
Step 11.1: Testing Internally
Step 18.1: Testing Internally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Run a container that provides utilities like ``nslookup``, ``curl`` and ``dig``
@@ -426,32 +566,26 @@ on the cluster and query the internal DNS and IP endpoints.
$ kubectl run -it toolbox --image <docker image to run> --restart=Never --rm
There is a generic image based on alpine:3.5 with the required utilities
hosted at Docker Hub under `bigchaindb/toolbox <https://hub.docker.com/r/bigchaindb/toolbox/>`_.
The corresponding Dockerfile is in the bigchaindb/bigchaindb repository on GitHub, at `https://github.com/bigchaindb/bigchaindb/blob/master/k8s/toolbox/Dockerfile <https://github.com/bigchaindb/bigchaindb/blob/master/k8s/toolbox/Dockerfile>`_.
hosted at Docker Hub under
`bigchaindb/toolbox <https://hub.docker.com/r/bigchaindb/toolbox/>`_.
The corresponding
`Dockerfile <https://github.com/bigchaindb/bigchaindb/blob/master/k8s/toolbox/Dockerfile>`_
is in the ``bigchaindb/bigchaindb`` repository on GitHub.
You can use it as below to get started immediately:
.. code:: bash
$ kubectl run -it toolbox --image bigchaindb/toolbox --restart=Never --rm
kubectl --context k8s-bdb-test-cluster-0 \
run -it toolbox \
--image bigchaindb/toolbox \
--image-pull-policy=Always \
--restart=Never --rm
It will drop you to the shell prompt.
Now you can query for the ``mdb`` and ``bdb`` service details.
.. code:: bash
# nslookup mdb-svc
# nslookup bdb-svc
# nslookup ngx-svc
# dig +noall +answer _mdb-port._tcp.mdb-svc.default.svc.cluster.local SRV
# dig +noall +answer _bdb-port._tcp.bdb-svc.default.svc.cluster.local SRV
# dig +noall +answer _ngx-public-mdb-port._tcp.ngx-svc.default.svc.cluster.local SRV
# dig +noall +answer _ngx-public-bdb-port._tcp.ngx-svc.default.svc.cluster.local SRV
# curl -X GET http://mdb-svc:27017
# curl -X GET http://bdb-svc:9984
# curl -X GET http://ngx-svc:80
# curl -X GET http://ngx-svc:27017
The ``nslookup`` commands should output the configured IP addresses of the
services in the cluster
@@ -461,16 +595,60 @@ various services in the cluster.
Finally, the ``curl`` commands test the availability of the services
themselves.
Step 11.2: Testing Externally
* Verify MongoDB instance
.. code:: bash
$ nslookup mdb-instance-0
$ dig +noall +answer _mdb-port._tcp.mdb-instance-0.default.svc.cluster.local SRV
$ curl -X GET http://mdb-instance-0:27017
* Verify BigchainDB instance
.. code:: bash
$ nslookup bdb-instance-0
$ dig +noall +answer _bdb-port._tcp.bdb-instance-0.default.svc.cluster.local SRV
$ curl -X GET http://bdb-instance-0:9984
* Verify NGINX instance
.. code:: bash
$ nslookup ngx-instance-0
$ dig +noall +answer _ngx-public-mdb-port._tcp.ngx-instance-0.default.svc.cluster.local SRV
$ curl -X GET http://ngx-instance-0:27017 # results in curl: (56) Recv failure: Connection reset by peer
$ dig +noall +answer _ngx-public-bdb-port._tcp.ngx-instance-0.default.svc.cluster.local SRV
* If you have run the vanilla NGINX instance, run
.. code:: bash
$ curl -X GET http://ngx-instance-0:80
* If you have the OpenResty NGINX + 3scale instance, run
.. code:: bash
$ curl -X GET https://ngx-instance-0
* Check the MongoDB monitoring and backup agent on the MongoDB Cloud Manager
portal to verify they are working fine.
* Send some transactions to BigchainDB and verify it's up and running!
Step 18.2: Testing Externally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Try to access the ``<dns/ip of your exposed bigchaindb service endpoint>:80``
in your browser. You should receive a JSON response that shows the BigchainDB
server version, among other things.
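Equivalently, from a shell, using a hypothetical endpoint name:
.. code:: bash
$ curl -X GET http://bdb-test-cluster-0.westeurope.cloudapp.azure.com:80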
Try to access the ``<dns/ip of your exposed mongodb service endpoint>:27017``
in your browser. If your IP is in the whitelist, you will receive a message
from the MongoDB instance stating that it doesn't allow HTTP connections to
the port anymore. If your IP is not in the whitelist, your access will be
blocked and you will not see any response from the MongoDB instance.

View File

@@ -1,8 +1,8 @@
How to Revoke an SSL/TLS Certificate
====================================
This page enumerates the steps *we* take to revoke a self-signed SSL/TLS certificate
in a cluster.
This page enumerates the steps *we* take to revoke a self-signed SSL/TLS
certificate in a cluster.
It can only be done by someone with access to the self-signed CA
associated with the cluster's managing organization.
@@ -23,11 +23,11 @@ certificate:
./easyrsa revoke <filename>
This will update the CA database with the revocation details.
The next step is to use the updated database to issue an up-to-date
certificate revocation list (CRL).
Step 2: Generate a New CRL
--------------------------
@@ -39,4 +39,3 @@ Generate a new CRL for your infrastructure using:
The generated ``crl.pem`` file needs to be uploaded to your infrastructure to
prevent the revoked certificate from being used again.

View File

@@ -35,19 +35,18 @@ and using something like:
./easyrsa --req-cn=mdb-instance-0 --subject-alt-name=DNS:localhost,DNS:mdb-instance-0 gen-req mdb-instance-0 nopass
You must replace the common name (``mdb-instance-0`` above)
with the common name of *your* MongoDB instance
(which should be the same as the hostname of your MongoDB instance).
You will be prompted to enter the Distinguished Name for this certificate. You
can hit enter to accept the default values or change them at each prompt.
You need to provide the ``DNS:localhost`` SAN during certificate generation for
using the ``localhost exception`` in the MongoDB instance.
You can replace the common name (``mdb-instance-0`` above) with any other name
so long as the instance can verify that it is the hostname.
You need to provide the ``DNS:localhost`` SAN during certificate generation
for using the ``localhost exception`` in the MongoDB instance.
All certificates can have this attribute without compromising security as the
``localhost exception`` works only the first time.
Tip: You can get help with the ``easyrsa`` command (and its subcommands)
by using the subcommand ``./easyrsa help``
Step 3: Get the Server Certificate Signed
-----------------------------------------
@@ -87,6 +86,6 @@ private keys.
Step 5: Update the MongoDB Config File
--------------------------------------
In the MongoDB configuration file,
set the ``net.ssl.PEMKeyFile`` parameter to the path of the ``mdb-instance-0.pem`` file,
and the ``net.ssl.CAFile`` parameter to the ``ca.crt`` file.
In the MongoDB configuration file, set the ``net.ssl.PEMKeyFile`` parameter to
the path of the ``mdb-instance-0.pem`` file, and the ``net.ssl.CAFile``
parameter to the ``ca.crt`` file.

View File

@@ -138,7 +138,7 @@ of a master node from the Azure Portal. For example:
.. note::
All the master nodes should have the *same* IP address and hostname
All the master nodes should have the *same* public IP address and hostname
(also called the Master FQDN).
The "agent" nodes shouldn't get public IP addresses or hostnames,

View File

@@ -84,6 +84,20 @@ and have an SSL certificate for the FQDN.
(You can get an SSL certificate from any SSL certificate provider).
☐ Share your BigchainDB *public* key with all the other nodes
in the BigchainDB cluster.
Don't share your private key.
☐ Get the BigchainDB public keys of all the other nodes in the cluster.
That list of public keys is known as the BigchainDB "keyring."
☐ Ask the managing organization
for the FQDN used to serve the BigchainDB APIs
and for a copy of the associated SSL/TLS certificate.
☐ If the cluster uses 3scale for API authentication, monitoring and billing,
you must ask the managing organization for all relevant 3scale credentials.
@@ -121,4 +135,4 @@ gathered above.
☐ Deploy your BigchainDB node on your Kubernetes cluster.
TODO: Links to instructions for first-node-in-cluster or second-or-later-node-in-cluster

View File

@@ -10,6 +10,7 @@ metadata:
namespace: default
type: Opaque
data:
# This is the API Key obtained from MongoDB Cloud Manager
api-key: "<b64 encoded api key>"
---
apiVersion: v1