
Merge pull request #1713 from bigchaindb/fix-multi-node-dep

Verify and fix BDB multi node deployment guide
Ahmed Muawia Khan 2017-08-17 10:21:40 +02:00, committed by GitHub
commit 2fc114a596
15 changed files with 405 additions and 164 deletions


@ -35,11 +35,19 @@ cluster.
``existing BigchainDB instance`` will refer to the BigchainDB instance in the
existing cluster.
Below, we refer to multiple files by their directory and filename,
such as ``mongodb/mongo-ext-conn-svc.yaml``. Those files are in the
`bigchaindb/bigchaindb repository on GitHub
<https://github.com/bigchaindb/bigchaindb/>`_, in the ``k8s/`` directory.
Make sure you're getting those files from the appropriate Git branch on
GitHub, i.e. the branch for the version of BigchainDB that your BigchainDB
cluster is using.
Step 1: Prerequisites
---------------------

* :ref:`List of all the things to be done by each node operator <Things Each Node Operator Must Do>`.
* The public key should be shared offline with the other existing BigchainDB
  nodes in the existing BigchainDB cluster.
@ -65,20 +73,126 @@ example:
$ kubectl --context ctx-2 proxy --port 8002
Step 2: Configure the BigchainDB Node
-------------------------------------

See the section on how to :ref:`configure your BigchainDB node <How to Configure a BigchainDB Node>`.


Step 3: Start the NGINX Service
-------------------------------
Please see the following section:
* :ref:`Start NGINX service <Step 4: Start the NGINX Service>`.
Step 4: Assign DNS Name to the NGINX Public IP
----------------------------------------------
Please see the following section:
* :ref:`Assign DNS to NGINX Public IP <Step 5: Assign DNS Name to the NGINX Public IP>`.
Step 5: Start the MongoDB Kubernetes Service
--------------------------------------------
Please see the following section:
* :ref:`Start the MongoDB Kubernetes Service <Step 6: Start the MongoDB Kubernetes Service>`.
Step 6: Start the BigchainDB Kubernetes Service
-----------------------------------------------
Please see the following section:
* :ref:`Start the BigchainDB Kubernetes Service <Step 7: Start the BigchainDB Kubernetes Service>`.
Step 7: Start the OpenResty Kubernetes Service
----------------------------------------------
Please see the following section:
* :ref:`Start the OpenResty Kubernetes Service <Step 8: Start the OpenResty Kubernetes Service>`.
Step 8: Start the NGINX Kubernetes Deployment
---------------------------------------------
Please see the following section:
* :ref:`Run NGINX deployment <Step 9: Start the NGINX Kubernetes Deployment>`.
Step 9: Create Kubernetes Storage Classes for MongoDB
-----------------------------------------------------
Please see the following section:
* :ref:`Step 10: Create Kubernetes Storage Classes for MongoDB`.
Step 10: Create Kubernetes Persistent Volume Claims
---------------------------------------------------
Please see the following section:
* :ref:`Step 11: Create Kubernetes Persistent Volume Claims`.
Step 11: Start a Kubernetes StatefulSet for MongoDB
---------------------------------------------------
Please see the following section:
* :ref:`Step 12: Start a Kubernetes StatefulSet for MongoDB`.
Step 12: Verify network connectivity between the MongoDB instances
------------------------------------------------------------------
Make sure your MongoDB instances can access each other over the network. *If* you are deploying
the new MongoDB node in a different cluster or geographical location using Azure Container
Service (ACS), you will have to set up networking between the two clusters using `Kubernetes
Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.

Suppose an existing MongoDB instance ``mdb-instance-0`` resides in the Azure data center
location ``westeurope`` and you want to add a new MongoDB instance ``mdb-instance-1``,
located in the Azure data center location ``eastus``, to the existing MongoDB replica set.
Unless you have already set up networking so that ``mdb-instance-0`` and ``mdb-instance-1``
can reach each other, you will have to add a Kubernetes Service in each cluster to make
that possible.

This is similar to ensuring that there is a ``CNAME`` record in the DNS
infrastructure to resolve ``mdb-instance-X`` to the host where it is actually available.
We can do this in Kubernetes using a Kubernetes Service of ``type``
``ExternalName``.
* This configuration is located in the file ``mongodb/mongo-ext-conn-svc.yaml``.
* Set ``metadata.name`` to the hostname of the MongoDB instance you are trying to connect to.
For instance, if you are configuring this service on the cluster that hosts ``mdb-instance-0``,
then ``metadata.name`` will be ``mdb-instance-1``, and vice versa.
* Set ``spec.ports.port[0]`` to the ``mongodb-backend-port`` from the ConfigMap of the other cluster.
* Set ``spec.externalName`` to the FQDN mapped to the NGINX public IP of the cluster you are
trying to connect to (see the filled-in example below).
For more information about the FQDN, please refer to :ref:`Assign DNS Name to the NGINX Public
IP <Step 5: Assign DNS Name to the NGINX Public IP>`.
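
For example, here is a filled-in sketch of ``mongodb/mongo-ext-conn-svc.yaml``, applied
from the cluster hosting ``mdb-instance-0`` so that it can resolve ``mdb-instance-1``.
The context name, port and FQDN are illustrative placeholders; substitute the values
from your own ConfigMaps:

.. code:: bash

$ cat <<EOF | kubectl --context ctx-1 apply -f -
apiVersion: v1
kind: Service
metadata:
  # name of the *remote* MongoDB instance we want to be able to resolve
  name: mdb-instance-1
  namespace: default
spec:
  ports:
  - port: 27017  # mongodb-backend-port from the other cluster's ConfigMap
  type: ExternalName
  # FQDN mapped to the NGINX public IP of the other cluster
  externalName: bdb-test-cluster-1.eastus.cloudapp.azure.com
EOF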
.. note::
This operation needs to be replicated ``n-1`` times per node for an ``n`` node cluster,
using the respective FQDN of each of the other nodes we need to communicate with.
If you are not the system administrator of the cluster, you have to get in
touch with the system administrator/s of the other ``n-1`` clusters and
share with them your instance name (``mdb-instance-name`` in the ConfigMap)
and the FQDN for your node (``cluster-fqdn`` in the ConfigMap).
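
Once the Service is created, a quick sanity check is to resolve the remote name from
inside a pod in your cluster (assuming the pod's image ships ``nslookup``, as the
toolbox container used in the testing section does):

.. code:: bash

$ kubectl --context ctx-1 exec -it <name of any running pod> -- nslookup mdb-instance-1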
Step 13: Add the New MongoDB Instance to the Existing Replica Set
-----------------------------------------------------------------
Note that by ``replica set``, we are referring to the MongoDB replica set,
not a Kubernetes ``ReplicaSet``.
@ -88,12 +202,18 @@ will have to coordinate offline with an existing administrator so that they can
add the new MongoDB instance to the replica set.
Add the new instance of MongoDB from an existing instance by accessing the
``mongo`` shell and authenticating as the ``adminUser`` we created for the existing
MongoDB instance, OR contact the admin of the PRIMARY MongoDB node:
.. code:: bash

$ kubectl --context ctx-1 exec -it <existing mongodb-instance-name> bash
$ mongo --host <existing mongodb-instance-name> --port 27017 --verbose --ssl \
  --sslCAFile /etc/mongod/ssl/ca.pem \
  --sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem
PRIMARY> use admin
PRIMARY> db.auth("adminUser", "superstrongpassword")
One can only add members to a replica set from the ``PRIMARY`` instance.
The ``mongo`` shell prompt should state that this is the primary member in the
@ -105,11 +225,11 @@ Run the ``rs.add()`` command with the FQDN and port number of the other instance
.. code:: bash

PRIMARY> rs.add("<new mdb-instance-name>:<port>")


Step 14: Verify the Replica Set Membership
------------------------------------------

You can use the ``rs.conf()`` and the ``rs.status()`` commands available in the
mongo shell to verify the replica set membership.
@ -118,22 +238,86 @@ The new MongoDB instance should be listed in the membership information
displayed.
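
For example, from the same mongo shell on the ``PRIMARY`` (the exact output format
varies with the MongoDB version):

.. code:: bash

PRIMARY> rs.conf()
PRIMARY> rs.status()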
Step 15: Configure Users and Access Control for MongoDB
--------------------------------------------------------

* Create the users in MongoDB with the appropriate roles assigned to them. This
will enable the new BigchainDB instance, the new MongoDB Monitoring Agent
instance and the new MongoDB Backup Agent instance to function correctly.

* Please refer to
:ref:`Configure Users and Access Control for MongoDB <Step 13: Configure
Users and Access Control for MongoDB>` to create and configure the new
BigchainDB, MongoDB Monitoring Agent and MongoDB Backup Agent users on the
cluster.

.. note::
You will not have to create the MongoDB replica set or the admin user, as they already exist.
If you do not have access to the ``PRIMARY`` member of the replica set, you
need to get in touch with the administrator who can create the users in the
MongoDB cluster.


Step 16: Start a Kubernetes Deployment for MongoDB Monitoring Agent
--------------------------------------------------------------------
Please see the following section:
* :ref:`Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent`.
.. note::
Every MMS group has only one active Monitoring and Backup Agent; having
multiple agents provides high availability and failover in case one goes
down. For more information about Monitoring and Backup Agents please
consult the `official MongoDB documentation
<https://docs.cloudmanager.mongodb.com/tutorial/move-agent-to-new-server/>`_.
Step 17: Start a Kubernetes Deployment for MongoDB Backup Agent
---------------------------------------------------------------
Please see the following section:
* :ref:`Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent`.
.. note::
Every MMS group has only one active Monitoring and Backup Agent; having
multiple agents provides high availability and failover in case one goes
down. For more information about Monitoring and Backup Agents please
consult the `official MongoDB documentation
<https://docs.cloudmanager.mongodb.com/tutorial/move-agent-to-new-server/>`_.
Step 18: Start a Kubernetes Deployment for BigchainDB
-----------------------------------------------------
* Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
value set in ``bdb-instance-name`` in the ConfigMap, followed by
``-dep``.
For example, if the value set in the
``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the
value ``bdb-instance-0-dep``.
* Set the value of ``BIGCHAINDB_KEYPAIR_PRIVATE`` (not base64-encoded).
(In the future, we'd like to pull the BigchainDB private key from
the Secret named ``bdb-private-key``, but a Secret can only be mounted as a file,
so BigchainDB Server would have to be modified to look for it
in a file.)
* As we gain more experience running BigchainDB in testing and production,
we will tweak the ``resources.limits`` values for CPU and memory, and as
richer monitoring and probing becomes available in BigchainDB, we will
tweak the ``livenessProbe`` and ``readinessProbe`` parameters.
* Set the ports to be exposed from the pod in the
``spec.containers[0].ports`` section. We currently expose 2 ports -
``bigchaindb-api-port`` and ``bigchaindb-ws-port``. Set them to the
values specified in the ConfigMap.
* Uncomment the env var ``BIGCHAINDB_KEYRING``; it will pick up the
``:``-delimited list of all the public keys in the BigchainDB cluster from the ConfigMap.
Create the required Deployment using:
@ -141,38 +325,59 @@ Create the required Deployment using:
$ kubectl --context ctx-2 apply -f bigchaindb-dep.yaml

You can check its status using the command ``kubectl --context ctx-2 get deploy -w``.
Step 19: Restart the Existing BigchainDB Instance(s)
----------------------------------------------------

* Add the public key of the new BigchainDB instance to the
``bdb-keyring`` variable in the ConfigMap of all the existing BigchainDB instances.
Update all the existing ConfigMaps using:
.. code:: bash

$ kubectl --context ctx-1 apply -f configuration/config-map.yaml
* Uncomment the ``BIGCHAINDB_KEYRING`` variable in
``bigchaindb/bigchaindb-dep.yaml`` to refer to the keyring updated in the
ConfigMap.
Update the running BigchainDB instance using:
.. code:: bash
$ kubectl --context ctx-1 delete -f bigchaindb/bigchaindb-dep.yaml
$ kubectl --context ctx-1 apply -f bigchaindb/bigchaindb-dep.yaml
See the page titled :ref:`How to Configure a BigchainDB Node` for more information about
ConfigMap configuration.
You can SSH to an existing BigchainDB instance and run the ``bigchaindb
show-config`` command to check that the keyring is updated.
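
For example, a quick check using ``kubectl`` instead of SSH (the pod name is a
hypothetical placeholder):

.. code:: bash

$ kubectl --context ctx-1 exec -it <name of your BigchainDB pod> -- bigchaindb show-config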
Step 20: Start a Kubernetes Deployment for OpenResty
----------------------------------------------------

Please see the following section:

* :ref:`Step 17: Start a Kubernetes Deployment for OpenResty`.
Step 21: Configure the MongoDB Cloud Manager
--------------------------------------------

* MongoDB Cloud Manager auto-detects the members of the replica set and
configures the agents to act as a master/slave accordingly.

* You can verify that the new MongoDB instance is detected by the
Monitoring and Backup Agent using the Cloud Manager UI.


Step 22: Test Your New BigchainDB Node
--------------------------------------

* Please refer to the testing steps :ref:`here <Step 19: Verify the BigchainDB
Node Setup>` to verify that your new BigchainDB node is working as expected.


@ -28,13 +28,13 @@ by going into the directory ``client-cert/easy-rsa-3.0.1/easyrsa3``
and using:

.. code:: bash

./easyrsa init-pki
./easyrsa gen-req bdb-instance-0 nopass

You should change the Common Name (e.g. ``bdb-instance-0``)
to a value that reflects what the
client certificate is being used for, e.g. ``mdb-mon-instance-3`` or ``mdb-bak-instance-4``. (The final integer is specific to your BigchainDB node in the BigchainDB cluster.)

You will be prompted to enter the Distinguished Name (DN) information for this certificate. For each field, you can accept the default value [in brackets] by pressing Enter.
@ -48,6 +48,10 @@ You will be prompted to enter the Distinguished Name (DN) information for this c
Aside: The ``nopass`` option means "do not encrypt the private key (default is encrypted)". You can get help with the ``easyrsa`` command (and its subcommands)
by using the subcommand ``./easyrsa help``.
.. note::
For more information about requirements for MongoDB client certificates, please consult the `official MongoDB
documentation <https://docs.mongodb.com/manual/tutorial/configure-x509-client-authentication/>`_.
Step 3: Get the Client Certificate Signed
-----------------------------------------
@ -66,11 +70,11 @@ Go to your ``bdb-cluster-ca/easy-rsa-3.0.1/easyrsa3/``
directory and do something like:

.. code:: bash

./easyrsa import-req /path/to/bdb-instance-0.req bdb-instance-0
./easyrsa sign-req client bdb-instance-0

Once you have signed it, you can send the signed certificate
and the CA certificate back to the requestor.
The files are ``pki/issued/bdb-instance-0.crt`` and ``pki/ca.crt``.
@ -79,9 +83,21 @@ The files are ``pki/issued/bdb-instance-0.crt`` and ``pki/ca.crt``.
Step 4: Generate the Consolidated Client PEM File
-------------------------------------------------

.. note::
This step can be skipped for the BigchainDB client certificate, as BigchainDB
uses the PyMongo driver, which accepts separate certificate and key files.

MongoDB, the MongoDB Backup Agent and the MongoDB Monitoring Agent require a single,
consolidated file containing both the public and private keys.
.. code:: bash

cat /path/to/mdb-instance-0.crt /path/to/mdb-instance-0.key > mdb-instance-0.pem
OR
cat /path/to/mdb-mon-instance-0.crt /path/to/mdb-mon-instance-0.key > mdb-mon-instance-0.pem
OR
cat /path/to/mdb-bak-instance-0.crt /path/to/mdb-bak-instance-0.key > mdb-bak-instance-0.pem


@ -29,7 +29,6 @@ where all data values must be base64-encoded.
This is true of all Kubernetes ConfigMaps and Secrets.)

vars.cluster-fqdn
~~~~~~~~~~~~~~~~~
@ -83,7 +82,7 @@ There are some things worth noting about the ``mdb-instance-name``:
documentation. Your BigchainDB cluster may use a different naming convention.

vars.ngx-mdb-instance-name and Similar
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

NGINX needs the FQDN of the servers inside the cluster to be able to forward


@ -53,7 +53,7 @@ to the above command (i.e. the path to the private key).
the context for cluster 2. To find out the current context, do:

.. code:: bash

$ kubectl config view

and then look for the ``current-context`` in the output.
@ -106,7 +106,7 @@ Step 3: Configure Your BigchainDB Node
--------------------------------------

See the page titled :ref:`How to Configure a BigchainDB Node`.

Step 4: Start the NGINX Service
-------------------------------
@ -117,22 +117,22 @@ Step 4: Start the NGINX Service
public IP to be assigned.

* You have the option to use vanilla NGINX without HTTPS support or an
NGINX with HTTPS support.
Step 4.1: Vanilla NGINX
^^^^^^^^^^^^^^^^^^^^^^^
* This configuration is located in the file ``nginx-http/nginx-http-svc.yaml``.

* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
set in ``ngx-instance-name`` in the ConfigMap above.

* Set the ``spec.selector.app`` to the value set in ``ngx-instance-name`` in
the ConfigMap followed by ``-dep``. For example, if the value set in the
``ngx-instance-name`` is ``ngx-http-instance-0``, set the
``spec.selector.app`` to ``ngx-http-instance-0-dep``.

* Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
``cluster-frontend-port`` in the ConfigMap above. This is the
``public-cluster-port`` in the file, which is the ingress into the cluster.
@ -140,18 +140,18 @@ Step 4.1: Vanilla NGINX
* Start the Kubernetes Service:

.. code:: bash

$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-svc.yaml


Step 4.2: NGINX with HTTPS
^^^^^^^^^^^^^^^^^^^^^^^^^^

* You have to enable HTTPS for this one and will need an HTTPS certificate
for your domain.

* You should have already created the necessary Kubernetes Secrets in the previous
step (i.e. ``https-certs``).

* This configuration is located in the file ``nginx-https/nginx-https-svc.yaml``.
@ -162,9 +162,9 @@ Step 4.2: NGINX with HTTPS + 3scale
the ConfigMap followed by ``-dep``. For example, if the value set in the
``ngx-instance-name`` is ``ngx-https-instance-0``, set the
``spec.selector.app`` to ``ngx-https-instance-0-dep``.

* Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
``cluster-frontend-port`` in the ConfigMap above. This is the
``public-secure-cluster-port`` in the file, which is the ingress into the cluster.

* Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the
@ -173,7 +173,7 @@ Step 4.2: NGINX with HTTPS + 3scale
available.

* Start the Kubernetes Service:

.. code:: bash

$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-svc.yaml
@ -189,11 +189,11 @@ Step 5: Assign DNS Name to the NGINX Public IP
* The following command can help you find out if the NGINX service started
above has been assigned a public IP or external IP address:

.. code:: bash

$ kubectl --context k8s-bdb-test-cluster-0 get svc -w

* Once a public IP is assigned, you can map it to
a DNS name.
We usually assign ``bdb-test-cluster-0``, ``bdb-test-cluster-1`` and
@ -237,7 +237,7 @@ Step 6: Start the MongoDB Kubernetes Service
``mongodb-backend-port`` in the ConfigMap above.
This is the ``mdb-port`` in the file, which specifies where MongoDB listens
for API requests.

* Start the Kubernetes Service:

.. code:: bash
@ -304,13 +304,13 @@ Step 9: Start the NGINX Kubernetes Deployment
on ``mongodb-frontend-port`` to the MongoDB backend.

* As in step 4, you have the option to use vanilla NGINX without HTTPS or
NGINX with HTTPS support.

Step 9.1: Vanilla NGINX
^^^^^^^^^^^^^^^^^^^^^^^

* This configuration is located in the file ``nginx-http/nginx-http-dep.yaml``.

* Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
to the value set in ``ngx-instance-name`` in the ConfigMap followed by a
``-dep``. For example, if the value set in the ``ngx-instance-name`` is
@ -329,9 +329,9 @@ Step 9.1: Vanilla NGINX
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-dep.yaml


Step 9.2: NGINX with HTTPS
^^^^^^^^^^^^^^^^^^^^^^^^^^

* This configuration is located in the file
``nginx-https/nginx-https-dep.yaml``.
@ -467,7 +467,7 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB
the ConfigMap.
For example, if the value set in the ``mdb-instance-name``
is ``mdb-instance-0``, set the field to ``mdb-instance-0``.

* Set ``metadata.name``, ``spec.template.metadata.name`` and
``spec.template.metadata.labels.app`` to the value set in
``mdb-instance-name`` in the ConfigMap, followed by
@ -479,7 +479,7 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB
* Note how the MongoDB container uses the ``mongo-db-claim`` and the
``mongo-configdb-claim`` PersistentVolumeClaims for its ``/data/db`` and
``/data/configdb`` directories (mount paths).

* Note also that we use the pod's ``securityContext.capabilities.add``
specification to add the ``FOWNER`` capability to the container. That is
because the MongoDB container has the user ``mongodb``, with uid ``999``, and
@ -505,18 +505,18 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB
.. code:: bash

$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss.yaml

* It might take up to 10 minutes for the disks, specified in the Persistent
Volume Claims above, to be created and attached to the pod.
The UI might show that the pod has errored with the message
"timeout expired waiting for volumes to attach/mount". Use the CLI below
to check the status of the pod in this case, instead of the UI.
This happens due to a bug in Azure ACS.

.. code:: bash

$ kubectl --context k8s-bdb-test-cluster-0 get pods -w


Step 13: Configure Users and Access Control for MongoDB
-------------------------------------------------------
@ -530,26 +530,26 @@ Step 13: Configure Users and Access Control for MongoDB
* Find out the name of your MongoDB pod by reading the output
of the ``kubectl ... get pods`` command at the end of the last step.
It should be something like ``mdb-instance-0-ss-0``.

* Log in to the MongoDB pod using:

.. code:: bash

$ kubectl --context k8s-bdb-test-cluster-0 exec -it <name of your MongoDB pod> bash

* Open a mongo shell using the certificates
already present at ``/etc/mongod/ssl/``

.. code:: bash

$ mongo --host localhost --port 27017 --verbose --ssl \
  --sslCAFile /etc/mongod/ssl/ca.pem \
  --sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem
* Initialize the replica set using:

.. code:: bash

> rs.initiate( {
_id : "bigchain-rs",
members: [ {
@ -562,7 +562,7 @@ Step 13: Configure Users and Access Control for MongoDB
``mdb-instance-name`` in the ConfigMap.
For example, if the value set in the ``mdb-instance-name`` is
``mdb-instance-0``, set the ``hostname`` above to the value ``mdb-instance-0``.

* The instance should be voted as the ``PRIMARY`` in the replica set (since
this is the only instance in the replica set till now).
This can be observed from the mongo shell prompt,
@ -573,14 +573,15 @@ Step 13: Configure Users and Access Control for MongoDB
log in to the mongo shell. For further details, see `localhost
exception <https://docs.mongodb.com/manual/core/security-users/#localhost-exception>`_
in MongoDB.

.. code:: bash

PRIMARY> use admin
PRIMARY> db.createUser( {
user: "adminUser",
pwd: "superstrongpassword",
roles: [ { role: "userAdminAnyDatabase", db: "admin" },
{ role: "clusterManager", db: "admin" } ]
} )
* Exit and restart the mongo shell using the above command.
@ -605,16 +606,16 @@ Step 13: Configure Users and Access Control for MongoDB
-inform PEM -subject -nameopt RFC2253

You should see an output line that resembles:

.. code:: bash

subject= emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE

The ``subject`` line states the complete user name we need to use for
creating the user on the mongo shell as follows:

.. code:: bash

PRIMARY> db.getSiblingDB("$external").runCommand( {
createUser: 'emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE',
writeConcern: { w: 'majority' , wtimeout: 5000 },
@ -700,19 +701,19 @@ Step 16: Start a Kubernetes Deployment for BigchainDB
For example, if the value set in the
``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the
value ``bdb-instance-0-dep``.
* Set the value of ``BIGCHAINDB_KEYPAIR_PRIVATE`` (not base64-encoded).
(In the future, we'd like to pull the BigchainDB private key from
the Secret named ``bdb-private-key``,
but a Secret can only be mounted as a file,
so BigchainDB Server would have to be modified to look for it
in a file.)
* As we gain more experience running BigchainDB in testing and production,
we will tweak the ``resources.limits`` values for CPU and memory, and as
richer monitoring and probing becomes available in BigchainDB, we will
tweak the ``livenessProbe`` and ``readinessProbe`` parameters.

* Set the ports to be exposed from the pod in the
``spec.containers[0].ports`` section. We currently expose 2 ports -
``bigchaindb-api-port`` and ``bigchaindb-ws-port``. Set them to the
@ -740,11 +741,11 @@ Step 17: Start a Kubernetes Deployment for OpenResty
For example, if the value set in the
``openresty-instance-name`` is ``openresty-instance-0``, set the fields to
the value ``openresty-instance-0-dep``.

* Set the port to be exposed from the pod in the
``spec.containers[0].ports`` section. We currently expose the port at
which OpenResty is listening for requests, ``openresty-backend-port`` in
the above ConfigMap.

* Create the OpenResty Deployment using:
@ -791,13 +792,13 @@ You can use it as below to get started immediately:
It will drop you to the shell prompt.

To test the MongoDB instance:

.. code:: bash

$ nslookup mdb-instance-0
$ dig +noall +answer _mdb-port._tcp.mdb-instance-0.default.svc.cluster.local SRV
$ curl -X GET http://mdb-instance-0:27017

The ``nslookup`` command should output the configured IP address of the service
@ -806,20 +807,20 @@ The ``dig`` command should return the configured port numbers.
The ``curl`` command tests the availability of the service.

To test the BigchainDB instance:

.. code:: bash

$ nslookup bdb-instance-0
$ dig +noall +answer _bdb-api-port._tcp.bdb-instance-0.default.svc.cluster.local SRV
$ dig +noall +answer _bdb-ws-port._tcp.bdb-instance-0.default.svc.cluster.local SRV
$ curl -X GET http://bdb-instance-0:9984
$ wsc -er ws://bdb-instance-0:9985/api/v1/streams/valid_transactions

To test the OpenResty instance:

.. code:: bash
@ -834,11 +835,11 @@ BigchainDB instance.
To test the vanilla NGINX instance:

.. code:: bash

$ nslookup ngx-http-instance-0
$ dig +noall +answer _public-cluster-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV
$ dig +noall +answer _public-health-check-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV
@ -855,7 +856,7 @@ The above curl command should result in the response
To test the NGINX instance with HTTPS and 3scale integration:

.. code:: bash

$ nslookup ngx-https-instance-0
$ dig +noall +answer _public-secure-cluster-port._tcp.ngx-https-instance-0.default.svc.cluster.local SRV
@ -886,5 +887,4 @@ If you are using the NGINX with HTTPS support, use ``https`` instead of
``http`` above.

Use the Python Driver to send some transactions to the BigchainDB node and
verify that your node or cluster works as expected.
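
For example, a minimal sketch using the BigchainDB Python Driver (the URL placeholders
and asset payload are illustrative; the exact API can vary with the driver version):

.. code:: python

from bigchaindb_driver import BigchainDB
from bigchaindb_driver.crypto import generate_keypair

bdb = BigchainDB('https://<cluster-fqdn>:<cluster-frontend-port>')
alice = generate_keypair()

# Prepare, sign and send a trivial CREATE transaction.
tx = bdb.transactions.prepare(
    operation='CREATE',
    signers=alice.public_key,
    asset={'data': {'message': 'Hello BigchainDB!'}})
signed_tx = bdb.transactions.fulfill(tx, private_keys=alice.private_key)
sent_tx = bdb.transactions.send(signed_tx)
print(sent_tx['id'])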


@ -29,8 +29,13 @@ You can create the server private key and certificate signing request (CSR)
by going into the directory ``member-cert/easy-rsa-3.0.1/easyrsa3``
and using something like:
.. note::
Please make sure you are fulfilling the requirements for `MongoDB server/member certificates
<https://docs.mongodb.com/manual/tutorial/configure-x509-member-authentication>`_.
.. code:: bash

./easyrsa init-pki
./easyrsa --req-cn=mdb-instance-0 --subject-alt-name=DNS:localhost,DNS:mdb-instance-0 gen-req mdb-instance-0 nopass
@ -67,11 +72,11 @@ Go to your ``bdb-cluster-ca/easy-rsa-3.0.1/easyrsa3/``
directory and do something like:

.. code:: bash

./easyrsa import-req /path/to/mdb-instance-0.req mdb-instance-0
./easyrsa --subject-alt-name=DNS:localhost,DNS:mdb-instance-0 sign-req server mdb-instance-0

Once you have signed it, you can send the signed certificate
and the CA certificate back to the requestor.
The files are ``pki/issued/mdb-instance-0.crt`` and ``pki/ca.crt``.
@ -84,6 +89,6 @@ MongoDB requires a single, consolidated file containing both the public and
private keys.

.. code:: bash

cat /path/to/mdb-instance-0.crt /path/to/mdb-instance-0.key > mdb-instance-0.pem


@ -49,7 +49,7 @@ If you already *have* the Azure CLI installed, you may want to update it.
.. warning::
``az component update`` isn't supported if you installed the CLI using some of Microsoft's provided installation instructions. See `the Microsoft docs for update instructions <https://docs.microsoft.com/en-us/cli/azure/install-az-cli2>`_.

Next, log in to your account using:
@ -128,9 +128,9 @@ You can SSH to one of the just-deployed Kubernetes "master" nodes
.. code:: bash

$ ssh -i ~/.ssh/<name> ubuntu@<master-ip-address-or-fqdn>

where you can get the IP address or FQDN
of a master node from the Azure Portal. For example:
.. code:: bash
@ -139,13 +139,14 @@ of a master node from the Azure Portal. For example:
.. note::
All the master nodes are accessible behind the *same* public IP address and
FQDN. You connect to one of the masters at random, based on the load balancing
policy.
The "agent" nodes shouldn't get public IP addresses or hostnames, The "agent" nodes shouldn't get public IP addresses or externally accessible
so you can't SSH to them *directly*, FQDNs, so you can't SSH to them *directly*,
but you can first SSH to the master but you can first SSH to the master
and then SSH to an agent from there. and then SSH to an agent from there using their hostname.
To do that, you could To do that, you could
copy your SSH key pair to the master (a bad idea), copy your SSH key pair to the master (a bad idea),
or use SSH agent forwarding (better). or use SSH agent forwarding (better).
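
A minimal sketch of the agent-forwarding route, reusing the same key pair as the
``ssh -i`` command above:

.. code:: bash

$ ssh-add ~/.ssh/<name>
$ ssh -A ubuntu@<master-ip-address-or-fqdn>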
@ -168,14 +169,14 @@ then SSH agent forwarding hasn't been set up correctly.
If you get a non-empty response,
then SSH agent forwarding should work fine
and you can SSH to one of the agent nodes (from a master)
using:
.. code:: bash

$ ssh ubuntu@k8s-agent-4AC80E97-0

where ``k8s-agent-4AC80E97-0`` is the name
of a Kubernetes agent node in your Kubernetes cluster.
You will have to replace it by the name You will have to replace it by the name
of an agent node in your cluster. of an agent node in your cluster.
@ -202,4 +203,4 @@ CAUTION: You might end up deleting resources other than the ACS cluster.
Next, you can :doc:`run a BigchainDB node on your new
Kubernetes cluster <node-on-kubernetes>`.


@ -45,7 +45,7 @@ For example, maybe they assign a unique number to each node,
so that if you're operating node 12, your MongoDB instance would be named
``mdb-instance-12``.
Similarly, other instances must also have unique names in the cluster.

#. Name of the MongoDB instance (``mdb-instance-*``)
#. Name of the BigchainDB instance (``bdb-instance-*``)
#. Name of the NGINX instance (``ngx-http-instance-*`` or ``ngx-https-instance-*``)
@ -80,7 +80,7 @@ You can generate a BigchainDB keypair for your node, for example,
using the `BigchainDB Python Driver <http://docs.bigchaindb.com/projects/py-driver/en/latest/index.html>`_.

.. code:: python

from bigchaindb_driver.crypto import generate_keypair
print(generate_keypair())
@ -100,15 +100,13 @@ and have an SSL certificate for the FQDN.
(You can get an SSL certificate from any SSL certificate provider.)

☐ Ask the managing organization for the user name to use for authenticating to
MongoDB.
☐ If the cluster uses 3scale for API authentication, monitoring and billing,
you must ask the managing organization for all relevant 3scale credentials:
secret token, service ID, version header and API service token.
☐ If the cluster uses MongoDB Cloud Manager for monitoring and backup,


@ -1,4 +1,4 @@
## Note: data values do NOT have to be base64-encoded in this file.
## vars holds the common environment variables for this BigchainDB node
apiVersion: v1
@ -12,7 +12,7 @@ data:
# cluster-frontend-port is the port number on which this node's services
# are available to external clients.
cluster-frontend-port: "443"

# cluster-health-check-port is the port number on which an external load
# balancer can check the status/liveness of the external/public server.


@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
  # Name of the MongoDB instance you are trying to connect to
  # e.g. mdb-instance-0
  name: "<remote-mongodb-host>"
  namespace: default
spec:
  ports:
  - port: "<mongodb-backend-port from ConfigMap>"
  type: ExternalName
  # FQDN of the remote cluster/NGINX instance
  externalName: "<dns-name-remote-nginx>"


@ -1,17 +1,17 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ngx-instance-0-dep
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ngx-instance-0-dep
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: bigchaindb/nginx_http:1.0
        imagePullPolicy: IfNotPresent
        env:


@ -1,17 +1,17 @@
apiVersion: v1
kind: Service
metadata:
  name: ngx-instance-0
  namespace: default
  labels:
    name: ngx-instance-0
  annotations:
    # NOTE: the following annotation is a beta feature and
    # only available in GCE/GKE and Azure as of now
    service.beta.kubernetes.io/external-traffic: OnlyLocal
spec:
  selector:
    app: ngx-instance-0-dep
  ports:
  - port: "<cluster-frontend-port from ConfigMap>"
    targetPort: "<cluster-frontend-port from ConfigMap>"


@ -100,7 +100,7 @@ http {
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
add_header 'Access-Control-Expose-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
proxy_pass http://$openresty_backend:OPENRESTY_BACKEND_PORT;
}
@ -157,10 +157,14 @@ stream {
# Enable logging when connections are being throttled.
limit_conn_log_level notice;

# For a multi-node BigchainDB deployment, we need around 2^5 connections
# (for inter-node communication) per node via NGINX. We can bump this up
# if there is a requirement to scale up, but we should not remove this
# limit, for security reasons.
# Allow 256 connections from the same IP address.
limit_conn two 256;
# DNS resolver to use for all the backend names specified in this configuration.
resolver DNS_SERVER valid=30s ipv6=off;
@ -169,10 +173,10 @@ stream {
map $remote_addr $mdb_backend {
  default MONGODB_BACKEND_HOST;
}

# Frontend server to forward connections to MDB instance.
server {
  listen MONGODB_FRONTEND_PORT so_keepalive=3m:1m:5;
  preread_timeout 30s;
  tcp_nodelay on;
  proxy_pass $mdb_backend:MONGODB_BACKEND_PORT;


@ -1,17 +1,17 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ngx-instance-0-dep
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ngx-instance-0-dep
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: bigchaindb/nginx_https:1.0
        imagePullPolicy: IfNotPresent
        env:
@ -59,7 +59,7 @@ spec:
          valueFrom:
            configMapKeyRef:
              name: vars
              key: ngx-openresty-instance-name
        - name: BIGCHAINDB_BACKEND_HOST
          valueFrom:
            configMapKeyRef:


@ -1,17 +1,17 @@
apiVersion: v1
kind: Service
metadata:
  name: ngx-instance-0
  namespace: default
  labels:
    name: ngx-instance-0
  annotations:
    # NOTE: the following annotation is a beta feature and
    # only available in GCE/GKE and Azure as of now
    service.beta.kubernetes.io/external-traffic: OnlyLocal
spec:
  selector:
    app: ngx-instance-0-dep
  ports:
  - port: "<cluster-frontend-port from ConfigMap>"
    targetPort: "<cluster-frontend-port from ConfigMap>"


@ -12,7 +12,7 @@ spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx-openresty
        image: bigchaindb/nginx_3scale:3.0
        imagePullPolicy: IfNotPresent
        env:
        - name: DNS_SERVER