Verify and fix BDB multi-node deployment guide

- Document how to add a new BDB node to an existing
  replica set, using x.509 certificates and SSL/TLS connections, across
  geographically dispersed clusters.
- Fix some documentation issues and add more references, specifically
  about signing of MongoDB member certificates.
- Minor fixes for nginx-https-dep.yaml (invalid ConfigMap var).
- Reconfigure nginx keep_alive between MongoDB frontend and backend ports.
- Editor removed trailing whitespace.
Muawia Khan 2017-08-05 13:07:54 +02:00
parent e7640feaec
commit 0cf46b331f
8 changed files with 228 additions and 92 deletions

View File

@ -35,6 +35,14 @@ cluster.
``existing BigchainDB instance`` will refer to the BigchainDB instance in the
existing cluster.
Below, we refer to multiple files by their directory and filename,
such as ``mongodb/mongo-ext-conn-svc.yaml``. Those files are files in the
`bigchaindb/bigchaindb repository on GitHub
<https://github.com/bigchaindb/bigchaindb/>`_ in the ``k8s/`` directory.
Make sure you're getting those files from the appropriate Git branch on
GitHub, i.e. the branch for the version of BigchainDB that your BigchainDB
cluster is using.
Step 1: Prerequisites
---------------------
@ -46,11 +54,15 @@ Step 1: Prerequisites
* You will need the public keys of all the existing BigchainDB nodes.
* A client certificate for the new BigchainDB Server to identify itself to the cluster.
* A new Kubernetes cluster setup with kubectl configured to access it.
* Some familiarity with deploying a BigchainDB node on Kubernetes.
See our :doc:`other docs about that <node-on-kubernetes>`.
* You will need a client certificate for each MongoDB monitoring and backup agent.
Note: If you are managing multiple Kubernetes clusters from your local
system, you can run ``kubectl config view`` to list all the contexts
available to your local kubectl.
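For example, the following commands list the available contexts and switch the active one (the context name ``ctx-1`` is a placeholder):

.. code:: bash

   $ kubectl config get-contexts
   $ kubectl config use-context ctx-1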
@ -74,10 +86,54 @@ Claims, and to run MongoDB in the new cluster:
1. :ref:`Add Storage Classes <Step 10: Create Kubernetes Storage Classes for MongoDB>`.
2. :ref:`Add Persistent Volume Claims <Step 11: Create Kubernetes Persistent Volume Claims>`.
3. :ref:`Create the Config Map <Step 3: Configure Your BigchainDB Node>`.
4. :ref:`Prepare the Kubernetes Secrets <Step 3: Configure Your BigchainDB Node>`, as per your
   requirements; if you do not need a certain functionality, just remove it from
   ``configuration/secret.yaml``.
5. :ref:`Run MongoDB instance <Step 12: Start a Kubernetes StatefulSet for MongoDB>`.
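Once edited, the ConfigMap and Secrets can be applied to the new cluster; for example (the context name ``ctx-2`` for the new cluster is a placeholder):

.. code:: bash

   $ kubectl --context ctx-2 apply -f configuration/config-map.yaml
   $ kubectl --context ctx-2 apply -f configuration/secret.yaml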
Step 3: Start NGINX Service, Assign DNS to NGINX Public IP and Run NGINX Deployment
--------------------------------------------------------------------------------------
Please see the following pages:
* :ref:`Start NGINX service <Step 4: Start the NGINX Service>`.
* :ref:`Assign DNS to NGINX Public IP <Step 5: Assign DNS Name to the NGINX Public IP>`.
* :ref:`Run NGINX deployment <Step 9: Start the NGINX Kubernetes Deployment>`.
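As elsewhere in this guide, each of these steps boils down to applying the corresponding manifest on the new cluster, e.g. (context name is a placeholder):

.. code:: bash

   $ kubectl --context ctx-2 apply -f nginx-https/nginx-https-svc.yaml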
Step 4: Verify network connectivity between the MongoDB instances
-----------------------------------------------------------------
Make sure your MongoDB instances can access each other over the network. *If* you are deploying
the new MongoDB node in a different cluster or geographical location using Azure Kubernetes Container
Service, you will have to set up networking between the two clusters using `Kubernetes
Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.
Suppose we have an existing MongoDB instance ``mdb-instance-0`` residing in the Azure data center location ``westeurope`` and we
want to add a new MongoDB instance ``mdb-instance-1``, located in the Azure data center location ``eastus``, to the existing MongoDB
replica set. Unless you have already explicitly set up networking for ``mdb-instance-0`` to communicate with ``mdb-instance-1`` and
vice versa, you will have to add a Kubernetes Service in each cluster so that the two instances can reach each other and form a
MongoDB replica set.
* This configuration is located in the file ``mongodb/mongo-ext-conn-svc.yaml``.
* Set ``metadata.name`` to the hostname of the MongoDB instance you are trying to connect to.
  For instance, if you are configuring this Service on the cluster with ``mdb-instance-0``, then ``metadata.name`` will
  be ``mdb-instance-1``, and vice versa.
* Set ``spec.ports[0].port`` to the ``mongodb-backend-port`` from the ConfigMap.
* Set ``spec.externalName`` to the FQDN mapped to NGINX Public IP of the cluster you are trying to connect to.
For more information about the FQDN, please refer to: :ref:`Assign DNS Name to the NGINX Public
IP <Step 5: Assign DNS Name to the NGINX Public IP>`.
.. note::
This operation needs to be replicated ``n-1`` times per node for an ``n``-node cluster, with the respective FQDNs
of the instances we need to communicate with.
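For illustration, once ``mongodb/mongo-ext-conn-svc.yaml`` has been edited on the ``westeurope`` cluster to point at ``mdb-instance-1``, you would apply and verify it with something like the following (the context name is a placeholder):

.. code:: bash

   $ kubectl --context ctx-westeurope apply -f mongodb/mongo-ext-conn-svc.yaml
   $ kubectl --context ctx-westeurope get svc mdb-instance-1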
Step 5: Add the New MongoDB Instance to the Existing Replica Set
----------------------------------------------------------------
Note that by ``replica set``, we are referring to the MongoDB replica set,
@ -88,12 +144,18 @@ will have to coordinate offline with an existing administrator so that they can
add the new MongoDB instance to the replica set.
Add the new instance of MongoDB from an existing instance by accessing the
``mongo`` shell and authenticate as the ``adminUser`` we created for the existing MongoDB instance, or
contact the admin of the PRIMARY MongoDB node:
.. code:: bash
$ kubectl --context ctx-1 exec -it <existing-mongodb-host-name> -c mongodb -- /bin/bash
$ mongo --host <existing-mongodb-host-name> --port 27017 --verbose --ssl \
--sslCAFile /etc/mongod/ssl/ca.pem \
--sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem
PRIMARY> use admin
PRIMARY> db.auth("adminUser", "superstrongpassword")
One can only add members to a replica set from the ``PRIMARY`` instance.
The ``mongo`` shell prompt should state that this is the primary member in the
@ -108,7 +170,7 @@ Run the ``rs.add()`` command with the FQDN and port number of the other instance
PRIMARY> rs.add("<fqdn>:<port>")
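For example, assuming the new instance is reachable at the (hypothetical) FQDN ``bdb-test-cluster-1.eastus.cloudapp.azure.com`` and the ``mongodb-frontend-port`` is ``27017``:

.. code:: bash

   PRIMARY> rs.add("bdb-test-cluster-1.eastus.cloudapp.azure.com:27017")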
Step 6: Verify the Replica Set Membership
-----------------------------------------
You can use the ``rs.conf()`` and the ``rs.status()`` commands available in the
@ -118,7 +180,7 @@ The new MongoDB instance should be listed in the membership information
displayed.
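For a quick overview, you can also print each member's hostname and state from the mongo shell; ``name`` and ``stateStr`` are standard fields in the ``rs.status()`` output:

.. code:: bash

   PRIMARY> rs.status().members.forEach(function(m) { print(m.name, "-", m.stateStr) })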
Step 7: Start the New BigchainDB Instance
-----------------------------------------
Get the file ``bigchaindb-dep.yaml`` from GitHub using:
@ -127,13 +189,36 @@ Get the file ``bigchaindb-dep.yaml`` from GitHub using:
$ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/bigchaindb/bigchaindb-dep.yaml
* Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
value set in ``bdb-instance-name`` in the ConfigMap, followed by
``-dep``.
For example, if the value set in the
``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the
value ``bdb-instance-0-dep``.
* Set the value of ``BIGCHAINDB_KEYPAIR_PRIVATE`` (not base64-encoded).
(In the future, we'd like to pull the BigchainDB private key from
the Secret named ``bdb-private-key``, but a Secret can only be mounted as a file,
so BigchainDB Server would have to be modified to look for it
in a file.)
* As we gain more experience running BigchainDB in testing and production,
we will tweak the ``resources.limits`` values for CPU and memory, and as
richer monitoring and probing becomes available in BigchainDB, we will
tweak the ``livenessProbe`` and ``readinessProbe`` parameters.
* Set the ports to be exposed from the pod in the
``spec.containers[0].ports`` section. We currently expose 2 ports -
``bigchaindb-api-port`` and ``bigchaindb-ws-port``. Set them to the
values specified in the ConfigMap.
* Uncomment the env var ``BIGCHAINDB_KEYRING``; it will pick up the
  ``:``-delimited list of all the public keys in the BigchainDB cluster from the ConfigMap.
* Authenticate the new BigchainDB instance with MongoDB using its client x.509 certificate. We need to specify the
  user name *as seen in the certificate* issued to the BigchainDB instance in order to authenticate correctly.
  Please refer to: :ref:`Configure Users and Access Control for MongoDB <Step 13: Configure Users and Access Control for MongoDB>`.
Create the required Deployment using:
@ -144,15 +229,20 @@ Create the required Deployment using:
You can check its status using the command ``kubectl get deploy -w``
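For example (the context name ``ctx-2`` for the new cluster is a placeholder):

.. code:: bash

   $ kubectl --context ctx-2 apply -f bigchaindb/bigchaindb-dep.yaml
   $ kubectl --context ctx-2 get deploy -w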
Step 8: Restart the Existing BigchainDB Instance(s)
---------------------------------------------------
Add the public key of the new BigchainDB instance to the ``bdb-keyring``
ConfigMap variable of the existing BigchainDB instances, then update the
ConfigMap and the instances themselves:
.. code:: bash
$ kubectl --context ctx-1 apply -f configuration/config-map.yaml
$ kubectl --context ctx-1 replace -f bigchaindb/bigchaindb-dep.yaml --force
See the page titled :ref:`How to Configure a BigchainDB Node` for more information about
ConfigMap configuration.
This will create a "rolling deployment" in Kubernetes where a new instance of
BigchainDB will be created, and if the health check on the new instance is
@ -163,16 +253,35 @@ You can SSH to an existing BigchainDB instance and run the ``bigchaindb
show-config`` command to check that the keyring is updated.
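Alternatively, a kubectl-based check might look like the following (the pod and container names are placeholders):

.. code:: bash

   $ kubectl --context ctx-1 exec -it <existing-bigchaindb-pod> -c bigchaindb -- bigchaindb show-config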
Step 9: Deploy MongoDB Monitoring and Backup Agent
--------------------------------------------------
To deploy the MongoDB monitoring and backup agents for the new cluster, you have to authenticate each agent using its
unique client certificate. For more information on how to authenticate and add users to MongoDB, please refer to:
* :ref:`Configure Users and Access Control for MongoDB <Step 13: Configure Users and Access Control for MongoDB>`
After authentication, start the Kubernetes Deployments:
* :ref:`Start a Kubernetes Deployment for MongoDB Monitoring Agent <Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent>`.
* :ref:`Start a Kubernetes Deployment for MongoDB Backup Agent <Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent>`.
.. note::
Every MMS group has only one active monitoring agent and one active backup agent; having multiple agents provides high availability and failover, in case
one goes down. For more information about monitoring and backup agents, please consult the `official MongoDB documentation <https://docs.cloudmanager.mongodb.com/tutorial/move-agent-to-new-server/>`_.
Step 10: Start OpenResty Service and Deployment
---------------------------------------------------------
Please refer to the following instructions:
* :ref:`Start the OpenResty Kubernetes Service <Step 8: Start the OpenResty Kubernetes Service>`.
* :ref:`Start a Kubernetes Deployment for OpenResty <Step 17: Start a Kubernetes Deployment for OpenResty>`.
Step 11: Test Your New BigchainDB Node
--------------------------------------
Please refer to the testing steps :ref:`here <Step 19: Verify the BigchainDB
Node Setup>` to verify that your new BigchainDB node is working as expected.

View File

@ -28,13 +28,13 @@ by going into the directory ``client-cert/easy-rsa-3.0.1/easyrsa3``
and using:
.. code:: bash
./easyrsa init-pki
./easyrsa gen-req bdb-instance-0 nopass
You should change the Common Name (e.g. ``bdb-instance-0``)
to a value that reflects what the
client certificate is being used for, e.g. ``mdb-mon-instance-3`` or ``mdb-bak-instance-4``. (The final integer is specific to your BigchainDB node in the BigchainDB cluster.)
You will be prompted to enter the Distinguished Name (DN) information for this certificate. For each field, you can accept the default value [in brackets] by pressing Enter.
@ -48,6 +48,10 @@ You will be prompted to enter the Distinguished Name (DN) information for this c
Aside: The ``nopass`` option means "do not encrypt the private key (default is encrypted)". You can get help with the ``easyrsa`` command (and its subcommands)
by using the subcommand ``./easyrsa help``.
.. note::
For more information about requirements for MongoDB client certificates, please consult the `official MongoDB
documentation <https://docs.mongodb.com/manual/tutorial/configure-x509-client-authentication/>`_.
Step 3: Get the Client Certificate Signed
-----------------------------------------
@ -66,11 +70,11 @@ Go to your ``bdb-cluster-ca/easy-rsa-3.0.1/easyrsa3/``
directory and do something like:
.. code:: bash
./easyrsa import-req /path/to/bdb-instance-0.req bdb-instance-0
./easyrsa sign-req client bdb-instance-0
Once you have signed it, you can send the signed certificate
and the CA certificate back to the requestor.
The files are ``pki/issued/bdb-instance-0.crt`` and ``pki/ca.crt``.
@ -83,5 +87,5 @@ MongoDB requires a single, consolidated file containing both the public and
private keys.
.. code:: bash
cat /path/to/bdb-instance-0.crt /path/to/bdb-instance-0.key > bdb-instance-0.pem
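You can optionally verify the subject of the consolidated file, e.g.:

.. code:: bash

   openssl x509 -in bdb-instance-0.pem -inform PEM -subject -nameopt RFC2253 -noout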

View File

@ -53,7 +53,7 @@ to the above command (i.e. the path to the private key).
the context for cluster 2. To find out the current context, do:
.. code:: bash
$ kubectl config view
and then look for the ``current-context`` in the output.
@ -106,7 +106,7 @@ Step 3: Configure Your BigchainDB Node
--------------------------------------
See the page titled :ref:`How to Configure a BigchainDB Node`.
Step 4: Start the NGINX Service
-------------------------------
@ -124,15 +124,15 @@ Step 4.1: Vanilla NGINX
^^^^^^^^^^^^^^^^^^^^^^^
* This configuration is located in the file ``nginx-http/nginx-http-svc.yaml``.
* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
set in ``ngx-instance-name`` in the ConfigMap above.
* Set the ``spec.selector.app`` to the value set in ``ngx-instance-name`` in
the ConfigMap followed by ``-dep``. For example, if the value set in the
``ngx-instance-name`` is ``ngx-http-instance-0``, set the
``spec.selector.app`` to ``ngx-http-instance-0-dep``.
* Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
``cluster-frontend-port`` in the ConfigMap above. This is the
``public-cluster-port`` in the file which is the ingress into the cluster.
@ -140,7 +140,7 @@ Step 4.1: Vanilla NGINX
* Start the Kubernetes Service:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-svc.yaml
@ -149,7 +149,7 @@ Step 4.2: NGINX with HTTPS + 3scale
* You have to enable HTTPS for this one and will need an HTTPS certificate
for your domain.
* You should have already created the necessary Kubernetes Secrets in the previous
step (e.g. ``https-certs`` and ``threescale-credentials``).
@ -162,9 +162,9 @@ Step 4.2: NGINX with HTTPS + 3scale
the ConfigMap followed by ``-dep``. For example, if the value set in the
``ngx-instance-name`` is ``ngx-https-instance-0``, set the
``spec.selector.app`` to ``ngx-https-instance-0-dep``.
* Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
``cluster-frontend-port`` in the ConfigMap above. This is the
``public-secure-cluster-port`` in the file which is the ingress into the cluster.
* Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the
@ -173,7 +173,7 @@ Step 4.2: NGINX with HTTPS + 3scale
available.
* Start the Kubernetes Service:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-svc.yaml
@ -189,11 +189,11 @@ Step 5: Assign DNS Name to the NGINX Public IP
* The following command can help you find out if the NGINX service started
above has been assigned a public IP or external IP address:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 get svc -w
* Once a public IP is assigned, you can map it to
a DNS name.
We usually assign ``bdb-test-cluster-0``, ``bdb-test-cluster-1`` and
@ -237,7 +237,7 @@ Step 6: Start the MongoDB Kubernetes Service
``mongodb-backend-port`` in the ConfigMap above.
This is the ``mdb-port`` in the file which specifies where MongoDB listens
for API requests.
* Start the Kubernetes Service:
.. code:: bash
@ -308,9 +308,9 @@ Step 9: Start the NGINX Kubernetes Deployment
Step 9.1: Vanilla NGINX
^^^^^^^^^^^^^^^^^^^^^^^
* This configuration is located in the file ``nginx-http/nginx-http-dep.yaml``.
* Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
to the value set in ``ngx-instance-name`` in the ConfigMap followed by a
``-dep``. For example, if the value set in the ``ngx-instance-name`` is
@ -331,7 +331,7 @@ Step 9.1: Vanilla NGINX
Step 9.2: NGINX with HTTPS + 3scale
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* This configuration is located in the file
``nginx-https/nginx-https-dep.yaml``.
@ -467,7 +467,7 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB
the ConfigMap.
For example, if the value set in the ``mdb-instance-name``
is ``mdb-instance-0``, set the field to ``mdb-instance-0``.
* Set ``metadata.name``, ``spec.template.metadata.name`` and
``spec.template.metadata.labels.app`` to the value set in
``mdb-instance-name`` in the ConfigMap, followed by
@ -479,7 +479,7 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB
* Note how the MongoDB container uses the ``mongo-db-claim`` and the
``mongo-configdb-claim`` PersistentVolumeClaims for its ``/data/db`` and
``/data/configdb`` directories (mount paths).
* Note also that we use the pod's ``securityContext.capabilities.add``
specification to add the ``FOWNER`` capability to the container. That is
because the MongoDB container has the user ``mongodb``, with uid ``999`` and
@ -505,18 +505,18 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss.yaml
* It might take up to 10 minutes for the disks, specified in the Persistent
Volume Claims above, to be created and attached to the pod.
The UI might show that the pod has errored with the message
"timeout expired waiting for volumes to attach/mount". Use the CLI below
to check the status of the pod in this case, instead of the UI.
This happens due to a bug in Azure ACS.
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 get pods -w
Step 13: Configure Users and Access Control for MongoDB
-------------------------------------------------------
@ -530,26 +530,26 @@ Step 13: Configure Users and Access Control for MongoDB
* Find out the name of your MongoDB pod by reading the output
of the ``kubectl ... get pods`` command at the end of the last step.
It should be something like ``mdb-instance-0-ss-0``.
* Log in to the MongoDB pod using:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 exec -it <name of your MongoDB pod> bash
* Open a mongo shell using the certificates
already present at ``/etc/mongod/ssl/``
.. code:: bash
$ mongo --host localhost --port 27017 --verbose --ssl \
--sslCAFile /etc/mongod/ssl/ca.pem \
--sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem
* Initialize the replica set using:
.. code:: bash
> rs.initiate( {
_id : "bigchain-rs",
members: [ {
@ -562,7 +562,7 @@ Step 13: Configure Users and Access Control for MongoDB
``mdb-instance-name`` in the ConfigMap.
For example, if the value set in the ``mdb-instance-name`` is
``mdb-instance-0``, set the ``hostname`` above to the value ``mdb-instance-0``.
* The instance should be voted as the ``PRIMARY`` in the replica set (since
this is the only instance in the replica set till now).
This can be observed from the mongo shell prompt,
@ -573,14 +573,15 @@ Step 13: Configure Users and Access Control for MongoDB
log in to the mongo shell. For further details, see `localhost
exception <https://docs.mongodb.com/manual/core/security-users/#localhost-exception>`_
in MongoDB.
.. code:: bash
PRIMARY> use admin
PRIMARY> db.createUser( {
user: "adminUser",
pwd: "superstrongpassword",
roles: [ { role: "userAdminAnyDatabase", db: "admin" },
{ role: "clusterManager", db: "admin"} ]
} )
* Exit and restart the mongo shell using the above command.
@ -605,16 +606,16 @@ Step 13: Configure Users and Access Control for MongoDB
-inform PEM -subject -nameopt RFC2253
You should see an output line that resembles:
.. code:: bash
subject= emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE
The ``subject`` line gives the complete user name we need to use when
creating the user in the mongo shell, as follows:
.. code:: bash
PRIMARY> db.getSiblingDB("$external").runCommand( {
createUser: 'emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE',
writeConcern: { w: 'majority' , wtimeout: 5000 },
@ -700,19 +701,19 @@ Step 16: Start a Kubernetes Deployment for BigchainDB
For example, if the value set in the
``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the
value ``bdb-instance-0-dep``.
* Set the value of ``BIGCHAINDB_KEYPAIR_PRIVATE`` (not base64-encoded).
(In the future, we'd like to pull the BigchainDB private key from
the Secret named ``bdb-private-key``,
but a Secret can only be mounted as a file,
so BigchainDB Server would have to be modified to look for it
in a file.)
* As we gain more experience running BigchainDB in testing and production,
we will tweak the ``resources.limits`` values for CPU and memory, and as
richer monitoring and probing becomes available in BigchainDB, we will
tweak the ``livenessProbe`` and ``readinessProbe`` parameters.
* Set the ports to be exposed from the pod in the
``spec.containers[0].ports`` section. We currently expose 2 ports -
``bigchaindb-api-port`` and ``bigchaindb-ws-port``. Set them to the
@ -740,7 +741,7 @@ Step 17: Start a Kubernetes Deployment for OpenResty
For example, if the value set in the
``openresty-instance-name`` is ``openresty-instance-0``, set the fields to
the value ``openresty-instance-0-dep``.
* Set the port to be exposed from the pod in the
``spec.containers[0].ports`` section. We currently expose the port at
which OpenResty is listening for requests, ``openresty-backend-port`` in
@ -791,13 +792,13 @@ You can use it as below to get started immediately:
It will drop you to the shell prompt.
To test the MongoDB instance:
.. code:: bash
$ nslookup mdb-instance-0
$ dig +noall +answer _mdb-port._tcp.mdb-instance-0.default.svc.cluster.local SRV
$ curl -X GET http://mdb-instance-0:27017
The ``nslookup`` command should output the configured IP address of the service
@ -806,20 +807,20 @@ The ``dig`` command should return the configured port numbers.
The ``curl`` command tests the availability of the service.
To test the BigchainDB instance:
.. code:: bash
$ nslookup bdb-instance-0
$ dig +noall +answer _bdb-api-port._tcp.bdb-instance-0.default.svc.cluster.local SRV
$ dig +noall +answer _bdb-ws-port._tcp.bdb-instance-0.default.svc.cluster.local SRV
$ curl -X GET http://bdb-instance-0:9984
$ wsc -er ws://bdb-instance-0:9985/api/v1/streams/valid_transactions
To test the OpenResty instance:
.. code:: bash
@ -834,11 +835,11 @@ BigchainDB instance.
To test the vanilla NGINX instance:
.. code:: bash
$ nslookup ngx-http-instance-0
$ dig +noall +answer _public-cluster-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV
$ dig +noall +answer _public-health-check-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV
@ -855,7 +856,7 @@ The above curl command should result in the response
To test the NGINX instance with HTTPS and 3scale integration:
.. code:: bash
$ nslookup ngx-https-instance-0
$ dig +noall +answer _public-secure-cluster-port._tcp.ngx-https-instance-0.default.svc.cluster.local SRV

View File

@ -29,8 +29,13 @@ You can create the server private key and certificate signing request (CSR)
by going into the directory ``member-cert/easy-rsa-3.0.1/easyrsa3``
and using something like:
.. note::
Please make sure you are fulfilling the requirements for `MongoDB server/member certificates
<https://docs.mongodb.com/manual/tutorial/configure-x509-member-authentication>`_.
.. code:: bash
./easyrsa init-pki
./easyrsa --req-cn=mdb-instance-0 --subject-alt-name=DNS:localhost,DNS:mdb-instance-0 gen-req mdb-instance-0 nopass
@ -67,11 +72,11 @@ Go to your ``bdb-cluster-ca/easy-rsa-3.0.1/easyrsa3/``
directory and do something like:
.. code:: bash
./easyrsa import-req /path/to/mdb-instance-0.req mdb-instance-0
./easyrsa --subject-alt-name=DNS:localhost,DNS:mdb-instance-0 sign-req server mdb-instance-0
Once you have signed it, you can send the signed certificate
and the CA certificate back to the requestor.
The files are ``pki/issued/mdb-instance-0.crt`` and ``pki/ca.crt``.
@ -84,6 +89,6 @@ MongoDB requires a single, consolidated file containing both the public and
private keys.
.. code:: bash
cat /path/to/mdb-instance-0.crt /path/to/mdb-instance-0.key > mdb-instance-0.pem
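To double-check that the Subject Alternative Names were carried over into the signed certificate, you can inspect it, e.g.:

.. code:: bash

   openssl x509 -in mdb-instance-0.pem -noout -text | grep -A1 "Subject Alternative Name"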

View File

@ -1,4 +1,4 @@
## Note: data values do NOT have to be base64-encoded in this file.
## vars holds the common environment variables for this BigchainDB node
apiVersion: v1
@ -12,7 +12,7 @@ data:
# cluster-frontend-port is the port number on which this node's services
# are available to external clients.
cluster-frontend-port: "443"
# cluster-health-check-port is the port number on which an external load
# balancer can check the status/liveness of the external/public server.

View File

@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
# Name of the MongoDB instance you are trying to connect to
# e.g. mdb-instance-0
name: "<remote-mongodb-host>"
namespace: default
spec:
ports:
- port: "<mongodb-backend-port from ConfigMap>"
type: ExternalName
# FQDN of remote cluster/NGINX instance
externalName: "<dns-name-remote-nginx>"

View File

@ -100,7 +100,7 @@ http {
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
add_header 'Access-Control-Expose-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
proxy_pass http://$openresty_backend:OPENRESTY_BACKEND_PORT;
}
@ -157,10 +157,14 @@ stream {
# Enable logging when connections are being throttled.
limit_conn_log_level notice;
# For a multi-node BigchainDB deployment we need around 2^5 connections
# (for inter-node communication) per node via NGINX; we can bump this up
# in case there is a requirement to scale up, but we should not remove
# this limit, for security reasons.
# Allow 256 connections from the same IP address.
limit_conn two 256;
# DNS resolver to use for all the backend names specified in this configuration.
resolver DNS_SERVER valid=30s ipv6=off;
@ -169,10 +173,10 @@ stream {
map $remote_addr $mdb_backend {
default MONGODB_BACKEND_HOST;
}
# Frontend server to forward connections to MDB instance.
server {
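# so_keepalive=idle:interval:count enables TCP keepalive probes on the
# listening socket: start probing after 3 minutes of idleness, probe
# every minute, and drop the connection after 5 failed probes.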
listen MONGODB_FRONTEND_PORT so_keepalive=3m:1m:5;
preread_timeout 30s;
tcp_nodelay on;
proxy_pass $mdb_backend:MONGODB_BACKEND_PORT;

View File

@ -59,7 +59,7 @@ spec:
valueFrom:
configMapKeyRef:
name: vars
key: ngx-openresty-instance-name
- name: BIGCHAINDB_BACKEND_HOST
valueFrom:
configMapKeyRef: