diff --git a/docs/server/source/production-deployment-template/add-node-on-kubernetes.rst b/docs/server/source/production-deployment-template/add-node-on-kubernetes.rst index b4444d89..826b7432 100644 --- a/docs/server/source/production-deployment-template/add-node-on-kubernetes.rst +++ b/docs/server/source/production-deployment-template/add-node-on-kubernetes.rst @@ -35,11 +35,19 @@ cluster. ``existing BigchainDB instance`` will refer to the BigchainDB instance in the existing cluster. +Below, we refer to multiple files by their directory and filename, +such as ``mongodb/mongo-ext-conn-svc.yaml``. Those files are files in the +`bigchaindb/bigchaindb repository on GitHub +`_ in the ``k8s/`` directory. +Make sure you're getting those files from the appropriate Git branch on +GitHub, i.e. the branch for the version of BigchainDB that your BigchainDB +cluster is using. + Step 1: Prerequisites --------------------- -* A public/private key pair for the new BigchainDB instance. +* :ref:`List of all the things to be done by each node operator `. * The public key should be shared offline with the other existing BigchainDB nodes in the existing BigchainDB cluster. @@ -65,20 +73,126 @@ example: $ kubectl --context ctx-2 proxy --port 8002 -Step 2: Prepare the New Kubernetes Cluster ------------------------------------------- +Step 2: Configure the BigchainDB Node +------------------------------------- -Follow the steps in the sections to set up Storage Classes and Persistent Volume -Claims, and to run MongoDB in the new cluster: - -1. :ref:`Add Storage Classes `. -2. :ref:`Add Persistent Volume Claims `. -3. :ref:`Create the Config Map `. -4. :ref:`Run MongoDB instance `. +See the section on how to :ref:`configure your BigchainDB node `. -Step 3: Add the New MongoDB Instance to the Existing Replica Set ----------------------------------------------------------------- +Step 3: Start the NGINX Service +-------------------------------- + +Please see the following section: + +* :ref:`Start NGINX service `. + + +Step 4: Assign DNS Name to the NGINX Public IP +---------------------------------------------- + +Please see the following section: + +* :ref:`Assign DNS to NGINX Public IP `. + + +Step 5: Start the MongoDB Kubernetes Service +-------------------------------------------- + +Please see the following section: + +* :ref:`Start the MongoDB Kubernetes Service `. + + +Step 6: Start the BigchainDB Kubernetes Service +----------------------------------------------- + +Please see the following section: + +* :ref:`Start the BigchainDB Kubernetes Service `. + + +Step 7: Start the OpenResty Kubernetes Service +---------------------------------------------- + +Please see the following section: + +* :ref:`Start the OpenResty Kubernetes Service `. + + +Step 8: Start the NGINX Kubernetes Deployment +--------------------------------------------- + +Please see the following section: + +* :ref:`Run NGINX deployment `. + + +Step 9: Create Kubernetes Storage Classes for MongoDB +----------------------------------------------------- + +Please see the following section: + +* :ref:`Step 10: Create Kubernetes Storage Classes for MongoDB`. + + +Step 10: Create Kubernetes Persistent Volume Claims +--------------------------------------------------- + +Please see the following section: + +* :ref:`Step 11: Create Kubernetes Persistent Volume Claims`. 
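+
+For example, assuming the storage class and persistent volume claim
+configurations live in the files ``mongodb/mongo-sc.yaml`` and
+``mongodb/mongo-pvc.yaml`` in the ``k8s/`` directory mentioned above, a
+minimal sketch of these two steps on the new cluster would be:
+
+.. code:: bash
+
+   $ kubectl --context ctx-2 apply -f mongodb/mongo-sc.yaml
+   $ kubectl --context ctx-2 apply -f mongodb/mongo-pvc.yaml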
+
+
+Step 11: Start a Kubernetes StatefulSet for MongoDB
+---------------------------------------------------
+
+Please see the following section:
+
+* :ref:`Step 12: Start a Kubernetes StatefulSet for MongoDB`.
+
+
+Step 12: Verify network connectivity between the MongoDB instances
+------------------------------------------------------------------
+
+Make sure your MongoDB instances can access each other over the network. *If* you are deploying
+the new MongoDB node in a different cluster or geographical location using Azure Container
+Service, you will have to set up networking between the two clusters using `Kubernetes
+Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.
+
+Suppose we have an existing MongoDB instance ``mdb-instance-0``, residing in the Azure data center
+location ``westeurope``, and we want to add a new MongoDB instance ``mdb-instance-1``, located in the
+Azure data center location ``eastus``, to the existing MongoDB replica set. Unless you have already
+explicitly set up networking so that ``mdb-instance-0`` and ``mdb-instance-1`` can communicate with
+each other, we will have to add a Kubernetes Service in each cluster to make that possible.
+This is similar to ensuring that there is a ``CNAME`` record in the DNS
+infrastructure to resolve ``mdb-instance-X`` to the host where it is actually available.
+We can do this in Kubernetes using a Kubernetes Service of type
+``ExternalName``.
+
+* This configuration is located in the file ``mongodb/mongo-ext-conn-svc.yaml``.
+
+* Set ``metadata.name`` to the host name of the MongoDB instance you are trying to connect to.
+  For instance, if you are configuring this Service on the cluster with ``mdb-instance-0``, then the
+  ``metadata.name`` will be ``mdb-instance-1``, and vice versa.
+
+* Set ``spec.ports.port[0]`` to the ``mongodb-backend-port`` from the ConfigMap of the other cluster.
+
+* Set ``spec.externalName`` to the FQDN mapped to the NGINX Public IP of the cluster you are trying
+  to connect to. For more information about the FQDN, please refer to: :ref:`Assign DNS Name to the NGINX Public
+  IP `.
+
+.. note::
+   This operation needs to be repeated ``n-1`` times per node for an ``n``-node cluster, once for
+   each of the respective FQDNs we need to communicate with.
+
+   If you are not the system administrator of the cluster, you have to get in
+   touch with the system administrator/s of the other ``n-1`` clusters and
+   share with them your instance name (``mdb-instance-name`` in the ConfigMap)
+   and the FQDN for your node (``cluster-fqdn`` in the ConfigMap).
+
+
+Step 13: Add the New MongoDB Instance to the Existing Replica Set
+-----------------------------------------------------------------
 
 Note that by ``replica set``, we are referring to the MongoDB replica set,
 not a Kubernetes' ``ReplicaSet``.
@@ -88,12 +202,18 @@ will have to coordinate offline with an existing administrator so that they
 can add the new MongoDB instance to the replica set.
 
 Add the new instance of MongoDB from an existing instance by accessing the
-``mongo`` shell.
+``mongo`` shell, authenticating as the ``adminUser`` we created for the existing
+MongoDB instance (or contact the admin of the ``PRIMARY`` MongoDB node):
 
 .. code:: bash
-
-    $ kubectl --context ctx-1 exec -it mdb-0 -c mongodb -- /bin/bash
-    root@mdb-0# mongo --port 27017
+
+    $ kubectl --context ctx-1 exec -it <existing-mongodb-pod-name> bash
+    $ mongo --host <existing-mongodb-instance-name> --port 27017 --verbose --ssl \
+      --sslCAFile /etc/mongod/ssl/ca.pem \
+      --sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem
+
+    PRIMARY> use admin
+    PRIMARY> db.auth("adminUser", "superstrongpassword")
 
 One can only add members to a replica set from the ``PRIMARY`` instance.
 The ``mongo`` shell prompt should state that this is the primary member in the
@@ -105,11 +225,11 @@ Run the ``rs.add()`` command with the FQDN and port number of the other instance
 
 .. code:: bash
 
-   PRIMARY> rs.add("<fqdn>:<port>")
+   PRIMARY> rs.add("<fqdn>:<port>")
 
 
-Step 4: Verify the Replica Set Membership
------------------------------------------
+Step 14: Verify the Replica Set Membership
+------------------------------------------
 
 You can use the ``rs.conf()`` and the ``rs.status()`` commands available in the
 mongo shell to verify the replica set membership.
 
@@ -118,22 +238,86 @@ The new MongoDB instance should be listed in the membership information
 displayed.
 
 
-Step 5: Start the New BigchainDB Instance
------------------------------------------
+Step 15: Configure Users and Access Control for MongoDB
+-------------------------------------------------------
 
-Get the file ``bigchaindb-dep.yaml`` from GitHub using:
+* Create the users in MongoDB with the appropriate roles assigned to them. This
+  will enable the new BigchainDB instance, new MongoDB Monitoring Agent
+  instance and the new MongoDB Backup Agent instance to function correctly.
 
-.. code:: bash
+* Please refer to
+  :ref:`Configure Users and Access Control for MongoDB ` to create and configure the new
+  BigchainDB, MongoDB Monitoring Agent and MongoDB Backup Agent users on the
+  cluster.
 
-   $ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/bigchaindb/bigchaindb-dep.yaml
+.. note::
+   You will not have to create the MongoDB replica set or the admin user, as they already exist.
 
-Note that we set the ``BIGCHAINDB_DATABASE_HOST`` to ``mdb`` which is the name
-of the MongoDB service defined earlier.
+   If you do not have access to the ``PRIMARY`` member of the replica set, you
+   need to get in touch with the administrator who can create the users in the
+   MongoDB cluster.
 
-Edit the ``BIGCHAINDB_KEYPAIR_PUBLIC`` with the public key of this instance,
-the ``BIGCHAINDB_KEYPAIR_PRIVATE`` with the private key of this instance and
-the ``BIGCHAINDB_KEYRING`` with a ``:`` delimited list of all the public keys
-in the BigchainDB cluster.
+
+
+Step 16: Start a Kubernetes Deployment for MongoDB Monitoring Agent
+-------------------------------------------------------------------
+
+Please see the following section:
+
+* :ref:`Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent`.
+
+.. note::
+   Every MMS group has only one *active* Monitoring Agent and one *active*
+   Backup Agent; having multiple agents provides high availability and
+   failover, in case one goes down. For more information about Monitoring and
+   Backup Agents, please consult the `official MongoDB documentation
+   `_.
+
+
+Step 17: Start a Kubernetes Deployment for MongoDB Backup Agent
+---------------------------------------------------------------
+
+Please see the following section:
+
+* :ref:`Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent`.
+
+.. note::
+   Every MMS group has only one *active* Monitoring Agent and one *active*
+   Backup Agent; having multiple agents provides high availability and
+   failover, in case one goes down. For more information about Monitoring and
+   Backup Agents, please consult the `official MongoDB documentation
+   `_.
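+
+A quick way to check that the Monitoring Agent and Backup Agent Deployments
+(and their pods) came up on the new cluster is:
+
+.. code:: bash
+
+   $ kubectl --context ctx-2 get deploy -w
+   $ kubectl --context ctx-2 get pods -w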
+
+
+Step 18: Start a Kubernetes Deployment for BigchainDB
+-----------------------------------------------------
+
+* Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
+  value set in ``bdb-instance-name`` in the ConfigMap, followed by
+  ``-dep``.
+  For example, if the value set in the
+  ``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the
+  value ``bdb-instance-0-dep``.
+
+* Set the value of ``BIGCHAINDB_KEYPAIR_PRIVATE`` (not base64-encoded).
+  (In the future, we'd like to pull the BigchainDB private key from
+  the Secret named ``bdb-private-key``, but a Secret can only be mounted as a file,
+  so BigchainDB Server would have to be modified to look for it
+  in a file.)
+
+* As we gain more experience running BigchainDB in testing and production,
+  we will tweak the ``resources.limits`` values for CPU and memory, and as
+  richer monitoring and probing becomes available in BigchainDB, we will
+  tweak the ``livenessProbe`` and ``readinessProbe`` parameters.
+
+* Set the ports to be exposed from the pod in the
+  ``spec.containers[0].ports`` section. We currently expose 2 ports:
+  ``bigchaindb-api-port`` and ``bigchaindb-ws-port``. Set them to the
+  values specified in the ConfigMap.
+
+* Uncomment the env var ``BIGCHAINDB_KEYRING``; it will pick up the
+  ``:``-delimited list of all the public keys in the BigchainDB cluster from the ConfigMap.
 
 Create the required Deployment using:
 
@@ -141,38 +325,59 @@
 
    $ kubectl --context ctx-2 apply -f bigchaindb-dep.yaml
 
-You can check its status using the command ``kubectl get deploy -w``
+You can check its status using the command ``kubectl --context ctx-2 get deploy -w``
 
 
-Step 6: Restart the Existing BigchainDB Instance(s)
----------------------------------------------------
+Step 19: Restart the Existing BigchainDB Instance(s)
+----------------------------------------------------
 
-Add the public key of the new BigchainDB instance to the keyring of all the
-existing BigchainDB instances and update the BigchainDB instances using:
+* Add the public key of the new BigchainDB instance to the ConfigMap
+  ``bdb-keyring`` variable of all the existing BigchainDB instances.
+  Update all the existing ConfigMaps using:
 
 .. code:: bash
 
-   $ kubectl --context ctx-1 replace -f bigchaindb-dep.yaml
+   $ kubectl --context ctx-1 apply -f configuration/config-map.yaml
 
-This will create a "rolling deployment" in Kubernetes where a new instance of
-BigchainDB will be created, and if the health check on the new instance is
-successful, the earlier one will be terminated. This ensures that there is
-zero downtime during updates.
+* Uncomment the ``BIGCHAINDB_KEYRING`` variable in
+  ``bigchaindb/bigchaindb-dep.yaml`` so that it refers to the keyring updated
+  in the ConfigMap.
+  Update the running BigchainDB instance using:
+
+.. code:: bash
+
+   $ kubectl --context ctx-1 delete -f bigchaindb/bigchaindb-dep.yaml
+   $ kubectl --context ctx-1 apply -f bigchaindb/bigchaindb-dep.yaml
+
+
+See the page titled :ref:`How to Configure a BigchainDB Node` for more information about
+ConfigMap configuration.
 
 You can SSH to an existing BigchainDB instance and run the ``bigchaindb
 show-config`` command to check that the keyring is updated.
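+
+For example, a minimal sketch of that check (the pod name is a placeholder;
+find the real one with ``kubectl get pods``):
+
+.. code:: bash
+
+   $ kubectl --context ctx-1 exec -it <bigchaindb-pod-name> -- bigchaindb show-config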
-Step 7: Run NGINX as a Deployment ---------------------------------- +Step 20: Start a Kubernetes Deployment for OpenResty +---------------------------------------------------- -Please see :ref:`this page ` to -set up NGINX in your new node. +Please see the following section: + +* :ref:`Step 17: Start a Kubernetes Deployment for OpenResty`. -Step 8: Test Your New BigchainDB Node -------------------------------------- +Step 21: Configure the MongoDB Cloud Manager +-------------------------------------------- -Please refer to the testing steps :ref:`here ` to verify that your new BigchainDB node is working as expected. +* MongoDB Cloud Manager auto-detects the members of the replica set and + configures the agents to act as a master/slave accordingly. + +* You can verify that the new MongoDB instance is detected by the + Monitoring and Backup Agent using the Cloud Manager UI. + + +Step 22: Test Your New BigchainDB Node +-------------------------------------- + +* Please refer to the testing steps :ref:`here ` to verify that your new BigchainDB node is working as expected. diff --git a/docs/server/source/production-deployment-template/client-tls-certificate.rst b/docs/server/source/production-deployment-template/client-tls-certificate.rst index af2cd767..80483b83 100644 --- a/docs/server/source/production-deployment-template/client-tls-certificate.rst +++ b/docs/server/source/production-deployment-template/client-tls-certificate.rst @@ -28,13 +28,13 @@ by going into the directory ``client-cert/easy-rsa-3.0.1/easyrsa3`` and using: .. code:: bash - + ./easyrsa init-pki ./easyrsa gen-req bdb-instance-0 nopass You should change the Common Name (e.g. ``bdb-instance-0``) -to a value that reflects what the +to a value that reflects what the client certificate is being used for, e.g. ``mdb-mon-instance-3`` or ``mdb-bak-instance-4``. (The final integer is specific to your BigchainDB node in the BigchainDB cluster.) You will be prompted to enter the Distinguished Name (DN) information for this certificate. For each field, you can accept the default value [in brackets] by pressing Enter. @@ -48,6 +48,10 @@ You will be prompted to enter the Distinguished Name (DN) information for this c Aside: The ``nopass`` option means "do not encrypt the private key (default is encrypted)". You can get help with the ``easyrsa`` command (and its subcommands) by using the subcommand ``./easyrsa help``. +.. note:: + For more information about requirements for MongoDB client certificates, please consult the `official MongoDB + documentation `_. + Step 3: Get the Client Certificate Signed ----------------------------------------- @@ -66,11 +70,11 @@ Go to your ``bdb-cluster-ca/easy-rsa-3.0.1/easyrsa3/`` directory and do something like: .. code:: bash - + ./easyrsa import-req /path/to/bdb-instance-0.req bdb-instance-0 ./easyrsa sign-req client bdb-instance-0 - + Once you have signed it, you can send the signed certificate and the CA certificate back to the requestor. The files are ``pki/issued/bdb-instance-0.crt`` and ``pki/ca.crt``. @@ -79,9 +83,21 @@ The files are ``pki/issued/bdb-instance-0.crt`` and ``pki/ca.crt``. Step 4: Generate the Consolidated Client PEM File ------------------------------------------------- -MongoDB requires a single, consolidated file containing both the public and -private keys. +.. note:: + This step can be skipped for BigchainDB client certificate as BigchainDB + uses the PyMongo driver, which accepts separate certificate and key files. 
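+
+Optionally, before sending the files back or consolidating them, you can check
+that a signed certificate really chains back to the cluster CA, using
+``openssl`` (the paths below are examples):
+
+.. code:: bash
+
+    openssl verify -CAfile /path/to/ca.crt /path/to/bdb-instance-0.crt
+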
+ +MongoDB, MongoDB Backup Agent and MongoDB Monitoring Agent require a single, +consolidated file containing both the public and private keys. .. code:: bash - - cat /path/to/bdb-instance-0.crt /path/to/bdb-instance-0.key > bdb-instance-0.pem + + cat /path/to/mdb-instance-0.crt /path/to/mdb-instance-0.key > mdb-instance-0.pem + + OR + + cat /path/to/mdb-mon-instance-0.crt /path/to/mdb-mon-instance-0.key > mdb-mon-instance-0.pem + + OR + + cat /path/to/mdb-bak-instance-0.crt /path/to/mdb-bak-instance-0.key > mdb-bak-instance-0.pem diff --git a/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst b/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst index a4e8c107..a3a33c8d 100644 --- a/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst +++ b/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst @@ -29,7 +29,6 @@ where all data values must be base64-encoded. This is true of all Kubernetes ConfigMaps and Secrets.) - vars.cluster-fqdn ~~~~~~~~~~~~~~~~~ @@ -83,7 +82,7 @@ There are some things worth noting about the ``mdb-instance-name``: documentation. Your BigchainDB cluster may use a different naming convention. -vars.ngx-ndb-instance-name and Similar +vars.ngx-mdb-instance-name and Similar ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ NGINX needs the FQDN of the servers inside the cluster to be able to forward diff --git a/docs/server/source/production-deployment-template/node-on-kubernetes.rst b/docs/server/source/production-deployment-template/node-on-kubernetes.rst index 6822c02b..6146b34e 100644 --- a/docs/server/source/production-deployment-template/node-on-kubernetes.rst +++ b/docs/server/source/production-deployment-template/node-on-kubernetes.rst @@ -53,7 +53,7 @@ to the above command (i.e. the path to the private key). the context for cluster 2. To find out the current context, do: .. code:: bash - + $ kubectl config view and then look for the ``current-context`` in the output. @@ -106,7 +106,7 @@ Step 3: Configure Your BigchainDB Node -------------------------------------- See the page titled :ref:`How to Configure a BigchainDB Node`. - + Step 4: Start the NGINX Service ------------------------------- @@ -117,22 +117,22 @@ Step 4: Start the NGINX Service public IP to be assigned. * You have the option to use vanilla NGINX without HTTPS support or an - NGINX with HTTPS support integrated with 3scale API Gateway. + NGINX with HTTPS support. Step 4.1: Vanilla NGINX ^^^^^^^^^^^^^^^^^^^^^^^ * This configuration is located in the file ``nginx-http/nginx-http-svc.yaml``. - + * Set the ``metadata.name`` and ``metadata.labels.name`` to the value set in ``ngx-instance-name`` in the ConfigMap above. - + * Set the ``spec.selector.app`` to the value set in ``ngx-instance-name`` in the ConfigMap followed by ``-dep``. For example, if the value set in the ``ngx-instance-name`` is ``ngx-http-instance-0``, set the ``spec.selector.app`` to ``ngx-http-instance-0-dep``. - + * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the ``cluster-frontend-port`` in the ConfigMap above. This is the ``public-cluster-port`` in the file which is the ingress in to the cluster. @@ -140,18 +140,18 @@ Step 4.1: Vanilla NGINX * Start the Kubernetes Service: .. 
code:: bash - + $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-svc.yaml -Step 4.2: NGINX with HTTPS + 3scale -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Step 4.2: NGINX with HTTPS +^^^^^^^^^^^^^^^^^^^^^^^^^^ * You have to enable HTTPS for this one and will need an HTTPS certificate for your domain. - + * You should have already created the necessary Kubernetes Secrets in the previous - step (e.g. ``https-certs`` and ``threescale-credentials``). + step (i.e. ``https-certs``). * This configuration is located in the file ``nginx-https/nginx-https-svc.yaml``. @@ -162,9 +162,9 @@ Step 4.2: NGINX with HTTPS + 3scale the ConfigMap followed by ``-dep``. For example, if the value set in the ``ngx-instance-name`` is ``ngx-https-instance-0``, set the ``spec.selector.app`` to ``ngx-https-instance-0-dep``. - + * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the - ``cluster-frontend-port`` in the ConfigMap above. This is the + ``cluster-frontend-port`` in the ConfigMap above. This is the ``public-secure-cluster-port`` in the file which is the ingress in to the cluster. * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the @@ -173,7 +173,7 @@ Step 4.2: NGINX with HTTPS + 3scale available. * Start the Kubernetes Service: - + .. code:: bash $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-svc.yaml @@ -189,11 +189,11 @@ Step 5: Assign DNS Name to the NGINX Public IP * The following command can help you find out if the NGINX service started above has been assigned a public IP or external IP address: - + .. code:: bash $ kubectl --context k8s-bdb-test-cluster-0 get svc -w - + * Once a public IP is assigned, you can map it to a DNS name. We usually assign ``bdb-test-cluster-0``, ``bdb-test-cluster-1`` and @@ -237,7 +237,7 @@ Step 6: Start the MongoDB Kubernetes Service ``mongodb-backend-port`` in the ConfigMap above. This is the ``mdb-port`` in the file which specifies where MongoDB listens for API requests. - + * Start the Kubernetes Service: .. code:: bash @@ -304,13 +304,13 @@ Step 9: Start the NGINX Kubernetes Deployment on ``mongodb-frontend-port`` to the MongoDB backend. * As in step 4, you have the option to use vanilla NGINX without HTTPS or - NGINX with HTTPS support integrated with 3scale API Gateway. + NGINX with HTTPS support. Step 9.1: Vanilla NGINX ^^^^^^^^^^^^^^^^^^^^^^^ - + * This configuration is located in the file ``nginx-http/nginx-http-dep.yaml``. - + * Set the ``metadata.name`` and ``spec.template.metadata.labels.app`` to the value set in ``ngx-instance-name`` in the ConfigMap followed by a ``-dep``. For example, if the value set in the ``ngx-instance-name`` is @@ -329,9 +329,9 @@ Step 9.1: Vanilla NGINX $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-dep.yaml -Step 9.2: NGINX with HTTPS + 3scale -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - +Step 9.2: NGINX with HTTPS +^^^^^^^^^^^^^^^^^^^^^^^^^^ + * This configuration is located in the file ``nginx-https/nginx-https-dep.yaml``. @@ -467,7 +467,7 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB the ConfigMap. For example, if the value set in the ``mdb-instance-name`` is ``mdb-instance-0``, set the field to ``mdb-instance-0``. 
- + * Set ``metadata.name``, ``spec.template.metadata.name`` and ``spec.template.metadata.labels.app`` to the value set in ``mdb-instance-name`` in the ConfigMap, followed by @@ -479,7 +479,7 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB * Note how the MongoDB container uses the ``mongo-db-claim`` and the ``mongo-configdb-claim`` PersistentVolumeClaims for its ``/data/db`` and ``/data/configdb`` directories (mount paths). - + * Note also that we use the pod's ``securityContext.capabilities.add`` specification to add the ``FOWNER`` capability to the container. That is because the MongoDB container has the user ``mongodb``, with uid ``999`` and @@ -505,18 +505,18 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB .. code:: bash $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss.yaml - + * It might take up to 10 minutes for the disks, specified in the Persistent Volume Claims above, to be created and attached to the pod. The UI might show that the pod has errored with the message "timeout expired waiting for volumes to attach/mount". Use the CLI below to check the status of the pod in this case, instead of the UI. This happens due to a bug in Azure ACS. - + .. code:: bash $ kubectl --context k8s-bdb-test-cluster-0 get pods -w - + Step 13: Configure Users and Access Control for MongoDB ------------------------------------------------------- @@ -530,26 +530,26 @@ Step 13: Configure Users and Access Control for MongoDB * Find out the name of your MongoDB pod by reading the output of the ``kubectl ... get pods`` command at the end of the last step. It should be something like ``mdb-instance-0-ss-0``. - + * Log in to the MongoDB pod using: .. code:: bash $ kubectl --context k8s-bdb-test-cluster-0 exec -it bash - + * Open a mongo shell using the certificates already present at ``/etc/mongod/ssl/`` .. code:: bash - + $ mongo --host localhost --port 27017 --verbose --ssl \ --sslCAFile /etc/mongod/ssl/ca.pem \ --sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem * Initialize the replica set using: - + .. code:: bash - + > rs.initiate( { _id : "bigchain-rs", members: [ { @@ -562,7 +562,7 @@ Step 13: Configure Users and Access Control for MongoDB ``mdb-instance-name`` in the ConfigMap. For example, if the value set in the ``mdb-instance-name`` is ``mdb-instance-0``, set the ``hostname`` above to the value ``mdb-instance-0``. - + * The instance should be voted as the ``PRIMARY`` in the replica set (since this is the only instance in the replica set till now). This can be observed from the mongo shell prompt, @@ -573,14 +573,15 @@ Step 13: Configure Users and Access Control for MongoDB log in to the mongo shell. For further details, see `localhost exception `_ in MongoDB. - + .. code:: bash - + PRIMARY> use admin PRIMARY> db.createUser( { user: "adminUser", pwd: "superstrongpassword", - roles: [ { role: "userAdminAnyDatabase", db: "admin" } ] + roles: [ { role: "userAdminAnyDatabase", db: "admin" }, + { role: "clusterManager", db: "admin"} ] } ) * Exit and restart the mongo shell using the above command. @@ -605,16 +606,16 @@ Step 13: Configure Users and Access Control for MongoDB -inform PEM -subject -nameopt RFC2253 You should see an output line that resembles: - + .. code:: bash - + subject= emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE The ``subject`` line states the complete user name we need to use for creating the user on the mongo shell as follows: .. 
code:: bash - + PRIMARY> db.getSiblingDB("$external").runCommand( { createUser: 'emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE', writeConcern: { w: 'majority' , wtimeout: 5000 }, @@ -700,19 +701,19 @@ Step 16: Start a Kubernetes Deployment for BigchainDB For example, if the value set in the ``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the value ``bdb-insance-0-dep``. - + * Set the value of ``BIGCHAINDB_KEYPAIR_PRIVATE`` (not base64-encoded). (In the future, we'd like to pull the BigchainDB private key from the Secret named ``bdb-private-key``, but a Secret can only be mounted as a file, so BigchainDB Server would have to be modified to look for it in a file.) - + * As we gain more experience running BigchainDB in testing and production, we will tweak the ``resources.limits`` values for CPU and memory, and as richer monitoring and probing becomes available in BigchainDB, we will tweak the ``livenessProbe`` and ``readinessProbe`` parameters. - + * Set the ports to be exposed from the pod in the ``spec.containers[0].ports`` section. We currently expose 2 ports - ``bigchaindb-api-port`` and ``bigchaindb-ws-port``. Set them to the @@ -740,11 +741,11 @@ Step 17: Start a Kubernetes Deployment for OpenResty For example, if the value set in the ``openresty-instance-name`` is ``openresty-instance-0``, set the fields to the value ``openresty-instance-0-dep``. - - * Set the port to be exposed from the pod in the - ``spec.containers[0].ports`` section. We currently expose the port at - which OpenResty is listening for requests, ``openresty-backend-port`` in - the above ConfigMap. + + * Set the port to be exposed from the pod in the + ``spec.containers[0].ports`` section. We currently expose the port at + which OpenResty is listening for requests, ``openresty-backend-port`` in + the above ConfigMap. * Create the OpenResty Deployment using: @@ -791,13 +792,13 @@ You can use it as below to get started immediately: It will drop you to the shell prompt. To test the MongoDB instance: - + .. code:: bash $ nslookup mdb-instance-0 - + $ dig +noall +answer _mdb-port._tcp.mdb-instance-0.default.svc.cluster.local SRV - + $ curl -X GET http://mdb-instance-0:27017 The ``nslookup`` command should output the configured IP address of the service @@ -806,20 +807,20 @@ The ``dig`` command should return the configured port numbers. The ``curl`` command tests the availability of the service. To test the BigchainDB instance: - + .. code:: bash $ nslookup bdb-instance-0 - + $ dig +noall +answer _bdb-api-port._tcp.bdb-instance-0.default.svc.cluster.local SRV $ dig +noall +answer _bdb-ws-port._tcp.bdb-instance-0.default.svc.cluster.local SRV - + $ curl -X GET http://bdb-instance-0:9984 $ wsc -er ws://bdb-instance-0:9985/api/v1/streams/valid_transactions - + To test the OpenResty instance: .. code:: bash @@ -834,11 +835,11 @@ BigchainDB instance. To test the vanilla NGINX instance: - + .. code:: bash $ nslookup ngx-http-instance-0 - + $ dig +noall +answer _public-cluster-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV $ dig +noall +answer _public-health-check-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV @@ -855,7 +856,7 @@ The above curl command should result in the response To test the NGINX instance with HTTPS and 3scale integration: .. 
code:: bash
-
+
    $ nslookup ngx-https-instance-0
 
    $ dig +noall +answer _public-secure-cluster-port._tcp.ngx-https-instance-0.default.svc.cluster.local SRV
@@ -886,5 +887,4 @@ If you are using the NGINX with HTTPS support, use ``https``
 instead of ``http`` above.
 
 Use the Python Driver to send some transactions to the BigchainDB node and
-verify that your node or cluster works as expected.
-
+verify that your node or cluster works as expected.
\ No newline at end of file
diff --git a/docs/server/source/production-deployment-template/server-tls-certificate.rst b/docs/server/source/production-deployment-template/server-tls-certificate.rst
index c220daa0..8444b0ab 100644
--- a/docs/server/source/production-deployment-template/server-tls-certificate.rst
+++ b/docs/server/source/production-deployment-template/server-tls-certificate.rst
@@ -29,8 +29,13 @@ You can create the server private key and certificate signing request (CSR)
 by going into the directory ``member-cert/easy-rsa-3.0.1/easyrsa3``
 and using something like:
 
+.. note::
+
+    Please make sure you are fulfilling the requirements for `MongoDB server/member certificates
+    `_.
+
 .. code:: bash
-
+
    ./easyrsa init-pki
    ./easyrsa --req-cn=mdb-instance-0 --subject-alt-name=DNS:localhost,DNS:mdb-instance-0 gen-req mdb-instance-0 nopass
 
@@ -67,11 +72,11 @@ Go to your ``bdb-cluster-ca/easy-rsa-3.0.1/easyrsa3/``
 directory and do something like:
 
 .. code:: bash
-
+
    ./easyrsa import-req /path/to/mdb-instance-0.req mdb-instance-0
 
    ./easyrsa --subject-alt-name=DNS:localhost,DNS:mdb-instance-0 sign-req server mdb-instance-0
-
+
 Once you have signed it, you can send the signed certificate
 and the CA certificate back to the requestor.
 The files are ``pki/issued/mdb-instance-0.crt`` and ``pki/ca.crt``.
@@ -84,6 +89,6 @@ MongoDB requires a single, consolidated file containing both the public and
 private keys.
 
 .. code:: bash
-
+
    cat /path/to/mdb-instance-0.crt /path/to/mdb-instance-0.key > mdb-instance-0.pem
 
diff --git a/docs/server/source/production-deployment-template/template-kubernetes-azure.rst b/docs/server/source/production-deployment-template/template-kubernetes-azure.rst
index 54927f5e..a916012f 100644
--- a/docs/server/source/production-deployment-template/template-kubernetes-azure.rst
+++ b/docs/server/source/production-deployment-template/template-kubernetes-azure.rst
@@ -49,7 +49,7 @@ If you already *have* the Azure CLI installed, you may want to update it.
 
 .. warning::
 
-   ``az component update`` isn't supported if you installed the CLI using some of Microsoft's provided installation instructions. See `the Microsoft docs for update instructions `_.
+   ``az component update`` isn't supported if you installed the CLI using some of Microsoft's provided installation instructions. See `the Microsoft docs for update instructions `_.
 
 Next, login to your account using:
 
@@ -128,9 +128,9 @@
 
 .. code:: bash
 
-   $ ssh -i ~/.ssh/<keyname> ubuntu@<master-ip-address-or-hostname>
+   $ ssh -i ~/.ssh/<keyname> ubuntu@<master-ip-address-or-fqdn>
 
-where you can get the IP address or hostname
+where you can get the IP address or FQDN
 of a master node from the Azure Portal. For example:
 
 .. code:: bash
 
 
 .. note::
 
-   All the master nodes should have the *same* public IP address and hostname
-   (also called the Master FQDN).
+   All the master nodes are accessible behind the *same* public IP address and
+   FQDN. You connect to one of the masters randomly based on the load balancing
+   policy.
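+
+   For example, after you SSH in, ``hostname`` shows which master you landed
+   on (the exact name will differ in your cluster):
+
+   .. code:: bash
+
+      $ hostname
+      k8s-master-4AC80E97-0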
-The "agent" nodes shouldn't get public IP addresses or hostnames,
-so you can't SSH to them *directly*,
+The "agent" nodes shouldn't get public IP addresses or externally accessible
+FQDNs, so you can't SSH to them *directly*,
 but you can first SSH to the master
-and then SSH to an agent from there.
+and then SSH to an agent from there using its hostname.
 To do that, you could
 copy your SSH key pair to the master (a bad idea),
 or use SSH agent forwarding (better).
@@ -168,14 +169,14 @@ then SSH agent forwarding hasn't been set up correctly.
 If you get a non-empty response,
 then SSH agent forwarding should work fine
 and you can SSH to one of the agent nodes (from a master)
-using something like:
+using:
 
 .. code:: bash
 
    $ ssh ubuntu@k8s-agent-4AC80E97-0
 
 where ``k8s-agent-4AC80E97-0`` is the name
-of a Kubernetes agent node in your Kubernetes cluster.
+of a Kubernetes agent node in your Kubernetes cluster. You will have to replace
+it with the name of an agent node in your own cluster.
 
@@ -202,4 +203,4 @@ CAUTION: You might end up deleting resources other than the ACS cluster.
 
 Next, you can :doc:`run a BigchainDB node on your new
-Kubernetes cluster `.
+Kubernetes cluster `.
\ No newline at end of file
diff --git a/docs/server/source/production-deployment-template/workflow.rst b/docs/server/source/production-deployment-template/workflow.rst
index d831287e..8b806bcd 100644
--- a/docs/server/source/production-deployment-template/workflow.rst
+++ b/docs/server/source/production-deployment-template/workflow.rst
@@ -45,7 +45,7 @@ For example, maybe they assign a unique number to each node,
 so that if you're operating node 12, your MongoDB instance would be named
 ``mdb-instance-12``. Similarly, other instances must also have unique names
 in the cluster.
-
+
 #. Name of the MongoDB instance (``mdb-instance-*``)
 #. Name of the BigchainDB instance (``bdb-instance-*``)
 #. Name of the NGINX instance (``ngx-http-instance-*`` or ``ngx-https-instance-*``)
@@ -80,7 +80,7 @@ You can generate a BigchainDB keypair for your node, for example,
 using the `BigchainDB Python Driver
 `_.
 
 .. code:: python
-
+
    from bigchaindb_driver.crypto import generate_keypair
    print(generate_keypair())
 
@@ -100,15 +100,13 @@ and have an SSL certificate for the FQDN.
 (You can get an SSL certificate from any SSL certificate provider.)
 
 
-☐ Ask the managing organization
-for the FQDN used to serve the BigchainDB APIs
-(e.g. ``api.orgname.net`` or ``bdb.clustername.com``)
-and for a copy of the associated SSL/TLS certificate.
-Also, ask for the user name to use for authenticating to MongoDB.
+☐ Ask the managing organization for the user name to use for authenticating to
+MongoDB.
 
 
 ☐ If the cluster uses 3scale for API authentication, monitoring and billing,
-you must ask the managing organization for all relevant 3scale credentials.
+you must ask the managing organization for all relevant 3scale credentials:
+secret token, service ID, version header and API service token.
 
 
 ☐ If the cluster uses MongoDB Cloud Manager for monitoring and backup,
diff --git a/k8s/configuration/config-map.yaml b/k8s/configuration/config-map.yaml
index 198c5dfd..8c30565f 100644
--- a/k8s/configuration/config-map.yaml
+++ b/k8s/configuration/config-map.yaml
@@ -1,4 +1,4 @@
-## Note: data values do NOT have to be base64-encoded in this file.
+## Note: data values do NOT have to be base64-encoded in this file.
## vars is common environment variables for this BigchainDB node
 
 apiVersion: v1
@@ -12,7 +12,7 @@ data:
 
   # cluster-frontend-port is the port number on which this node's services
   # are available to external clients.
-  cluster-frontend-port: "443"
+  cluster-frontend-port: "443"
 
   # cluster-health-check-port is the port number on which an external load
   # balancer can check the status/liveness of the external/public server.
diff --git a/k8s/mongodb/mongo-ext-conn-svc.yaml b/k8s/mongodb/mongo-ext-conn-svc.yaml
new file mode 100644
index 00000000..34d49a0b
--- /dev/null
+++ b/k8s/mongodb/mongo-ext-conn-svc.yaml
@@ -0,0 +1,13 @@
+apiVersion: v1
+kind: Service
+metadata:
+  # Name of the MongoDB instance you are trying to connect to
+  # e.g. mdb-instance-0
+  name: ""
+  namespace: default
+spec:
+  ports:
+  - port: ""
+  type: ExternalName
+  # FQDN of the remote cluster/NGINX instance
+  externalName: ""
\ No newline at end of file
diff --git a/k8s/nginx-http/nginx-http-dep.yaml b/k8s/nginx-http/nginx-http-dep.yaml
index 36a2c4a9..ad97bcdf 100644
--- a/k8s/nginx-http/nginx-http-dep.yaml
+++ b/k8s/nginx-http/nginx-http-dep.yaml
@@ -1,17 +1,17 @@
 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
-  name: ngx-http-instance-0-dep
+  name: ngx-instance-0-dep
 spec:
   replicas: 1
   template:
     metadata:
       labels:
-        app: ngx-http-instance-0-dep
+        app: ngx-instance-0-dep
     spec:
       terminationGracePeriodSeconds: 10
       containers:
-      - name: nginx-http
+      - name: nginx
         image: bigchaindb/nginx_http:1.0
         imagePullPolicy: IfNotPresent
         env:
diff --git a/k8s/nginx-http/nginx-http-svc.yaml b/k8s/nginx-http/nginx-http-svc.yaml
index 194c9257..76c603d2 100644
--- a/k8s/nginx-http/nginx-http-svc.yaml
+++ b/k8s/nginx-http/nginx-http-svc.yaml
@@ -1,17 +1,17 @@
 apiVersion: v1
 kind: Service
 metadata:
-  name: ngx-http-instance-0
+  name: ngx-instance-0
   namespace: default
   labels:
-    name: ngx-http-instance-0
+    name: ngx-instance-0
   annotations:
     # NOTE: the following annotation is a beta feature and
     # only available in GCE/GKE and Azure as of now
     service.beta.kubernetes.io/external-traffic: OnlyLocal
 spec:
   selector:
-    app: ngx-http-instance-0-dep
+    app: ngx-instance-0-dep
   ports:
   - port: ""
     targetPort: ""
diff --git a/k8s/nginx-https/container/nginx.conf.template b/k8s/nginx-https/container/nginx.conf.template
index 726e8113..8a85c894 100644
--- a/k8s/nginx-https/container/nginx.conf.template
+++ b/k8s/nginx-https/container/nginx.conf.template
@@ -100,7 +100,7 @@ http {
       add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
       add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
       add_header 'Access-Control-Expose-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
-
+
       proxy_pass http://$openresty_backend:OPENRESTY_BACKEND_PORT;
     }
 
@@ -157,10 +157,14 @@ stream {
 
   # Enable logging when connections are being throttled.
   limit_conn_log_level notice;
-
-  # Allow 16 connections from the same IP address.
-  limit_conn two 16;
-
+
+  # In a multi-node BigchainDB deployment, each node needs around 2^5 (32)
+  # connections via NGINX for inter-node communication. We can bump this up
+  # if there is a requirement to scale up, but we should not remove the limit,
+  # for security reasons.
+  # Allow 256 connections from the same IP address.
+  limit_conn two 256;
+
   # DNS resolver to use for all the backend names specified in this configuration.
resolver DNS_SERVER valid=30s ipv6=off; @@ -169,10 +173,10 @@ stream { map $remote_addr $mdb_backend { default MONGODB_BACKEND_HOST; } - + # Frontend server to forward connections to MDB instance. server { - listen MONGODB_FRONTEND_PORT so_keepalive=10m:1m:5; + listen MONGODB_FRONTEND_PORT so_keepalive=3m:1m:5; preread_timeout 30s; tcp_nodelay on; proxy_pass $mdb_backend:MONGODB_BACKEND_PORT; diff --git a/k8s/nginx-https/nginx-https-dep.yaml b/k8s/nginx-https/nginx-https-dep.yaml index 1ed7408c..57218424 100644 --- a/k8s/nginx-https/nginx-https-dep.yaml +++ b/k8s/nginx-https/nginx-https-dep.yaml @@ -1,17 +1,17 @@ apiVersion: extensions/v1beta1 kind: Deployment metadata: - name: ngx-https-instance-0-dep + name: ngx-instance-0-dep spec: replicas: 1 template: metadata: labels: - app: ngx-https-instance-0-dep + app: ngx-instance-0-dep spec: terminationGracePeriodSeconds: 10 containers: - - name: nginx-https + - name: nginx image: bigchaindb/nginx_https:1.0 imagePullPolicy: IfNotPresent env: @@ -59,7 +59,7 @@ spec: valueFrom: configMapKeyRef: name: vars - key: openresty-instance-name + key: ngx-openresty-instance-name - name: BIGCHAINDB_BACKEND_HOST valueFrom: configMapKeyRef: diff --git a/k8s/nginx-https/nginx-https-svc.yaml b/k8s/nginx-https/nginx-https-svc.yaml index cf1bc998..1d817fe5 100644 --- a/k8s/nginx-https/nginx-https-svc.yaml +++ b/k8s/nginx-https/nginx-https-svc.yaml @@ -1,17 +1,17 @@ apiVersion: v1 kind: Service metadata: - name: ngx-https-instance-0 + name: ngx-instance-0 namespace: default labels: - name: ngx-https-instance-0 + name: ngx-instance-0 annotations: # NOTE: the following annotation is a beta feature and # only available in GCE/GKE and Azure as of now service.beta.kubernetes.io/external-traffic: OnlyLocal spec: selector: - app: ngx-https-instance-0-dep + app: ngx-instance-0-dep ports: - port: "" targetPort: "" diff --git a/k8s/nginx-openresty/nginx-openresty-dep.yaml b/k8s/nginx-openresty/nginx-openresty-dep.yaml index 1b3c6ed4..f8f6a09b 100644 --- a/k8s/nginx-openresty/nginx-openresty-dep.yaml +++ b/k8s/nginx-openresty/nginx-openresty-dep.yaml @@ -12,7 +12,7 @@ spec: terminationGracePeriodSeconds: 10 containers: - name: nginx-openresty - image: bigchaindb/nginx_3scale:2.0 + image: bigchaindb/nginx_3scale:3.0 imagePullPolicy: IfNotPresent env: - name: DNS_SERVER