|
|
|
Step 2: Configure kubectl
-------------------------
|
|
|
|
|
The default location of the kubectl configuration file is ``~/.kube/config``.
|
|
|
|
|
If you don't have that file, then you need to get it.
|
|
|
|
|
|
|
|
|
|
Find out the ``kubectl context`` of your Kubernetes cluster using the command:
|
|
|
|
|
|
|
|
|
|
.. code:: bash
|
|
|
|
|
|
|
|
|
|
$ kubectl config view
|
|
|
|
|
|
|
|
|
|
The context will be the ``name`` of one of the entries under the
``contexts`` list in the output.
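For illustration, the relevant portion of the ``kubectl config view`` output
might look like this (the names here are hypothetical):

.. code:: yaml

   # hypothetical excerpt of "kubectl config view" output
   contexts:
   - context:
       cluster: k8s-bdb-test-cluster-0
       user: k8s-bdb-test-cluster-0-admin
     name: k8s-bdb-test-cluster-0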
|
|
|
|
|
|
|
|
|
|
Assuming that the current context for your cluster is
|
|
|
|
|
``k8s-bdb-test-cluster-0``, you will always specify the context in the
|
|
|
|
|
following commands as ``kubectl --context k8s-bdb-test-cluster-0``.
|
|
|
|
|
|
|
|
|
|
**Azure.** If you deployed your Kubernetes cluster on Azure
|
|
|
|
|
using the Azure CLI 2.0 (as per :doc:`our template <template-kubernetes-azure>`),
|
|
|
|
|
then you can get the ``~/.kube/config`` file using:
|
|
|
|
If the above command fails, try adding ``--ssh-key-file ~/.ssh/<name>``
(i.e. the path to the private key) to it.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Step 3: Connect to the Cluster UI - (optional)
|
|
|
|
|
----------------------------------------------
|
|
|
|
|
|
|
|
|
|
* Get the kubectl context for this cluster using ``kubectl config view``.
|
|
|
|
|
|
|
|
|
|
* For the above commands, the context would be ``k8s-bdb-test-cluster-0``.
|
|
|
|
|
|
|
|
|
|
.. code:: bash
|
|
|
|
|
|
|
|
|
|
$ kubectl --context k8s-bdb-test-cluster-0 proxy -p 8001
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Step 4: Configure the Node
|
|
|
|
|
--------------------------
|
|
|
|
|
|
|
|
|
|
* You need to have all the information :ref:`listed here <Things Each Node Operator Must Do>`.
|
|
|
|
|
|
|
|
|
|
* The information needs to be populated in ``configuration/config-map.yaml``
|
|
|
|
|
and ``configuration/secret.yaml``.
|
|
|
|
|
|
|
|
|
|
* For more details, refer to the document on how to :ref:`configure a node <Configure the Node>`.
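Note that values in ``configuration/secret.yaml`` must be base64-encoded, as
required for Kubernetes Secrets. For example, you can encode a value (the
password below is a hypothetical placeholder) from a Linux shell using:

.. code:: bash

   # base64-encode a secret value; printf avoids a trailing newline
   printf 'mongodb-password' | base64

Paste the printed string into the corresponding field of
``configuration/secret.yaml``.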
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Step 4: Start the NGINX Service
|
|
|
|
|
-------------------------------
|
|
|
|
|
|
|
|
|
|
* This will give us a public IP for the cluster.
|
|
|
|
|
|
|
|
|
|
* Once you complete this step, you might need to wait up to 10 minutes for the
  public IP to be assigned.
|
|
|
|
|
|
|
|
|
|
* You have the option to use vanilla NGINX without HTTPS support or an
|
|
|
|
|
OpenResty NGINX integrated with 3scale API Gateway.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Step 4.1: Vanilla NGINX
|
|
|
|
|
^^^^^^^^^^^^^^^^^^^^^^^
|
|
|
|
|
|
|
|
|
|
* This configuration is located in the file ``nginx/nginx-svc.yaml``.
|
|
|
|
|
|
|
|
|
|
* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
|
|
|
|
|
set in ``ngx-instance-name`` in the ConfigMap above.
|
|
|
|
|
|
|
|
|
|
* Set the ``spec.selector.app`` to the value set in ``ngx-instance-name`` in
|
|
|
|
|
the ConfigMap followed by ``-dep``. For example, if the value set in the
|
|
|
|
|
``ngx-instance-name`` is ``ngx-instance-0``, set the
|
|
|
|
|
``spec.selector.app`` to ``ngx-instance-0-dep``.
|
|
|
|
|
|
|
|
|
|
* Start the Kubernetes Service:
|
|
|
|
|
|
|
|
|
|
.. code:: bash
|
|
|
|
|
|
|
|
|
|
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx/nginx-svc.yaml
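As a sketch, assuming ``ngx-instance-name`` was set to ``ngx-instance-0``, the
edited fields in ``nginx/nginx-svc.yaml`` would line up as follows (all other
fields omitted):

.. code:: yaml

   apiVersion: v1
   kind: Service
   metadata:
     name: ngx-instance-0
     labels:
       name: ngx-instance-0
   spec:
     selector:
       app: ngx-instance-0-dep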
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Step 4.2: OpenResty NGINX + 3scale
|
|
|
|
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
|
|
|
|
|
|
|
|
|
* This configuration is located in the file ``nginx-3scale/nginx-3scale-svc.yaml``.
|
|
|
|
|
|
|
|
|
|
* You have to enable HTTPS for this one and will need an HTTPS certificate
  for your domain.
|
|
|
|
|
|
|
|
|
|
* You should have already created the Kubernetes Secret in the previous
|
|
|
|
|
step.
|
|
|
|
|
|
|
|
|
|
* Set the ``spec.selector.app`` to the value set in ``ngx-instance-name`` in
|
|
|
|
|
the ConfigMap followed by ``-dep``. For example, if the value set in the
|
|
|
|
|
``ngx-instance-name`` is ``ngx-instance-0``, set the
|
|
|
|
|
``spec.selector.app`` to ``ngx-instance-0-dep``.
|
|
|
|
|
|
|
|
|
|
* Start the Kubernetes Service:
|
|
|
|
|
|
|
|
|
|
.. code:: bash
|
|
|
|
|
|
|
|
|
|
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-3scale/nginx-3scale-svc.yaml
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Step 5: Assign DNS Name to the NGINX Public IP
|
|
|
|
|
----------------------------------------------
|
|
|
|
|
|
|
|
|
|
* This step is required only if you are planning to set up multiple
|
|
|
|
|
`BigchainDB nodes
|
|
|
|
|
<https://docs.bigchaindb.com/en/latest/terminology.html>`_ or are using
|
|
|
|
|
HTTPS certificates tied to a domain.
|
|
|
|
|
|
|
|
|
|
* The following command can help you find out if the nginx service started
|
|
|
|
|
above has been assigned a public IP or external IP address:
|
|
|
|
|
|
|
|
|
|
.. code:: bash
|
|
|
|
|
|
|
|
|
|
$ kubectl --context k8s-bdb-test-cluster-0 get svc -w
|
|
|
|
|
|
|
|
|
|
* Once a public IP is assigned, you can log in to the Azure portal and map it to
|
|
|
|
|
a DNS name.
|
|
|
|
|
|
|
|
|
|
* We usually assign ``bdb-test-cluster-0``, ``bdb-test-cluster-1`` and
  so on in our documentation.
|
|
|
|
|
|
|
|
|
|
* Let us assume that we assigned the unique name of ``bdb-test-cluster-0`` here.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
**Set up DNS mapping in Azure.**
|
|
|
|
|
|
|
|
|
|
Select the current Azure resource group and look for the ``Public IP``
|
|
|
|
|
resource. You should see at least 2 entries there - one for the Kubernetes
|
|
|
|
|
master and the other for the MongoDB instance. You may have to ``Refresh`` the
|
|
|
|
|
Azure web page listing the resources in a resource group for the latest
|
|
|
|
|
changes to be reflected.
|
|
|
|
|
Select the ``Public IP`` resource that is attached to your service (it should
|
|
|
|
|
have the Kubernetes cluster name along with a random string),
|
|
|
|
|
select ``Configuration``, add the DNS name that was added in the
|
|
|
|
|
ConfigMap earlier, click ``Save``, and wait for the changes to be applied.
|
|
|
|
|
|
|
|
|
|
To verify the DNS setting is operational, you can run ``nslookup <dns
|
|
|
|
|
name added in ConfigMap>`` from your local Linux shell.
|
|
|
|
|
|
|
|
|
|
This will ensure that when you scale the replica set later, other MongoDB
|
|
|
|
|
members in the replica set can reach this instance.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Step 6: Start the MongoDB Kubernetes Service
|
|
|
|
|
--------------------------------------------
|
|
|
|
|
|
|
|
|
|
* This configuration is located in the file ``mongodb/mongo-svc.yaml``.
|
|
|
|
|
|
|
|
|
|
* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
|
|
|
|
|
set in ``mdb-instance-name`` in the ConfigMap above.
|
|
|
|
|
|
|
|
|
|
* Set the ``spec.selector.app`` to the value set in ``mdb-instance-name`` in
|
|
|
|
|
the ConfigMap followed by ``-ss``. For example, if the value set in the
|
|
|
|
|
``mdb-instance-name`` is ``mdb-instance-0``, set the
|
|
|
|
|
``spec.selector.app`` to ``mdb-instance-0-ss``.
|
|
|
|
|
|
|
|
|
|
* Start the Kubernetes Service:
|
|
|
|
|
|
|
|
|
|
.. code:: bash
|
|
|
|
|
|
|
|
|
|
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-svc.yaml
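As a sketch, assuming ``mdb-instance-name`` was set to ``mdb-instance-0``, the
edited fields in ``mongodb/mongo-svc.yaml`` would look like this (other fields
omitted):

.. code:: yaml

   apiVersion: v1
   kind: Service
   metadata:
     name: mdb-instance-0
     labels:
       name: mdb-instance-0
   spec:
     selector:
       app: mdb-instance-0-ss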
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Step 7: Start the BigchainDB Kubernetes Service
|
|
|
|
|
-----------------------------------------------
|
|
|
|
|
|
|
|
|
|
* This configuration is located in the file ``bigchaindb/bigchaindb-svc.yaml``.
|
|
|
|
|
|
|
|
|
|
* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
|
|
|
|
|
set in ``bdb-instance-name`` in the ConfigMap above.
|
|
|
|
|
|
|
|
|
|
* Set the ``spec.selector.app`` to the value set in ``bdb-instance-name`` in
|
|
|
|
|
the ConfigMap followed by ``-dep``. For example, if the value set in the
|
|
|
|
|
``bdb-instance-name`` is ``bdb-instance-0``, set the
|
|
|
|
|
``spec.selector.app`` to ``bdb-instance-0-dep``.
|
|
|
|
|
|
|
|
|
|
* Start the Kubernetes Service:
|
|
|
|
|
|
|
|
|
|
.. code:: bash
|
|
|
|
|
|
|
|
|
|
$ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-svc.yaml
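As a sketch, assuming ``bdb-instance-name`` was set to ``bdb-instance-0``, the
edited fields in ``bigchaindb/bigchaindb-svc.yaml`` would look like this
(other fields omitted):

.. code:: yaml

   apiVersion: v1
   kind: Service
   metadata:
     name: bdb-instance-0
     labels:
       name: bdb-instance-0
   spec:
     selector:
       app: bdb-instance-0-dep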
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Step 8: Start the NGINX Kubernetes Deployment
|
|
|
|
|
---------------------------------------------
|
|
|
|
|
|
|
|
|
|
* NGINX is used as a proxy to both the BigchainDB and MongoDB instances in
|
|
|
|
|
the node. It proxies HTTP requests on port 80 to the BigchainDB backend,
|
|
|
|
|
and TCP connections on port 27017 to the MongoDB backend.
|
|
|
|
|
|
|
|
|
|
* As in step 4, you have the option to use vanilla NGINX or an OpenResty
|
|
|
|
|
NGINX integrated with 3scale API Gateway.
|
|
|
|
|
|
|
|
|
|
Step 8.1: Vanilla NGINX
|
|
|
|
|
^^^^^^^^^^^^^^^^^^^^^^^
|
|
|
|
|
|
|
|
|
|
* This configuration is located in the file ``nginx/nginx-dep.yaml``.
|
|
|
|
|
|
|
|
|
|
* Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
|
|
|
|
|
to the value set in ``ngx-instance-name`` in the ConfigMap followed by a
|
|
|
|
|
``-dep``. For example, if the value set in the ``ngx-instance-name`` is
|
|
|
|
|
``ngx-instance-0``, set the fields to ``ngx-instance-0-dep``.
|
|
|
|
|
|
|
|
|
|
* Set ``MONGODB_BACKEND_HOST`` env var to
|
|
|
|
|
the value set in ``mdb-instance-name`` in the ConfigMap, followed by
|
|
|
|
|
``.default.svc.cluster.local``. For example, if the value set in the
|
|
|
|
|
``mdb-instance-name`` is ``mdb-instance-0``, set the
|
|
|
|
|
``MONGODB_BACKEND_HOST`` env var to
|
|
|
|
|
``mdb-instance-0.default.svc.cluster.local``.
|
|
|
|
|
|
|
|
|
|
* Set ``BIGCHAINDB_BACKEND_HOST`` env var to
|
|
|
|
|
the value set in ``bdb-instance-name`` in the ConfigMap, followed by
|
|
|
|
|
``.default.svc.cluster.local``. For example, if the value set in the
|
|
|
|
|
``bdb-instance-name`` is ``bdb-instance-0``, set the
|
|
|
|
|
``BIGCHAINDB_BACKEND_HOST`` env var to
|
|
|
|
|
``bdb-instance-0.default.svc.cluster.local``.
|
|
|
|
|
|
|
|
|
|
* Set ``MONGODB_FRONTEND_PORT`` to 27017, or the port number on which you
  want to expose the MongoDB service.

* Set ``BIGCHAINDB_FRONTEND_PORT`` to 80, or the port number on which you
  want to expose the BigchainDB service.
|
|
|
|
|
|
|
|
|
|
* Start the Kubernetes Deployment:
|
|
|
|
|
|
|
|
|
|
.. code:: bash
|
|
|
|
|
|
|
|
|
|
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx/nginx-dep.yaml
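With the example names used above, the container ``env`` entries in
``nginx/nginx-dep.yaml`` would be along these lines (a sketch, not the full
manifest):

.. code:: yaml

   env:
   - name: MONGODB_BACKEND_HOST
     value: mdb-instance-0.default.svc.cluster.local
   - name: MONGODB_FRONTEND_PORT
     value: "27017"
   - name: BIGCHAINDB_BACKEND_HOST
     value: bdb-instance-0.default.svc.cluster.local
   - name: BIGCHAINDB_FRONTEND_PORT
     value: "80"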
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Step 8.2: OpenResty NGINX + 3scale
|
|
|
|
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
|
|
|
|
|
|
|
|
|
* This configuration is located in the file
|
|
|
|
|
``nginx-3scale/nginx-3scale-dep.yaml``.
|
|
|
|
|
|
|
|
|
|
* Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
|
|
|
|
|
to the value set in ``ngx-instance-name`` in the ConfigMap followed by a
|
|
|
|
|
``-dep``. For example, if the value set in the ``ngx-instance-name`` is
|
|
|
|
|
``ngx-instance-0``, set the fields to ``ngx-instance-0-dep``.
|
|
|
|
|
|
|
|
|
|
* Set ``MONGODB_BACKEND_HOST`` env var to
|
|
|
|
|
the value set in ``mdb-instance-name`` in the ConfigMap, followed by
|
|
|
|
|
``.default.svc.cluster.local``. For example, if the value set in the
|
|
|
|
|
``mdb-instance-name`` is ``mdb-instance-0``, set the
|
|
|
|
|
``MONGODB_BACKEND_HOST`` env var to
|
|
|
|
|
``mdb-instance-0.default.svc.cluster.local``.
|
|
|
|
|
|
|
|
|
|
* Set ``BIGCHAINDB_BACKEND_HOST`` env var to
|
|
|
|
|
the value set in ``bdb-instance-name`` in the ConfigMap, followed by
|
|
|
|
|
``.default.svc.cluster.local``. For example, if the value set in the
|
|
|
|
|
``bdb-instance-name`` is ``bdb-instance-0``, set the
|
|
|
|
|
``BIGCHAINDB_BACKEND_HOST`` env var to
|
|
|
|
|
``bdb-instance-0.default.svc.cluster.local``.
|
|
|
|
|
|
|
|
|
|
* Set ``MONGODB_FRONTEND_PORT`` to 27017, or the port number on which you
|
|
|
|
|
want to expose the MongoDB service.
|
|
|
|
|
|
|
|
|
|
* Set ``BIGCHAINDB_FRONTEND_PORT`` to 443, or the port number on which you
|
|
|
|
|
want to expose the BigchainDB service over HTTPS.
|
|
|
|
|
|
|
|
|
|
* Start the Kubernetes Deployment:
|
|
|
|
|
|
|
|
|
|
.. code:: bash
|
|
|
|
|
|
|
|
|
|
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-3scale/nginx-3scale-dep.yaml
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Step 9: Create Kubernetes Storage Classes for MongoDB
|
|
|
|
|
-----------------------------------------------------
|
|
|
|
|
|
|
|
|
|
MongoDB needs somewhere to store its data persistently,
|
|
|
|
|
outside the container where MongoDB is running.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The first thing to do is create the Kubernetes storage classes.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
**Set up Storage Classes in Azure.**
|
|
|
|
|
|
|
|
|
|
First, you need an Azure storage account.
|
|
|
|
|
If you deployed your Kubernetes cluster on Azure
|
|
|
|
|
using the Azure CLI 2.0
|
|
|
|
|
(as per :doc:`our template <template-kubernetes-azure>`),
|
|
|
|
For future reference, the command to create a storage account is
`az storage account create <https://docs.microsoft.com/en-us/cli/azure/storage/account#create>`_.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The Kubernetes template for the Storage Class configuration is located in the
file ``mongodb/mongo-sc.yaml``.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
You may have to update the ``parameters.location`` field in the file to
|
|
|
|
|
specify the location you are using in Azure.
|
|
|
|
|
|
|
|
|
|
Create the required storage classes using:
|
|
|
|
|
|
|
|
|
|
.. code:: bash
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-sc.yaml
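For reference, an Azure storage class definition in ``mongodb/mongo-sc.yaml``
is roughly of this shape; the class name, ``skuName`` and ``location`` values
here are assumptions for illustration:

.. code:: yaml

   kind: StorageClass
   apiVersion: storage.k8s.io/v1beta1
   metadata:
     name: slow-db
   provisioner: kubernetes.io/azure-disk
   parameters:
     skuName: Standard_LRS
     location: westeurope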
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
You can check if it worked using ``kubectl get storageclasses``.
|
|
|
|
Kubernetes just looks for a storageAccount
with the specified skuName and location.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Step 10: Create Kubernetes Persistent Volume Claims
|
|
|
|
|
---------------------------------------------------
|
|
|
|
|
|
|
|
|
|
Next, you will create two PersistentVolumeClaim objects ``mongo-db-claim`` and
|
|
|
|
|
``mongo-configdb-claim``.
|
|
|
|
|
|
|
|
|
|
This configuration is located in the file ``mongodb/mongo-pvc.yaml``.
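A PersistentVolumeClaim in that file is roughly of this shape; the requested
size here is an assumption for illustration:

.. code:: yaml

   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: mongo-db-claim
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 20Gi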
|
|
|
|
|
|
|
|
|
|
Note how there's no explicit mention of Azure, AWS or whatever.
|
|
|
|
|
``ReadWriteOnce`` (RWO) means the volume can be mounted as
|
|
|
|
Create the required Persistent Volume Claims using:
|
|
|
|
|
|
|
|
|
|
.. code:: bash
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-pvc.yaml
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
You can check its status using: ``kubectl get pvc -w``
|
|
|
|
Initially, the status of persistent volume claims might be "Pending"
but it should become "Bound" fairly quickly.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Step 11: Start a Kubernetes StatefulSet for MongoDB
|
|
|
|
|
---------------------------------------------------
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
* This configuration is located in the file ``mongodb/mongo-ss.yaml``.
|
|
|
|
|
|
|
|
|
|
* Set the ``spec.serviceName`` to the value set in ``mdb-instance-name`` in
|
|
|
|
|
the ConfigMap.
|
|
|
|
|
For example, if the value set in the ``mdb-instance-name``
|
|
|
|
|
is ``mdb-instance-0``, set the field to ``mdb-instance-0``.
|
|
|
|
|
|
|
|
|
|
* Set ``metadata.name``, ``spec.template.metadata.name`` and
|
|
|
|
|
``spec.template.metadata.labels.app`` to the value set in
|
|
|
|
|
``mdb-instance-name`` in the ConfigMap, followed by
|
|
|
|
|
``-ss``.
|
|
|
|
|
For example, if the value set in the
|
|
|
|
|
``mdb-instance-name`` is ``mdb-instance-0``, set the fields to the value
|
|
|
|
|
  ``mdb-instance-0-ss``.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
* Note how the MongoDB container uses the ``mongo-db-claim`` and the
|
|
|
|
|
``mongo-configdb-claim`` PersistentVolumeClaims for its ``/data/db`` and
|
|
|
|
|
  ``/data/configdb`` directories (mount path).
|
|
|
|
|
|
|
|
|
|
* Note also that we use the pod's ``securityContext.capabilities.add``
|
|
|
|
|
specification to add the ``FOWNER`` capability to the container. That is
|
|
|
|
|
because MongoDB container has the user ``mongodb``, with uid ``999`` and
|
|
|
|
|
group ``mongodb``, with gid ``999``.
|
|
|
|
|
When this container runs on a host with a mounted disk, the writes fail
|
|
|
|
|
when there is no user with uid ``999``. To avoid this, we use the Docker
|
|
|
|
|
feature of ``--cap-add=FOWNER``. This bypasses the uid and gid permission
|
|
|
|
|
checks during writes and allows data to be persisted to disk.
|
|
|
|
|
Refer to the `Docker docs
|
|
|
|
|
<https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities>`_
|
|
|
|
|
for details.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
* As we gain more experience running MongoDB in testing and production, we
|
|
|
|
|
will tweak the ``resources.limits.cpu`` and ``resources.limits.memory``.
|
|
|
|
|
|
|
|
|
|
* It might take up to 10 minutes for the disks to be created and attached to
|
|
|
|
|
the pod. The UI might show that the pod has errored with the
|
|
|
|
|
message "timeout expired waiting for volumes to attach/mount". Use the CLI
|
|
|
|
|
below to check the status of the pod in this case, instead of the UI. This
|
|
|
|
|
happens due to a bug in Azure ACS.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
.. code:: bash
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
$ kubectl --context k8s-bdb-test-cluster-0 get po -w
|
|
|
|
|
|
|
|
|
|
* Create the MongoDB StatefulSet using:
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
.. code:: bash
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss.yaml
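As a sketch, with ``mdb-instance-name`` set to ``mdb-instance-0``, the key
fields in ``mongodb/mongo-ss.yaml`` line up as follows (the ``apiVersion``
may differ with your Kubernetes version; other fields omitted):

.. code:: yaml

   apiVersion: apps/v1beta1
   kind: StatefulSet
   metadata:
     name: mdb-instance-0-ss
   spec:
     serviceName: mdb-instance-0
     template:
       metadata:
         name: mdb-instance-0-ss
         labels:
           app: mdb-instance-0-ss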
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Step 13: Start a Kubernetes Deployment for MongoDB Monitoring Agent
|
|
|
|
|
-------------------------------------------------------------------
|
|
|
|
|
|
|
|
|
|
* This configuration is located in the file
|
|
|
|
|
``mongodb-monitoring-agent/mongo-mon-dep.yaml``.
|
|
|
|
|
|
|
|
|
|
* Set ``metadata.name``, ``spec.template.metadata.name`` and
|
|
|
|
|
``spec.template.metadata.labels.app`` to the value set in
|
|
|
|
|
``mdb-mon-instance-name`` in the ConfigMap, followed by
|
|
|
|
|
``-dep``.
|
|
|
|
|
For example, if the value set in the
|
|
|
|
|
``mdb-mon-instance-name`` is ``mdb-mon-instance-0``, set the fields to the
|
|
|
|
|
  value ``mdb-mon-instance-0-dep``.
|
|
|
|
|
|
|
|
|
|
* Start the Kubernetes Deployment using:
|
|
|
|
|
|
|
|
|
|
.. code:: bash
|
|
|
|
|
|
|
|
|
|
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-monitoring-agent/mongo-mon-dep.yaml
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent
|
|
|
|
|
---------------------------------------------------------------
|
|
|
|
|
|
|
|
|
|
* This configuration is located in the file
|
|
|
|
|
``mongodb-backup-agent/mongo-backup-dep.yaml``.
|
|
|
|
|
|
|
|
|
|
* Set ``metadata.name``, ``spec.template.metadata.name`` and
|
|
|
|
|
``spec.template.metadata.labels.app`` to the value set in
|
|
|
|
|
``mdb-bak-instance-name`` in the ConfigMap, followed by
|
|
|
|
|
``-dep``.
|
|
|
|
|
For example, if the value set in the
|
|
|
|
|
``mdb-bak-instance-name`` is ``mdb-bak-instance-0``, set the fields to the
|
|
|
|
|
  value ``mdb-bak-instance-0-dep``.
|
|
|
|
|
|
|
|
|
|
* Start the Kubernetes Deployment using:
|
|
|
|
|
|
|
|
|
|
.. code:: bash
|
|
|
|
|
|
|
|
|
|
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-backup-agent/mongo-backup-dep.yaml
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Step 16: Configure the MongoDB Cloud Manager
|
|
|
|
|
--------------------------------------------
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
* Refer to the
|
|
|
|
|
:ref:`documentation <Configure MongoDB Cloud Manager for Monitoring and Backup>`
|
|
|
|
|
for details on how to configure the MongoDB Cloud Manager to enable
|
|
|
|
|
monitoring and backup.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Step 17: Start a Kubernetes Deployment for BigchainDB
-----------------------------------------------------
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
* This configuration is located in the file
|
|
|
|
|
``bigchaindb/bigchaindb-dep.yaml``.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
* Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
|
|
|
|
|
value set in ``bdb-instance-name`` in the ConfigMap, followed by
|
|
|
|
|
``-dep``.
|
|
|
|
|
For example, if the value set in the
|
|
|
|
|
``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the
|
|
|
|
|
  value ``bdb-instance-0-dep``.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
We also plan to specify scheduling policies for the BigchainDB deployment so
|
|
|
|
|
that we ensure that BigchainDB and MongoDB are running in separate nodes, and
|
|
|
|
|
build security around the globally exposed port ``9984``.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
* Set ``BIGCHAINDB_DATABASE_HOST`` to the value set in ``mdb-instance-name``
|
|
|
|
|
in the ConfigMap.
|
|
|
|
|
For example, if the value set in the ``mdb-instance-name`` is
|
|
|
|
|
``mdb-instance-0``, set the field to the value ``mdb-instance-0``.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
* Set the appropriate ``BIGCHAINDB_KEYPAIR_PUBLIC``,
|
|
|
|
|
``BIGCHAINDB_KEYPAIR_PRIVATE`` values.
|
|
|
|
|
|
|
|
|
|
$ kubectl apply -f nginx-cm.yaml
|
|
|
|
|
|
|
|
|
|
Get the file ``nginx-dep.yaml`` from GitHub using:
|
|
|
|
|
|
|
|
|
|
.. code:: bash
|
|
|
|
|
* One way to generate BigchainDB keypair is to run a Python shell with
|
|
|
|
|
the command
|
|
|
|
|
``from bigchaindb_driver import crypto; crypto.generate_keypair()``.
|
|
|
|
|
|
|
|
|
|
$ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/nginx/nginx-dep.yaml
|
|
|
|
|
* As we gain more experience running BigchainDB in testing and production,
|
|
|
|
|
we will tweak the ``resources.limits`` values for CPU and memory, and as
|
|
|
|
|
richer monitoring and probing becomes available in BigchainDB, we will
|
|
|
|
|
tweak the ``livenessProbe`` and ``readinessProbe`` parameters.
|
|
|
|
|
|
|
|
|
|
* Create the BigchainDB Deployment using:
|
|
|
|
|
|
|
|
|
|
Create the NGINX deployment using:
|
|
|
|
|
.. code:: bash
|
|
|
|
|
|
|
|
|
|
.. code:: bash
|
|
|
|
|
|
|
|
|
|
$ kubectl apply -f nginx-dep.yaml
|
|
|
|
|
$ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-dep.yaml
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
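Building the whitelist requires resolving the hostnames of the other MongoDB instances to IP addresses. Doing that by hand is error-prone, so it can be scripted; here is a minimal sketch using only the Python standard library (the hostnames below are placeholders, not names from this deployment):

.. code:: python

   import socket

   # Placeholder hostnames; substitute the hostnames of the other MongoDB
   # instances in your replica set.
   MDB_HOSTNAMES = ["mdb-instance-1.example.com", "mdb-instance-2.example.com"]

   def resolve_whitelist(hostnames):
       """Map each hostname to its IPv4 address for the NGINX whitelist."""
       return {host: socket.gethostbyname(host) for host in hostnames}

If a lookup fails, ``socket.gethostbyname`` raises ``socket.gaierror``, which is a useful early warning that a whitelist entry would be wrong.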


Step 11: Verify the BigchainDB Node Setup
-----------------------------------------

Step 11.1: Testing Internally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Run a container that provides utilities like ``nslookup``, ``curl`` and
``dig`` on the cluster, and query the internal DNS and IP endpoints.

.. code:: bash

   $ kubectl run -it toolbox --image <docker image to run> --restart=Never --rm

There is a generic image based on ``alpine:3.5`` with the required utilities
hosted at Docker Hub under
`bigchaindb/toolbox <https://hub.docker.com/r/bigchaindb/toolbox/>`_.
The corresponding
`Dockerfile <https://github.com/bigchaindb/bigchaindb/blob/master/k8s/toolbox/Dockerfile>`_
is in the ``bigchaindb/bigchaindb`` repository on GitHub.

You can use it as follows to get started immediately:

.. code:: bash

   $ kubectl --context k8s-bdb-test-cluster-0 \
     run -it toolbox \
     --image bigchaindb/toolbox \
     --image-pull-policy=Always \
     --restart=Never --rm

It will drop you to a shell prompt.
Now you can query the ``mdb`` and ``bdb`` service details.

.. code:: bash

   # nslookup mdb-svc
   # nslookup bdb-svc
   # nslookup ngx-svc
   # dig +noall +answer _mdb-port._tcp.mdb-svc.default.svc.cluster.local SRV
   # dig +noall +answer _bdb-port._tcp.bdb-svc.default.svc.cluster.local SRV
   # dig +noall +answer _ngx-public-mdb-port._tcp.ngx-svc.default.svc.cluster.local SRV
   # dig +noall +answer _ngx-public-bdb-port._tcp.ngx-svc.default.svc.cluster.local SRV
   # curl -X GET http://mdb-svc:27017
   # curl -X GET http://bdb-svc:9984
   # curl -X GET http://ngx-svc:80
   # curl -X GET http://ngx-svc:27017

The ``nslookup`` commands should output the configured IP addresses of the
various services in the cluster.

The ``dig`` commands should output the configured ports of the corresponding
services.

Finally, the ``curl`` commands test the availability of the services
themselves.
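These spot checks can also be scripted instead of typed into the toolbox shell. Here is a minimal sketch of an HTTP availability probe using only the Python standard library (nothing in it is BigchainDB-specific; the URLs you pass in would be the in-cluster service names used above):

.. code:: python

   from urllib.error import HTTPError, URLError
   from urllib.request import urlopen

   def http_status(url, timeout=5):
       """Return the HTTP status code for a GET on ``url``, or None if the
       service could not be reached at all."""
       try:
           with urlopen(url, timeout=timeout) as resp:
               return resp.getcode()
       except HTTPError as err:
           return err.code  # the service is up, but returned an error status
       except URLError:
           return None  # connection refused, DNS failure, timeout, ...

For example, ``http_status('http://bdb-svc:9984')`` run from inside the cluster should return ``200`` if the BigchainDB service is up.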
* Verify the MongoDB instance:

  .. code:: bash

     $ nslookup mdb-instance-0
     $ dig +noall +answer _mdb-port._tcp.mdb-instance-0.default.svc.cluster.local SRV
     $ curl -X GET http://mdb-instance-0:27017

* Verify the BigchainDB instance:

  .. code:: bash

     $ nslookup bdb-instance-0
     $ dig +noall +answer _bdb-port._tcp.bdb-instance-0.default.svc.cluster.local SRV
     $ curl -X GET http://bdb-instance-0:9984

* Verify the NGINX instance:

  .. code:: bash

     $ nslookup ngx-instance-0
     $ dig +noall +answer _ngx-public-mdb-port._tcp.ngx-instance-0.default.svc.cluster.local SRV
     $ curl -X GET http://ngx-instance-0:27017 # results in curl: (56) Recv failure: Connection reset by peer
     $ dig +noall +answer _ngx-public-bdb-port._tcp.ngx-instance-0.default.svc.cluster.local SRV

* If you are running the vanilla NGINX instance, run:

  .. code:: bash

     $ curl -X GET http://ngx-instance-0:80

* If you are running the OpenResty NGINX + 3scale instance, run:

  .. code:: bash

     $ curl -X GET https://ngx-instance-0

* Check the MongoDB monitoring and backup agents on the MongoDB Cloud Manager
  portal to verify that they are working fine.

* Send some transactions to BigchainDB and verify that it's up and running!

Step 11.2: Testing Externally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Try to access ``<dns/ip of your exposed bigchaindb service endpoint>:80``
in your browser. You should receive a JSON response that shows the BigchainDB
server version, among other things.

Try to access ``<dns/ip of your exposed mongodb service endpoint>:27017``
in your browser. If your IP is in the whitelist, you will receive a message
from the MongoDB instance stating that it doesn't allow HTTP connections to
the port anymore. If your IP is not in the whitelist, your access will be
blocked and you will not see any response from the MongoDB instance.
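The whitelist behavior can also be checked programmatically: a blocked client gets no TCP response at all, while a whitelisted one at least completes the connection. A minimal sketch using only the Python standard library (the host and port are whatever endpoint you exposed; this is a generic reachability probe, not part of the BigchainDB tooling):

.. code:: python

   import socket

   def tcp_reachable(host, port, timeout=3):
       """Return True if a TCP connection to (host, port) can be established,
       False if it is refused or times out (e.g. dropped by the whitelist)."""
       try:
           with socket.create_connection((host, port), timeout=timeout):
               return True
       except OSError:
           return False

Note that a firewall that silently drops packets and a plain network timeout look the same to this probe, so treat ``False`` as "no response", not as proof of a whitelist block.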