mirror of https://github.com/bigchaindb/bigchaindb.git
synced 2024-06-26 03:06:43 +02:00

Remove references to the BigchainDB 1.x deployment strategy

- Remove references to the existing deployment model
- Address comments, fix typos, minor structure changes

This commit is contained in:
parent 93070bf9fe
commit 03219a9371

@@ -10,7 +10,6 @@ BigchainDB Server Documentation
    production-nodes/index
    clusters
    production-deployment-template/index
    production-deployment-template-tendermint/index
    dev-and-test/index
    server-reference/index
    http-client-server-api
@@ -1,210 +0,0 @@

Architecture of a BigchainDB Node
==================================

A BigchainDB Production deployment is hosted on a Kubernetes cluster and includes:

* NGINX, OpenResty, BigchainDB, MongoDB and Tendermint
  `Kubernetes Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.
* NGINX, OpenResty, BigchainDB and MongoDB Monitoring Agent
  `Kubernetes Deployments <https://kubernetes.io/docs/concepts/workloads/controllers/deployment/>`_.
* MongoDB and Tendermint `Kubernetes StatefulSets <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/>`_.
* Third-party services like `3scale <https://3scale.net>`_,
  `MongoDB Cloud Manager <https://cloud.mongodb.com>`_ and the
  `Azure Operations Management Suite
  <https://docs.microsoft.com/en-us/azure/operations-management-suite/>`_.


.. _bigchaindb-node:

BigchainDB Node
---------------

.. aafig::
    :aspect: 60
    :scale: 100
    :background: #rgb
    :proportional:

    [Architecture diagram. "BigchainDB API" traffic (port 443) and
    "Tendermint P2P communication / public key exchange" traffic
    (ports 46656/9986) arrive at the NGINX Service and its Deployment,
    which applies rate-limiting logic, terminates HTTPS, and routes
    POST requests to the OpenResty Service/Deployment, GET requests to
    the BigchainDB Service/Deployment, and port 27017 to the MongoDB
    Service/StatefulSet. A dedicated NGINX Deployment serves the node's
    public key on port 9986 for the Tendermint Service/StatefulSet,
    which communicates bidirectionally with BigchainDB (the app), its
    BFT consensus engine. OpenResty's auth logic talks to "3scale"; the
    MongoDB Monitoring Agent reports to "MongoDB Cloud".]

.. note::

   The arrows in the diagram represent client-server communication. For
   example, A-->B implies that A initiates the connection to B.
   The arrows do not represent the flow of data; the communication channel
   is always full duplex.


NGINX: Entrypoint and Gateway
-----------------------------

We use NGINX as an HTTP proxy on port 443 (configurable) at the cloud
entrypoint for:

#. Rate Limiting: We configure NGINX to allow only a certain number of
   requests (the number is configurable), which helps prevent DoS attacks.

#. HTTPS Termination: For now, the HTTPS connection does not carry through
   all the way to BigchainDB; it terminates at NGINX.

#. Request Routing: HTTPS connections on port 443 (or the configured
   BigchainDB public API port) are proxied to:

   #. the OpenResty Service if the request is a POST, or
   #. the BigchainDB Service if the request is a GET.
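As a sketch, such method-based routing can be expressed in NGINX
configuration roughly as follows. The upstream names and ports here are
illustrative placeholders, not the template's actual values:

```nginx
# Illustrative sketch only: upstream names and ports are placeholders.
location / {
    # POST requests go through OpenResty for 3scale auth checks.
    if ($request_method = POST) {
        proxy_pass http://openresty-instance-1:80;
    }
    # Everything else (e.g. GET) goes straight to BigchainDB.
    proxy_pass http://bdb-instance-1:9984;
}
```

The template's real configuration lives in the NGINX container's config
files; this fragment only shows the shape of the POST/GET split.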


We use an NGINX TCP proxy on port 27017 (configurable) at the cloud
entrypoint for:

#. Rate Limiting: We configure NGINX to allow only a certain number of
   requests (the number is configurable), which helps prevent DoS attacks.

#. Request Routing: Connections on port 27017 (or the configured MongoDB
   public API port) are proxied to the MongoDB Service.
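In NGINX terms, this TCP proxying happens in a ``stream`` block rather than
an ``http`` block. A minimal illustrative sketch (the upstream name is a
placeholder):

```nginx
# Illustrative sketch only: plain TCP proxying, no HTTP processing.
stream {
    server {
        listen 27017;
        # Forward raw TCP to the MongoDB Service inside the cluster.
        proxy_pass mdb-instance-1:27017;
    }
}
```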


OpenResty: API Management, Authentication and Authorization
-----------------------------------------------------------

We use `OpenResty <https://openresty.org/>`_ to perform authorization checks
with 3scale, using the ``app_id`` and ``app_key`` headers in the HTTP request.

OpenResty is NGINX plus a bundle of other
`components <https://openresty.org/en/components.html>`_. We primarily depend
on its LuaJIT compiler to execute the functions that authenticate the
``app_id`` and ``app_key`` against the 3scale backend.


MongoDB: Standalone
-------------------

We use MongoDB as the backend database for BigchainDB.

We achieve security by mitigating DoS attacks at the NGINX proxy layer and by
ensuring that MongoDB has TLS enabled for all its connections.


Tendermint: BFT Consensus Engine
--------------------------------

We use Tendermint as the backend consensus engine for BFT replication of
BigchainDB. In a multi-node deployment, Tendermint nodes/peers communicate
with each other via the public ports exposed by the NGINX gateway.

We use port **9986** (configurable) to let Tendermint nodes access the public
keys of their peers, and port **46656** (configurable) for the rest of the
communication between the peers.
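For example, a peer's public key can be fetched over the pub-key port. The
instance name below is an illustrative placeholder; the URL is built from the
instance name and the configured port:

```shell
# Illustrative: tm-instance-1 and 9986 stand in for your actual
# instance name and configured pub-key port.
TM_INSTANCE="tm-instance-1"
TM_PUB_KEY_PORT="9986"
PUB_KEY_URL="http://${TM_INSTANCE}:${TM_PUB_KEY_PORT}/pub_key.json"
echo "$PUB_KEY_URL"
# Fetch it with: curl -s "$PUB_KEY_URL"
```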

@@ -1,20 +0,0 @@
Production Deployment Template: Tendermint BFT
==============================================

This section outlines how *we* deploy production BigchainDB clusters,
integrated with Tendermint (the backend for BFT consensus),
on Microsoft Azure using Kubernetes. We improve it constantly.
You may choose to use it as a template or reference for your own deployment,
but *we make no claim that it is suitable for your purposes*.
Feel free to change things to suit your needs or preferences.


.. toctree::
   :maxdepth: 1

   workflow
   architecture
   node-on-kubernetes
   node-config-map-and-secrets
   bigchaindb-network-on-kubernetes

@@ -1,356 +0,0 @@
.. _how-to-configure-a-bigchaindb-tendermint-node:

How to Configure a BigchainDB + Tendermint Node
===============================================

This page outlines the steps to set a number of configuration settings
in your BigchainDB node.
They are pushed to the Kubernetes cluster in two files,
named ``config-map.yaml`` (a set of ConfigMaps)
and ``secret.yaml`` (a set of Secrets).
They are stored in the Kubernetes cluster's key-value store (etcd).

Make sure you did all the things listed in the section titled
:ref:`things-each-node-operator-must-do-tmt`
(including generation of all the SSL certificates needed
for MongoDB auth).


Edit config-map.yaml
--------------------

Make a copy of the file ``k8s/configuration/config-map.yaml``
and edit the data values in the various ConfigMaps.
That file already contains many comments to help you
understand each data value, but we make some additional
remarks on some of the values below.

Note: None of the data values in ``config-map.yaml`` need
to be base64-encoded. (This is unlike ``secret.yaml``,
where all data values must be base64-encoded.
This is true of all Kubernetes ConfigMaps and Secrets.)
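For orientation, the shape of one such ConfigMap entry is sketched below. The
key names match the sections that follow, but the values shown are
illustrative placeholders:

```yaml
# Illustrative fragment only -- the real k8s/configuration/config-map.yaml
# defines many more ConfigMaps and keys.
apiVersion: v1
kind: ConfigMap
metadata:
  name: vars
data:
  cluster-fqdn: "mynode.mycorp.com"   # plain text, NOT base64-encoded
  cluster-frontend-port: "443"
```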


vars.cluster-fqdn
~~~~~~~~~~~~~~~~~

The ``cluster-fqdn`` field specifies the domain you would have
:ref:`registered before <register-a-domain-and-get-an-ssl-certificate-for-it-tmt>`.


vars.cluster-frontend-port
~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``cluster-frontend-port`` field specifies the port on which your cluster
will be available to all external clients.
It is set to the HTTPS port ``443`` by default.


vars.cluster-health-check-port
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``cluster-health-check-port`` is the port number on which health check
probes are sent to the main NGINX instance.
It is set to ``8888`` by default.


vars.cluster-dns-server-ip
~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``cluster-dns-server-ip`` is the IP address of the DNS server for a node.
We use DNS for service discovery. A Kubernetes deployment always has a DNS
server (``kube-dns``) running at ``10.0.0.10``, so this field is set to
``10.0.0.10`` by default.


vars.mdb-instance-name and Similar
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Your BigchainDB cluster organization should have a standard way
of naming instances, so the instances in your BigchainDB node
should conform to that standard (i.e. you can't just make up some names).
There are some things worth noting about the ``mdb-instance-name``:

* This field will be the DNS name of your MongoDB instance, and Kubernetes
  maps this name to its internal DNS.
* We use ``mdb-instance-0``, ``mdb-instance-1`` and so on in our
  documentation. Your BigchainDB cluster may use a different naming convention.


vars.ngx-mdb-instance-name and Similar
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

NGINX needs the FQDNs of the servers inside the cluster to be able to forward
traffic.
The ``ngx-openresty-instance-name``, ``ngx-mdb-instance-name`` and
``ngx-bdb-instance-name`` are the FQDNs of the OpenResty instance, the MongoDB
instance, and the BigchainDB instance in this Kubernetes cluster, respectively.
In Kubernetes, this is usually the name specified in the
corresponding ``vars.*-instance-name`` followed by
``<namespace name>.svc.cluster.local``. For example, if you run OpenResty in
the default Kubernetes namespace, this will be
``<vars.openresty-instance-name>.default.svc.cluster.local``.
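Stitching such a Kubernetes-internal FQDN together can be sketched in shell
(the instance name and namespace below are placeholders):

```shell
# Illustrative: derive the in-cluster FQDN that NGINX needs from an
# instance name and its Kubernetes namespace.
OPENRESTY_INSTANCE_NAME="openresty-instance-1"
NAMESPACE="default"
NGX_OPENRESTY_INSTANCE_NAME="${OPENRESTY_INSTANCE_NAME}.${NAMESPACE}.svc.cluster.local"
echo "$NGX_OPENRESTY_INSTANCE_NAME"
```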


vars.mongodb-frontend-port and vars.mongodb-backend-port
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``mongodb-frontend-port`` is the port number on which external clients can
access MongoDB. Access needs to be restricted to other MongoDB instances only,
by enabling an authentication mechanism on the MongoDB cluster.
It is set to ``27017`` by default.

The ``mongodb-backend-port`` is the port number on which MongoDB is actually
available/listening for requests in your cluster.
It is also set to ``27017`` by default.


vars.openresty-backend-port
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``openresty-backend-port`` is the port number on which OpenResty is
listening for requests.
It is used by the NGINX instance to forward requests
destined for the OpenResty instance to the right port,
and by the OpenResty instance to bind to the correct port to
receive requests from the NGINX instance.
It is set to ``80`` by default.


vars.bigchaindb-wsserver-advertised-scheme
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``bigchaindb-wsserver-advertised-scheme`` is the protocol used to access
the WebSocket API in BigchainDB. It can be set to ``wss`` or ``ws``.
It is set to ``wss`` by default.


vars.bigchaindb-api-port, vars.bigchaindb-ws-port and Similar
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``bigchaindb-api-port`` is the port number on which BigchainDB is
listening for HTTP requests. It is currently set to ``9984`` by default.

The ``bigchaindb-ws-port`` is the port number on which BigchainDB is
listening for WebSocket requests. It is currently set to ``9985`` by default.

There's another :doc:`page with a complete listing of all the BigchainDB Server
configuration settings <../server-reference/configuration>`.


bdb-config.bdb-user
~~~~~~~~~~~~~~~~~~~

This is the user name that BigchainDB uses to authenticate itself to the
backend MongoDB database.

We need to specify the user name *as seen in the certificate* issued to
the BigchainDB instance in order to authenticate correctly. Use
the following ``openssl`` command to extract the user name from the
certificate:

.. code:: bash

   $ openssl x509 -in <path to the bigchaindb certificate> \
       -inform PEM -subject -nameopt RFC2253

You should see an output line that resembles:

.. code:: bash

   subject= emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE

The ``subject`` line states the complete user name we need to use for this
field (``bdb-config.bdb-user``), i.e.

.. code:: bash

   emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE


tendermint-config.tm-instance-name
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Your BigchainDB cluster organization should have a standard way
of naming instances, so the instances in your BigchainDB node
should conform to that standard. There are some things worth noting
about the ``tm-instance-name``:

* This field will be the DNS name of your Tendermint instance, and Kubernetes
  maps this name to its internal DNS, so in a network/multi-node deployment
  all peer-to-peer communication depends on it.
* This parameter is also used to access the public key of a particular node.
* We use ``tm-instance-0``, ``tm-instance-1`` and so on in our
  documentation. Your BigchainDB cluster may use a different naming convention.


tendermint-config.ngx-tm-instance-name
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

NGINX needs the FQDN of the servers inside the cluster to be able to forward
traffic.
``ngx-tm-instance-name`` is the FQDN of the Tendermint
instance in this Kubernetes cluster.
In Kubernetes, this is usually the name specified in the
corresponding ``tendermint-config.*-instance-name`` followed by
``<namespace name>.svc.cluster.local``. For example, if you run Tendermint in
the default Kubernetes namespace, this will be
``<tendermint-config.tm-instance-name>.default.svc.cluster.local``.


tendermint-config.tm-seeds
~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-seeds`` is the initial set of peers to connect to: a comma-separated
list of all the peers that are part of the cluster.

If you are deploying a stand-alone BigchainDB node, the value should be the
same as ``<tm-instance-name>``. If you are deploying a network, this parameter
will look like this:

.. code::

   <tm-instance-1>,<tm-instance-2>,<tm-instance-3>,<tm-instance-4>


tendermint-config.tm-validators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-validators`` is the initial set of validators in the network: a
comma-separated list of all the participant validator nodes.

If you are deploying a stand-alone BigchainDB node, the value should be the
same as ``<tm-instance-name>``. If you are deploying a network, this parameter
will look like this:

.. code::

   <tm-instance-1>,<tm-instance-2>,<tm-instance-3>,<tm-instance-4>


tendermint-config.tm-validator-power
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-validator-power`` represents the voting power of each validator. It is a
comma-separated list with one entry per participant in the network.

**Note**: The order of the validator power list must match the order of the
``tm-validators`` list.

.. code::

   tm-validators: <tm-instance-1>,<tm-instance-2>,<tm-instance-3>,<tm-instance-4>

For the above list of validators, the ``tm-validator-power`` list should look
like this:

.. code::

   tm-validator-power: <tm-instance-1-power>,<tm-instance-2-power>,<tm-instance-3-power>,<tm-instance-4-power>


tendermint-config.tm-genesis-time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-genesis-time`` represents the official time of blockchain start. Details
on how to generate this parameter are covered
:ref:`here <generate-the-blockchain-id-and-genesis-time>`.


tendermint-config.tm-chain-id
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-chain-id`` represents the ID of the blockchain. It must be unique for
every blockchain. Details on how to generate this parameter are covered
:ref:`here <generate-the-blockchain-id-and-genesis-time>`.


tendermint-config.tm-abci-port
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-abci-port`` has a default value of ``46658``, which is used by Tendermint
Core for ABCI (Application BlockChain Interface) traffic. BigchainDB nodes use
this port internally to communicate with Tendermint Core.


tendermint-config.tm-p2p-port
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-p2p-port`` has a default value of ``46656``, which is used by Tendermint
Core for peer-to-peer communication.

For a multi-node/zone deployment, this port needs to be publicly available
for P2P communication between Tendermint nodes.


tendermint-config.tm-rpc-port
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-rpc-port`` has a default value of ``46657``, which is used by Tendermint
Core for RPC traffic. BigchainDB nodes use this port as the RPC listen
address.


tendermint-config.tm-pub-key-access
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-pub-key-access`` has a default value of ``9986``, which is used to
discover the public key of a Tendermint node. Each Tendermint StatefulSet
(Pod: Tendermint + NGINX) hosts its public key, e.g.

.. code::

   http://tendermint-instance-1:9986/pub_key.json


Edit secret.yaml
----------------

Make a copy of the file ``k8s/configuration/secret.yaml``
and edit the data values in the various Secrets.
That file includes many comments to explain the required values.
**In particular, note that all values must be base64-encoded.**
There are tips at the top of the file
explaining how to convert values into base64-encoded values.
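For instance, on most Linux/macOS systems a value can be base64-encoded with
the ``base64`` utility. The password below is a made-up placeholder; using
``printf`` rather than ``echo`` avoids encoding a trailing newline:

```shell
# Encode a value for use in secret.yaml (placeholder value shown).
printf '%s' 'my-mongodb-password' | base64
# -> bXktbW9uZ29kYi1wYXNzd29yZA==

# Decode it again to double-check:
printf '%s' 'bXktbW9uZ29kYi1wYXNzd29yZA==' | base64 --decode
# -> my-mongodb-password
```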

Your BigchainDB node might not need all the Secrets.
For example, if you plan to access the BigchainDB API over HTTP, you
don't need the ``https-certs`` Secret.
You can delete the Secrets you don't need,
or set their data values to ``""``.

Note that ``ca.pem`` is just another name for ``ca.crt``
(the certificate of your BigchainDB cluster's self-signed CA).


threescale-credentials.*
~~~~~~~~~~~~~~~~~~~~~~~~

If you're not using 3scale,
you can delete the ``threescale-credentials`` Secret
or leave all the values blank (``""``).

If you *are* using 3scale, get the values for ``secret-token``,
``service-id``, ``version-header`` and ``service-token`` by logging in to the
3scale portal using your admin account, clicking **APIs**, and clicking on
**Integration** for the relevant API.
Scroll to the bottom of the page and click the small link
in the lower right corner, labelled **Download the NGINX Config files**.
Unzip it (if it is a ``zip`` file). Open the ``.conf`` and the ``.lua`` files.
You should be able to find all the values in those files.
Be careful: they contain values for **all** your APIs,
and some values vary from API to API.
The ``version-header`` is the timestamp in a line that looks like:

.. code::

   proxy_set_header X-3scale-Version "2017-06-28T14:57:34Z";


Deploy Your config-map.yaml and secret.yaml
-------------------------------------------

You can deploy your edited ``config-map.yaml`` and ``secret.yaml``
files to your Kubernetes cluster using the commands:

.. code:: bash

   $ kubectl apply -f config-map.yaml

   $ kubectl apply -f secret.yaml
File diff suppressed because it is too large
@@ -1,188 +0,0 @@
Overview
========

This page summarizes the steps *we* go through
to set up a production BigchainDB + Tendermint cluster.
We are constantly improving them.
You can modify them to suit your needs.

.. Note::

   With our BigchainDB + Tendermint deployment model, we use a standalone
   MongoDB (without a replica set); BFT replication is handled by Tendermint.


1. Set Up a Self-Signed Certificate Authority
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

We use SSL/TLS and self-signed certificates
for MongoDB authentication (and message encryption).
The certificates are signed by the organization managing the :ref:`bigchaindb-node`.
If your organization already has a process
for signing certificates
(i.e. an internal self-signed certificate authority [CA]),
then you can skip this step.
Otherwise, your organization must
:ref:`set up its own self-signed certificate authority <how-to-set-up-a-self-signed-certificate-authority>`.


.. _register-a-domain-and-get-an-ssl-certificate-for-it-tmt:

2. Register a Domain and Get an SSL Certificate for It
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The BigchainDB APIs (HTTP API and WebSocket API) should be served using TLS,
so the organization running the cluster
should choose an FQDN for their API (e.g. api.organization-x.com),
register the domain name,
and buy an SSL/TLS certificate for the FQDN.

.. _things-each-node-operator-must-do-tmt:

Things Each Node Operator Must Do
---------------------------------

Use a standard and unique naming convention for all instances.

☐ Name of the MongoDB instance (``mdb-instance-*``)

☐ Name of the BigchainDB instance (``bdb-instance-*``)

☐ Name of the NGINX instance (``ngx-http-instance-*`` or ``ngx-https-instance-*``)

☐ Name of the OpenResty instance (``openresty-instance-*``)

☐ Name of the MongoDB monitoring agent instance (``mdb-mon-instance-*``)

☐ Name of the Tendermint instance (``tendermint-instance-*``)

Example
^^^^^^^

.. code:: text

   {
     "MongoDB": [
       "mdb-instance-1",
       "mdb-instance-2",
       "mdb-instance-3",
       "mdb-instance-4"
     ],
     "BigchainDB": [
       "bdb-instance-1",
       "bdb-instance-2",
       "bdb-instance-3",
       "bdb-instance-4"
     ],
     "NGINX": [
       "ngx-instance-1",
       "ngx-instance-2",
       "ngx-instance-3",
       "ngx-instance-4"
     ],
     "OpenResty": [
       "openresty-instance-1",
       "openresty-instance-2",
       "openresty-instance-3",
       "openresty-instance-4"
     ],
     "MongoDB_Monitoring_Agent": [
       "mdb-mon-instance-1",
       "mdb-mon-instance-2",
       "mdb-mon-instance-3",
       "mdb-mon-instance-4"
     ],
     "Tendermint": [
       "tendermint-instance-1",
       "tendermint-instance-2",
       "tendermint-instance-3",
       "tendermint-instance-4"
     ]
   }


☐ Generate three keys and corresponding certificate signing requests (CSRs):

#. Server Certificate for the MongoDB instance
#. Client Certificate for BigchainDB Server to identify itself to MongoDB
#. Client Certificate for the MongoDB Monitoring Agent to identify itself to MongoDB

Use the self-signed CA to sign those three CSRs. The result is
three certificates (one for each CSR).

For help, see the pages:

* :doc:`How to Generate a Server Certificate for MongoDB <../production-deployment-template/server-tls-certificate>`
* :doc:`How to Generate a Client Certificate for MongoDB <../production-deployment-template/client-tls-certificate>`

☐ Make up an FQDN for your BigchainDB node (e.g. ``mynode.mycorp.com``).
Make sure you've registered the associated domain name (e.g. ``mycorp.com``),
and have an SSL certificate for the FQDN.
(You can get an SSL certificate from any SSL certificate provider.)

☐ Ask the managing organization for the user name to use for authenticating to
MongoDB.

☐ If the cluster uses 3scale for API authentication, monitoring and billing,
you must ask the managing organization for all relevant 3scale credentials:
secret token, service ID, version header and API service token.

☐ If the cluster uses MongoDB Cloud Manager for monitoring,
you must ask the managing organization for the ``Project ID`` and the
``Agent API Key``.
(Each Cloud Manager "Project" has its own ``Project ID``. A ``Project ID`` can
contain several ``Agent API Key`` values. It can be found under
**Settings**. This was recently added to Cloud Manager to
allow easier periodic rotation of the ``Agent API Key`` with a constant
``Project ID``.)


.. _generate-the-blockchain-id-and-genesis-time:

3. Generate the Blockchain ID and Genesis Time
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Tendermint nodes require two parameters that must be common to, and shared
between, all the participants in the network:

* ``chain_id``: ID of the blockchain. This must be unique for every blockchain.

  * Example: ``test-chain-9gHylg``

* ``genesis_time``: official time of blockchain start.

  * Example: ``0001-01-01T00:00:00Z``

Both parameters can be generated using the ``tendermint init`` command
(see `initialize <https://tendermint.readthedocs.io/en/master/using-tendermint.html#initialize>`_).
You will need to `install Tendermint <https://tendermint.readthedocs.io/en/master/install.html>`_
and verify that a ``genesis.json`` file is created under the `root directory
<https://tendermint.readthedocs.io/en/master/using-tendermint.html#directory-root>`_. You can use
the ``genesis_time`` and ``chain_id`` from this ``genesis.json``.
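As a sketch, the two values can be pulled out of such a file with plain
``sed``, assuming the key/value formatting shown in the sample ``genesis.json``
(``jq`` would be more robust, if available):

```shell
# Illustrative: extract chain_id and genesis_time from genesis.json content.
GENESIS='{"genesis_time": "0001-01-01T00:00:00Z", "chain_id": "test-chain-9gHylg"}'
CHAIN_ID=$(printf '%s' "$GENESIS" | sed 's/.*"chain_id": "\([^"]*\)".*/\1/')
GENESIS_TIME=$(printf '%s' "$GENESIS" | sed 's/.*"genesis_time": "\([^"]*\)".*/\1/')
echo "$CHAIN_ID"       # test-chain-9gHylg
echo "$GENESIS_TIME"   # 0001-01-01T00:00:00Z
```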

Sample ``genesis.json``:

.. code:: json

   {
     "genesis_time": "0001-01-01T00:00:00Z",
     "chain_id": "test-chain-9gHylg",
     "validators": [
       {
         "pub_key": {
           "type": "ed25519",
           "data": "D12279E746D3724329E5DE33A5AC44D5910623AA6FB8CDDC63617C959383A468"
         },
         "power": 10,
         "name": ""
       }
     ],
     "app_hash": ""
   }


☐ :doc:`Deploy a Kubernetes cluster on Azure <../production-deployment-template/template-kubernetes-azure>`.

☐ You can now proceed to set up your :ref:`BigchainDB node
<kubernetes-template-deploy-a-single-bigchaindb-node-with-tendermint>`.

@@ -1,19 +1,144 @@
|
|||
Architecture of an IPDB Node
|
||||
============================
|
||||
Architecture of a BigchainDB Node
|
||||
==================================
|
||||
|
||||
An IPDB Production deployment is hosted on a Kubernetes cluster and includes:
|
||||
A BigchainDB Production deployment is hosted on a Kubernetes cluster and includes:
|
||||
|
||||
* NGINX, OpenResty, BigchainDB and MongoDB
|
||||
* NGINX, OpenResty, BigchainDB, MongoDB and Tendermint
|
||||
`Kubernetes Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.
|
||||
* NGINX, OpenResty, BigchainDB, Monitoring Agent and Backup Agent
|
||||
* NGINX, OpenResty, BigchainDB and MongoDB Monitoring Agent.
|
||||
`Kubernetes Deployments <https://kubernetes.io/docs/concepts/workloads/controllers/deployment/>`_.
|
||||
* MongoDB `Kubernetes StatefulSet <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/>`_.
|
||||
* MongoDB and Tendermint `Kubernetes StatefulSet <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/>`_.
|
||||
* Third party services like `3scale <https://3scale.net>`_,
|
||||
`MongoDB Cloud Manager <https://cloud.mongodb.com>`_ and the
|
||||
`Azure Operations Management Suite
|
||||
<https://docs.microsoft.com/en-us/azure/operations-management-suite/>`_.
|
||||
|
||||
.. image:: ../_static/arch.jpg
|
||||
|
||||
.. _bigchaindb-node:

BigchainDB Node
---------------

The components of a node, and the connections between them, are:

* Clients reach the node through the NGINX Service: the BigchainDB HTTP API
  on port 443 (HTTPS), and Tendermint P2P communication and public key
  exchange on ports 46656 and 9986.

* The NGINX Deployment applies rate-limiting logic, terminates HTTPS, and
  analyzes each request: HTTP ``POST`` requests go to the OpenResty Service
  and Deployment (which run the 3scale authorization logic), HTTP ``GET``
  requests go to the BigchainDB Service and Deployment, MongoDB traffic on
  port 27017 goes to the MongoDB Service, and Tendermint traffic goes to the
  Tendermint Service and StatefulSet.

* The BigchainDB Deployment communicates bidirectionally with the Tendermint
  StatefulSet (the BFT consensus engine) and stores data via the MongoDB
  Service, which fronts the MongoDB StatefulSet.

* The MongoDB Monitoring Agent sends metrics to MongoDB Cloud Manager, and
  the OpenResty authorization logic talks to 3scale; both are external,
  third-party services.

.. note::
   The connections above are client-server: the first component named
   initiates the connection to the second. This says nothing about the
   direction of data flow; the communication channels are fully duplex.
NGINX: Entrypoint and Gateway
-----------------------------

We use NGINX as an HTTP proxy on port 443 (configurable) at the cloud
entrypoint for:
public api port), the connection is proxied to the MongoDB Service.


OpenResty: API Management, Authentication and Authorization
-----------------------------------------------------------

We use `OpenResty <https://openresty.org/>`_ to perform authorization checks
with 3scale using the ``app_id`` and ``app_key`` headers in the HTTP request.
and ``app_key`` with the 3scale backend.


MongoDB: Standalone
-------------------

We use MongoDB as the backend database for BigchainDB.
In a multi-node deployment, MongoDB members communicate with each other via the
public port exposed by the NGINX Service.

We achieve security by avoiding DoS attacks at the NGINX proxy layer and by
ensuring that MongoDB has TLS enabled for all its connections.


Tendermint: BFT consensus engine
--------------------------------

We use Tendermint as the backend consensus engine for BFT replication of BigchainDB.
In a multi-node deployment, Tendermint nodes/peers communicate with each other via
the public ports exposed by the NGINX gateway.

We use port **9986** (configurable) to allow Tendermint nodes to access the public
keys of their peers, and port **46656** (configurable) for the rest of the
communication between the peers.
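As a concrete illustration of the public-key-exchange port, here is a small sketch; the helper name and the instance names are made up for this example, and only the URL pattern (host, port 9986, ``/pub_key.json``) comes from the text above.

```shell
# Hypothetical helper: build the URL at which a Tendermint instance serves
# its public key, from the instance name and the pub-key-access port
# (defaulting to 9986 as described above).
tm_pub_key_url() {
    _name=$1
    _port=${2:-9986}
    echo "http://${_name}:${_port}/pub_key.json"
}

tm_pub_key_url tm-instance-0
```

A peer would then fetch the key with something like ``curl -s "$(tm_pub_key_url tm-instance-0)"``.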
the number of participants in the network.

In our Kubernetes deployment template for a single BigchainDB node, we covered the basic
configuration settings :ref:`here <how-to-configure-a-bigchaindb-node>`.

Since we index the ConfigMap and Secret keys for the single-site deployment, we need to update
all the Kubernetes components to reflect the corresponding changes, i.e. for each Kubernetes Service,
.. _configure-mongodb-cloud-manager-for-monitoring:

Configure MongoDB Cloud Manager for Monitoring
==============================================

This document details the steps required to configure MongoDB Cloud Manager to
enable monitoring of data in a MongoDB Replica Set.


Configure MongoDB Cloud Manager for Monitoring
----------------------------------------------
* Verify on the UI that data is being sent by the monitoring agent to the
  Cloud Manager. It may take up to 5 minutes for data to appear on the UI.
Production Deployment Template
==============================

This section outlines how *we* deploy production BigchainDB clusters,
integrated with Tendermint (the backend for BFT consensus),
on Microsoft Azure using Kubernetes. We improve it constantly.
You may choose to use it as a template or reference for your own deployment,
but *we make no claim that it is suitable for your purposes*.
Feel free to change things to suit your needs or preferences.
    cloud-manager
    easy-rsa
    upgrade-on-kubernetes
    bigchaindb-network-on-kubernetes
    tectonic-azure
    troubleshoot
    architecture
|
They are stored in the Kubernetes cluster's key-value store (etcd).

Make sure you did all the things listed in the section titled
:ref:`things-each-node-operator-must-do-tmt`
(including generation of all the SSL certificates needed
for MongoDB auth).
vars.cluster-fqdn
~~~~~~~~~~~~~~~~~

The ``cluster-fqdn`` field specifies the domain you would have
:ref:`registered before <register-a-domain-and-get-an-ssl-certificate-for-it-tmt>`.


vars.cluster-frontend-port
~~~~~~~~~~~~~~~~~~~~~~~~~~
should conform to that standard (i.e. you can't just make up some names).
There are some things worth noting about the ``mdb-instance-name``:

* This field will be the DNS name of your MongoDB instance, and Kubernetes
  maps this name to its internal DNS.
* We use ``mdb-instance-0``, ``mdb-instance-1`` and so on in our
  documentation. Your BigchainDB cluster may use a different naming convention.
There's another :doc:`page with a complete listing of all the BigchainDB Server
configuration settings <../server-reference/configuration>`.


bdb-config.bdb-user
~~~~~~~~~~~~~~~~~~~
the BigchainDB instance in order to authenticate correctly. Use
the following ``openssl`` command to extract the user name from the
certificate:

.. code:: bash

   $ openssl x509 -in <path to the bigchaindb certificate> \
     -inform PEM -subject -nameopt RFC2253

You should see an output line that resembles:

.. code:: bash

   subject= emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE

The ``subject`` line states the complete user name we need to use for this
field (``bdb-config.bdb-user``), i.e.

.. code::

   emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE
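If you want to avoid copying the value by hand, the ``subject= `` prefix can be stripped in the shell. This is only a sketch: the sample subject line is hard-coded below, whereas in practice it would come from the ``openssl x509 ... -subject -nameopt RFC2253`` command shown above.

```shell
# Sketch: strip the "subject= " prefix from the openssl output so that the
# remainder can be pasted straight into bdb-config.bdb-user.
subject_line='subject= emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE'
bdb_user=${subject_line#subject= }
echo "$bdb_user"
```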
tendermint-config.tm-instance-name
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Your BigchainDB cluster organization should have a standard way
of naming instances, so the instances in your BigchainDB node
should conform to that standard. There are some things worth noting
about the ``tm-instance-name``:

* This field will be the DNS name of your Tendermint instance, and Kubernetes
  maps this name to its internal DNS, so in a network/multi-node deployment
  all peer-to-peer communication depends on this name.
* This parameter is also used to access the public key of a particular node.
* We use ``tm-instance-0``, ``tm-instance-1`` and so on in our
  documentation. Your BigchainDB cluster may use a different naming convention.
tendermint-config.ngx-tm-instance-name
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

NGINX needs the FQDN of the servers inside the cluster to be able to forward
traffic.
``ngx-tm-instance-name`` is the FQDN of the Tendermint
instance in this Kubernetes cluster.
In Kubernetes, this is usually the name specified in the
corresponding ``tendermint-config.tm-instance-name`` followed by
``.<namespace name>.svc.cluster.local``. For example, if you run Tendermint in
the default Kubernetes namespace, this will be
``<tendermint-config.tm-instance-name>.default.svc.cluster.local``.
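The FQDN construction above can be sketched as a tiny helper; the function name and instance names are invented for this example, and only the ``<name>.<namespace>.svc.cluster.local`` pattern is taken from the text.

```shell
# Hypothetical helper: derive the in-cluster FQDN of a Tendermint instance
# from its instance name and Kubernetes namespace (defaulting to "default").
tm_fqdn() {
    echo "${1}.${2:-default}.svc.cluster.local"
}

tm_fqdn tm-instance-0
```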
tendermint-config.tm-seeds
~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-seeds`` is the initial set of peers to connect to: a comma-separated
list of all the peers that are part of the cluster.

If you are deploying a stand-alone BigchainDB node, the value should be the same
as ``<tm-instance-name>``. If you are deploying a network, this parameter will
look like this:

.. code::

   <tm-instance-1>,<tm-instance-2>,<tm-instance-3>,<tm-instance-4>
tendermint-config.tm-validators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-validators`` is the initial set of validators in the network: a
comma-separated list of all the participating validator nodes.

If you are deploying a stand-alone BigchainDB node, the value should be the same
as ``<tm-instance-name>``. If you are deploying a network, this parameter will
look like this:

.. code::

   <tm-instance-1>,<tm-instance-2>,<tm-instance-3>,<tm-instance-4>
tendermint-config.tm-validator-power
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-validator-power`` represents the voting power of each validator: a
comma-separated list, one entry per participant in the network.

**Note**: The order of the ``tm-validator-power`` list must match the order of
the ``tm-validators`` list.

.. code::

   tm-validators: <tm-instance-1>,<tm-instance-2>,<tm-instance-3>,<tm-instance-4>

For the above list of validators, the ``tm-validator-power`` list should look like this:

.. code::

   tm-validator-power: <tm-instance-1-power>,<tm-instance-2-power>,<tm-instance-3-power>,<tm-instance-4-power>
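Because the two lists must stay in lockstep, a quick length check before applying the ConfigMap can catch mistakes; this is only a sketch, and the instance names and powers below are placeholders.

```shell
# Sketch: sanity-check that tm-validators and tm-validator-power contain the
# same number of comma-separated entries (placeholder values).
tm_validators="tm-instance-0,tm-instance-1,tm-instance-2,tm-instance-3"
tm_validator_power="10,10,10,10"
n_validators=$(echo "$tm_validators" | tr ',' '\n' | wc -l)
n_powers=$(echo "$tm_validator_power" | tr ',' '\n' | wc -l)
if [ "$n_validators" -eq "$n_powers" ]; then
    echo "lists match"
else
    echo "length mismatch"
fi
```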
tendermint-config.tm-genesis-time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-genesis-time`` represents the official time of blockchain start. Details on how
to generate this parameter are covered :ref:`here <generate-the-blockchain-id-and-genesis-time>`.


tendermint-config.tm-chain-id
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-chain-id`` represents the ID of the blockchain. It must be unique for every blockchain.
Details on how to generate this parameter are covered
:ref:`here <generate-the-blockchain-id-and-genesis-time>`.
tendermint-config.tm-abci-port
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-abci-port`` has a default value of ``46658``, which is used by Tendermint Core for
ABCI (Application BlockChain Interface) traffic. BigchainDB nodes use this port
internally to communicate with Tendermint Core.


tendermint-config.tm-p2p-port
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-p2p-port`` has a default value of ``46656``, which is used by Tendermint Core for
peer-to-peer communication.

For a multi-node/zone deployment, this port needs to be publicly available for P2P
communication between Tendermint nodes.


tendermint-config.tm-rpc-port
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-rpc-port`` has a default value of ``46657``, which is used by Tendermint Core for RPC
traffic. BigchainDB nodes use this port as the RPC listen address.


tendermint-config.tm-pub-key-access
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tm-pub-key-access`` has a default value of ``9986``, which is used to discover the public
key of a Tendermint node. Each Tendermint StatefulSet (Pod, Tendermint + NGINX) hosts its
own public key at:

.. code::

   http://tendermint-instance-1:9986/pub_key.json
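The four port parameters above must not collide with each other; a small sketch of the defaults with a distinctness check (variable names here are made up to mirror the parameter names):

```shell
# Sketch: the default Tendermint-related ports described above, with a quick
# check that no two of them are the same.
tm_p2p_port=46656
tm_rpc_port=46657
tm_abci_port=46658
tm_pub_key_access=9986
dups=$(printf '%s\n' "$tm_p2p_port" "$tm_rpc_port" "$tm_abci_port" "$tm_pub_key_access" \
    | sort | uniq -d)
if [ -z "$dups" ]; then
    echo "ports are distinct"
else
    echo "duplicate ports: $dups"
fi
```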
Edit secret.yaml
----------------
.. _kubernetes-template-deploy-a-single-bigchaindb-node:

Kubernetes Template: Deploy a Single BigchainDB Node
====================================================

This page describes how to deploy a stand-alone BigchainDB + Tendermint node,
or a static network of BigchainDB + Tendermint nodes,
using `Kubernetes <https://kubernetes.io/>`_.
It assumes you already have a running Kubernetes cluster.

Below, we refer to many files by their directory and filename,
such as ``configuration/config-map-tm.yaml``. Those files are files in the
`bigchaindb/bigchaindb repository on GitHub <https://github.com/bigchaindb/bigchaindb/>`_
in the ``k8s/`` directory.
Make sure you're getting those files from the appropriate Git branch on
GitHub, i.e. the branch for the version of BigchainDB that your BigchainDB
cluster is using.
If you don't have that file, then you need to get it.

**Azure.** If you deployed your Kubernetes cluster on Azure
using the Azure CLI 2.0 (as per :doc:`our template
<../production-deployment-template/template-kubernetes-azure>`),
then you can get the ``~/.kube/config`` file using:

.. code:: bash
See the page titled :ref:`how-to-configure-a-bigchaindb-node`.

.. _start-the-nginx-service-tmt:

Step 4: Start the NGINX Service
-------------------------------
Step 4.1: Vanilla NGINX
^^^^^^^^^^^^^^^^^^^^^^^

* This configuration is located in the file ``nginx-http/nginx-http-svc-tm.yaml``.

* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
  set in ``ngx-instance-name`` in the ConfigMap above.
``cluster-frontend-port`` in the ConfigMap above. This is the
``public-cluster-port`` in the file, which is the ingress into the cluster.

* Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the
  ``tm-pub-access-port`` in the ConfigMap above. This is the
  ``tm-pub-key-access`` in the file, which specifies where the public key for
  the Tendermint instance is available.

* Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
  ``tm-p2p-port`` in the ConfigMap above. This is the
  ``tm-p2p-port`` in the file, which is used for P2P communication between
  Tendermint nodes.

* Start the Kubernetes Service:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-svc-tm.yaml
Step 4.2: NGINX with HTTPS
^^^^^^^^^^^^^^^^^^^^^^^^^^
* You should have already created the necessary Kubernetes Secrets in the previous
  step (i.e. ``https-certs``).

* This configuration is located in the file ``nginx-https/nginx-https-svc-tm.yaml``.

* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
  set in ``ngx-instance-name`` in the ConfigMap above.
``public-mdb-port`` in the file, which specifies where MongoDB is
available.

* Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
  ``tm-pub-access-port`` in the ConfigMap above. This is the
  ``tm-pub-key-access`` in the file, which specifies where the public key for
  the Tendermint instance is available.

* Set ``ports[3].port`` and ``ports[3].targetPort`` to the value set in the
  ``tm-p2p-port`` in the ConfigMap above. This is the
  ``tm-p2p-port`` in the file, which is used for P2P communication between
  Tendermint nodes.

* Start the Kubernetes Service:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-svc-tm.yaml


.. _assign-dns-name-to-nginx-public-ip-tmt:

Step 5: Assign DNS Name to the NGINX Public IP
----------------------------------------------
To verify the DNS setting is operational, you can run ``nslookup <DNS
name added in Azure configuration>`` from your local Linux shell.

This will ensure that when you scale to different geographical zones, other Tendermint
nodes in the network can reach this instance.


.. _start-the-mongodb-kubernetes-service-tmt:
Step 6: Start the MongoDB Kubernetes Service
--------------------------------------------

* This configuration is located in the file ``mongodb/mongo-svc-tm.yaml``.

* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
  set in ``mdb-instance-name`` in the ConfigMap above.
.. code:: bash

   $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-svc-tm.yaml


.. _start-the-bigchaindb-kubernetes-service-tmt:

Step 7: Start the BigchainDB Kubernetes Service
-----------------------------------------------

* This configuration is located in the file ``bigchaindb/bigchaindb-svc-tm.yaml``.

* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
  set in ``bdb-instance-name`` in the ConfigMap above.
This is the ``bdb-ws-port`` in the file, which specifies where BigchainDB
listens for WebSocket connections.

* Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
  ``tm-abci-port`` in the ConfigMap above.
  This is the ``tm-abci-port`` in the file, which specifies the port used
  for ABCI communication.

* Start the Kubernetes Service:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-svc-tm.yaml


.. _start-the-openresty-kubernetes-service-tmt:

Step 8: Start the OpenResty Kubernetes Service
----------------------------------------------

* This configuration is located in the file ``nginx-openresty/nginx-openresty-svc-tm.yaml``.

* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
  set in ``openresty-instance-name`` in the ConfigMap above.
.. code:: bash

   $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-svc-tm.yaml


.. _start-the-tendermint-kubernetes-service-tmt:

Step 9: Start the Tendermint Kubernetes Service
-----------------------------------------------

* This configuration is located in the file ``tendermint/tendermint-svc.yaml``.

* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
  set in ``tm-instance-name`` in the ConfigMap above.

* Set the ``spec.selector.app`` to the value set in ``tm-instance-name`` in
  the ConfigMap followed by ``-ss``. For example, if the value set in the
  ``tm-instance-name`` is ``tm-instance-0``, set the
  ``spec.selector.app`` to ``tm-instance-0-ss``.

* Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
  ``tm-p2p-port`` in the ConfigMap above.
  This is the ``p2p`` port in the file, over which Tendermint peers
  communicate.

* Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the
  ``tm-rpc-port`` in the ConfigMap above.
  This is the ``rpc`` port in the file, used by Tendermint Core
  for RPC traffic.

* Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
  ``tm-pub-key-access`` in the ConfigMap above.
  This is the ``pub-key-access`` port in the file, used to host/distribute
  the public key of the Tendermint node.

* Start the Kubernetes Service:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-svc.yaml


.. _start-the-nginx-deployment-tmt:

Step 10: Start the NGINX Kubernetes Deployment
----------------------------------------------
* NGINX is used as a proxy to the OpenResty, BigchainDB, Tendermint and MongoDB
  instances in the node. It proxies HTTP/HTTPS requests on the
  ``cluster-frontend-port`` to the corresponding OpenResty or BigchainDB
  backend, and TCP connections on ``mongodb-frontend-port``, ``tm-p2p-port``
  and ``tm-pub-key-access`` to MongoDB and Tendermint respectively.

* As in step 4, you have the option to use vanilla NGINX without HTTPS or
  NGINX with HTTPS support.
Step 10.1: Vanilla NGINX
^^^^^^^^^^^^^^^^^^^^^^^^

* This configuration is located in the file ``nginx-http/nginx-http-dep-tm.yaml``.

* Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
  to the value set in ``ngx-instance-name`` in the ConfigMap followed by a
@ -330,9 +393,10 @@ Step 9.1: Vanilla NGINX
|
|||
``ngx-http-instance-0``, set the fields to ``ngx-http-instance-0-dep``.

* Set the ports to be exposed from the pod in the
  ``spec.containers[0].ports`` section. We currently expose 5 ports -
  ``mongodb-frontend-port``, ``cluster-frontend-port``,
  ``cluster-health-check-port``, ``tm-pub-key-access`` and ``tm-p2p-port``.
  Set them to the values specified in the ConfigMap.
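
  As a point of reference, the corresponding entries in the Deployment's
  container spec would look roughly like the sketch below; every number is a
  placeholder to be replaced with the matching ConfigMap value.

  .. code:: yaml

     # Illustrative containerPort entries for the NGINX Deployment.
     spec:
       template:
         spec:
           containers:
           - name: nginx
             ports:
             - containerPort: 27017   # mongodb-frontend-port
             - containerPort: 443     # cluster-frontend-port
             - containerPort: 8888    # cluster-health-check-port
             - containerPort: 9986    # tm-pub-key-access
             - containerPort: 46656   # tm-p2p-port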

* The configuration uses the following values set in the ConfigMap:

  - ``ngx-bdb-instance-name``
  - ``bigchaindb-api-port``
  - ``bigchaindb-ws-port``
  - ``ngx-tm-instance-name``
  - ``tm-pub-key-access``
  - ``tm-p2p-port``

* Start the Kubernetes Deployment:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-dep-tm.yaml


Step 10.2: NGINX with HTTPS
^^^^^^^^^^^^^^^^^^^^^^^^^^^

* This configuration is located in the file
  ``nginx-https/nginx-https-dep-tm.yaml``.

* Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
  to the value set in ``ngx-instance-name`` in the ConfigMap, followed by a
  ``-dep``. For example, if the value set in ``ngx-instance-name`` is
  ``ngx-https-instance-0``, set the fields to ``ngx-https-instance-0-dep``.

* Set the ports to be exposed from the pod in the
  ``spec.containers[0].ports`` section. We currently expose 5 ports -
  ``mongodb-frontend-port``, ``cluster-frontend-port``,
  ``cluster-health-check-port``, ``tm-pub-key-access`` and ``tm-p2p-port``.
  Set them to the values specified in the ConfigMap.

* The configuration uses the following values set in the ConfigMap:

  - ``ngx-bdb-instance-name``
  - ``bigchaindb-api-port``
  - ``bigchaindb-ws-port``
  - ``ngx-tm-instance-name``
  - ``tm-pub-key-access``
  - ``tm-p2p-port``

* The configuration uses the following values set in the Secret:

* Start the Kubernetes Deployment:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-dep-tm.yaml


.. _create-kubernetes-storage-class-mdb-tmt:

Step 11: Create Kubernetes Storage Classes for MongoDB
------------------------------------------------------

MongoDB needs somewhere to store its data persistently,
The first thing to do is create the Kubernetes storage classes.

First, you need an Azure storage account.
If you deployed your Kubernetes cluster on Azure
using the Azure CLI 2.0
(as per :doc:`our template <../production-deployment-template/template-kubernetes-azure>`),
then the `az acs create` command already created a
storage account in the same location and resource group
as your Kubernetes cluster.
in the same data center.
Premium storage is higher-cost and higher-performance.
It uses solid state drives (SSD).
You can create a `storage account <https://docs.microsoft.com/en-us/azure/storage/common/storage-create-storage-account>`_
for Premium storage and associate it with your Azure resource group.
For future reference, the command to create a storage account is
`az storage account create <https://docs.microsoft.com/en-us/cli/azure/storage/account#create>`_.
In the storage class template,
specify the location you are using in Azure.

If you want to use a custom storage account with the Storage Class, you
can also update `parameters.storageAccount` and provide the Azure storage
account name.

Create the required storage classes using the template, then check if it
worked using ``kubectl get storageclasses``.
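
For reference, an azure-disk Storage Class of the kind these templates define
looks roughly as follows; the names and parameter values here are illustrative
placeholders, and the real definitions live in the template file.

.. code:: yaml

   # Sketch of an azure-disk StorageClass; names/values are placeholders.
   kind: StorageClass
   apiVersion: storage.k8s.io/v1
   metadata:
     name: mongo-db-sc
   provisioner: kubernetes.io/azure-disk
   parameters:
     skuName: Premium_LRS       # premium, SSD-backed storage
     location: westeurope       # must match your cluster's Azure location
     # storageAccount: <custom storage account name, optional>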


.. _create-kubernetes-persistent-volume-claim-mdb-tmt:

Step 12: Create Kubernetes Persistent Volume Claims for MongoDB
---------------------------------------------------------------

Next, you will create two PersistentVolumeClaim objects, ``mongo-db-claim`` and
``mongo-configdb-claim``.
A new claim's status may initially be "Pending",
but it should become "Bound" fairly quickly.

* Run the following command to update a PV's reclaim policy to ``Retain``:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

  For notes on recreating a persistent volume from a released Azure disk resource, consult
  :doc:`the page about cluster troubleshooting <../production-deployment-template/troubleshoot>`.

.. _start-kubernetes-stateful-set-mongodb-tmt:

Step 13: Start a Kubernetes StatefulSet for MongoDB
---------------------------------------------------

* This configuration is located in the file ``mongodb/mongo-ss-tm.yaml``.

* Set the ``spec.serviceName`` to the value set in ``mdb-instance-name`` in
  the ConfigMap.
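
  For example, with ``mdb-instance-name`` set to ``mdb-instance-0``, the top of
  the StatefulSet would look roughly like this sketch; the field values and
  ``apiVersion`` are placeholders, not the actual template:

  .. code:: yaml

     # Illustrative StatefulSet skeleton for MongoDB.
     apiVersion: apps/v1beta1
     kind: StatefulSet
     metadata:
       name: mdb-instance-0-ss
     spec:
       serviceName: mdb-instance-0   # must equal mdb-instance-name
       replicas: 1
       template:
         metadata:
           name: mdb-instance-0-ss
           labels:
             app: mdb-instance-0-ss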

* The configuration uses the following values set in the ConfigMap:

  - ``mdb-instance-name``
  - ``mongodb-backend-port``

* The configuration uses the following values set in the Secret:

  - ``mdb-certs``

* Start the Kubernetes StatefulSet using:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss-tm.yaml

* It might take up to 10 minutes for the disks, specified in the Persistent
  Volume Claims above, to be created and attached to the pod.
  You can watch the pod's status using:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 get pods -w

.. _configure-users-and-access-control-mongodb-tmt:

Step 14: Configure Users and Access Control for MongoDB
-------------------------------------------------------

* In this step, you will create a user on MongoDB with authorization
  to create more users. Log in to the mongo shell with TLS options such as:

  .. code:: bash

        --sslCAFile /etc/mongod/ca/ca.pem \
        --sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem
* Create a user ``adminUser`` on the ``admin`` database with the
  authorization to create other users. This will only work the first time you
  log in to the mongo shell. For further details, see `localhost
|
||||
|
@ -717,8 +766,7 @@ Step 13: Configure Users and Access Control for MongoDB
|
|||
]
|
||||
} )
|
||||
|
||||
* You can similarly create users for MongoDB Monitoring Agent and MongoDB
|
||||
Backup Agent. For example:
|
||||
* You can similarly create user for MongoDB Monitoring Agent. For example:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
|
@ -730,18 +778,127 @@ Step 13: Configure Users and Access Control for MongoDB
|
|||
]
|
||||
} )

.. _create-kubernetes-storage-class-tmt:

Step 15: Create Kubernetes Storage Classes for Tendermint
---------------------------------------------------------

Tendermint needs somewhere to store its data persistently; it uses
LevelDB as the persistent storage layer.

The Kubernetes template for the configuration of the Storage Class is located
in the file ``tendermint/tendermint-sc.yaml``.

Details about how to create an Azure storage account and how a Kubernetes
Storage Class works are already covered in this document:
:ref:`create-kubernetes-storage-class-mdb-tmt`.
Create the required storage classes using:

.. code:: bash

   $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-sc.yaml

You can check if it worked using ``kubectl get storageclasses``.


.. _create-kubernetes-persistent-volume-claim-tmt:

Step 16: Create Kubernetes Persistent Volume Claims for Tendermint
------------------------------------------------------------------

Next, you will create two PersistentVolumeClaim objects, ``tendermint-db-claim`` and
``tendermint-config-db-claim``.

This configuration is located in the file ``tendermint/tendermint-pvc.yaml``.

Details about Kubernetes Persistent Volumes, Persistent Volume Claims
and how they work with Azure are already covered in this
document: :ref:`create-kubernetes-persistent-volume-claim-mdb-tmt`.
Create the required Persistent Volume Claims using:

.. code:: bash

   $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-pvc.yaml

You can check their status using:

.. code:: bash

   $ kubectl get pvc -w
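
The shape of such a claim, as a sketch (the storage-class name and requested
size are illustrative placeholders; use the values from the template file):

.. code:: yaml

   # Sketch of one of the two Tendermint PersistentVolumeClaims.
   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: tendermint-db-claim
   spec:
     accessModes:
     - ReadWriteOnce
     storageClassName: tendermint-db-sc   # placeholder name
     resources:
       requests:
         storage: 20Gi                    # placeholder size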


.. _create-kubernetes-stateful-set-tmt:

Step 17: Start a Kubernetes StatefulSet for Tendermint
------------------------------------------------------

* This configuration is located in the file ``tendermint/tendermint-ss.yaml``.

* Set the ``spec.serviceName`` to the value set in ``tm-instance-name`` in
  the ConfigMap.
  For example, if the value set in ``tm-instance-name``
  is ``tm-instance-0``, set the field to ``tm-instance-0``.

* Set ``metadata.name``, ``spec.template.metadata.name`` and
  ``spec.template.metadata.labels.app`` to the value set in
  ``tm-instance-name`` in the ConfigMap, followed by ``-ss``.
  For example, if the value set in
  ``tm-instance-name`` is ``tm-instance-0``, set the fields to the value
  ``tm-instance-0-ss``.

* Note how the Tendermint container uses the ``tendermint-db-claim`` and the
  ``tendermint-config-db-claim`` PersistentVolumeClaims for its ``/tendermint`` and
  ``/tendermint_node_data`` directories (mount paths).

* As we gain more experience running Tendermint in testing and production, we
  will tweak the ``resources.limits.cpu`` and ``resources.limits.memory``.

  We deploy Tendermint and NGINX together in one pod; Tendermint is used as the
  consensus engine, while NGINX is used to serve the public key of the
  Tendermint instance.

* For the NGINX container, set the port to be exposed from the container in the
  ``spec.containers[0].ports[0]`` section. Set it to the value specified
  for ``tm-pub-key-access`` in the ConfigMap.

* For the Tendermint container, set the ports to be exposed from the container
  in the ``spec.containers[1].ports`` section. We currently expose two
  Tendermint ports. Set them to the values specified for ``tm-p2p-port`` and
  ``tm-rpc-port`` in the ConfigMap, respectively.
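
As a sketch, the resulting two-container pod template has this shape; the image
names and port numbers are illustrative placeholders, not the actual template:

.. code:: yaml

   # containers[0] is NGINX (serves the public key); containers[1] is Tendermint.
   spec:
     template:
       spec:
         containers:
         - name: nginx
           image: nginx                     # placeholder image
           ports:
           - containerPort: 9986            # tm-pub-key-access
         - name: tendermint
           image: tendermint/tendermint     # placeholder image
           ports:
           - containerPort: 46656           # tm-p2p-port
           - containerPort: 46657           # tm-rpc-port
           volumeMounts:
           - name: tendermint-db
             mountPath: /tendermint
           - name: tendermint-config-db
             mountPath: /tendermint_node_data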

* The configuration uses the following values set in the ConfigMap:

  - ``tm-pub-key-access``
  - ``tm-seeds``
  - ``tm-validator-power``
  - ``tm-validators``
  - ``tm-genesis-time``
  - ``tm-chain-id``
  - ``tm-abci-port``
  - ``bdb-instance-name``

* Create the Tendermint StatefulSet using:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-ss.yaml

* It might take up to 10 minutes for the disks, specified in the Persistent
  Volume Claims above, to be created and attached to the pod.
  The UI might show that the pod has errored with the message
  "timeout expired waiting for volumes to attach/mount". Use the CLI below
  to check the status of the pod in this case, instead of the UI.
  This happens due to a bug in Azure ACS.

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 get pods -w

.. _start-kubernetes-deployment-for-mdb-mon-agent-tmt:

Step 18: Start a Kubernetes Deployment for MongoDB Monitoring Agent
-------------------------------------------------------------------

* This configuration is located in the file
  ``mongodb-monitoring-agent/mongo-mon-dep.yaml``.

* Start the Kubernetes Deployment using:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-monitoring-agent/mongo-mon-dep.yaml


.. _start-kubernetes-deployment-bdb-tmt:

Step 19: Start a Kubernetes Deployment for BigchainDB
-----------------------------------------------------

* This configuration is located in the file
  ``bigchaindb/bigchaindb-dep-tm.yaml``.

* Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
  value set in ``bdb-instance-name`` in the ConfigMap, followed by
  ``-dep``. For example, if the value set in the
  ``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the
  value ``bdb-instance-0-dep``.
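
  Concretely, with ``bdb-instance-name`` set to ``bdb-instance-0``, those
  fields would look like this sketch (the ``apiVersion`` and omitted fields are
  placeholders; see the actual template file):

  .. code:: yaml

     # Illustrative name/label fields for the BigchainDB Deployment.
     apiVersion: extensions/v1beta1
     kind: Deployment
     metadata:
       name: bdb-instance-0-dep
     spec:
       template:
         metadata:
           labels:
             app: bdb-instance-0-dep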

* As we gain more experience running BigchainDB in testing and production,
  we will tweak the ``resources.limits`` values for CPU and memory, and as
  richer monitoring and probing becomes available in BigchainDB, we will
  tweak the ``livenessProbe`` and ``readinessProbe`` parameters.

* Set the ports to be exposed from the pod in the
  ``spec.containers[0].ports`` section. We currently expose 3 ports -
  ``bigchaindb-api-port``, ``bigchaindb-ws-port`` and ``tm-abci-port``.
  Set them to the values specified in the ConfigMap.

* The configuration uses the following values set in the ConfigMap:

  - ``bigchaindb-database-connection-timeout``
  - ``bigchaindb-log-level``
  - ``bdb-user``
  - ``tm-instance-name``
  - ``tm-rpc-port``

* The configuration uses the following values set in the Secret:

* Start the Kubernetes Deployment using:

  .. code:: bash

     $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-dep-tm.yaml

* You can check its status using the command ``kubectl get deployments -w``


.. _start-kubernetes-deployment-openresty-tmt:

Step 20: Start a Kubernetes Deployment for OpenResty
----------------------------------------------------

* This configuration is located in the file
* You can check its status using the command ``kubectl get deployments -w``


Step 21: Configure the MongoDB Cloud Manager
--------------------------------------------

Refer to the
:doc:`documentation <../production-deployment-template/cloud-manager>`
for details on how to configure the MongoDB Cloud Manager to enable
monitoring and backup.

.. _verify-and-test-bdb-tmt:

Step 22: Verify the BigchainDB Node Setup
-----------------------------------------

Step 22.1: Testing Internally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To test the setup of your BigchainDB node, you could use a Docker container
To test the BigchainDB instance:

.. code:: bash

   $ wsc -er ws://bdb-instance-0:9985/api/v1/streams/valid_transactions

To test the Tendermint instance:

.. code:: bash

   $ nslookup tm-instance-0

   $ dig +noall +answer _bdb-api-port._tcp.tm-instance-0.default.svc.cluster.local SRV

   $ dig +noall +answer _bdb-ws-port._tcp.tm-instance-0.default.svc.cluster.local SRV

   $ curl -X GET http://tm-instance-0:9986/pub_key.json

To test the OpenResty instance:

The above curl command should result in the response
``It looks like you are trying to access MongoDB over HTTP on the native driver port.``


Step 22.2: Testing Externally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Check the MongoDB monitoring agent on the MongoDB Cloud Manager
portal to verify it is working fine.
If you are using NGINX with HTTP support, access the cluster over ``http``
on the ``cluster-frontend-port``.
If you are using NGINX with HTTPS support, use ``https`` instead of
``http``.

Use the Python Driver to send some transactions to the BigchainDB node and
verify that your node or cluster works as expected.

Next, you can set up log analytics and monitoring by following our templates:

* :doc:`../production-deployment-template/log-analytics`.

Next, you can follow one of the following deployment templates:

* :doc:`node-on-kubernetes`.


Tectonic References
-------------------
CAUTION: You might end up deleting resources other than the ACS cluster.

   --name <name of resource group containing the cluster>


Next, you can :doc:`run a BigchainDB node/cluster (BFT) <node-on-kubernetes>`
on your new Kubernetes cluster.

to set up a production BigchainDB cluster.
We are constantly improving them.
You can modify them to suit your needs.

.. note::
   We use standalone MongoDB (without a replica set); BFT replication is
   handled by Tendermint.


.. _register-a-domain-and-get-an-ssl-certificate-for-it-tmt:

1. Register a Domain and Get an SSL Certificate for It
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The BigchainDB APIs (HTTP API and WebSocket API) should be served using TLS,
so the organization running the cluster
should choose an FQDN for their API (e.g. api.organization-x.com),
register the domain name,
and buy an SSL/TLS certificate for the FQDN.

.. _generate-the-blockchain-id-and-genesis-time:

2. Generate the Blockchain ID and Genesis Time
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Tendermint nodes require two parameters that must be common to, and shared
among, all the participants in the network:

* ``chain_id``: the ID of the blockchain. This must be unique for every blockchain.

  * Example: ``test-chain-9gHylg``

* ``genesis_time``: the official time of blockchain start.

  * Example: ``0001-01-01T00:00:00Z``

The preceding parameters can be generated using the ``tendermint init`` command.
To `initialize <https://tendermint.readthedocs.io/en/master/using-tendermint.html#initialize>`_ Tendermint,
you will need to `install Tendermint <https://tendermint.readthedocs.io/en/master/install.html>`_
and verify that a ``genesis.json`` file is created under the `Root Directory
<https://tendermint.readthedocs.io/en/master/using-tendermint.html#directory-root>`_. You can use
the ``genesis_time`` and ``chain_id`` from this example ``genesis.json`` file:

.. code:: json

   {
     "genesis_time": "0001-01-01T00:00:00Z",
     "chain_id": "test-chain-9gHylg",
     "validators": [
       {
         "pub_key": {
           "type": "ed25519",
           "data": "D12279E746D3724329E5DE33A5AC44D5910623AA6FB8CDDC63617C959383A468"
         },
         "power": 10,
         "name": ""
       }
     ],
     "app_hash": ""
   }

.. _things-each-node-operator-must-do-tmt:

Things Each Node Operator Must Do
---------------------------------

☐ Set Up a Self-Signed Certificate Authority

We use SSL/TLS and self-signed certificates
for MongoDB authentication (and message encryption).
The certificates are signed by the organization managing the :ref:`bigchaindb-node`.
If your organization already has a process
for signing certificates
(i.e. an internal self-signed certificate authority [CA]),
then you can skip this step.
Otherwise, your organization must
:ref:`set up its own self-signed certificate authority <how-to-set-up-a-self-signed-certificate-authority>`.

☐ Follow a Standard and Unique Naming Convention:

☐ Name of the MongoDB instance (``mdb-instance-*``)

☐ Name of the BigchainDB instance (``bdb-instance-*``)

☐ Name of the NGINX instance (``ngx-http-instance-*`` or ``ngx-https-instance-*``)

☐ Name of the OpenResty instance (``openresty-instance-*``)

☐ Name of the MongoDB monitoring agent instance (``mdb-mon-instance-*``)

☐ Name of the Tendermint instance (``tm-instance-*``)

**Example**

.. code:: text

   {
     "MongoDB": [
       "mdb-instance-1",
       "mdb-instance-2",
       "mdb-instance-3",
       "mdb-instance-4"
     ],
     "BigchainDB": [
       "bdb-instance-1",
       "bdb-instance-2",
       "bdb-instance-3",
       "bdb-instance-4"
     ],
     "NGINX": [
       "ngx-instance-1",
       "ngx-instance-2",
       "ngx-instance-3",
       "ngx-instance-4"
     ],
     "OpenResty": [
       "openresty-instance-1",
       "openresty-instance-2",
       "openresty-instance-3",
       "openresty-instance-4"
     ],
     "MongoDB_Monitoring_Agent": [
       "mdb-mon-instance-1",
       "mdb-mon-instance-2",
       "mdb-mon-instance-3",
       "mdb-mon-instance-4"
     ],
     "Tendermint": [
       "tendermint-instance-1",
       "tendermint-instance-2",
       "tendermint-instance-3",
       "tendermint-instance-4"
     ]
   }


☐ Generate three keys and corresponding certificate signing requests (CSRs):

#. Server Certificate for the MongoDB instance
#. Client Certificate for BigchainDB Server to identify itself to MongoDB
#. Client Certificate for MongoDB Monitoring Agent to identify itself to MongoDB

Use the self-signed CA to sign those three CSRs. For help, see the pages:

* :doc:`How to Generate a Server Certificate for MongoDB <../production-deployment-template/server-tls-certificate>`
* :doc:`How to Generate a Client Certificate for MongoDB <../production-deployment-template/client-tls-certificate>`
|

☐ Make up an FQDN for your BigchainDB node (e.g. ``mynode.mycorp.com``).
Make sure you've registered the associated domain name (e.g. ``mycorp.com``),
and have an SSL certificate for the FQDN.
(You can get an SSL certificate from any SSL certificate provider.)


☐ Ask the BigchainDB node operator/owner for the username to use for
authenticating to MongoDB.


☐ If the cluster uses 3scale for API authentication, monitoring and billing,
you must ask the BigchainDB node operator/owner for all relevant 3scale credentials -
secret token, service ID, version header and API service token.


☐ If the cluster uses MongoDB Cloud Manager for monitoring,
you must ask the managing organization for the ``Project ID`` and the
``Agent API Key``.
(Each Cloud Manager "Project" has its own ``Project ID``. A ``Project ID`` can
allow easier periodic rotation of the ``Agent API Key`` with a constant
``Project ID``.)

☐ :doc:`Deploy a Kubernetes cluster on Azure <../production-deployment-template/template-kubernetes-azure>`.


☐ You can now proceed to set up your :ref:`BigchainDB node
<kubernetes-template-deploy-a-single-bigchaindb-node>`.