Merge branch 'master' into kyber-master-feat-cors
This commit is contained in: commit 7055c21fdd
@@ -1,11 +1,9 @@
benchmarking-tests export-ignore
deploy-cluster-aws export-ignore
docs export-ignore
ntools export-ignore
speed-tests export-ignore
tests export-ignore
.gitattributes export-ignore
.gitignore export-ignore
.travis.yml export-ignore
*.md export-ignore
codecov.yml export-ignore
@@ -71,8 +71,6 @@ deploy-cluster-aws/confiles/
deploy-cluster-aws/client_confile
deploy-cluster-aws/hostlist.py
deploy-cluster-aws/ssh_key.py
benchmarking-tests/hostlist.py
benchmarking-tests/ssh_key.py

# Ansible-specific files
ntools/one-m/ansible/hosts

@@ -80,7 +78,7 @@ ntools/one-m/ansible/ansible.cfg

# Just in time documentation
docs/server/source/schema
docs/server/source/drivers-clients/samples
docs/server/source/http-samples

# Terraform state files
# See https://stackoverflow.com/a/41482391
CHANGELOG.md (73 changed lines)
@@ -15,6 +15,79 @@ For reference, the possible headings are:
* **External Contributors** to list contributors outside of BigchainDB GmbH.
* **Notes**

## [0.10.1] - 2017-04-19
Tag name: v0.10.1

### Added
* Documentation for the BigchainDB settings `wsserver.host` and `wsserver.port`. [Pull Request #1408](https://github.com/bigchaindb/bigchaindb/pull/1408)

### Fixed
* Fixed `Dockerfile`, which was failing to build. It now starts `FROM python:3.6` (instead of `FROM ubuntu:xenial`). [Pull Request #1410](https://github.com/bigchaindb/bigchaindb/pull/1410)
* Fixed the `Makefile` so that `release` depends on `dist`. [Pull Request #1405](https://github.com/bigchaindb/bigchaindb/pull/1405)

## [0.10.0] - 2017-04-18
Tag name: v0.10.0

### Added
* Improved logging. Added logging to file. Added `--log-level` option to `bigchaindb start` command. Added new logging configuration settings. Pull Requests
[#1285](https://github.com/bigchaindb/bigchaindb/pull/1285),
[#1307](https://github.com/bigchaindb/bigchaindb/pull/1307),
[#1324](https://github.com/bigchaindb/bigchaindb/pull/1324),
[#1326](https://github.com/bigchaindb/bigchaindb/pull/1326),
[#1327](https://github.com/bigchaindb/bigchaindb/pull/1327),
[#1330](https://github.com/bigchaindb/bigchaindb/pull/1330),
[#1365](https://github.com/bigchaindb/bigchaindb/pull/1365),
[#1394](https://github.com/bigchaindb/bigchaindb/pull/1394),
[#1396](https://github.com/bigchaindb/bigchaindb/pull/1396),
[#1398](https://github.com/bigchaindb/bigchaindb/pull/1398) and
[#1402](https://github.com/bigchaindb/bigchaindb/pull/1402)
* Events API using WebSocket protocol. Pull Requests
[#1086](https://github.com/bigchaindb/bigchaindb/pull/1086),
[#1347](https://github.com/bigchaindb/bigchaindb/pull/1347),
[#1349](https://github.com/bigchaindb/bigchaindb/pull/1349),
[#1356](https://github.com/bigchaindb/bigchaindb/pull/1356),
[#1368](https://github.com/bigchaindb/bigchaindb/pull/1368),
[#1401](https://github.com/bigchaindb/bigchaindb/pull/1401) and
[#1403](https://github.com/bigchaindb/bigchaindb/pull/1403)
* Initial support for using SSL with MongoDB (work in progress). Pull Requests
[#1299](https://github.com/bigchaindb/bigchaindb/pull/1299) and
[#1348](https://github.com/bigchaindb/bigchaindb/pull/1348)

### Changed
* The main BigchainDB Dockerfile (and its generated Docker image) now contains only BigchainDB Server. (It used to contain both BigchainDB Server and RethinkDB.) You must now run MongoDB or RethinkDB in a separate Docker container. [Pull Request #1174](https://github.com/bigchaindb/bigchaindb/pull/1174)
* Made separate schemas for CREATE and TRANSFER transactions. [Pull Request #1257](https://github.com/bigchaindb/bigchaindb/pull/1257)
* When signing transactions with threshold conditions, we now sign all subconditions for a public key. [Pull Request #1294](https://github.com/bigchaindb/bigchaindb/pull/1294)
* Many changes to the voting-related code, including how we validate votes and prevent duplicate votes by the same node. Pull Requests [#1215](https://github.com/bigchaindb/bigchaindb/pull/1215) and [#1258](https://github.com/bigchaindb/bigchaindb/pull/1258)

### Removed
* Removed the `bigchaindb load` command. Pull Requests
[#1261](https://github.com/bigchaindb/bigchaindb/pull/1261),
[#1273](https://github.com/bigchaindb/bigchaindb/pull/1273) and
[#1301](https://github.com/bigchaindb/bigchaindb/pull/1301)
* Removed old `/speed-tests` and `/benchmarking-tests` directories. [Pull Request #1359](https://github.com/bigchaindb/bigchaindb/pull/1359)

### Fixed
* Fixed the URL of the BigchainDB docs returned by the HTTP API. [Pull Request #1178](https://github.com/bigchaindb/bigchaindb/pull/1178)
* Fixed the MongoDB changefeed: it wasn't reporting update operations. [Pull Request #1193](https://github.com/bigchaindb/bigchaindb/pull/1193)
* Fixed the block-creation process: it wasn't checking if the transaction was previously included in:
  * a valid block. [Pull Request #1208](https://github.com/bigchaindb/bigchaindb/pull/1208)
  * the block-under-construction. Pull Requests [#1237](https://github.com/bigchaindb/bigchaindb/issues/1237) and [#1377](https://github.com/bigchaindb/bigchaindb/issues/1377)

### External Contributors
In alphabetical order by GitHub username:
* @anryko - [Pull Request #1277](https://github.com/bigchaindb/bigchaindb/pull/1277)
* @anujism - [Pull Request #1366](https://github.com/bigchaindb/bigchaindb/pull/1366)
* @jackric - [Pull Request #1365](https://github.com/bigchaindb/bigchaindb/pull/1365)
* @lavinasachdev3 - [Pull Request #1358](https://github.com/bigchaindb/bigchaindb/pull/1358)
* @morrme - [Pull Request #1340](https://github.com/bigchaindb/bigchaindb/pull/1340)
* @tomconte - [Pull Request #1299](https://github.com/bigchaindb/bigchaindb/pull/1299)
* @tymlez - Pull Requests [#1108](https://github.com/bigchaindb/bigchaindb/pull/1108) & [#1209](https://github.com/bigchaindb/bigchaindb/pull/1209)

### Notes
* MongoDB is now the recommended database backend (not RethinkDB).
* There are some initial docs about how to deploy a BigchainDB node on Kubernetes. It's work in progress.


## [0.9.5] - 2017-03-29
Tag name: v0.9.5
@@ -145,6 +145,13 @@ Once you accept and submit the CLA, we'll email you with further instructions. (

Someone will then merge your branch or suggest changes. If we suggest changes, you won't have to open a new pull request; you can just push new code to the same branch (on `origin`) as you did before creating the pull request.

### Tip: Upgrading All BigchainDB Dependencies

Over time, your versions of the Python packages used by BigchainDB will get out of date. You can upgrade them using:
```text
pip install --upgrade -e .[dev]
```

## Quick Links

* [BigchainDB Community links](https://www.bigchaindb.com/community)
Dockerfile (39 changed lines)
@@ -1,40 +1,17 @@
FROM ubuntu:xenial

ENV LANG en_US.UTF-8
ENV DEBIAN_FRONTEND noninteractive

FROM python:3.6
LABEL maintainer "dev@bigchaindb.com"
RUN mkdir -p /usr/src/app
COPY . /usr/src/app/
WORKDIR /usr/src/app

RUN locale-gen en_US.UTF-8 && \
    apt-get -q update && \
    apt-get install -qy --no-install-recommends \
        python3 \
        python3-pip \
        libffi-dev \
        python3-dev \
        build-essential && \
    \
    pip3 install --upgrade --no-cache-dir pip setuptools && \
    \
    pip3 install --no-cache-dir -e . && \
    \
    apt-get remove -qy --purge gcc cpp binutils perl && \
    apt-get -qy autoremove && \
    apt-get -q clean all && \
    rm -rf /usr/share/perl /usr/share/perl5 /usr/share/man /usr/share/info /usr/share/doc && \
    rm -rf /var/lib/apt/lists/*

RUN apt-get -qq update \
    && apt-get -y upgrade \
    && pip install --no-cache-dir . \
    && apt-get autoremove \
    && apt-get clean
VOLUME ["/data"]
WORKDIR /data

ENV BIGCHAINDB_CONFIG_PATH /data/.bigchaindb
ENV BIGCHAINDB_SERVER_BIND 0.0.0.0:9984
# BigchainDB Server doesn't need BIGCHAINDB_API_ENDPOINT any more
# but maybe our Docker or Docker Compose stuff does?
# ENV BIGCHAINDB_API_ENDPOINT http://bigchaindb:9984/api/v1

ENV BIGCHAINDB_WSSERVER_HOST 0.0.0.0
ENTRYPOINT ["bigchaindb"]

CMD ["start"]
@@ -1,13 +1,21 @@
FROM python:3.5
FROM python:3.6
LABEL maintainer "dev@bigchaindb.com"

RUN apt-get update && apt-get install -y python3.4 vim
RUN apt-get update \
    && apt-get install -y vim \
    && pip install pynacl \
    && apt-get autoremove \
    && apt-get clean

VOLUME ["/data"]
WORKDIR /data

ENV BIGCHAINDB_CONFIG_PATH /data/.bigchaindb
ENV BIGCHAINDB_SERVER_BIND 0.0.0.0:9984
ENV BIGCHAINDB_WSSERVER_HOST 0.0.0.0

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

RUN pip install --upgrade pip

COPY . /usr/src/app/

WORKDIR /usr/src/app
RUN pip install --no-cache-dir -e .[dev]
RUN bigchaindb -y configure rethinkdb
RUN bigchaindb -y configure mongodb
@@ -51,3 +51,15 @@ END BLOCK
(END OF EMAIL)

The next step is to wait for them to copy that comment into the comments of the indicated pull request. Once they do so, it's safe to merge the pull request.

## How to Handle CLA Agreement Emails with No Associated Pull Request

Reply with an email like this:

Hi [First Name],

Today I got an email (copied below) to tell me that you agreed to the BigchainDB Contributor License Agreement. Did you intend to do that?

If no, then you can ignore this email.

If yes, then there's another step to connect your email address with your GitHub account. To do that, you must first create a pull request in one of the BigchainDB repositories on GitHub. Once you've done that, please reply to this email with a link to the pull request. Then I'll send you a special block of text to paste into the comments on that pull request.
Makefile (2 changed lines)
@@ -70,7 +70,7 @@ docs: ## generate Sphinx HTML documentation, including API docs
servedocs: docs ## compile the docs watching for changes
	watchmedo shell-command -p '*.rst' -c '$(MAKE) -C docs html' -R -D .

release: clean ## package and upload a release
release: dist ## package and upload a release
	twine upload dist/*

dist: clean ## builds source (and not for now, wheel package)
@@ -27,6 +27,7 @@ A patch release is similar to a minor release, but piggybacks on an existing min
1. Apply the changes you want, e.g. using `git cherry-pick`.
1. Update the `CHANGELOG.md` file
1. Increment the patch version in `bigchaindb/version.py`, e.g. "0.9.1"
1. Commit that change, and push the updated branch to GitHub
1. Follow steps outlined in [Common Steps](#common-steps)
1. Cherry-pick the `CHANGELOG.md` update commit (made above) to the `master` branch
@@ -1,3 +0,0 @@
# Benchmarking tests

This folder contains util files and test case folders to benchmark the performance of a BigchainDB cluster.
@@ -1,154 +0,0 @@
import multiprocessing as mp
import uuid
import argparse
import csv
import time
import logging
import rethinkdb as r

from bigchaindb.common.transaction import Transaction

from bigchaindb import Bigchain
from bigchaindb.utils import ProcessGroup
from bigchaindb.commands import utils


SIZE_OF_FILLER = {'minimal': 0,
                  'small': 10**3,
                  'medium': 10**4,
                  'large': 10**5}


logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def create_write_transaction(tx_left, payload_filler):
    b = Bigchain()
    payload_dict = {}
    if payload_filler:
        payload_dict['filler'] = payload_filler
    while tx_left > 0:
        # Include a random uuid string in the payload
        # to prevent duplicate transactions
        # (i.e. transactions with the same hash)
        payload_dict['msg'] = str(uuid.uuid4())
        tx = Transaction.create([b.me], [b.me], payload=payload_dict)
        tx = tx.sign([b.me_private])
        b.write_transaction(tx)
        tx_left -= 1


def run_add_backlog(args):
    tx_left = args.num_transactions // mp.cpu_count()
    payload_filler = 'x' * SIZE_OF_FILLER[args.payload_size]
    workers = ProcessGroup(target=create_write_transaction,
                           args=(tx_left, payload_filler))
    workers.start()


def run_gather_metrics(args):
    # setup a rethinkdb connection
    conn = r.connect(args.bigchaindb_host, 28015, 'bigchain')

    # setup csv writer
    csv_file = open(args.csvfile, 'w')
    csv_writer = csv.writer(csv_file)

    # query for the number of transactions on the backlog
    num_transactions = r.table('backlog').count().run(conn)
    num_transactions_received = 0
    initial_time = None
    logger.info('Starting gathering metrics.')
    logger.info('{} transactions in the backlog'.format(num_transactions))
    logger.info('This process should exit automatically. '
                'If this does not happen you can exit at any time using Ctrl-C '
                'saving all the metrics gathered up to this point.')

    logger.info('\t{:<20} {:<20} {:<20} {:<20}'.format(
        'timestamp',
        'tx in block',
        'tx/s',
        '% complete'
    ))

    # listen to the changefeed
    try:
        for change in r.table('bigchain').changes().run(conn):
            # check only for new blocks
            if change['old_val'] is None:
                block_num_transactions = len(
                    change['new_val']['block']['transactions']
                )
                time_now = time.time()
                csv_writer.writerow(
                    [str(time_now), str(block_num_transactions)]
                )

                # log statistics
                if initial_time is None:
                    initial_time = time_now

                num_transactions_received += block_num_transactions
                elapsed_time = time_now - initial_time
                percent_complete = round(
                    (num_transactions_received / num_transactions) * 100
                )

                if elapsed_time != 0:
                    transactions_per_second = round(
                        num_transactions_received / elapsed_time
                    )
                else:
                    transactions_per_second = float('nan')

                logger.info('\t{:<20} {:<20} {:<20} {:<20}'.format(
                    time_now,
                    block_num_transactions,
                    transactions_per_second,
                    percent_complete
                ))

                if (num_transactions - num_transactions_received) == 0:
                    break
    except KeyboardInterrupt:
        logger.info('Interrupted. Exiting early...')
    finally:
        # close files
        csv_file.close()


def main():
    parser = argparse.ArgumentParser(description='BigchainDB benchmarking utils')
    subparsers = parser.add_subparsers(title='Commands', dest='command')

    # add transactions to backlog
    backlog_parser = subparsers.add_parser('add-backlog',
                                           help='Add transactions to the backlog')
    backlog_parser.add_argument('num_transactions',
                                metavar='num_transactions',
                                type=int, default=0,
                                help='Number of transactions to add to the backlog')
    backlog_parser.add_argument('-s', '--payload-size',
                                choices=SIZE_OF_FILLER.keys(),
                                default='minimal',
                                help='Payload size')

    # metrics
    metrics_parser = subparsers.add_parser('gather-metrics',
                                           help='Gather metrics to a csv file')

    metrics_parser.add_argument('-b', '--bigchaindb-host',
                                required=True,
                                help=('Bigchaindb node hostname to connect '
                                      'to gather cluster metrics'))

    metrics_parser.add_argument('-c', '--csvfile',
                                required=True,
                                help='Filename to save the metrics')

    utils.start(parser, globals())


if __name__ == '__main__':
    main()
@@ -1,46 +0,0 @@
from __future__ import with_statement, unicode_literals

from fabric.api import sudo, env, hosts
from fabric.api import task, parallel
from fabric.contrib.files import sed
from fabric.operations import run, put
from fabric.context_managers import settings

from hostlist import public_dns_names
from ssh_key import ssh_key_path

# Ignore known_hosts
# http://docs.fabfile.org/en/1.10/usage/env.html#disable-known-hosts
env.disable_known_hosts = True

# What remote servers should Fabric connect to? With what usernames?
env.user = 'ubuntu'
env.hosts = public_dns_names

# SSH key files to try when connecting:
# http://docs.fabfile.org/en/1.10/usage/env.html#key-filename
env.key_filename = ssh_key_path


@task
@parallel
def put_benchmark_utils():
    put('benchmark_utils.py')


@task
@parallel
def prepare_backlog(num_transactions=10000):
    run('python3 benchmark_utils.py add-backlog {}'.format(num_transactions))


@task
@parallel
def start_bigchaindb():
    run('screen -d -m bigchaindb start &', pty=False)


@task
@parallel
def kill_bigchaindb():
    run('killall bigchaindb')
@@ -1,20 +0,0 @@
# Transactions per second

Measure how many blocks per second are created on the _bigchain_ with a pre-filled backlog.

1. Deploy an AWS cluster: https://docs.bigchaindb.com/projects/server/en/latest/clusters-feds/aws-testing-cluster.html
2. Make a symbolic link to hostlist.py: `ln -s ../deploy-cluster-aws/hostlist.py .`
3. Make a symbolic link to bigchaindb.pem:
```bash
mkdir pem
cd pem
ln -s ../deploy-cluster-aws/pem/bigchaindb.pem .
```

Then:

```bash
fab put_benchmark_utils
fab prepare_backlog:<num txs per node> # wait for process to finish
fab start_bigchaindb
```
@@ -1,28 +1,53 @@
import copy
import logging
import os

from bigchaindb.log.configs import SUBSCRIBER_LOGGING_CONFIG as log_config

# from functools import reduce
# PORT_NUMBER = reduce(lambda x, y: x * y, map(ord, 'BigchainDB')) % 2**16
# basically, the port number is 9984

_database_rethinkdb = {
    'backend': os.environ.get('BIGCHAINDB_DATABASE_BACKEND', 'rethinkdb'),

_base_database_rethinkdb = {
    'host': os.environ.get('BIGCHAINDB_DATABASE_HOST', 'localhost'),
    'port': int(os.environ.get('BIGCHAINDB_DATABASE_PORT', 28015)),
    'name': os.environ.get('BIGCHAINDB_DATABASE_NAME', 'bigchain'),
    'connection_timeout': 5000,
    'max_tries': 3,
}

_database_mongodb = {
    'backend': os.environ.get('BIGCHAINDB_DATABASE_BACKEND', 'mongodb'),
# The following variable is used by `bigchaindb configure` to
# prompt the user for database values. We cannot rely on
# _base_database_rethinkdb.keys() or _base_database_mongodb.keys()
# because dicts are unordered. I tried to configure

_database_keys_map = {
    'mongodb': ('host', 'port', 'name', 'replicaset'),
    'rethinkdb': ('host', 'port', 'name')
}

_base_database_mongodb = {
    'host': os.environ.get('BIGCHAINDB_DATABASE_HOST', 'localhost'),
    'port': int(os.environ.get('BIGCHAINDB_DATABASE_PORT', 27017)),
    'name': os.environ.get('BIGCHAINDB_DATABASE_NAME', 'bigchain'),
    'replicaset': os.environ.get('BIGCHAINDB_DATABASE_REPLICASET', 'bigchain-rs'),
    'ssl': bool(os.environ.get('BIGCHAINDB_DATABASE_SSL', False)),
    'login': os.environ.get('BIGCHAINDB_DATABASE_LOGIN'),
    'password': os.environ.get('BIGCHAINDB_DATABASE_PASSWORD')
}

_database_rethinkdb = {
    'backend': os.environ.get('BIGCHAINDB_DATABASE_BACKEND', 'rethinkdb'),
    'connection_timeout': 5000,
    'max_tries': 3,
}
_database_rethinkdb.update(_base_database_rethinkdb)

_database_mongodb = {
    'backend': os.environ.get('BIGCHAINDB_DATABASE_BACKEND', 'mongodb'),
    'connection_timeout': 5000,
    'max_tries': 3,
}
_database_mongodb.update(_base_database_mongodb)

_database_map = {
    'mongodb': _database_mongodb,
@@ -34,9 +59,15 @@ config = {
        # Note: this section supports all the Gunicorn settings:
        # - http://docs.gunicorn.org/en/stable/settings.html
        'bind': os.environ.get('BIGCHAINDB_SERVER_BIND') or 'localhost:9984',
        'loglevel': logging.getLevelName(
            log_config['handlers']['console']['level']).lower(),
        'workers': None,  # if none, the value will be cpu_count * 2 + 1
        'threads': None,  # if none, the value will be cpu_count * 2 + 1
    },
    'wsserver': {
        'host': os.environ.get('BIGCHAINDB_WSSERVER_HOST') or 'localhost',
        'port': int(os.environ.get('BIGCHAINDB_WSSERVER_PORT', 9985)),
    },
    'database': _database_map[
        os.environ.get('BIGCHAINDB_DATABASE_BACKEND', 'rethinkdb')
    ],
@@ -47,19 +78,17 @@ config = {
    'keyring': [],
    'backlog_reassign_delay': 120,
    'log': {
        # TODO Document here or elsewhere.
        # Example of config:
        # 'file': '/var/log/bigchaindb.log',
        # 'level_console': 'info',
        # 'level_logfile': 'info',
        # 'datefmt_console': '%Y-%m-%d %H:%M:%S',
        # 'datefmt_logfile': '%Y-%m-%d %H:%M:%S',
        # 'fmt_console': '%(asctime)s [%(levelname)s] (%(name)s) %(message)s',
        # 'fmt_logfile': '%(asctime)s [%(levelname)s] (%(name)s) %(message)s',
        # 'granular_levels': {
        #     'bigchaindb.backend': 'info',
        #     'bigchaindb.core': 'info',
        # },
        'file': log_config['handlers']['file']['filename'],
        'error_file': log_config['handlers']['errors']['filename'],
        'level_console': logging.getLevelName(
            log_config['handlers']['console']['level']).lower(),
        'level_logfile': logging.getLevelName(
            log_config['handlers']['file']['level']).lower(),
        'datefmt_console': log_config['formatters']['console']['datefmt'],
        'datefmt_logfile': log_config['formatters']['file']['datefmt'],
        'fmt_console': log_config['formatters']['console']['format'],
        'fmt_logfile': log_config['formatters']['file']['format'],
        'granular_levels': {},
    },
}
@@ -12,7 +12,8 @@ import sys
from bigchaindb.common import crypto
from bigchaindb.common.exceptions import (StartupError,
                                          DatabaseAlreadyExists,
                                          KeypairNotFoundException)
                                          KeypairNotFoundException,
                                          DatabaseDoesNotExist)
import bigchaindb
from bigchaindb import backend, processes
from bigchaindb.backend import schema
@@ -87,26 +88,25 @@ def run_configure(args, skip_if_exists=False):
    # select the correct config defaults based on the backend
    print('Generating default configuration for backend {}'
          .format(args.backend), file=sys.stderr)
    database_keys = bigchaindb._database_keys_map[args.backend]
    conf['database'] = bigchaindb._database_map[args.backend]

    if not args.yes:
        for key in ('bind', ):
            val = conf['server'][key]
            conf['server'][key] = \
                input_on_stderr('API Server {}? (default `{}`): '.format(key, val)) \
                or val
            conf['server'][key] = input_on_stderr('API Server {}? (default `{}`): '.format(key, val), val)

        for key in ('host', 'port', 'name'):
        for key in ('host', 'port'):
            val = conf['wsserver'][key]
            conf['wsserver'][key] = input_on_stderr('WebSocket Server {}? (default `{}`): '.format(key, val), val)

        for key in database_keys:
            val = conf['database'][key]
            conf['database'][key] = \
                input_on_stderr('Database {}? (default `{}`): '.format(key, val)) \
                or val
            conf['database'][key] = input_on_stderr('Database {}? (default `{}`): '.format(key, val), val)

        val = conf['backlog_reassign_delay']
        conf['backlog_reassign_delay'] = \
            input_on_stderr(('Stale transaction reassignment delay (in '
                             'seconds)? (default `{}`): '.format(val))) \
            or val
        conf['backlog_reassign_delay'] = input_on_stderr(
            'Stale transaction reassignment delay (in seconds)? (default `{}`): '.format(val), val)

    if config_path != '-':
        bigchaindb.config_utils.write_config(conf, config_path)
@@ -166,7 +166,10 @@ def run_drop(args):

    conn = backend.connect()
    dbname = bigchaindb.config['database']['name']
    schema.drop_database(conn, dbname)
    try:
        schema.drop_database(conn, dbname)
    except DatabaseDoesNotExist:
        print("Cannot drop '{name}'. The database does not exist.".format(name=dbname), file=sys.stderr)


@configure_bigchaindb
@@ -36,7 +36,10 @@ def configure_bigchaindb(command):
    def configure(args):
        try:
            config_from_cmdline = {
                'log': {'level_console': args.log_level},
                'log': {
                    'level_console': args.log_level,
                    'level_logfile': args.log_level,
                },
                'server': {'loglevel': args.log_level},
            }
        except AttributeError:
@@ -74,12 +77,50 @@ def start_logging_process(command):
    return start_logging


def _convert(value, default=None, convert=None):
    def convert_bool(value):
        if value.lower() in ('true', 't', 'yes', 'y'):
            return True
        if value.lower() in ('false', 'f', 'no', 'n'):
            return False
        raise ValueError('{} cannot be converted to bool'.format(value))

    if value == '':
        value = None

    if convert is None:
        if default is not None:
            convert = type(default)
        else:
            convert = str

    if convert == bool:
        convert = convert_bool

    if value is None:
        return default
    else:
        return convert(value)


# We need this because `input` always prints on stdout, while it should print
# to stderr. It's a very old bug, check it out here:
# - https://bugs.python.org/issue1927
def input_on_stderr(prompt=''):
def input_on_stderr(prompt='', default=None, convert=None):
    """Output a string to stderr and wait for input.

    Args:
        prompt (str): the message to display.
        default: the default value to return if the user
            leaves the field empty
        convert (callable): a callable to be used to convert
            the value the user inserted. If None, the type of
            ``default`` will be used.
    """

    print(prompt, end='', file=sys.stderr)
    return builtins.input()
    value = builtins.input()
    return _convert(value, default, convert)


def start_rethinkdb():
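The `_convert` helper in this hunk infers the converter from the default value's type, with a textual parser for booleans (since `bool('false')` is `True` in Python). A standalone sketch of the same idea, reimplemented here purely for illustration (the name `convert_input` is ours, not part of BigchainDB's API):

```python
def convert_bool(value):
    # Accept common textual spellings of booleans.
    if value.lower() in ('true', 't', 'yes', 'y'):
        return True
    if value.lower() in ('false', 'f', 'no', 'n'):
        return False
    raise ValueError('{} cannot be converted to bool'.format(value))


def convert_input(value, default=None, convert=None):
    # Empty input means "use the default".
    if value == '':
        value = None
    if convert is None:
        # Infer the converter from the default's type, falling back to str.
        convert = type(default) if default is not None else str
    if convert == bool:
        # bool('false') would be True, so use a textual parser instead.
        convert = convert_bool
    return default if value is None else convert(value)


print(convert_input('', default=9984))      # empty input keeps the default
print(convert_input('9985', default=9984))  # converted with int, the default's type
print(convert_input('no', default=True))    # parsed textually, not with bool()
```

This is why the prompts in `run_configure` above can drop the `... or val` idiom: the default handling moves into one place.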
@@ -198,6 +239,7 @@ base_parser.add_argument('-c', '--config',
                         '(use "-" for stdout)')

base_parser.add_argument('-l', '--log-level',
                         type=str.upper,  # convert to uppercase for comparison to choices
                         choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'],
                         default='INFO',
                         help='Log level')
@@ -132,7 +132,8 @@ definitions:
        - public_keys
      properties:
        amount:
          type: integer
          type: string
          pattern: "^[0-9]{1,20}$"
          description: |
            Integral amount of the asset represented by this output.
            In the case of a non divisible asset, this will always be 1.
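The hunk above changes `amount` from a JSON integer to a digit string matching `^[0-9]{1,20}$`; serializing amounts as strings avoids JSON implementations that lose precision on large integers. A minimal sketch of how a consumer might validate and parse such a field (the helper name and the use of `ValueError` are illustrative, not BigchainDB's API):

```python
import re

# Pattern from the schema change above: a decimal string of 1-20 digits.
AMOUNT_PATTERN = re.compile(r'^[0-9]{1,20}$')


def parse_amount(raw):
    """Parse a transaction output amount serialized as a digit string."""
    if not isinstance(raw, str) or not AMOUNT_PATTERN.match(raw):
        raise ValueError('Invalid amount: %r' % (raw,))
    return int(raw)


print(parse_amount('1'))                    # smallest valid amount
print(parse_amount('9000000000000000000'))  # large amounts survive as exact ints
```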
@@ -158,10 +159,6 @@ definitions:
          "$ref": "#/definitions/public_keys"
        description: |
          List of public keys associated with the conditions on an output.
        amount:
          type: integer
          description: |
            Integral amount of the asset represented by this condition.
    input:
      type: "object"
      description:
@@ -209,6 +209,8 @@ class Output(object):
            owners before a Transaction was confirmed.
    """

    MAX_AMOUNT = 9 * 10 ** 18

    def __init__(self, fulfillment, public_keys=None, amount=1):
        """Create an instance of a :class:`~.Output`.
@@ -229,6 +231,8 @@ class Output(object):
            raise TypeError('`amount` must be an int')
        if amount < 1:
            raise AmountError('`amount` must be greater than 0')
        if amount > self.MAX_AMOUNT:
            raise AmountError('`amount` must be <= %s' % self.MAX_AMOUNT)

        self.fulfillment = fulfillment
        self.amount = amount
@@ -264,7 +268,7 @@ class Output(object):
        output = {
            'public_keys': self.public_keys,
            'condition': condition,
            'amount': self.amount
            'amount': str(self.amount),
        }
        return output
@@ -381,7 +385,11 @@ class Output(object):
        except KeyError:
            # NOTE: Hashlock condition case
            fulfillment = data['condition']['uri']
        return cls(fulfillment, data['public_keys'], data['amount'])
        try:
            amount = int(data['amount'])
        except ValueError:
            raise AmountError('Invalid amount: %s' % data['amount'])
        return cls(fulfillment, data['public_keys'], amount)


class Transaction(object):
@@ -686,22 +694,16 @@
        key_pairs = {gen_public_key(PrivateKey(private_key)):
                     PrivateKey(private_key) for private_key in private_keys}

        for index, input_ in enumerate(self.inputs):
            # NOTE: We clone the current transaction but only add the output
            # and input we're currently working on plus all
            # previously signed ones.
            tx_partial = Transaction(self.operation, self.asset, [input_],
                                     self.outputs, self.metadata,
                                     self.version)

            tx_partial_dict = tx_partial.to_dict()
            tx_partial_dict = Transaction._remove_signatures(tx_partial_dict)
            tx_serialized = Transaction._to_str(tx_partial_dict)
            self._sign_input(input_, index, tx_serialized, key_pairs)
        tx_dict = self.to_dict()
        tx_dict = Transaction._remove_signatures(tx_dict)
        tx_serialized = Transaction._to_str(tx_dict)
        for i, input_ in enumerate(self.inputs):
            self.inputs[i] = self._sign_input(input_, tx_serialized, key_pairs)
        return self

    def _sign_input(self, input_, index, tx_serialized, key_pairs):
        """Signs a single Input with a partial Transaction as message.
    @classmethod
    def _sign_input(cls, input_, message, key_pairs):
        """Signs a single Input.

        Note:
            This method works only for the following Cryptoconditions
@ -712,31 +714,27 @@ class Transaction(object):
|
|||
Args:
|
||||
input_ (:class:`~bigchaindb.common.transaction.
|
||||
Input`) The Input to be signed.
|
||||
index (int): The index of the input to be signed.
|
||||
tx_serialized (str): The Transaction to be used as message.
|
||||
message (str): The message to be signed
|
||||
key_pairs (dict): The keys to sign the Transaction with.
|
||||
"""
|
||||
if isinstance(input_.fulfillment, Ed25519Fulfillment):
|
||||
self._sign_simple_signature_fulfillment(input_, index,
|
||||
tx_serialized, key_pairs)
|
||||
return cls._sign_simple_signature_fulfillment(input_, message,
|
||||
key_pairs)
|
||||
elif isinstance(input_.fulfillment, ThresholdSha256Fulfillment):
|
||||
self._sign_threshold_signature_fulfillment(input_, index,
|
||||
tx_serialized,
|
||||
key_pairs)
|
||||
return cls._sign_threshold_signature_fulfillment(input_, message,
|
||||
key_pairs)
|
||||
else:
|
||||
raise ValueError("Fulfillment couldn't be matched to "
|
||||
'Cryptocondition fulfillment type.')
|
||||
|
||||
def _sign_simple_signature_fulfillment(self, input_, index,
|
||||
tx_serialized, key_pairs):
|
||||
@classmethod
|
||||
def _sign_simple_signature_fulfillment(cls, input_, message, key_pairs):
|
||||
"""Signs a Ed25519Fulfillment.
|
||||
|
||||
Args:
|
||||
input_ (:class:`~bigchaindb.common.transaction.
|
||||
Input`) The input to be signed.
|
||||
index (int): The index of the input to be
|
||||
signed.
|
||||
tx_serialized (str): The Transaction to be used as message.
|
||||
message (str): The message to be signed
|
||||
key_pairs (dict): The keys to sign the Transaction with.
|
||||
"""
|
||||
# NOTE: To eliminate the dangers of accidentally signing a condition by
|
||||
|
@ -748,23 +746,21 @@ class Transaction(object):
|
|||
try:
|
||||
# cryptoconditions makes no assumptions of the encoding of the
|
||||
# message to sign or verify. It only accepts bytestrings
|
||||
input_.fulfillment.sign(tx_serialized.encode(), key_pairs[public_key])
|
||||
input_.fulfillment.sign(message.encode(), key_pairs[public_key])
|
||||
except KeyError:
|
||||
raise KeypairMismatchException('Public key {} is not a pair to '
|
||||
'any of the private keys'
|
||||
.format(public_key))
|
||||
self.inputs[index] = input_
|
||||
return input_
|
||||
|
||||
def _sign_threshold_signature_fulfillment(self, input_, index,
|
||||
tx_serialized, key_pairs):
|
||||
@classmethod
|
||||
def _sign_threshold_signature_fulfillment(cls, input_, message, key_pairs):
|
||||
"""Signs a ThresholdSha256Fulfillment.
|
||||
|
||||
Args:
|
||||
input_ (:class:`~bigchaindb.common.transaction.
|
||||
Input`) The Input to be signed.
|
||||
index (int): The index of the Input to be
|
||||
signed.
|
||||
tx_serialized (str): The Transaction to be used as message.
|
||||
message (str): The message to be signed
|
||||
key_pairs (dict): The keys to sign the Transaction with.
|
||||
"""
|
||||
input_ = deepcopy(input_)
|
||||
|
@ -794,8 +790,8 @@ class Transaction(object):
|
|||
# cryptoconditions makes no assumptions of the encoding of the
|
||||
# message to sign or verify. It only accepts bytestrings
|
||||
for subffill in subffills:
|
||||
subffill.sign(tx_serialized.encode(), private_key)
|
||||
self.inputs[index] = input_
|
||||
subffill.sign(message.encode(), private_key)
|
||||
return input_
|
||||
|
||||
def inputs_valid(self, outputs=None):
|
||||
"""Validates the Inputs in the Transaction against given
|
||||
|
@ -848,24 +844,17 @@ class Transaction(object):
|
|||
raise ValueError('Inputs and '
|
||||
'output_condition_uris must have the same count')
|
||||
|
||||
def gen_tx(input_, output, output_condition_uri=None):
|
||||
"""Splits multiple IO Transactions into partial single IO
|
||||
Transactions.
|
||||
"""
|
||||
tx = Transaction(self.operation, self.asset, [input_],
|
||||
self.outputs, self.metadata, self.version)
|
||||
tx_dict = tx.to_dict()
|
||||
tx_dict = Transaction._remove_signatures(tx_dict)
|
||||
tx_serialized = Transaction._to_str(tx_dict)
|
||||
tx_dict = self.to_dict()
|
||||
tx_dict = Transaction._remove_signatures(tx_dict)
|
||||
tx_serialized = Transaction._to_str(tx_dict)
|
||||
|
||||
return self.__class__._input_valid(input_,
|
||||
self.operation,
|
||||
tx_serialized,
|
||||
output_condition_uri)
|
||||
def validate(i, output_condition_uri=None):
|
||||
""" Validate input against output condition URI """
|
||||
return self._input_valid(self.inputs[i], self.operation,
|
||||
tx_serialized, output_condition_uri)
|
||||
|
||||
partial_transactions = map(gen_tx, self.inputs,
|
||||
self.outputs, output_condition_uris)
|
||||
return all(partial_transactions)
|
||||
return all(validate(i, cond)
|
||||
for i, cond in enumerate(output_condition_uris))
|
||||
|
||||
@staticmethod
|
||||
def _input_valid(input_, operation, tx_serialized, output_condition_uri=None):
|
||||
|
|
|
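The change above replaces per-input partial-transaction serialization with a single serialization of the whole transaction (signatures removed), reused as the message for every input. A minimal sketch of that flow, with HMAC standing in for the Ed25519/threshold fulfillments of the cryptoconditions library (the helper name `sign_transaction` is made up for illustration):

```python
import hashlib
import hmac
import json


def sign_transaction(tx_dict, inputs, key):
    # Mirrors the reworked Transaction.sign flow: serialize the whole
    # transaction once and use that single string as the message for
    # every input, instead of building one partial transaction per input.
    message = json.dumps(tx_dict, sort_keys=True, separators=(',', ':'))
    return [hmac.new(key, message.encode(), hashlib.sha256).hexdigest()
            for _ in inputs]


tx = {'operation': 'CREATE', 'asset': None, 'metadata': None}
sigs = sign_transaction(tx, inputs=[0, 1], key=b'secret')
print(len(sigs), sigs[0] == sigs[1])  # 2 True
```

Because all inputs now sign the same serialized message, signing is O(1) serializations instead of O(n), which is the point of the refactor.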
@@ -19,14 +19,17 @@ class Bigchain(object):
    Create, read, sign, write transactions to the database
    """

    # return if a block has been voted invalid
    BLOCK_INVALID = 'invalid'
    # return if a block is valid, or tx is in valid block
    """return if a block has been voted invalid"""

    BLOCK_VALID = TX_VALID = 'valid'
    # return if block is undecided, or tx is in undecided block
    """return if a block is valid, or tx is in valid block"""

    BLOCK_UNDECIDED = TX_UNDECIDED = 'undecided'
    # return if transaction is in backlog
    """return if block is undecided, or tx is in undecided block"""

    TX_IN_BACKLOG = 'backlog'
    """return if transaction is in backlog"""

    def __init__(self, public_key=None, private_key=None, keyring=[], connection=None, backlog_reassign_delay=None):
        """Initialize the Bigchain instance

@@ -321,43 +324,57 @@ class Bigchain(object):
    def get_spent(self, txid, output):
        """Check if a `txid` was already used as an input.

        A transaction can be used as an input for another transaction. Bigchain needs to make sure that a
        given `txid` is only used once.
        A transaction can be used as an input for another transaction. Bigchain
        needs to make sure that a given `(txid, output)` is only used once.

        This method will check if the `(txid, output)` has already been
        spent in a transaction that is in either the `VALID`, `UNDECIDED` or
        `BACKLOG` state.

        Args:
            txid (str): The id of the transaction
            output (num): the index of the output in the respective transaction

        Returns:
            The transaction (Transaction) that used the `txid` as an input else
            `None`
            The transaction (Transaction) that used the `(txid, output)` as an
            input else `None`

        Raises:
            CriticalDoubleSpend: If the given `(txid, output)` was spent in
                more than one valid transaction.
        """
        # checks if an input was already spent
        # checks if the bigchain has any transaction with input {'txid': ...,
        # 'output': ...}
        transactions = list(backend.query.get_spent(self.connection, txid, output))
        transactions = list(backend.query.get_spent(self.connection, txid,
                                                    output))

        # a transaction_id should have been spent at most one time
        if transactions:
            # determine if these valid transactions appear in more than one valid block
            num_valid_transactions = 0
            for transaction in transactions:
                # ignore invalid blocks
                # FIXME: Isn't there a faster solution than doing I/O again?
                if self.get_transaction(transaction['id']):
                    num_valid_transactions += 1
                if num_valid_transactions > 1:
                    raise core_exceptions.CriticalDoubleSpend(
                        '`{}` was spent more than once. There is a problem'
                        ' with the chain'.format(txid))
        # determine if these valid transactions appear in more than one valid
        # block
        num_valid_transactions = 0
        non_invalid_transactions = []
        for transaction in transactions:
            # ignore transactions in invalid blocks
            # FIXME: Isn't there a faster solution than doing I/O again?
            _, status = self.get_transaction(transaction['id'],
                                             include_status=True)
            if status == self.TX_VALID:
                num_valid_transactions += 1
            # `txid` can only have been spent in at most one valid block.
            if num_valid_transactions > 1:
                raise core_exceptions.CriticalDoubleSpend(
                    '`{}` was spent more than once. There is a problem'
                    ' with the chain'.format(txid))
            # if it's not an invalid transaction
            if status is not None:
                non_invalid_transactions.append(transaction)

            if num_valid_transactions:
                return Transaction.from_dict(transactions[0])
            else:
                # all queried transactions were invalid
                return None
        else:
            return None
        if non_invalid_transactions:
            return Transaction.from_dict(non_invalid_transactions[0])

        # Either no transaction was returned spending the `(txid, output)` as
        # input or the returned transactions are not valid.

    def get_outputs(self, owner):
        """Retrieve a list of links to transaction outputs for a given public

@@ -372,32 +389,37 @@ class Bigchain(object):
        """
        # get all transactions in which owner is in the `owners_after` list
        response = backend.query.get_owned_ids(self.connection, owner)
        links = []
        return [
            TransactionLink(tx['id'], index)
            for tx in response
            if not self.is_tx_strictly_in_invalid_block(tx['id'])
            for index, output in enumerate(tx['outputs'])
            if utils.output_has_owner(output, owner)
        ]

        for tx in response:
            # disregard transactions from invalid blocks
            validity = self.get_blocks_status_containing_tx(tx['id'])
            if Bigchain.BLOCK_VALID not in validity.values():
                if Bigchain.BLOCK_UNDECIDED not in validity.values():
                    continue
    def is_tx_strictly_in_invalid_block(self, txid):
        """
        Checks whether the transaction with the given ``txid``
        *strictly* belongs to an invalid block.

            # NOTE: It's OK to not serialize the transaction here, as we do not
            # use it after the execution of this function.
            # a transaction can contain multiple outputs so we need to iterate over all of them
            # to get a list of outputs available to spend
            for index, output in enumerate(tx['outputs']):
                # for simple signature conditions there are no subfulfillments
                # check if the owner is in the condition `owners_after`
                if len(output['public_keys']) == 1:
                    if output['condition']['details']['public_key'] == owner:
                        links.append(TransactionLink(tx['id'], index))
                else:
                    # for transactions with multiple `public_keys` there will be several subfulfillments nested
                    # in the condition. We need to iterate the subfulfillments to make sure there is a
                    # subfulfillment for `owner`
                    if utils.condition_details_has_owner(output['condition']['details'], owner):
                        links.append(TransactionLink(tx['id'], index))
        return links
        Args:
            txid (str): Transaction id.

        Returns:
            bool: ``True`` if the transaction *strictly* belongs to a
                block that is invalid. ``False`` otherwise.

        Note:
            Since a transaction may be in multiple blocks, with
            different statuses, the term "strictly" is used to
            emphasize that if a transaction is said to be in an invalid
            block, it means that it is not in any other block that is
            either valid or undecided.

        """
        validity = self.get_blocks_status_containing_tx(txid)
        return (Bigchain.BLOCK_VALID not in validity.values() and
                Bigchain.BLOCK_UNDECIDED not in validity.values())

    def get_owned_ids(self, owner):
        """Retrieve a list of ``txid`` s that can be used as inputs.

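The reworked `get_spent` counts how many candidate spenders sit in a valid block, raises `CriticalDoubleSpend` as soon as that count exceeds one, and otherwise returns the first not-invalid candidate. A self-contained sketch of just that control flow (`pick_spender` and `status_of` are hypothetical names; in BigchainDB the status comes from `get_transaction(..., include_status=True)`):

```python
class CriticalDoubleSpend(Exception):
    """Data integrity error that requires attention."""


def pick_spender(transactions, status_of):
    # Count candidates whose containing block is valid; more than one
    # valid spender means the chain itself is broken.
    num_valid = 0
    non_invalid = []
    for tx in transactions:
        status = status_of(tx['id'])
        if status == 'valid':
            num_valid += 1
        if num_valid > 1:
            raise CriticalDoubleSpend('spent more than once')
        # None means the tx only appears in invalid blocks: ignore it.
        if status is not None:
            non_invalid.append(tx)
    return non_invalid[0] if non_invalid else None


statuses = {'t1': None, 't2': 'undecided'}  # t1 sits only in an invalid block
spender = pick_spender([{'id': 't1'}, {'id': 't2'}], statuses.get)
print(spender)  # {'id': 't2'}
```

Note the key behavioral change versus the old code: spenders in undecided or backlog state now also count as "spent", so only strictly-invalid spenders are discarded.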
@@ -0,0 +1,33 @@
from enum import Enum
from multiprocessing import Queue


class EventTypes(Enum):
    BLOCK_VALID = 1
    BLOCK_INVALID = 2


class Event:

    def __init__(self, event_type, event_data):
        self.type = event_type
        self.data = event_data


class EventHandler:

    def __init__(self, events_queue):
        self.events_queue = events_queue

    def put_event(self, event, timeout=None):
        # TODO: handle timeouts
        self.events_queue.put(event, timeout=None)

    def get_event(self, timeout=None):
        # TODO: handle timeouts
        return self.events_queue.get(timeout=None)


def setup_events_queue():
    # TODO: set bounds to the queue
    return Queue()

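The new events module is small enough to exercise end to end. A minimal sketch (stand-alone copies of the classes, so it runs without BigchainDB installed) showing a producer putting a `BLOCK_VALID` event on the shared queue and a consumer reading it back; in the real deployment, producer and consumer are separate processes sharing one `setup_events_queue()` result:

```python
from enum import Enum
from multiprocessing import Queue


class EventTypes(Enum):
    BLOCK_VALID = 1
    BLOCK_INVALID = 2


class Event:
    def __init__(self, event_type, event_data):
        self.type = event_type
        self.data = event_data


class EventHandler:
    def __init__(self, events_queue):
        self.events_queue = events_queue

    def put_event(self, event, timeout=None):
        self.events_queue.put(event, timeout=timeout)

    def get_event(self, timeout=None):
        return self.events_queue.get(timeout=timeout)


# One queue, shared by producer and consumer (normally separate processes).
queue = Queue()
producer, consumer = EventHandler(queue), EventHandler(queue)

producer.put_event(Event(EventTypes.BLOCK_VALID, {'id': 'abc'}))
received = consumer.get_event()
print(received.type, received.data)
```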
@@ -8,3 +8,7 @@ class CriticalDoubleSpend(BigchainDBError):

class CriticalDoubleInclusion(BigchainDBError):
    """Data integrity error that requires attention"""


class CriticalDuplicateVote(BigchainDBError):
    """Data integrity error that requires attention"""

@@ -41,18 +41,22 @@ SUBSCRIBER_LOGGING_CONFIG = {
            'level': logging.INFO,
        },
        'file': {
            'class': 'logging.FileHandler',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': join(DEFAULT_LOG_DIR, 'bigchaindb.log'),
            'mode': 'w',
            'maxBytes': 209715200,
            'backupCount': 5,
            'formatter': 'file',
            'level': logging.INFO,
        },
        'errors': {
            'class': 'logging.FileHandler',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': join(DEFAULT_LOG_DIR, 'bigchaindb-errors.log'),
            'mode': 'w',
            'level': logging.ERROR,
            'maxBytes': 209715200,
            'backupCount': 5,
            'formatter': 'file',
            'level': logging.ERROR,
        },
    },
    'loggers': {},

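The switch from `FileHandler` to `RotatingFileHandler` caps each log file at roughly 200 MB (209715200 bytes) and keeps up to 5 rotated backups instead of growing one file forever. A runnable sketch of the same handler config via `dictConfig`, writing to a temporary directory instead of `DEFAULT_LOG_DIR`:

```python
import logging
import logging.config
import os
import tempfile

log_dir = tempfile.mkdtemp()
logging.config.dictConfig({
    'version': 1,
    'handlers': {
        'file': {
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': os.path.join(log_dir, 'bigchaindb.log'),
            'mode': 'w',
            'maxBytes': 209715200,  # rotate after ~200 MB
            'backupCount': 5,       # keep at most 5 rotated files
            'level': logging.INFO,
        },
    },
    'root': {'handlers': ['file'], 'level': logging.INFO},
})

logging.getLogger(__name__).info('hello')
logging.shutdown()
print(os.path.exists(os.path.join(log_dir, 'bigchaindb.log')))
```

When `maxBytes` is non-zero, `RotatingFileHandler` forces append mode on rollover, so the `'mode': 'w'` only affects the very first open.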
@@ -49,7 +49,7 @@ def setup_logging(*, user_log_config=None):
    setup_sub_logger(user_log_config=user_log_config)


def create_subscriber_logging_config(*, user_log_config=None):
def create_subscriber_logging_config(*, user_log_config=None):  # noqa: C901
    sub_log_config = deepcopy(SUBSCRIBER_LOGGING_CONFIG)

    if not user_log_config:

@@ -59,6 +59,10 @@ def create_subscriber_logging_config(*, user_log_config=None):
        filename = user_log_config['file']
        sub_log_config['handlers']['file']['filename'] = filename

    if 'error_file' in user_log_config:
        error_filename = user_log_config['error_file']
        sub_log_config['handlers']['errors']['filename'] = error_filename

    if 'level_console' in user_log_config:
        level = _normalize_log_level(user_log_config['level_console'])
        sub_log_config['handlers']['console']['level'] = level

@@ -187,6 +187,11 @@ class Block(object):
        if not self.is_signature_valid():
            raise InvalidSignature('Invalid block signature')

        # Check that the block contains no duplicated transactions
        txids = [tx.id for tx in self.transactions]
        if len(txids) != len(set(txids)):
            raise DuplicateTransaction('Block has duplicate transaction')

    def _validate_block_transactions(self, bigchain):
        """Validate Block transactions.

@@ -196,10 +201,6 @@ class Block(object):
        Raises:
            ValidationError: If an invalid transaction is found
        """
        txids = [tx.id for tx in self.transactions]
        if len(txids) != len(set(txids)):
            raise DuplicateTransaction('Block has duplicate transaction')

        for tx in self.transactions:
            # If a transaction is not valid, `validate_transactions` will
            # throw an exception and block validation will be canceled.

@@ -13,6 +13,7 @@ from bigchaindb import backend
from bigchaindb.backend.changefeed import ChangeFeed
from bigchaindb.models import Block
from bigchaindb import Bigchain
from bigchaindb.events import EventHandler, Event, EventTypes


logger = logging.getLogger(__name__)

@@ -22,8 +23,11 @@ logger_results = logging.getLogger('pipeline.election.results')
class Election:
    """Election class."""

    def __init__(self):
    def __init__(self, events_queue=None):
        self.bigchain = Bigchain()
        self.event_handler = None
        if events_queue:
            self.event_handler = EventHandler(events_queue)

    def check_for_quorum(self, next_vote):
        """

@@ -42,6 +46,7 @@ class Election:
        next_block = self.bigchain.get_block(block_id)

        result = self.bigchain.block_election(next_block)
        self.handle_block_events(result, block_id)
        if result['status'] == self.bigchain.BLOCK_INVALID:
            return Block.from_dict(next_block)

@@ -67,9 +72,21 @@ class Election:
            self.bigchain.write_transaction(tx)
        return invalid_block

    def handle_block_events(self, result, block_id):
        if self.event_handler:
            if result['status'] == self.bigchain.BLOCK_UNDECIDED:
                return
            elif result['status'] == self.bigchain.BLOCK_INVALID:
                event_type = EventTypes.BLOCK_INVALID
            elif result['status'] == self.bigchain.BLOCK_VALID:
                event_type = EventTypes.BLOCK_VALID

def create_pipeline():
    election = Election()
            event = Event(event_type, self.bigchain.get_block(block_id))
            self.event_handler.put_event(event)


def create_pipeline(events_queue=None):
    election = Election(events_queue=events_queue)

    election_pipeline = Pipeline([
        Node(election.check_for_quorum),

@@ -84,8 +101,8 @@ def get_changefeed():
    return backend.get_changefeed(connection, 'votes', ChangeFeed.INSERT)


def start():
    pipeline = create_pipeline()
def start(events_queue=None):
    pipeline = create_pipeline(events_queue=events_queue)
    pipeline.setup(indata=get_changefeed())
    pipeline.start()
    return pipeline

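The new `handle_block_events` emits an event only once an election is decided: undecided blocks produce nothing, invalid and valid blocks map to their event types. A sketch of just that mapping (the helper `event_type_for` is a made-up name; the real method also fetches the block and puts an `Event` on the queue):

```python
from enum import Enum


class EventTypes(Enum):
    BLOCK_VALID = 1
    BLOCK_INVALID = 2


# Status constants as used by Bigchain, mirrored here so the sketch
# is self-contained.
BLOCK_VALID, BLOCK_INVALID, BLOCK_UNDECIDED = 'valid', 'invalid', 'undecided'


def event_type_for(status):
    # Undecided elections fall through to None, mirroring the early
    # `return` in Election.handle_block_events.
    return {
        BLOCK_INVALID: EventTypes.BLOCK_INVALID,
        BLOCK_VALID: EventTypes.BLOCK_VALID,
    }.get(status)


print(event_type_for('valid'))  # EventTypes.BLOCK_VALID
```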
@@ -3,7 +3,8 @@ import multiprocessing as mp

import bigchaindb
from bigchaindb.pipelines import vote, block, election, stale
from bigchaindb.web import server
from bigchaindb.events import setup_events_queue
from bigchaindb.web import server, websocket_server


logger = logging.getLogger(__name__)

@@ -25,6 +26,13 @@ BANNER = """
def start():
    logger.info('Initializing BigchainDB...')

    # Create the events queue
    # The events queue needs to be initialized once and shared between
    # processes. This seems the best way to do it.
    # At this point only the election process and the event consumer require
    # this queue.
    events_queue = setup_events_queue()

    # start the processes
    logger.info('Starting block')
    block.start()

@@ -36,12 +44,18 @@ def start():
    stale.start()

    logger.info('Starting election')
    election.start()
    election.start(events_queue=events_queue)

    # start the web api
    app_server = server.create_server(bigchaindb.config['server'])
    p_webapi = mp.Process(name='webapi', target=app_server.run)
    p_webapi.start()

    logger.info('WebSocket server started')
    p_websocket_server = mp.Process(name='ws',
                                    target=websocket_server.start,
                                    args=(events_queue,))
    p_websocket_server.start()

    # start message
    logger.info(BANNER.format(bigchaindb.config['server']['bind']))

@@ -113,6 +113,19 @@ def condition_details_has_owner(condition_details, owner):
    return False


def output_has_owner(output, owner):
    # TODO
    # Check whether it is really necessary to treat the single key case
    # differently from the multiple keys case, and why not just use the same
    # function for both cases.
    if len(output['public_keys']) > 1:
        return condition_details_has_owner(
            output['condition']['details'], owner)
    elif len(output['public_keys']) == 1:
        return output['condition']['details']['public_key'] == owner
    # TODO raise proper exception, e.g. invalid tx payload?


def is_genesis_block(block):
    """Check if the block is the genesis block.

@@ -1,2 +1,2 @@
__version__ = '0.10.0.dev'
__short_version__ = '0.10.dev'
__version__ = '0.11.0.dev'
__short_version__ = '0.11.dev'

@@ -1,6 +1,7 @@
import collections

from bigchaindb.common.schema import SchemaValidationError, validate_vote_schema
from bigchaindb.exceptions import CriticalDuplicateVote
from bigchaindb.common.utils import serialize
from bigchaindb.common.crypto import PublicKey

@@ -33,7 +34,8 @@ class Voting:
        n_voters = len(eligible_voters)
        eligible_votes, ineligible_votes = \
            cls.partition_eligible_votes(votes, eligible_voters)
        results = cls.count_votes(eligible_votes)
        by_voter = cls.dedupe_by_voter(eligible_votes)
        results = cls.count_votes(by_voter)
        results['block_id'] = block['id']
        results['status'] = cls.decide_votes(n_voters, **results['counts'])
        results['ineligible'] = ineligible_votes

@@ -60,38 +62,29 @@ class Voting:
        return eligible, ineligible

    @classmethod
    def count_votes(cls, eligible_votes):
    def dedupe_by_voter(cls, eligible_votes):
        """
        Throw a critical error if there is a duplicate vote
        """
        by_voter = {}
        for vote in eligible_votes:
            pubkey = vote['node_pubkey']
            if pubkey in by_voter:
                raise CriticalDuplicateVote(pubkey)
            by_voter[pubkey] = vote
        return by_voter

    @classmethod
    def count_votes(cls, by_voter):
        """
        Given a list of eligible votes (votes from known nodes that are listed
        as voters), produce the number that say valid and the number that say
        invalid.

        * Detect if there are multiple votes from a single node and return them
          in a separate "cheat" dictionary.
        * Votes must agree on previous block, otherwise they become invalid.

        note:
            The sum of votes returned by this function does not necessarily
            equal the length of the list of votes fed in. It may differ for
            example if there are found to be multiple votes submitted by a
            single voter.
        invalid. Votes must agree on previous block, otherwise they become invalid.
        """
        prev_blocks = collections.Counter()
        cheat = []
        malformed = []

        # Group by pubkey to detect duplicate voting
        by_voter = collections.defaultdict(list)
        for vote in eligible_votes:
            by_voter[vote['node_pubkey']].append(vote)

        for pubkey, votes in by_voter.items():
            if len(votes) > 1:
                cheat.append(votes)
                continue

            vote = votes[0]

        for vote in by_voter.values():
            if not cls.verify_vote_schema(vote):
                malformed.append(vote)
                continue

@@ -111,7 +104,6 @@ class Voting:
                'n_valid': n_valid,
                'n_invalid': len(by_voter) - n_valid,
            },
            'cheat': cheat,
            'malformed': malformed,
            'previous_block': prev_block,
            'other_previous_block': dict(prev_blocks),

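The refactor turns a duplicate vote from a tolerated "cheat" case into a hard `CriticalDuplicateVote` failure, handled up front by the new `dedupe_by_voter`. A self-contained sketch of that method's logic (a plain `Exception` subclass stands in for `bigchaindb.exceptions.CriticalDuplicateVote`):

```python
class CriticalDuplicateVote(Exception):
    """Raised when one node pubkey appears in more than one vote."""


def dedupe_by_voter(eligible_votes):
    # Mirrors Voting.dedupe_by_voter: exactly one vote per node, or a
    # hard data-integrity failure.
    by_voter = {}
    for vote in eligible_votes:
        pubkey = vote['node_pubkey']
        if pubkey in by_voter:
            raise CriticalDuplicateVote(pubkey)
        by_voter[pubkey] = vote
    return by_voter


votes = [{'node_pubkey': 'a', 'vote': {'is_block_valid': True}},
         {'node_pubkey': 'b', 'vote': {'is_block_valid': False}}]
by_voter = dedupe_by_voter(votes)
print(sorted(by_voter))  # ['a', 'b']
```

With duplicates rejected before counting, `count_votes` no longer needs the `cheat` bucket, which is why that key disappears from the results dictionary above.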
@@ -22,7 +22,7 @@ class StandaloneApplication(gunicorn.app.base.BaseApplication):
    - http://docs.gunicorn.org/en/latest/custom.html
    """

    def __init__(self, app, options=None):
    def __init__(self, app, *, options=None):
        '''Initialize a new standalone application.

        Args:

@@ -32,7 +32,7 @@ class StandaloneApplication(gunicorn.app.base.BaseApplication):
        '''
        self.options = options or {}
        self.application = app
        super(StandaloneApplication, self).__init__()
        super().__init__()

    def load_config(self):
        config = dict((key, value) for key, value in self.options.items()

@@ -106,5 +106,5 @@ def create_server(settings):
    settings['logger_class'] = 'bigchaindb.log.loggers.HttpServerLogger'
    app = create_app(debug=settings.get('debug', False),
                     threads=settings['threads'])
    standalone = StandaloneApplication(app, settings)
    standalone = StandaloneApplication(app, options=settings)
    return standalone

@@ -5,6 +5,9 @@ import logging

from flask import jsonify, request

from bigchaindb import config


logger = logging.getLogger(__name__)

@@ -21,3 +24,8 @@ def make_error(status_code, message=None):
def base_url():
    return '%s://%s/' % (request.environ['wsgi.url_scheme'],
                         request.environ['HTTP_HOST'])


def base_ws_uri():
    """Base websocket uri."""
    return 'ws://{host}:{port}'.format(**config['wsserver'])

@@ -1,8 +1,6 @@
"""This module provides the blueprint for the blocks API endpoints.

For more information please refer to the documentation on ReadTheDocs:
 - https://docs.bigchaindb.com/projects/server/en/latest/drivers-clients/
   http-client-server-api.html
For more information please refer to the documentation: http://bigchaindb.com/http-api
"""
from flask import current_app
from flask_restful import Resource, reqparse

@@ -4,8 +4,9 @@ import flask
from flask_restful import Resource

import bigchaindb
from bigchaindb.web.views.base import base_url
from bigchaindb.web.views.base import base_url, base_ws_uri
from bigchaindb import version
from bigchaindb.web.websocket_server import EVENTS_ENDPOINT


class RootIndex(Resource):

@@ -30,16 +31,18 @@ class RootIndex(Resource):
class ApiV1Index(Resource):
    def get(self):
        api_root = base_url() + 'api/v1/'
        websocket_root = base_ws_uri() + EVENTS_ENDPOINT
        docs_url = [
            'https://docs.bigchaindb.com/projects/server/en/v',
            version.__version__,
            '/drivers-clients/http-client-server-api.html',
            '/http-client-server-api.html',
        ]
        return {
        return flask.jsonify({
            '_links': {
                'docs': ''.join(docs_url),
                'self': api_root,
                'statuses': api_root + 'statuses/',
                'transactions': api_root + 'transactions/',
                'streams_v1': websocket_root,
            },
        }
        })

@@ -1,8 +1,6 @@
"""This module provides the blueprint for the statuses API endpoints.

For more information please refer to the documentation on ReadTheDocs:
 - https://docs.bigchaindb.com/projects/server/en/latest/drivers-clients/
   http-client-server-api.html
For more information please refer to the documentation: http://bigchaindb.com/http-api
"""
from flask import current_app
from flask_restful import Resource, reqparse

@@ -1,8 +1,6 @@
"""This module provides the blueprint for some basic API endpoints.

For more information please refer to the documentation on ReadTheDocs:
 - https://docs.bigchaindb.com/projects/server/en/latest/drivers-clients/
   http-client-server-api.html
For more information please refer to the documentation: http://bigchaindb.com/http-api
"""
import logging

@@ -1,8 +1,6 @@
"""This module provides the blueprint for the votes API endpoints.

For more information please refer to the documentation on ReadTheDocs:
 - https://docs.bigchaindb.com/projects/server/en/latest/drivers-clients/
   http-client-server-api.html
For more information please refer to the documentation: http://bigchaindb.com/http-api
"""
from flask import current_app
from flask_restful import Resource, reqparse

@ -0,0 +1,154 @@
|
|||
"""WebSocket server for the BigchainDB Event Stream API."""
|
||||
|
||||
# NOTE
|
||||
#
|
||||
# This module contains some functions and utilities that might belong to other
|
||||
# modules. For now, I prefer to keep everything in this module. Why? Because
|
||||
# those functions are needed only here.
|
||||
#
|
||||
# When we will extend this part of the project and we find that we need those
|
||||
# functionalities elsewhere, we can start creating new modules and organizing
|
||||
# things in a better way.
|
||||
|
||||
|
||||
import json
|
||||
import asyncio
|
||||
import logging
|
||||
import threading
|
||||
from uuid import uuid4
|
||||
|
||||
import aiohttp
|
||||
from aiohttp import web
|
||||
|
||||
from bigchaindb import config
|
||||
from bigchaindb.events import EventTypes
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
POISON_PILL = 'POISON_PILL'
|
||||
EVENTS_ENDPOINT = '/api/v1/streams/valid_tx'
|
||||
|
||||
|
||||
def _multiprocessing_to_asyncio(in_queue, out_queue, loop):
|
||||
"""Bridge between a synchronous multiprocessing queue
|
||||
and an asynchronous asyncio queue.
|
||||
|
||||
Args:
|
||||
in_queue (multiprocessing.Queue): input queue
|
||||
out_queue (asyncio.Queue): output queue
|
||||
"""
|
||||
|
||||
while True:
|
||||
value = in_queue.get()
|
||||
loop.call_soon_threadsafe(out_queue.put_nowait, value)
|
||||
|
||||
|
||||
class Dispatcher:
|
||||
"""Dispatch events to websockets.
|
||||
|
||||
    This class implements a simple publish/subscribe pattern.
    """

    def __init__(self, event_source):
        """Create a new instance.

        Args:
            event_source: a source of events. Elements in the queue
                should be strings.
        """

        self.event_source = event_source
        self.subscribers = {}

    def subscribe(self, uuid, websocket):
        """Add a websocket to the list of subscribers.

        Args:
            uuid (str): a unique identifier for the websocket.
            websocket: the websocket to publish information.
        """

        self.subscribers[uuid] = websocket

    @asyncio.coroutine
    def publish(self):
        """Publish new events to the subscribers."""

        while True:
            event = yield from self.event_source.get()
            str_buffer = []

            if event == POISON_PILL:
                return

            if isinstance(event, str):
                str_buffer.append(event)

            elif event.type == EventTypes.BLOCK_VALID:
                block = event.data

                for tx in block['block']['transactions']:
                    asset_id = tx['id'] if tx['operation'] == 'CREATE' else tx['asset']['id']
                    data = {'block_id': block['id'],
                            'asset_id': asset_id,
                            'tx_id': tx['id']}
                    str_buffer.append(json.dumps(data))

            for _, websocket in self.subscribers.items():
                for str_item in str_buffer:
                    websocket.send_str(str_item)


@asyncio.coroutine
def websocket_handler(request):
    """Handle a new socket connection."""

    logger.debug('New websocket connection.')
    websocket = web.WebSocketResponse()
    yield from websocket.prepare(request)
    uuid = uuid4()
    request.app['dispatcher'].subscribe(uuid, websocket)

    while True:
        # Consume input buffer
        msg = yield from websocket.receive()
        if msg.type == aiohttp.WSMsgType.ERROR:
            logger.debug('Websocket exception: %s', websocket.exception())
            return


def init_app(event_source, *, loop=None):
    """Init the application server.

    Return:
        An aiohttp application.
    """

    dispatcher = Dispatcher(event_source)

    # Schedule the dispatcher
    loop.create_task(dispatcher.publish())

    app = web.Application(loop=loop)
    app['dispatcher'] = dispatcher
    app.router.add_get(EVENTS_ENDPOINT, websocket_handler)
    return app


def start(sync_event_source, loop=None):
    """Create and start the WebSocket server."""

    if not loop:
        loop = asyncio.get_event_loop()

    event_source = asyncio.Queue(loop=loop)

    bridge = threading.Thread(target=_multiprocessing_to_asyncio,
                              args=(sync_event_source, event_source, loop),
                              daemon=True)
    bridge.start()

    app = init_app(event_source, loop=loop)
    aiohttp.web.run_app(app,
                        host=config['wsserver']['host'],
                        port=config['wsserver']['port'])
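The publish/subscribe flow above can be exercised in isolation with a minimal sketch. `FakeWebSocket` and the trimmed-down `Dispatcher` below are illustrative stand-ins, not the module's actual API surface (the real dispatcher also handles `EventTypes.BLOCK_VALID` events, and `POISON_PILL` here is an assumed sentinel value mirroring the check in `publish()`):

```python
import asyncio
import json

POISON_PILL = 'POISON_PILL'  # assumed sentinel; matches the check in publish()


class FakeWebSocket:
    """Stand-in for aiohttp's WebSocketResponse that records sent strings."""
    def __init__(self):
        self.sent = []

    def send_str(self, item):
        self.sent.append(item)


class Dispatcher:
    """Trimmed-down copy of the dispatcher above (string events only)."""
    def __init__(self, event_source):
        self.event_source = event_source
        self.subscribers = {}

    def subscribe(self, uuid, websocket):
        self.subscribers[uuid] = websocket

    async def publish(self):
        # Pull events off the queue and fan them out to every subscriber,
        # stopping when the poison pill arrives.
        while True:
            event = await self.event_source.get()
            if event == POISON_PILL:
                return
            if isinstance(event, str):
                for websocket in self.subscribers.values():
                    websocket.send_str(event)


async def main():
    queue = asyncio.Queue()
    dispatcher = Dispatcher(queue)
    ws = FakeWebSocket()
    dispatcher.subscribe('some-uuid', ws)
    await queue.put(json.dumps({'tx_id': 'abc'}))
    await queue.put(POISON_PILL)
    await dispatcher.publish()
    return ws.sent

print(asyncio.run(main()))  # ['{"tx_id": "abc"}']
```

The real `publish()` uses the pre-3.5 `@asyncio.coroutine`/`yield from` style; the `async`/`await` form here is equivalent on modern Python.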
@@ -29,8 +29,6 @@ coverage:
  - "docs/*"
  - "tests/*"
  - "bigchaindb/version.py"
  - "benchmarking-tests/*"
  - "speed-tests/*"
  - "ntools/*"
  - "k8s/*"
@@ -2,7 +2,7 @@ version: '2'

services:
  mdb:
    image: mongo:3.4.1
    image: mongo:3.4.3
    ports:
      - "27017"
    command: mongod --replSet=bigchain-rs

@@ -28,7 +28,7 @@ services:
      - /data
    command: "true"

  bdb:
  bdb-rdb:
    build:
      context: .
      dockerfile: Dockerfile-dev

@@ -37,6 +37,7 @@ services:
      - ./bigchaindb:/usr/src/app/bigchaindb
      - ./tests:/usr/src/app/tests
      - ./docs:/usr/src/app/docs
      - ./k8s:/usr/src/app/k8s
      - ./setup.py:/usr/src/app/setup.py
      - ./setup.cfg:/usr/src/app/setup.cfg
      - ./pytest.ini:/usr/src/app/pytest.ini

@@ -50,7 +51,7 @@ services:
      - "9984"
    command: bigchaindb start

  bdb-mdb:
  bdb:
    build:
      context: .
      dockerfile: Dockerfile-dev

@@ -58,6 +59,7 @@ services:
      - ./bigchaindb:/usr/src/app/bigchaindb
      - ./tests:/usr/src/app/tests
      - ./docs:/usr/src/app/docs
      - ./k8s:/usr/src/app/k8s
      - ./setup.py:/usr/src/app/setup.py
      - ./setup.cfg:/usr/src/app/setup.cfg
      - ./pytest.ini:/usr/src/app/pytest.ini
@@ -53,7 +53,7 @@ At a high level, one can communicate with a BigchainDB cluster (set of nodes) us
</style>

<div class="buttondiv">
  <a class="button" href="http://docs.bigchaindb.com/projects/server/en/latest/drivers-clients/http-client-server-api.html">HTTP API Docs</a>
  <a class="button" href="http://bigchaindb.com/http-api">HTTP API Docs</a>
</div>
<div class="buttondiv">
  <a class="button" href="http://docs.bigchaindb.com/projects/py-driver/en/latest/index.html">Python Driver Docs</a>
@@ -1,21 +1,21 @@
# Terminology

There is some specialized terminology associated with BigchainDB. To get started, you should at least know what what we mean by a BigchainDB *node*, *cluster* and *consortium*.
There is some specialized terminology associated with BigchainDB. To get started, you should at least know the following:


## Node
## BigchainDB Node

A **BigchainDB node** is a machine or set of closely-linked machines running RethinkDB/MongoDB Server, BigchainDB Server, and related software. (A "machine" might be a bare-metal server, a virtual machine or a container.) Each node is controlled by one person or organization.
A **BigchainDB node** is a machine or set of closely-linked machines running RethinkDB/MongoDB Server, BigchainDB Server, and related software. Each node is controlled by one person or organization.


## Cluster
## BigchainDB Cluster

A set of BigchainDB nodes can connect to each other to form a **cluster**. Each node in the cluster runs the same software. A cluster contains one logical RethinkDB datastore. A cluster may have additional machines to do things such as cluster monitoring.
A set of BigchainDB nodes can connect to each other to form a **BigchainDB cluster**. Each node in the cluster runs the same software. A cluster contains one logical RethinkDB/MongoDB datastore. A cluster may have additional machines to do things such as cluster monitoring.


## Consortium
## BigchainDB Consortium

The people and organizations that run the nodes in a cluster belong to a **consortium** (i.e. another organization). A consortium must have some sort of governance structure to make decisions. If a cluster is run by a single company, then the "consortium" is just that company.
The people and organizations that run the nodes in a cluster belong to a **BigchainDB consortium** (i.e. another organization). A consortium must have some sort of governance structure to make decisions. If a cluster is run by a single company, then the "consortium" is just that company.

**What's the Difference Between a Cluster and a Consortium?**
@@ -269,7 +269,7 @@ def main():
    ctx['block_list'] = pretty_json(block_list)

    base_path = os.path.join(os.path.dirname(__file__),
                             'source/drivers-clients/samples')
                             'source/http-samples')
    if not os.path.exists(base_path):
        os.makedirs(base_path)

Binary file not shown. (Before: 82 KiB; after: 38 KiB.)
@@ -18,7 +18,7 @@ pip install awscli

## Create an AWS Access Key

The next thing you'll need is an AWS access key. If you don't have one, you can create one using the [instructions in the AWS documentation](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html). You should get an access key ID (e.g. AKIAIOSFODNN7EXAMPLE) and a secret access key (e.g. wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY).
The next thing you'll need is AWS access keys (access key ID and secret access key). If you don't have those, see [the AWS documentation about access keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys).

You should also pick a default AWS region name (e.g. `eu-central-1`). That's where your cluster will run. The AWS documentation has [a list of them](http://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region).
@@ -6,10 +6,10 @@ Command Line Interface
   :special-members: __init__


:mod:`bigchaindb.commands.bigchain`
-----------------------------------
:mod:`bigchaindb.commands.bigchaindb`
-------------------------------------

.. automodule:: bigchaindb.commands.bigchain
.. automodule:: bigchaindb.commands.bigchaindb


:mod:`bigchaindb.commands.utils`
@@ -1,5 +0,0 @@
#########
Consensus
#########

.. automodule:: bigchaindb.consensus
@@ -0,0 +1,101 @@
# Run BigchainDB with Docker On Mac

**NOT for Production Use**

Those developing on Mac can follow this document to run BigchainDB in Docker
containers for a quick dev setup.
Running BigchainDB on Mac (Docker or otherwise) is not officially supported.

Support is very much limited as there are certain things that work differently
in Docker for Mac than in Docker for other platforms.
Also, we do not use Mac for our development and testing. :)

This page may not be up to date with the various settings and Docker updates
at all times.

These steps work as of this writing (2017.Mar.09) and might break in the
future with updates to Docker for Mac.
Community contributions to make BigchainDB run on Docker for Mac are always
welcome.


## Prerequisite

Install Docker for Mac.

## (Optional) For a clean start

1. Stop all BigchainDB and RethinkDB/MongoDB containers.
2. Delete all BigchainDB Docker images.
3. Delete the ~/bigchaindb_docker folder.


## Pull the images

Pull the bigchaindb and other required Docker images from Docker Hub.

```text
docker pull bigchaindb/bigchaindb:master
docker pull [rethinkdb:2.3|mongo:3.4.1]
```

## Create the BigchainDB configuration file on Mac
```text
docker run \
  --rm \
  --volume $HOME/bigchaindb_docker:/data \
  bigchaindb/bigchaindb:master \
  -y configure \
  [mongodb|rethinkdb]
```

To ensure that BigchainDB connects to the backend database bound to the virtual
interface `172.17.0.1`, you must edit the BigchainDB configuration file
(`~/bigchaindb_docker/.bigchaindb`) and change `database.host` from `localhost`
to `172.17.0.1`.
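Editing the file by hand works; the change can also be scripted. The sketch below assumes the config file is JSON with a `database.host` key (the default BigchainDB config format), and demonstrates on a throwaway copy rather than your real `~/bigchaindb_docker/.bigchaindb`:

```python
import json
import tempfile
from pathlib import Path

def point_db_at_bridge(conf_path, host='172.17.0.1'):
    """Rewrite database.host in a BigchainDB JSON config file."""
    conf = json.loads(conf_path.read_text())
    conf['database']['host'] = host
    conf_path.write_text(json.dumps(conf, indent=2))

# Demo on a temporary file shaped like ~/bigchaindb_docker/.bigchaindb:
with tempfile.TemporaryDirectory() as tmp:
    conf_path = Path(tmp) / '.bigchaindb'
    conf_path.write_text(json.dumps({'database': {'host': 'localhost', 'port': 27017}}))
    point_db_at_bridge(conf_path)
    print(json.loads(conf_path.read_text())['database']['host'])  # 172.17.0.1
```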

## Run the backend database on Mac

From v0.9 onwards, you can run RethinkDB or MongoDB.

We use the virtual interface created by the Docker daemon to allow
communication between the BigchainDB and database containers.
It has an IP address of 172.17.0.1 by default.

You can also use Docker host networking or bind to your primary (eth)
interface, if needed.

### For RethinkDB backend
```text
docker run \
  --name=rethinkdb \
  --publish=28015:28015 \
  --publish=8080:8080 \
  --restart=always \
  --volume $HOME/bigchaindb_docker:/data \
  rethinkdb:2.3
```

### For MongoDB backend
```text
docker run \
  --name=mongodb \
  --publish=27017:27017 \
  --restart=always \
  --volume=$HOME/bigchaindb_docker/db:/data/db \
  --volume=$HOME/bigchaindb_docker/configdb:/data/configdb \
  mongo:3.4.1 --replSet=bigchain-rs
```

### Run BigchainDB on Mac
```text
docker run \
  --name=bigchaindb \
  --publish=9984:9984 \
  --restart=always \
  --volume=$HOME/bigchaindb_docker:/data \
  bigchaindb/bigchaindb \
  start
```
@@ -1,25 +0,0 @@
# Example RethinkDB Storage Setups

## Example Amazon EC2 Setups

We have some scripts for [deploying a _test_ BigchainDB cluster on AWS](../clusters-feds/aws-testing-cluster.html). Those scripts include command sequences to set up storage for RethinkDB.
In particular, look in the file [/deploy-cluster-aws/fabfile.py](https://github.com/bigchaindb/bigchaindb/blob/master/deploy-cluster-aws/fabfile.py), under `def prep_rethinkdb_storage(USING_EBS)`. Note that there are two cases:

1. **Using EBS ([Amazon Elastic Block Store](https://aws.amazon.com/ebs/)).** This is always an option, and for some instance types ("EBS-only"), it's the only option.
2. **Using an "instance store" volume provided with an Amazon EC2 instance.** Note that our scripts only use one of the (possibly many) volumes in the instance store.

There's some explanation of the steps in the [Amazon EC2 documentation about making an Amazon EBS volume available for use](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html).

You shouldn't use an EC2 "instance store" to store RethinkDB data for a production node, because it's not replicated and it's only intended for temporary, ephemeral data. If the associated instance crashes, is stopped, or is terminated, the data in the instance store is lost forever. Amazon EBS storage is replicated, has incremental snapshots, and is low-latency.


## Example Using Amazon EFS

TODO


## Other Examples?

TODO

Maybe RAID, ZFS, ... (over EBS volumes, i.e. a DIY Amazon EFS)
@@ -8,9 +8,10 @@ This is a page of notes on the ports potentially used by BigchainDB nodes and th
Assuming you aren't exposing the RethinkDB web interface on port 8080 (or any other port, because [there are more secure ways to access it](https://www.rethinkdb.com/docs/security/#binding-the-web-interface-port)), there are only three ports that should expect unsolicited inbound traffic:

1. **Port 22** can expect inbound SSH (TCP) traffic from the node administrator (i.e. a small set of IP addresses).
2. **Port 9984** can expect inbound HTTP (TCP) traffic from BigchainDB clients sending transactions to the BigchainDB HTTP API.
3. If you're using RethinkDB, **Port 29015** can expect inbound TCP traffic from other RethinkDB nodes in the RethinkDB cluster (for RethinkDB intracluster communications).
4. If you're using MongoDB, **Port 27017** can expect inbound TCP traffic from other nodes.
1. **Port 9984** can expect inbound HTTP (TCP) traffic from BigchainDB clients sending transactions to the BigchainDB HTTP API.
1. **Port 9985** can expect inbound WebSocket traffic from BigchainDB clients.
1. If you're using RethinkDB, **Port 29015** can expect inbound TCP traffic from other RethinkDB nodes in the RethinkDB cluster (for RethinkDB intracluster communications).
1. If you're using MongoDB, **Port 27017** can expect inbound TCP traffic from other nodes.

All other ports should only get inbound traffic in response to specific requests from inside the node.

@@ -59,6 +60,11 @@ If Gunicorn and the reverse proxy are running on the same server, then you'll ha
You may want to have Gunicorn and the reverse proxy running on different servers, so that both can listen on port 9984. That would also help isolate the effects of a denial-of-service attack.


## Port 9985

Port 9985 is the default port for the [BigchainDB WebSocket Event Stream API](../websocket-event-stream-api.html).


## Port 28015

Port 28015 is the default port used by RethinkDB client driver connections (TCP). If your BigchainDB node is just one server, then Port 28015 only needs to listen on localhost, because all the client drivers will be running on localhost. Port 28015 doesn't need to accept inbound traffic from the outside world.
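The port notes above boil down to a simple rule of thumb: ports 22, 9984 and 9985 always accept unsolicited inbound traffic, plus the backend's intracluster port. A small illustrative helper (not part of BigchainDB itself):

```python
# Default ports from the notes above, and which ones should accept
# unsolicited inbound traffic depending on the backend database.
PORT_NOTES = {
    22:    'SSH (node administrator only)',
    9984:  'BigchainDB HTTP API',
    9985:  'BigchainDB WebSocket Event Stream API',
    28015: 'RethinkDB client drivers (localhost only on a one-server node)',
    29015: 'RethinkDB intracluster communications',
    27017: 'MongoDB inter-node traffic',
}

def unsolicited_inbound_ports(backend):
    """Ports that should expect unsolicited inbound traffic."""
    if backend not in ('rethinkdb', 'mongodb'):
        raise ValueError('unknown backend: %s' % backend)
    return [22, 9984, 9985] + ([29015] if backend == 'rethinkdb' else [27017])

print(unsolicited_inbound_ports('mongodb'))  # [22, 9984, 9985, 27017]
```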
@@ -10,10 +10,10 @@ Appendices

   install-os-level-deps
   install-latest-pip
   run-with-docker
   docker-on-mac
   json-serialization
   cryptography
   the-Bigchain-class
   consensus
   pipelines
   backend
   commands

@@ -21,6 +21,7 @@ Appendices
   generate-key-pair-for-ssh
   firewall-notes
   ntp-notes
   example-rethinkdb-storage-setups
   rethinkdb-reqs
   rethinkdb-backup
   licenses
   install-with-lxd
@@ -24,7 +24,7 @@ deserialize(serialize(data)) == data
True
```
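The round-trip property `deserialize(serialize(data)) == data` can be sketched with the standard-library `json` module, whose `dumps`/`loads` interface python-rapidjson mirrors (sorting keys makes serialization deterministic; the exact options BigchainDB passes may differ):

```python
import json

def serialize(data):
    # Sort keys so the same dict always serializes to the same string.
    return json.dumps(data, sort_keys=True, separators=(',', ':'))

def deserialize(data):
    return json.loads(data)

data = {'b': 2, 'a': 1}
assert deserialize(serialize(data)) == data
print(serialize(data))  # {"a":1,"b":2}
```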

Since BigchainDB performs a lot of serialization we decided to use [python-rapidjson](https://github.com/kenrobbins/python-rapidjson)
Since BigchainDB performs a lot of serialization we decided to use [python-rapidjson](https://github.com/python-rapidjson/python-rapidjson)
which is a Python wrapper for [rapidjson](https://github.com/miloyip/rapidjson), a fast and fully RFC-compliant JSON parser.

```python
@@ -1,6 +1,6 @@
# Backing Up & Restoring Data
# Backing Up and Restoring Data

There are several ways to back up and restore the data in a BigchainDB cluster.
This page was written when BigchainDB only worked with RethinkDB, so its focus is on RethinkDB-based backup. BigchainDB now supports MongoDB as a backend database and we recommend that you use MongoDB in production. Nevertheless, some of the following backup ideas are still relevant regardless of the backend database being used, so we moved this page to the Appendices.


## RethinkDB's Replication as a form of Backup
@@ -1,20 +1,8 @@
# Production Node Requirements
# RethinkDB Requirements

Note: This section will be broken apart into several pages, e.g. NTP requirements, RethinkDB requirements, BigchainDB requirements, etc. and those pages will add more details.
[The RethinkDB documentation](https://rethinkdb.com/docs/) should be your first source of information about its requirements. This page serves mostly to document some of its more obscure requirements.


## OS Requirements

* RethinkDB Server [will run on any modern OS](https://www.rethinkdb.com/docs/install/). Note that the Fedora package isn't officially supported. Also, official support for Windows is fairly recent ([April 2016](https://rethinkdb.com/blog/2.3-release/)).
* BigchainDB Server requires Python 3.4+ and Python 3.4+ [will run on any modern OS](https://docs.python.org/3.4/using/index.html).
* BigchainDB Server uses the Python `multiprocessing` package and [some functionality in the `multiprocessing` package doesn't work on OS X](https://docs.python.org/3.4/library/multiprocessing.html#multiprocessing.Queue.qsize). You can still use Mac OS X if you use Docker or a virtual machine.

The BigchainDB core dev team uses recent LTS versions of Ubuntu and recent versions of Fedora.

We don't test BigchainDB on Windows or Mac OS X, but you can try.

* If you run into problems on Windows, then you may want to try using Vagrant. One of our community members ([@Mec-Is](https://github.com/Mec-iS)) wrote [a page about how to install BigchainDB on a VM with Vagrant](https://gist.github.com/Mec-iS/b84758397f1b21f21700).
* If you have Mac OS X and want to experiment with BigchainDB, then you could do that [using Docker](../appendices/run-with-docker.html).
RethinkDB Server [will run on any modern OS](https://www.rethinkdb.com/docs/install/). Note that the Fedora package isn't officially supported. Also, official support for Windows is fairly recent ([April 2016](https://rethinkdb.com/blog/2.3-release/)).


## Storage Requirements
@@ -28,6 +16,20 @@ For RethinkDB's failover mechanisms to work, [every RethinkDB table must have at

As for the read & write rates, what do you expect those to be for your situation? It's not enough for the storage system alone to handle those rates: the interconnects between the nodes must also be able to handle them.

**Storage Notes Specific to RethinkDB**

* The RethinkDB storage engine has a number of SSD optimizations, so you _can_ benefit from using SSDs. ([source](https://www.rethinkdb.com/docs/architecture/))

* If you have an N-node RethinkDB cluster and 1) you want to use it to store an amount of data D (unique records, before replication), 2) you want the replication factor to be R (all tables), and 3) you want N shards (all tables), then each BigchainDB node must have storage space of at least R×D/N.

* RethinkDB tables can have [at most 64 shards](https://rethinkdb.com/limitations/). What does that imply? Suppose you only have one table, with 64 shards. How big could that table be? It depends on how much data can be stored in each node. If the maximum amount of data that a node can store is d, then the biggest-possible shard is d, and the biggest-possible table size is 64 times that. (All shard replicas would have to be stored on other nodes beyond the initial 64.) If there are two tables, the second table could also have 64 shards, stored on 64 other maxed-out nodes, so the total amount of unique data in the database would be (64 shards/table)×(2 tables)×d. In general, if you have T tables, the maximum amount of unique data that can be stored in the database (i.e. the amount of data before replication) is 64×T×d.

* When you set up storage for your RethinkDB data, you may have to select a filesystem. (Sometimes, the filesystem is already decided by the choice of storage.) We recommend using a filesystem that supports direct I/O (Input/Output). Many compressed or encrypted file systems don't support direct I/O. The ext4 filesystem supports direct I/O (but be careful: if you enable the data=journal mode, then direct I/O support will be disabled; the default is data=ordered). If your chosen filesystem supports direct I/O and you're using Linux, then you don't need to do anything to request or enable direct I/O. RethinkDB does that.

<p style="background-color: lightgrey;">What is direct I/O? It allows RethinkDB to write directly to the storage device (or use its own in-memory caching mechanisms), rather than relying on the operating system's file read and write caching mechanisms. (If you're using Linux, a write-to-file normally writes to the in-memory Page Cache first; only later does that Page Cache get flushed to disk. The Page Cache is also used when reading files.)</p>

* RethinkDB stores its data in a specific directory. You can tell RethinkDB _which_ directory using the RethinkDB config file, as explained below. In this documentation, we assume the directory is `/data`. If you set up a separate device (partition, RAID array, or logical volume) to store the RethinkDB data, then mount that device on `/data`.
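The two sizing rules above (per-node storage of at least R×D/N, and a 64-shard cap per table giving at most 64×T×d of unique data) are easy to sanity-check numerically; the figures below are made-up examples, not recommendations:

```python
def min_storage_per_node(unique_data, replication_factor, num_nodes):
    """Lower bound on per-node storage: R x D / N (all tables sharded N ways)."""
    return replication_factor * unique_data / num_nodes

def max_unique_data(num_tables, max_per_node):
    """Upper bound on unique (pre-replication) data: 64 x T x d."""
    return 64 * num_tables * max_per_node

# e.g. 1024 GiB of unique data, replication factor 3, 5 nodes:
print(min_storage_per_node(1024, 3, 5))  # 614.4 GiB per node
# e.g. 2 tables, at most 500 GiB of data storable per node:
print(max_unique_data(2, 500))           # 64000 GiB
```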

## Memory (RAM) Requirements
@@ -25,7 +25,7 @@ docker run \
  --interactive \
  --rm \
  --tty \
  --volume "$HOME/bigchaindb_docker:/data" \
  --volume $HOME/bigchaindb_docker:/data \
  bigchaindb/bigchaindb \
  -y configure \
  [mongodb|rethinkdb]

@@ -45,7 +45,7 @@ Let's analyze that command:
  `$HOME/bigchaindb_docker` to the container directory `/data`;
  this allows us to have the data persisted on the host machine,
  you can read more in the [official Docker
  documentation](https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume)
  documentation](https://docs.docker.com/engine/tutorials/dockervolumes)
* `bigchaindb/bigchaindb` the image to use. All the options after the container name are passed on to the entrypoint inside the container.
* `-y configure` execute the `configure` sub-command (of the `bigchaindb`
  command) inside the container, with the `-y` option to automatically use all the default config values

@@ -76,13 +76,13 @@ docker run \
  --publish=172.17.0.1:28015:28015 \
  --publish=172.17.0.1:58080:8080 \
  --restart=always \
  --volume "$HOME/bigchaindb_docker:/data" \
  --volume $HOME/bigchaindb_docker:/data \
  rethinkdb:2.3
```

<!-- Don't hyperlink http://172.17.0.1:58080/ because Sphinx will fail when you do "make linkcheck" -->

You can also access the RethinkDB dashboard at
[http://172.17.0.1:58080/](http://172.17.0.1:58080/)
You can also access the RethinkDB dashboard at http://172.17.0.1:58080/


#### For MongoDB

@@ -95,7 +95,7 @@ be owned by this user in the host.
If there is no owner with UID 999, you can create the corresponding user and
group.

`groupadd -r --gid 999 mongodb && useradd -r --uid 999 -g mongodb mongodb`
`useradd -r --uid 999 mongodb` OR `groupadd -r --gid 999 mongodb && useradd -r --uid 999 -g mongodb mongodb` should work.


```text

@@ -156,3 +156,4 @@ docker build --tag local-bigchaindb .
```

Now you can use your own image to run BigchainDB containers.
@@ -0,0 +1,454 @@
First Node or Bootstrap Node Setup
==================================

This document is a work in progress and will evolve over time to include
security, websocket and other settings.

Step 1: Set Up the Cluster
--------------------------

.. code:: bash

   az group create --name bdb-test-cluster-0 --location westeurope --debug --output json

   ssh-keygen -t rsa -C "k8s-bdb-test-cluster-0" -f ~/.ssh/k8s-bdb-test-cluster-0

   az acs create --name k8s-bdb-test-cluster-0 \
     --resource-group bdb-test-cluster-0 \
     --master-count 3 \
     --agent-count 2 \
     --admin-username ubuntu \
     --agent-vm-size Standard_D2_v2 \
     --dns-prefix k8s-bdb-test-cluster-0 \
     --ssh-key-value ~/.ssh/k8s-bdb-test-cluster-0.pub \
     --orchestrator-type kubernetes \
     --debug --output json

   az acs kubernetes get-credentials \
     --resource-group bdb-test-cluster-0 \
     --name k8s-bdb-test-cluster-0 \
     --debug --output json

   echo -e "Host k8s-bdb-test-cluster-0.westeurope.cloudapp.azure.com\n  ForwardAgent yes" >> ~/.ssh/config


Step 2: Connect to the Cluster UI - (optional)
----------------------------------------------

* Get the kubectl context for this cluster using ``kubectl config view``.

* For the above commands, the context would be ``k8s-bdb-test-cluster-0``.

.. code:: bash

   kubectl --context k8s-bdb-test-cluster-0 proxy -p 8001

Step 3. Configure the Cluster
-----------------------------

* Use the ConfigMap in the ``configuration/config-map.yaml`` file for configuring
  the cluster.

* Log in to the MongoDB Cloud Manager and select the group that will monitor
  and back up this cluster from the dropdown box.

* Go to Settings, Group Settings and copy the ``Agent Api Key``.

* Replace the ``<api key here>`` field with this key.

* Since this is the first node of the cluster, ensure that the ``data.fqdn``
  field has the value ``mdb-instance-0``.

* We only support the value ``all`` in the ``data.allowed-hosts`` field for now.

* Create the ConfigMap:

.. code:: bash

   kubectl --context k8s-bdb-test-cluster-0 apply -f configuration/config-map.yaml

Step 4. Start the NGINX Service
-------------------------------

* This will give us a public IP for the cluster.

* Once you complete this step, you might need to wait up to 10 minutes for the
  public IP to be assigned.

* You have the option to use vanilla NGINX or an OpenResty NGINX integrated
  with the 3scale API Gateway.


Step 4.1. Vanilla NGINX
^^^^^^^^^^^^^^^^^^^^^^^

* This configuration is located in the file ``nginx/nginx-svc.yaml``.

* Since this is the first node, rename ``metadata.name`` and ``metadata.labels.name``
  to ``ngx-instance-0``, and ``spec.selector.app`` to ``ngx-instance-0-dep``.

* Start the Kubernetes Service:

.. code:: bash

   kubectl --context k8s-bdb-test-cluster-0 apply -f nginx/nginx-svc.yaml


Step 4.2. OpenResty NGINX + 3scale
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* You have to enable HTTPS for this one and will need an HTTPS certificate
  for your domain.

* Assuming that the public key chain is named ``cert.pem`` and the private key is
  ``cert.key``, run the following commands to encode the certificates into a
  single continuous string that can be embedded in YAML:

.. code:: bash

   cat cert.pem | base64 -w 0 > cert.pem.b64

   cat cert.key | base64 -w 0 > cert.key.b64

* Copy the contents of ``cert.pem.b64`` into the ``cert.pem`` field, and the
  contents of ``cert.key.b64`` into the ``cert.key`` field, in the file
  ``nginx-3scale/nginx-3scale-secret.yaml``.

* Create the Kubernetes Secret:

.. code:: bash

   kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-3scale/nginx-3scale-secret.yaml

* Since this is the first node, rename ``metadata.name`` and ``metadata.labels.name``
  to ``ngx-instance-0``, and ``spec.selector.app`` to ``ngx-instance-0-dep`` in the
  ``nginx-3scale/nginx-3scale-svc.yaml`` file.

* Start the Kubernetes Service:

.. code:: bash

   kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-3scale/nginx-3scale-svc.yaml
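A note on the ``base64 -w 0`` commands in Step 4.2: the ``-w`` (wrap) flag is GNU-specific, and the ``base64`` shipped with macOS lacks it. A portable Python equivalent of that encoding step, demonstrated on a throwaway file standing in for ``cert.pem``:

```python
import base64
import tempfile
from pathlib import Path

def b64_oneline(path):
    """Encode a file as one continuous base64 string (like `base64 -w 0`)."""
    return base64.b64encode(Path(path).read_bytes()).decode('ascii')

# Demo on a temporary file; in practice, point this at cert.pem / cert.key.
with tempfile.TemporaryDirectory() as tmp:
    pem = Path(tmp) / 'cert.pem'
    pem.write_bytes(b'hi')
    print(b64_oneline(pem))  # aGk=
```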
|
||||
|
||||
Step 5. Assign DNS Name to the NGINX Public IP
|
||||
----------------------------------------------
|
||||
|
||||
* The following command can help you find out if the nginx service strated above
|
||||
has been assigned a public IP or external IP address:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
kubectl --context k8s-bdb-test-cluster-0 get svc -w
|
||||
|
||||
* Once a public IP is assigned, you can log in to the Azure portal and map it to
|
||||
a DNS name.
|
||||
|
||||
* We usually start with bdb-test-cluster-0, bdb-test-cluster-1 and so on.
|
||||
|
||||
* Let us assume that we assigned the unique name of ``bdb-test-cluster-0`` here.
|
||||
|
||||
|
||||
Step 6. Start the Mongo Kubernetes Service
|
||||
------------------------------------------
|
||||
|
||||
* Change ``metadata.name`` and ``metadata.labels.name`` to
|
||||
``mdb-instance-0``, and ``spec.selector.app`` to ``mdb-instance-0-ss``.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-svc.yaml
|
||||
|
||||
|
||||
Step 7. Start the BigchainDB Kubernetes Service
|
||||
-----------------------------------------------
|
||||
|
||||
* Change ``metadata.name`` and ``metadata.labels.name`` to
|
||||
``bdb-instance-0``, and ``spec.selector.app`` to ``bdb-instance-0-dep``.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-svc.yaml
|
||||
|
||||
|
||||
Step 8. Start the NGINX Kubernetes Deployment
|
||||
---------------------------------------------
|
||||
|
||||
* As in step 4, you have the option to use vanilla NGINX or an OpenResty NGINX
|
||||
integrated with 3scale API Gateway.
|
||||
|
||||
Step 8.1. Vanilla NGINX
|
||||
^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
* This configuration is located in the file ``nginx/nginx-dep.yaml``.
|
||||
|
||||
* Since this is the first node, change the ``metadata.name`` and
|
||||
``spec.template.metadata.labels.app`` to ``ngx-instance-0-dep``.
|
||||
|
||||
* Set ``MONGODB_BACKEND_HOST`` env var to
|
||||
``mdb-instance-0.default.svc.cluster.local``.
|
||||
|
||||
* Set ``BIGCHAINDB_BACKEND_HOST`` env var to
|
||||
``bdb-instance-0.default.svc.cluster.local``.
|
||||
|
||||
* Set ``MONGODB_FRONTEND_PORT`` to
|
||||
``$(NGX_INSTANCE_0_SERVICE_PORT_NGX_PUBLIC_MDB_PORT)``.
|
||||
|
||||
* Set ``BIGCHAINDB_FRONTEND_PORT`` to
|
||||
``$(NGX_INSTANCE_0_SERVICE_PORT_NGX_PUBLIC_BDB_PORT)``.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
kubectl --context k8s-bdb-test-cluster-0 apply -f nginx/nginx-dep.yaml
|
||||
|
||||
Step 8.2. OpenResty NGINX + 3scale
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
* This configuration is located in the file
|
||||
``nginx-3scale/nginx-3scale-dep.yaml``.
|
||||
|
||||
* Since this is the first node, change the metadata.name and
|
||||
spec.template.metadata.labels.app to ``ngx-instance-0-dep``.
|
||||
|
||||
* Set ``MONGODB_BACKEND_HOST`` env var to
|
||||
``mdb-instance-0.default.svc.cluster.local``.
|
||||
|
||||
* Set ``BIGCHAINDB_BACKEND_HOST`` env var to
|
||||
``bdb-instance-0.default.svc.cluster.local``.
|
||||
|
||||
* Set ``MONGODB_FRONTEND_PORT`` to
|
||||
``$(NGX_INSTANCE_0_SERVICE_PORT_NGX_PUBLIC_MDB_PORT)``.
|
||||
|
||||
* Set ``BIGCHAINDB_FRONTEND_PORT`` to
|
||||
``$(NGX_INSTANCE_0_SERVICE_PORT_NGX_PUBLIC_BDB_PORT)``.
|
||||
|
||||
* Also, replace the placeholder strings for the env vars with the values
|
||||
obtained from 3scale. You will need the Secret Token, Service ID, Version Header
|
||||
and Provider Key from 3scale.
|
||||
|
||||
* The ``THREESCALE_FRONTEND_API_DNS_NAME`` will be the DNS name registered for your
|
||||
HTTPS certificate.
|
||||
|
||||
* You can set the ``THREESCALE_UPSTREAM_API_PORT`` to any port other than 9984,
|
||||
9985, 443, 8888 and 27017. We usually use port ``9999``.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-3scale/nginx-3scale-dep.yaml
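For reference, a sketch of the relevant ``env`` entries in
``nginx-3scale/nginx-3scale-dep.yaml`` after the steps above (layout assumed;
the angle-bracket value is a placeholder to replace with your own, and the
3scale Secret Token, Service ID, Version Header and Provider Key go into the
env vars named for them in the file):

```yaml
# Sketch only -- the actual layout of nginx-3scale/nginx-3scale-dep.yaml may differ
env:
- name: MONGODB_BACKEND_HOST
  value: mdb-instance-0.default.svc.cluster.local
- name: BIGCHAINDB_BACKEND_HOST
  value: bdb-instance-0.default.svc.cluster.local
- name: THREESCALE_FRONTEND_API_DNS_NAME
  value: <DNS name registered for your HTTPS certificate>
- name: THREESCALE_UPSTREAM_API_PORT
  value: "9999"
```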
|
||||
|
||||
|
||||
Step 9. Create a Kubernetes Storage Class for MongoDB
|
||||
-----------------------------------------------------
|
||||
|
||||
.. code:: bash
|
||||
|
||||
kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-sc.yaml
|
||||
|
||||
|
||||
Step 10. Create a Kubernetes PersistentVolumeClaim
|
||||
--------------------------------------------------
|
||||
|
||||
.. code:: bash
|
||||
|
||||
kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-pvc.yaml
|
||||
|
||||
|
||||
Step 11. Start a Kubernetes StatefulSet for MongoDB
|
||||
---------------------------------------------------
|
||||
|
||||
* Change ``spec.serviceName`` to ``mdb-instance-0``.
|
||||
|
||||
* Change the ``metadata.name``, ``template.metadata.name`` and
|
||||
``template.metadata.labels.app`` to ``mdb-instance-0-ss``.
|
||||
|
||||
* It might take up to 10 minutes for the disks to be created and attached to
|
||||
the pod.
|
||||
|
||||
* The UI might show that the pod has errored with the
|
||||
message "timeout expired waiting for volumes to attach/mount".
|
||||
|
||||
* This happens due to a bug in Azure ACS. In that case, use the CLI below,
rather than the UI, to check the status of the pod.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss.yaml
|
||||
|
||||
* You can check the status of the pod using the command:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
kubectl --context k8s-bdb-test-cluster-0 get po -w
|
||||
|
||||
|
||||
Step 12. Start a Kubernetes Deployment for BigchainDB
|
||||
-----------------------------------------------------
|
||||
|
||||
* Change both ``metadata.name`` and ``spec.template.metadata.labels.app``
|
||||
to ``bdb-instance-0-dep``.
|
||||
|
||||
* Set ``BIGCHAINDB_DATABASE_HOST`` to ``mdb-instance-0``.
|
||||
|
||||
* Set the appropriate ``BIGCHAINDB_KEYPAIR_PUBLIC``,
|
||||
``BIGCHAINDB_KEYPAIR_PRIVATE`` values.
|
||||
|
||||
* One way to generate a BigchainDB keypair is to run a Python shell with
|
||||
the command
|
||||
``from bigchaindb_driver import crypto; crypto.generate_keypair()``.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-dep.yaml
|
||||
|
||||
|
||||
Step 13. Start a Kubernetes Deployment for MongoDB Monitoring Agent
|
||||
-------------------------------------------------------------------
|
||||
|
||||
* Change both ``metadata.name`` and ``spec.template.metadata.labels.app`` to
|
||||
``mdb-mon-instance-0-dep``.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-monitoring-agent/mongo-mon-dep.yaml
|
||||
|
||||
* Get the pod name and check its logs:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
kubectl --context k8s-bdb-test-cluster-0 get po
|
||||
|
||||
kubectl --context k8s-bdb-test-cluster-0 logs -f <pod name>
|
||||
|
||||
|
||||
Step 14. Configure MongoDB Cloud Manager for Monitoring
|
||||
-------------------------------------------------------
|
||||
|
||||
* Open `MongoDB Cloud Manager <https://cloud.mongodb.com>`_.
|
||||
|
||||
* Click ``Login`` under ``MongoDB Cloud Manager`` and log in to the Cloud Manager.
|
||||
|
||||
* Select the group from the dropdown box on the page.
|
||||
|
||||
* Go to Settings, Group Settings and add a Preferred Hostnames regexp as
|
||||
``^mdb-instance-[0-9]{1,2}$``. It may take up to 5 minutes for this setting
to take effect. Refresh the browser window to verify that the changes
have been saved.
|
||||
|
||||
* Next, click the ``Deployment`` tab, and then the ``Manage Existing`` button.
|
||||
|
||||
* On the ``Import your deployment for monitoring`` page, enter the hostname as
|
||||
``mdb-instance-0``, port number as ``27017``, with no authentication and no
|
||||
TLS/SSL settings.
|
||||
|
||||
* Once the deployment is found, click the ``Continue`` button.
|
||||
This may take about a minute or two.
|
||||
|
||||
* Do not add ``Automation Agent`` when given an option to add it.
|
||||
|
||||
* Verify on the UI that data is being sent by the monitoring agent.
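As a quick sanity check, the Preferred Hostnames regexp used in the steps
above (``^mdb-instance-[0-9]{1,2}$``) matches instance names with one or two
digits and nothing else:

```python
# Illustrates which hostnames the Preferred Hostnames regexp matches.
import re

pattern = re.compile(r"^mdb-instance-[0-9]{1,2}$")

for name in ["mdb-instance-0", "mdb-instance-42",
             "mdb-instance-100", "bdb-instance-0"]:
    # names with 1-2 digits and the mdb-instance- prefix match;
    # three digits or a different prefix do not
    print(name, bool(pattern.match(name)))
```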
|
||||
|
||||
|
||||
Step 15. Start a Kubernetes Deployment for MongoDB Backup Agent
|
||||
---------------------------------------------------------------
|
||||
|
||||
* Change both ``metadata.name`` and ``spec.template.metadata.labels.app``
|
||||
to ``mdb-backup-instance-0-dep``.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-backup-agent/mongo-backup-dep.yaml
|
||||
|
||||
* Get the pod name and check its logs:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
kubectl --context k8s-bdb-test-cluster-0 get po
|
||||
|
||||
kubectl --context k8s-bdb-test-cluster-0 logs -f <pod name>
|
||||
|
||||
|
||||
Step 16. Configure MongoDB Cloud Manager for Backup
|
||||
---------------------------------------------------
|
||||
|
||||
* Open `MongoDB Cloud Manager <https://cloud.mongodb.com>`_.
|
||||
|
||||
* Click ``Login`` under ``MongoDB Cloud Manager`` and log in to the Cloud
|
||||
Manager.
|
||||
|
||||
* Select the group from the dropdown box on the page.
|
||||
|
||||
* Click ``Backup`` tab.
|
||||
|
||||
* Click ``Begin Setup``.
|
||||
|
||||
* Click ``Next`` and select the replica set from the dropdown menu.
|
||||
|
||||
* Verify the details of your MongoDB instance and click ``Start``.
|
||||
|
||||
* It might take up to 5 minutes to start the backup process.
|
||||
|
||||
* Verify that data is being backed up on the UI.
|
||||
|
||||
|
||||
Step 17. Verify that the Cluster is Correctly Set Up
|
||||
----------------------------------------------------
|
||||
|
||||
* Start the toolbox container in the cluster
|
||||
|
||||
.. code:: bash
|
||||
|
||||
kubectl --context k8s-bdb-test-cluster-0 \
|
||||
run -it toolbox \
|
||||
--image bigchaindb/toolbox \
|
||||
--image-pull-policy=Always \
|
||||
--restart=Never --rm
|
||||
|
||||
* Verify MongoDB instance
|
||||
|
||||
.. code:: bash
|
||||
|
||||
nslookup mdb-instance-0
|
||||
|
||||
dig +noall +answer _mdb-port._tcp.mdb-instance-0.default.svc.cluster.local SRV
|
||||
|
||||
curl -X GET http://mdb-instance-0:27017
|
||||
|
||||
* Verify BigchainDB instance
|
||||
|
||||
.. code:: bash
|
||||
|
||||
nslookup bdb-instance-0
|
||||
|
||||
dig +noall +answer _bdb-port._tcp.bdb-instance-0.default.svc.cluster.local SRV
|
||||
|
||||
curl -X GET http://bdb-instance-0:9984
|
||||
|
||||
* Verify NGINX instance
|
||||
|
||||
.. code:: bash
|
||||
|
||||
nslookup ngx-instance-0
|
||||
|
||||
dig +noall +answer _ngx-public-mdb-port._tcp.ngx-instance-0.default.svc.cluster.local SRV
|
||||
|
||||
curl -X GET http://ngx-instance-0:27017 # results in curl: (56) Recv failure: Connection reset by peer
|
||||
|
||||
dig +noall +answer _ngx-public-bdb-port._tcp.ngx-instance-0.default.svc.cluster.local SRV
|
||||
|
||||
* If you are running the vanilla NGINX instance, run
|
||||
|
||||
.. code:: bash
|
||||
|
||||
curl -X GET http://ngx-instance-0:80
|
||||
|
||||
* If you are running the OpenResty NGINX + 3scale instance, run
|
||||
|
||||
.. code:: bash
|
||||
|
||||
curl -X GET https://ngx-instance-0
|
||||
|
||||
* Check the MongoDB monitoring and backup agents on the MongoDB Cloud Manager portal to verify that they are working correctly.
|
||||
|
||||
* Send some transactions to BigchainDB and verify it's up and running!
|
||||
|
|
@ -17,4 +17,5 @@ If you find the cloud deployment templates for nodes helpful, then you may also
|
|||
node-on-kubernetes
|
||||
add-node-on-kubernetes
|
||||
upgrade-on-kubernetes
|
||||
|
||||
first-node
|
||||
log-analytics
|
||||
|
|
|
@ -0,0 +1,256 @@
|
|||
Log Analytics on Azure
|
||||
======================
|
||||
|
||||
This section documents how to create and configure a Log Analytics workspace on
|
||||
Azure, for a Kubernetes-based deployment.
|
||||
|
||||
The documented approach is based on an integration of Microsoft's Operations
|
||||
Management Suite (OMS) with a Kubernetes-based Azure Container Service cluster.
|
||||
|
||||
The :ref:`oms-k8s-references` contains links to more detailed documentation on
|
||||
Azure and Kubernetes.
|
||||
|
||||
There are three main steps involved:
|
||||
|
||||
1. Create a workspace (``LogAnalyticsOMS``).
|
||||
2. Create a ``ContainersOMS`` solution under the workspace.
|
||||
3. Deploy the OMS agent(s).
|
||||
|
||||
Steps 1 and 2 rely on `Azure Resource Manager templates`_ and can be done with
|
||||
one template, so we'll cover them together. Step 3 relies on a
|
||||
`Kubernetes DaemonSet`_ and will be covered separately.
|
||||
|
||||
Minimum Requirements
|
||||
--------------------
|
||||
This document assumes that you have already deployed a Kubernetes cluster, and
|
||||
that you have the Kubernetes command line tool ``kubectl`` installed.
|
||||
|
||||
Creating a workspace and adding a containers solution
|
||||
-----------------------------------------------------
|
||||
For the sake of this document and example, we'll assume an existing resource
|
||||
group named:
|
||||
|
||||
* ``resource_group``
|
||||
|
||||
and the workspace we'll create will be named:
|
||||
|
||||
* ``work_space``
|
||||
|
||||
If you feel creative, you may replace these names with more interesting ones.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
$ az group deployment create --debug \
|
||||
--resource-group resource_group \
|
||||
--name "Microsoft.LogAnalyticsOMS" \
|
||||
--template-file log_analytics_oms.json \
|
||||
--parameters @log_analytics_oms.parameters.json
|
||||
|
||||
An example of a simple template file (``--template-file``):
|
||||
|
||||
.. code-block:: json
|
||||
|
||||
{
|
||||
"$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
|
||||
"contentVersion": "1.0.0.0",
|
||||
"parameters": {
|
||||
"sku": {
|
||||
"type": "String"
|
||||
},
|
||||
"workspaceName": {
|
||||
"type": "String"
|
||||
},
|
||||
"solutionType": {
|
||||
"type": "String"
|
||||
}
|
||||
},
|
||||
"resources": [
|
||||
{
|
||||
"apiVersion": "2015-03-20",
|
||||
"type": "Microsoft.OperationalInsights/workspaces",
|
||||
"name": "[parameters('workspaceName')]",
|
||||
"location": "[resourceGroup().location]",
|
||||
"properties": {
|
||||
"sku": {
|
||||
"name": "[parameters('sku')]"
|
||||
}
|
||||
},
|
||||
"resources": [
|
||||
{
|
||||
"apiVersion": "2015-11-01-preview",
|
||||
"location": "[resourceGroup().location]",
|
||||
"name": "[Concat(parameters('solutionType'), '(', parameters('workspaceName'), ')')]",
|
||||
"type": "Microsoft.OperationsManagement/solutions",
|
||||
"id": "[Concat(resourceGroup().id, '/providers/Microsoft.OperationsManagement/solutions/', parameters('solutionType'), '(', parameters('workspaceName'), ')')]",
|
||||
"dependsOn": [
|
||||
"[concat('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
|
||||
],
|
||||
"properties": {
|
||||
"workspaceResourceId": "[resourceId('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
|
||||
},
|
||||
"plan": {
|
||||
"publisher": "Microsoft",
|
||||
"product": "[Concat('OMSGallery/', parameters('solutionType'))]",
|
||||
"name": "[Concat(parameters('solutionType'), '(', parameters('workspaceName'), ')')]",
|
||||
"promotionCode": ""
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
An example of the associated parameter file (``--parameters``):
|
||||
|
||||
.. code-block:: json
|
||||
|
||||
{
|
||||
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
|
||||
"contentVersion": "1.0.0.0",
|
||||
"parameters": {
|
||||
"sku": {
|
||||
"value": "Free"
|
||||
},
|
||||
"workspaceName": {
|
||||
"value": "work_space"
|
||||
},
|
||||
"solutionType": {
|
||||
"value": "Containers"
|
||||
}
|
||||
}
|
||||
}
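Since Azure Resource Manager rejects malformed JSON, it can save a deployment
round-trip to validate template and parameter files locally first, e.g. with
Python's ``json`` module. A trailing comma is an easy mistake to make:

```python
# Validate JSON before passing it to `az group deployment create`.
import json

valid = '{"parameters": {"sku": {"value": "Free"}}}'
invalid = '{"parameters": {"sku": {"value": "Free"},}}'  # trailing comma

json.loads(valid)  # parses without error

try:
    json.loads(invalid)
except json.JSONDecodeError as err:
    print("rejected:", err)
```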
|
||||
|
||||
Deploying the OMS agent(s)
|
||||
--------------------------
|
||||
In order to deploy an OMS agent, two pieces of information are needed:
|
||||
|
||||
* workspace id
|
||||
* workspace key
|
||||
|
||||
Obtaining the workspace id:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
$ az resource show \
|
||||
--resource-group resource_group \
|
||||
--resource-type Microsoft.OperationalInsights/workspaces \
|
||||
--name work_space \
|
||||
| grep customerId
|
||||
"customerId": "12345678-1234-1234-1234-123456789012",
|
||||
|
||||
Obtaining the workspace key:
|
||||
|
||||
Until we figure out a way to do this via the command line, please see the instructions
|
||||
under `Obtain your workspace ID and key
|
||||
<https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-oms#obtain-your-workspace-id-and-key>`_.
|
||||
|
||||
Once you have the workspace id and key you can include them in the following
|
||||
YAML file (:download:`oms-daemonset.yaml
|
||||
<../../../../k8s/logging-and-monitoring/oms-daemonset.yaml>`):
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
# oms-daemonset.yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: DaemonSet
|
||||
metadata:
|
||||
name: omsagent
|
||||
spec:
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: omsagent
|
||||
spec:
|
||||
containers:
|
||||
- env:
|
||||
- name: WSID
|
||||
value: <workspace_id>
|
||||
- name: KEY
|
||||
value: <workspace_key>
|
||||
image: microsoft/oms
|
||||
name: omsagent
|
||||
ports:
|
||||
- containerPort: 25225
|
||||
protocol: TCP
|
||||
securityContext:
|
||||
privileged: true
|
||||
volumeMounts:
|
||||
- mountPath: /var/run/docker.sock
|
||||
name: docker-sock
|
||||
volumes:
|
||||
- name: docker-sock
|
||||
hostPath:
|
||||
path: /var/run/docker.sock
|
||||
|
||||
To deploy the agent, simply run the following command:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
$ kubectl create -f oms-daemonset.yaml
|
||||
|
||||
|
||||
Some useful management tasks
|
||||
----------------------------
|
||||
List workspaces:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
$ az resource list \
|
||||
--resource-group resource_group \
|
||||
--resource-type Microsoft.OperationalInsights/workspaces
|
||||
|
||||
List solutions:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
$ az resource list \
|
||||
--resource-group resource_group \
|
||||
--resource-type Microsoft.OperationsManagement/solutions
|
||||
|
||||
Deleting the containers solution:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
$ az group deployment delete --debug \
|
||||
--resource-group resource_group \
|
||||
--name Microsoft.ContainersOMS
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
$ az resource delete \
|
||||
--resource-group resource_group \
|
||||
--resource-type Microsoft.OperationsManagement/solutions \
|
||||
--name "Containers(work_space)"
|
||||
|
||||
Deleting the workspace:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
$ az group deployment delete --debug \
|
||||
--resource-group resource_group \
|
||||
--name Microsoft.LogAnalyticsOMS
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
$ az resource delete \
|
||||
--resource-group resource_group \
|
||||
--resource-type Microsoft.OperationalInsights/workspaces \
|
||||
--name work_space
|
||||
|
||||
|
||||
.. _oms-k8s-references:
|
||||
|
||||
References
|
||||
----------
|
||||
|
||||
* `Monitor an Azure Container Service cluster with Microsoft Operations Management Suite (OMS) <https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-oms>`_
|
||||
* `Manage Log Analytics using Azure Resource Manager templates <https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-template-workspace-configuration>`_
|
||||
* `azure commands for deployments <https://docs.microsoft.com/en-us/cli/azure/group/deployment>`_
|
||||
(``az group deployment``)
|
||||
* `Understand the structure and syntax of Azure Resource Manager templates <https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates>`_
|
||||
* `Kubernetes DaemonSet`_
|
||||
|
||||
|
||||
|
||||
.. _Azure Resource Manager templates: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates
|
||||
.. _Kubernetes DaemonSet: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
|
|
@ -157,7 +157,7 @@ Step 5: Create the Config Map - Optional
|
|||
|
||||
This step is required only if you are planning to set up multiple
|
||||
`BigchainDB nodes
|
||||
<https://docs.bigchaindb.com/en/latest/terminology.html#node>`_.
|
||||
<https://docs.bigchaindb.com/en/latest/terminology.html>`_.
|
||||
|
||||
MongoDB reads the local ``/etc/hosts`` file while bootstrapping a replica set
|
||||
to resolve the hostname provided to the ``rs.initiate()`` command. It needs to
|
||||
|
@ -268,7 +268,7 @@ Step 7: Initialize a MongoDB Replica Set - Optional
|
|||
|
||||
This step is required only if you are planning to set up multiple
|
||||
`BigchainDB nodes
|
||||
<https://docs.bigchaindb.com/en/latest/terminology.html#node>`_.
|
||||
<https://docs.bigchaindb.com/en/latest/terminology.html>`_.
|
||||
|
||||
|
||||
Login to the running MongoDB instance and access the mongo shell using:
|
||||
|
@ -315,7 +315,7 @@ Step 8: Create a DNS record - Optional
|
|||
|
||||
This step is required only if you are planning to set up multiple
|
||||
`BigchainDB nodes
|
||||
<https://docs.bigchaindb.com/en/latest/terminology.html#node>`_.
|
||||
<https://docs.bigchaindb.com/en/latest/terminology.html>`_.
|
||||
|
||||
**Azure.** Select the current Azure resource group and look for the ``Public IP``
|
||||
resource. You should see at least 2 entries there - one for the Kubernetes
|
||||
|
@ -426,9 +426,8 @@ on the cluster and query the internal DNS and IP endpoints.
|
|||
$ kubectl run -it toolbox --image <docker image to run> --restart=Never --rm
|
||||
|
||||
There is a generic image based on alpine:3.5 with the required utilities
|
||||
hosted at Docker Hub under ``bigchaindb/toolbox``.
|
||||
The corresponding Dockerfile is `here
|
||||
<https://github.com/bigchaindb/bigchaindb/k8s/toolbox/Dockerfile>`_.
|
||||
hosted at Docker Hub under `bigchaindb/toolbox <https://hub.docker.com/r/bigchaindb/toolbox/>`_.
|
||||
The corresponding Dockerfile is in the bigchaindb/bigchaindb repository on GitHub, at `https://github.com/bigchaindb/bigchaindb/blob/master/k8s/toolbox/Dockerfile <https://github.com/bigchaindb/bigchaindb/blob/master/k8s/toolbox/Dockerfile>`_.
|
||||
|
||||
You can use it as below to get started immediately:
|
||||
|
||||
|
|
|
@ -81,4 +81,4 @@ where, as before, `<key-name>` must be replaced.
|
|||
|
||||
## Next Steps
|
||||
|
||||
You could make changes to the Ansible playbook (and the resources it uses) to make the node more production-worthy. See [the section on production node assumptions, components and requirements](../nodes/index.html).
|
||||
You could make changes to the Ansible playbook (and the resources it uses) to make the node more production-worthy. See [the section on production node assumptions, components and requirements](../production-nodes/index.html).
|
||||
|
|
|
@ -53,7 +53,7 @@ on the node and mark it as unscheduleable
|
|||
|
||||
kubectl drain $NODENAME
|
||||
|
||||
There are `more details in the Kubernetes docs <https://kubernetes.io/docs/admin/cluster-management/#maintenance-on-a-node>`_,
|
||||
There are `more details in the Kubernetes docs <https://kubernetes.io/docs/concepts/cluster-administration/cluster-management/#maintenance-on-a-node>`_,
|
||||
including instructions to make the node scheduleable again.
|
||||
|
||||
To manually upgrade the host OS,
|
||||
|
@ -82,13 +82,13 @@ A typical upgrade workflow for a single Deployment would be:
|
|||
|
||||
$ KUBE_EDITOR=nano kubectl edit deployment/<name of Deployment>
|
||||
|
||||
The `kubectl edit <https://kubernetes.io/docs/user-guide/kubectl/kubectl_edit/>`_
|
||||
command opens the specified editor (nano in the above example),
|
||||
The ``kubectl edit`` command
|
||||
opens the specified editor (nano in the above example),
|
||||
allowing you to edit the specified Deployment *in the Kubernetes cluster*.
|
||||
You can change the version tag on the Docker image, for example.
|
||||
Don't forget to save your edits before exiting the editor.
|
||||
The Kubernetes docs have more information about
|
||||
`updating a Deployment <https://kubernetes.io/docs/user-guide/deployments/#updating-a-deployment>`_.
|
||||
`Deployments <https://kubernetes.io/docs/concepts/workloads/controllers/deployment/>`_ (including updating them).
|
||||
|
||||
|
||||
The upgrade story for the MongoDB StatefulSet is *different*.
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
# Deploy a Testing Cluster on AWS
|
||||
# Deploy a RethinkDB-Based Testing Cluster on AWS
|
||||
|
||||
This section explains a way to deploy a cluster of BigchainDB nodes on Amazon Web Services (AWS) for testing purposes.
|
||||
This section explains a way to deploy a _RethinkDB-based_ cluster of BigchainDB nodes on Amazon Web Services (AWS) for testing purposes.
|
||||
|
||||
## Why?
|
||||
|
||||
|
|
|
@ -5,6 +5,5 @@ Clusters
|
|||
:maxdepth: 1
|
||||
|
||||
set-up-a-cluster
|
||||
backup
|
||||
aws-testing-cluster
|
||||
|
||||
|
|
|
@ -3,7 +3,9 @@
|
|||
This section is about how to set up a BigchainDB cluster where each node is operated by a different operator. If you want to set up and run a testing cluster on AWS (where all nodes are operated by you), then see [the section about that](aws-testing-cluster.html).
|
||||
|
||||
|
||||
## Initial Checklist
|
||||
## Initial Questions
|
||||
|
||||
There are many questions that must be answered before setting up a BigchainDB cluster. For example:
|
||||
|
||||
* Do you have a governance process for making consortium-level decisions, such as how to admit new members?
|
||||
* What will you store in creation transactions (data payload)? Is there a data schema?
|
||||
|
@ -15,14 +17,12 @@ This section is about how to set up a BigchainDB cluster where each node is oper
|
|||
|
||||
The consortium must decide some things before setting up the initial cluster (initial set of BigchainDB nodes):
|
||||
|
||||
1. Who will operate a node in the initial cluster?
|
||||
2. What will the replication factor be? (It must be 3 or more for [RethinkDB failover](https://rethinkdb.com/docs/failover/) to work.)
|
||||
3. Which node will be responsible for sending the commands to configure the RethinkDB database?
|
||||
1. Who will operate each node in the initial cluster?
|
||||
2. What will the replication factor be? (It should be 3 or more.)
|
||||
3. Who will deploy the first node, second node, etc.?
|
||||
|
||||
Once those things have been decided, each node operator can begin setting up their BigchainDB (production) node.
|
||||
Once those things have been decided, the cluster deployment process can begin. The process for deploying a production node is outlined in [the section on production nodes](../production-nodes/index.html).
|
||||
|
||||
Each node operator will eventually need two pieces of information from all other nodes:
|
||||
|
||||
1. Their RethinkDB hostname, e.g. `rdb.farm2.organization.org`
|
||||
2. Their BigchainDB public key, e.g. `Eky3nkbxDTMgkmiJC8i5hKyVFiAQNmPP4a2G4JdDxJCK`
|
||||
Every time a new BigchainDB node is added, every other node must update their [BigchainDB keyring](../server-reference/configuration.html#keyring) (one of the BigchainDB configuration settings): they must add the public key of the new node.
|
||||
|
||||
To secure communications between BigchainDB nodes, each BigchainDB node can use a firewall or similar, and doing that will require additional coordination.
|
||||
|
|
|
@ -25,9 +25,16 @@ The (single) output of a threshold condition can be used as one of the inputs of
|
|||
When one creates a condition, one can calculate its fulfillment length (e.g.
|
||||
96). The more complex the condition, the larger its fulfillment length will be.
|
||||
A BigchainDB federation can put an upper limit on the complexity of the
|
||||
conditions, either directly by setting an allowed maximum fulfillment length,
|
||||
or indirectly by setting a maximum allowed transaction size which would limit
|
||||
conditions, either directly by setting a maximum allowed fulfillment length,
|
||||
or
|
||||
`indirectly <https://github.com/bigchaindb/bigchaindb/issues/356#issuecomment-288085251>`_
|
||||
by :ref:`setting a maximum allowed transaction size <Enforcing a Max Transaction Size>`
|
||||
which would limit
|
||||
the overall complexity across all inputs and outputs of a transaction.
|
||||
Note: At the time of writing, there was no configuration setting
|
||||
to set a maximum allowed fulfillment length,
|
||||
so the only real option was to
|
||||
:ref:`set a maximum allowed transaction size <Enforcing a Max Transaction Size>`.
|
||||
|
||||
If someone tries to make a condition where the output of a threshold condition feeds into the input of another “earlier” threshold condition (i.e. in a closed logical circuit), then their computer will take forever to calculate the (infinite) “condition URI”, at least in theory. In practice, their computer will run out of memory or their client software will timeout after a while.
|
||||
|
||||
|
|
|
@ -49,4 +49,4 @@ Here's some explanation of the contents of a :ref:`transaction <transaction>`:
|
|||
|
||||
Later, when we get to the models for the block and the vote, we'll see that both include a signature (from the node which created it). You may wonder why transactions don't have signatures... The answer is that they do! They're just hidden inside the ``fulfillment`` string of each input. A creation transaction is signed by whoever created it. A transfer transaction is signed by whoever currently controls or owns it.
|
||||
|
||||
What gets signed? For each input in the transaction, the "fullfillment message" that gets signed includes the ``operation``, ``data``, ``version``, ``id``, corresponding ``condition``, and the fulfillment itself, except with its fulfillment string set to ``null``. The computed signature goes into creating the ``fulfillment`` string of the input.
|
||||
What gets signed? For each input in the transaction, the "fulfillment message" that gets signed includes the JSON serialized body of the transaction, minus any fulfillment strings. The computed signature goes into creating the ``fulfillment`` string of the input.
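A rough sketch of that idea in Python (illustrative only: the transaction
dict below is not the exact BigchainDB schema, and the real serialization and
signing details live in the BigchainDB code):

```python
# Build the bytes-to-sign: the serialized transaction body with every
# input's fulfillment string set to null. Illustrative, not the real schema.
import copy
import json

tx = {
    "id": "abc123",
    "operation": "CREATE",
    "inputs": [{"owners_before": ["EkyPubKey..."], "fulfillment": "cf:4:..."}],
    "outputs": [],
}

def fulfillment_message(tx):
    tx = copy.deepcopy(tx)
    for inp in tx["inputs"]:
        inp["fulfillment"] = None  # strip fulfillment strings before signing
    # deterministic serialization: sorted keys, no extra whitespace
    return json.dumps(tx, sort_keys=True, separators=(",", ":"))

print(fulfillment_message(tx))
```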
|
||||
|
|
|
@ -23,7 +23,9 @@ Start RethinkDB using:
|
|||
$ rethinkdb
|
||||
```
|
||||
|
||||
You can verify that RethinkDB is running by opening the RethinkDB web interface in your web browser. It should be at [http://localhost:8080/](http://localhost:8080/).
|
||||
You can verify that RethinkDB is running by opening the RethinkDB web interface in your web browser. It should be at http://localhost:8080/
|
||||
|
||||
<!-- Don't hyperlink http://localhost:8080/ because Sphinx will fail when you do "make linkcheck" -->
|
||||
|
||||
To run BigchainDB Server, do:
|
||||
```text
|
||||
|
@ -87,28 +89,28 @@ Start RethinkDB:
|
|||
docker-compose up -d rdb
|
||||
```
|
||||
|
||||
The RethinkDB web interface should be accessible at <http://localhost:58080/>.
|
||||
The RethinkDB web interface should be accessible at http://localhost:58080/.
|
||||
Depending on which platform, and/or how you are running docker, you may need
|
||||
to change `localhost` for the `ip` of the machine that is running docker. As a
|
||||
dummy example, if the `ip` of that machine was `0.0.0.0`, you would access the
|
||||
web interface at: <http://0.0.0.0:58080/>.
|
||||
web interface at: http://0.0.0.0:58080/.
|
||||
|
||||
Start a BigchainDB node:
|
||||
|
||||
```bash
|
||||
docker-compose up -d bdb
|
||||
docker-compose up -d bdb-rdb
|
||||
```
|
||||
|
||||
You can monitor the logs:
|
||||
|
||||
```bash
|
||||
docker-compose logs -f bdb
|
||||
docker-compose logs -f bdb-rdb
|
||||
```
|
||||
|
||||
If you wish to run the tests:
|
||||
|
||||
```bash
|
||||
docker-compose run --rm bdb py.test -v -n auto
|
||||
docker-compose run --rm bdb-rdb py.test -v -n auto
|
||||
```
|
||||
|
||||
### Docker with MongoDB
|
||||
|
@ -128,19 +130,19 @@ $ docker-compose port mdb 27017
|
|||
Start a BigchainDB node:
|
||||
|
||||
```bash
|
||||
docker-compose up -d bdb-mdb
|
||||
docker-compose up -d bdb
|
||||
```
|
||||
|
||||
You can monitor the logs:
|
||||
|
||||
```bash
|
||||
docker-compose logs -f bdb-mdb
|
||||
docker-compose logs -f bdb
|
||||
```
|
||||
|
||||
If you wish to run the tests:
|
||||
|
||||
```bash
|
||||
docker-compose run --rm bdb-mdb py.test -v --database-backend=mongodb
|
||||
docker-compose run --rm bdb py.test -v --database-backend=mongodb
|
||||
```
|
||||
|
||||
### Accessing the HTTP API
|
||||
|
|
|
@ -1,31 +1,27 @@
|
|||
Drivers & Clients
|
||||
=================
|
||||
|
||||
Currently, the only language-native driver is written in the Python language.
|
||||
Libraries and Tools Maintained by the BigchainDB Team
|
||||
-----------------------------------------------------
|
||||
|
||||
We also provide the Transaction CLI to be able to script the building of
|
||||
transactions. You may be able to wrap this tool inside the language of
|
||||
your choice, and then use the HTTP API directly to post transactions.
|
||||
|
||||
If you use a language other than Python, you may want to look at the current
|
||||
community projects listed below.
|
||||
* `The Python Driver <https://docs.bigchaindb.com/projects/py-driver/en/latest/index.html>`_
|
||||
* `The Transaction CLI <https://docs.bigchaindb.com/projects/cli/en/latest/>`_ is
|
||||
a command-line interface for building BigchainDB transactions.
|
||||
You may be able to call it from inside the language of
|
||||
your choice, and then use :ref:`the HTTP API <The HTTP Client-Server API>`
|
||||
to post transactions.
|
||||
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
http-client-server-api
|
||||
websocket-event-stream-api
|
||||
The Python Driver <https://docs.bigchaindb.com/projects/py-driver/en/latest/index.html>
|
||||
Transaction CLI <https://docs.bigchaindb.com/projects/cli/en/latest/>
|
||||
|
||||
|
||||
Community Driven Libraries and Tools
|
||||
Community-Driven Libraries and Tools
|
||||
------------------------------------
|
||||
Please note that some of these projects may be work in progress, but may
|
||||
nevertheless be very useful.
|
||||
|
||||
.. note::
|
||||
|
||||
Some of these projects are a work in progress,
|
||||
but may still be useful.
|
||||
|
||||
* `Javascript transaction builder <https://github.com/sohkai/js-bigchaindb-quickstart>`_
|
||||
* `Haskell transaction builder <https://github.com/libscott/bigchaindb-hs>`_
|
||||
* `Haskell transaction builder <https://github.com/bigchaindb/bigchaindb-hs>`_
|
||||
* `Go driver <https://github.com/zbo14/envoke/blob/master/bigchain/bigchain.go>`_
|
||||
* `Java driver <https://github.com/mgrand/bigchaindb-java-driver>`_
|
||||
* `Ruby driver <https://github.com/LicenseRocks/bigchaindb_ruby>`_
|
||||
|
|
|
@@ -22,7 +22,7 @@ or ``https://example.com:9984``
then you should get an HTTP response
with something like the following in the body:

-.. literalinclude:: samples/index-response.http
+.. literalinclude:: http-samples/index-response.http
   :language: http


@@ -35,7 +35,7 @@ or ``https://example.com:9984/api/v1/``,
then you should get an HTTP response
that allows you to discover the BigchainDB API endpoints:

-.. literalinclude:: samples/api-index-response.http
+.. literalinclude:: http-samples/api-index-response.http
   :language: http
@@ -46,20 +46,24 @@ Transactions

Get the transaction with the ID ``tx_id``.

-This endpoint returns a transaction only if a ``VALID`` block on
-``bigchain`` exists.
+This endpoint returns a transaction if it was included in a ``VALID`` block,
+if it is still waiting to be processed (``BACKLOG``) or is still in an
+undecided block (``UNDECIDED``). All instances of a transaction in invalid
+blocks are ignored and treated as if they don't exist. If a request is made
+for a transaction and instances of that transaction are found only in
+invalid blocks, then the response will be ``404 Not Found``.

:param tx_id: transaction ID
:type tx_id: hex string

**Example request**:

-.. literalinclude:: samples/get-tx-id-request.http
+.. literalinclude:: http-samples/get-tx-id-request.http
   :language: http

**Example response**:

-.. literalinclude:: samples/get-tx-id-response.http
+.. literalinclude:: http-samples/get-tx-id-response.http
   :language: http

:resheader Content-Type: ``application/json``
@@ -106,12 +110,12 @@ Transactions

**Example request**:

-.. literalinclude:: samples/get-tx-by-asset-request.http
+.. literalinclude:: http-samples/get-tx-by-asset-request.http
   :language: http

**Example response**:

-.. literalinclude:: samples/get-tx-by-asset-response.http
+.. literalinclude:: http-samples/get-tx-by-asset-response.http
   :language: http

:resheader Content-Type: ``application/json``

@@ -135,12 +139,12 @@ Transactions

**Example request**:

-.. literalinclude:: samples/post-tx-request.http
+.. literalinclude:: http-samples/post-tx-request.http
   :language: http

**Example response**:

-.. literalinclude:: samples/post-tx-response.http
+.. literalinclude:: http-samples/post-tx-response.http
   :language: http

:resheader Content-Type: ``application/json``
@@ -223,12 +227,12 @@ Statuses

**Example request**:

-.. literalinclude:: samples/get-statuses-tx-request.http
+.. literalinclude:: http-samples/get-statuses-tx-request.http
   :language: http

**Example response**:

-.. literalinclude:: samples/get-statuses-tx-valid-response.http
+.. literalinclude:: http-samples/get-statuses-tx-valid-response.http
   :language: http

:resheader Content-Type: ``application/json``

@@ -246,17 +250,17 @@ Statuses

**Example request**:

-.. literalinclude:: samples/get-statuses-block-request.http
+.. literalinclude:: http-samples/get-statuses-block-request.http
   :language: http

**Example response**:

-.. literalinclude:: samples/get-statuses-block-invalid-response.http
+.. literalinclude:: http-samples/get-statuses-block-invalid-response.http
   :language: http

**Example response**:

-.. literalinclude:: samples/get-statuses-block-valid-response.http
+.. literalinclude:: http-samples/get-statuses-block-valid-response.http
   :language: http

:resheader Content-Type: ``application/json``
@@ -294,12 +298,12 @@ Blocks

**Example request**:

-.. literalinclude:: samples/get-block-request.http
+.. literalinclude:: http-samples/get-block-request.http
   :language: http

**Example response**:

-.. literalinclude:: samples/get-block-response.http
+.. literalinclude:: http-samples/get-block-response.http
   :language: http


@@ -349,12 +353,12 @@ Blocks

**Example request**:

-.. literalinclude:: samples/get-block-txid-request.http
+.. literalinclude:: http-samples/get-block-txid-request.http
   :language: http

**Example response**:

-.. literalinclude:: samples/get-block-txid-response.http
+.. literalinclude:: http-samples/get-block-txid-response.http
   :language: http

:resheader Content-Type: ``application/json``

@@ -380,12 +384,12 @@ Votes

**Example request**:

-.. literalinclude:: samples/get-vote-request.http
+.. literalinclude:: http-samples/get-vote-request.http
   :language: http

**Example response**:

-.. literalinclude:: samples/get-vote-response.http
+.. literalinclude:: http-samples/get-vote-response.http
   :language: http

:resheader Content-Type: ``application/json``
@@ -402,7 +406,7 @@ Determining the API Root URL
When you start BigchainDB Server using ``bigchaindb start``,
an HTTP API is exposed at some address. The default is:

-`http://localhost:9984/api/v1/ <http://localhost:9984/api/v1/>`_
+``http://localhost:9984/api/v1/``

It's bound to ``localhost``,
so you can access it from the same machine,
@@ -8,9 +8,11 @@ BigchainDB Server Documentation
   introduction
   quickstart
   cloud-deployment-templates/index
-   nodes/index
+   production-nodes/index
   dev-and-test/index
   server-reference/index
+   http-client-server-api
+   websocket-event-stream-api
   drivers-clients/index
   clusters-feds/index
   data-models/index
@@ -1,10 +0,0 @@
Production Node Assumptions, Components & Requirements
======================================================

.. toctree::
   :maxdepth: 1

   node-assumptions
   node-components
   node-requirements
   setup-run-node
@@ -1,13 +0,0 @@
# Production Node Assumptions

If you're not sure what we mean by a BigchainDB *node*, *cluster*, *consortium*, or *production node*, then see [the section in the Introduction where we defined those terms](../introduction.html#some-basic-vocabulary).

We make some assumptions about production nodes:

1. **Each production node is set up and managed by an experienced professional system administrator (or a team of them).**

2. Each production node in a cluster is managed by a different person or team.

Because of the first assumption, we don't provide a detailed cookbook explaining how to secure a server, or other things that a sysadmin should know. (We do provide some [templates](../cloud-deployment-templates/index.html), but those are just a starting point.)
@@ -1,23 +0,0 @@
# Production Node Components

A BigchainDB node must include, at least:

* BigchainDB Server and
* RethinkDB Server.

When doing development and testing, it's common to install both on the same machine, but in a production environment, it may make more sense to install them on separate machines.

In a production environment, a BigchainDB node should have several other components, including:

* nginx or similar, as a reverse proxy and/or load balancer for the Gunicorn server(s) inside the node
* An NTP daemon running on all machines running BigchainDB code, and possibly other machines
* A RethinkDB proxy server
* A RethinkDB "wire protocol firewall" (in the future: this component doesn't exist yet)
* Scalable storage for RethinkDB (e.g. using RAID)
* Monitoring software, to monitor all the machines in the node
* Configuration management agents (if you're using a configuration management system that uses agents)
* Maybe more

The relationship between these components is illustrated below.

![Components of a node](../_static/Node-components.png)
@@ -1,193 +0,0 @@
# Set Up and Run a Cluster Node

This is a page of general guidelines for setting up a production node. It says nothing about how to upgrade software, storage, processing, etc. or other details of node management. It will be expanded more in the future.


## Get a Server

The first step is to get a server (or equivalent) which meets [the requirements for a BigchainDB node](node-requirements.html).


## Secure Your Server

The steps that you must take to secure your server depend on your server OS and where your server is physically located. There are many articles and books about how to secure a server. Here we just cover special considerations when securing a BigchainDB node.

There are some [notes on BigchainDB-specific firewall setup](../appendices/firewall-notes.html) in the Appendices.


## Sync Your System Clock

A BigchainDB node uses its system clock to generate timestamps for blocks and votes, so that clock should be kept in sync with some standard clock(s). The standard way to do that is to run an NTP daemon (Network Time Protocol daemon) on the node. (You could also use tlsdate, which uses TLS timestamps rather than NTP, but don't: it's not very accurate and it will break with TLS 1.3, which removes the timestamp.)

NTP is a standard protocol. There are many NTP daemons implementing it. We don't recommend a particular one. On the contrary, we recommend that different nodes in a cluster run different NTP daemons, so that a problem with one daemon won't affect all nodes.

Please see the [notes on NTP daemon setup](../appendices/ntp-notes.html) in the Appendices.


## Set Up Storage for RethinkDB Data

Below are some things to consider when setting up storage for the RethinkDB data. The Appendices have a [section with concrete examples](../appendices/example-rethinkdb-storage-setups.html).

We suggest you set up a separate storage "device" (partition, RAID array, or logical volume) to store the RethinkDB data. Here are some questions to ask:

* How easy will it be to add storage in the future? Will I have to shut down my server?
* How big can the storage get? (Remember that [RAID](https://en.wikipedia.org/wiki/RAID) can be used to make several physical drives look like one.)
* How fast can it read & write data? How many input/output operations per second (IOPS)?
* How does IOPS scale as more physical hard drives are added?
* What's the latency?
* What's the reliability? Is there replication?
* What's in the Service Level Agreement (SLA), if applicable?
* What's the cost?

There are many options and tradeoffs. Don't forget to look into Amazon Elastic Block Store (EBS) and Amazon Elastic File System (EFS), or their equivalents from other providers.

**Storage Notes Specific to RethinkDB**

* The RethinkDB storage engine has a number of SSD optimizations, so you _can_ benefit from using SSDs. ([source](https://www.rethinkdb.com/docs/architecture/))

* If you want a RethinkDB cluster to store an amount of data D, with a replication factor of R (on every table), and the cluster has N nodes, then each node will need to be able to store R×D/N data.

* RethinkDB tables can have [at most 64 shards](https://rethinkdb.com/limitations/). For example, if you have only one table and more than 64 nodes, some nodes won't have the primary of any shard, i.e. they will have replicas only. In other words, once you pass 64 nodes, adding more nodes won't provide more storage space for new data. If the biggest single-node storage available is d, then the most you can store in a RethinkDB cluster is < 64×d: accomplished by putting one primary shard in each of 64 nodes, with all replica shards on other nodes. (This is assuming one table. If there are T tables, then the most you can store is < 64×d×T.)

* When you set up storage for your RethinkDB data, you may have to select a filesystem. (Sometimes, the filesystem is already decided by the choice of storage.) We recommend using a filesystem that supports direct I/O (Input/Output). Many compressed or encrypted file systems don't support direct I/O. The ext4 filesystem supports direct I/O (but be careful: if you enable the data=journal mode, then direct I/O support will be disabled; the default is data=ordered). If your chosen filesystem supports direct I/O and you're using Linux, then you don't need to do anything to request or enable direct I/O. RethinkDB does that.

  <p style="background-color: lightgrey;">What is direct I/O? It allows RethinkDB to write directly to the storage device (or use its own in-memory caching mechanisms), rather than relying on the operating system's file read and write caching mechanisms. (If you're using Linux, a write-to-file normally writes to the in-memory Page Cache first; only later does that Page Cache get flushed to disk. The Page Cache is also used when reading files.)</p>

* RethinkDB stores its data in a specific directory. You can tell RethinkDB _which_ directory using the RethinkDB config file, as explained below. In this documentation, we assume the directory is `/data`. If you set up a separate device (partition, RAID array, or logical volume) to store the RethinkDB data, then mount that device on `/data`.
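The two capacity rules above (R×D/N per node, < 64×d×T per cluster) can be sanity-checked with quick shell arithmetic. The numbers here (3000 GB of data, replication factor 3, 9 nodes, 4000 GB of per-node storage, 1 table) are hypothetical illustration values, not recommendations:

```shell
# Per-node storage needed: R x D / N
D=3000   # total data the cluster must store, in GB (hypothetical)
R=3      # replication factor on every table (hypothetical)
N=9      # number of nodes in the cluster (hypothetical)
per_node_gb=$(( R * D / N ))
echo "Each node must be able to store ${per_node_gb} GB"

# Cluster-wide ceiling: < 64 x d x T (at most 64 shards per table)
d=4000   # biggest single-node storage available, in GB (hypothetical)
T=1      # number of tables
max_cluster_gb=$(( 64 * d * T ))
echo "The cluster can store strictly less than ${max_cluster_gb} GB"
```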

## Install RethinkDB Server

If you don't already have RethinkDB Server installed, you must install it. The RethinkDB documentation has instructions for [how to install RethinkDB Server on a variety of operating systems](https://rethinkdb.com/docs/install/).


## Configure RethinkDB Server

Create a RethinkDB configuration file (text file) named `instance1.conf` with the following contents (explained below):
```text
directory=/data
bind=all
direct-io
# Replace node?_hostname with actual node hostnames below, e.g. rdb.examples.com
join=node0_hostname:29015
join=node1_hostname:29015
join=node2_hostname:29015
# continue until there's a join= line for each node in the cluster
```

* `directory=/data` tells the RethinkDB node to store its share of the database data in `/data`.
* `bind=all` binds RethinkDB to all local network interfaces (e.g. loopback, Ethernet, wireless, whatever is available), so it can communicate with the outside world. (The default is to bind only to local interfaces.)
* `direct-io` tells RethinkDB to use direct I/O (explained earlier). Only include this line if your file system supports direct I/O.
* `join=hostname:29015` lines: A cluster node needs to find out the hostnames of all the other nodes somehow. You _could_ designate one node to be the one that every other node asks, and put that node's hostname in the config file, but that wouldn't be very decentralized. Instead, we include _every_ node in the list of nodes-to-ask.
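Since every node's config file needs the same `join=` list, one way to avoid copy-paste mistakes is to generate those lines from a single hostname list. The hostnames below are placeholders for your cluster's actual hostnames:

```shell
# Placeholder hostnames; replace with your cluster's actual hostnames.
HOSTS="rdb0.example.com rdb1.example.com rdb2.example.com"

# Append one join= line per node to the config file.
for h in ${HOSTS}; do
    echo "join=${h}:29015"
done >> instance1.conf
```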

If you're curious about the RethinkDB config file, there's [a RethinkDB documentation page about it](https://www.rethinkdb.com/docs/config-file/). The [explanations of the RethinkDB command-line options](https://rethinkdb.com/docs/cli-options/) are another useful reference.

See the [RethinkDB documentation on securing your cluster](https://rethinkdb.com/docs/security/).


## Install Python 3.4+

If you don't already have it, then you should [install Python 3.4+](https://www.python.org/downloads/).

If you're testing or developing BigchainDB on a stand-alone node, then you should probably create a Python 3.4+ virtual environment and activate it (e.g. using virtualenv or conda). Later we will install several Python packages and you probably only want those installed in the virtual environment.
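A quick way to confirm that the interpreter you plan to use meets the 3.4+ requirement (assuming it's invoked as `python3`):

```shell
python3 --version
python3 -c 'import sys; assert sys.version_info >= (3, 4), "BigchainDB Server needs Python 3.4+"'
echo "Python version OK"
```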

## Install BigchainDB Server

First, [install the OS-level dependencies of BigchainDB Server (link)](../appendices/install-os-level-deps.html).

With OS-level dependencies installed, you can install BigchainDB Server with `pip` or from source.


### How to Install BigchainDB with pip

BigchainDB (i.e. both the Server and the officially-supported drivers) is distributed as a Python package on PyPI so you can install it using `pip`. First, make sure you have an up-to-date Python 3.4+ version of `pip` installed:
```text
pip -V
```

If it says that `pip` isn't installed, or it says `pip` is associated with a Python version less than 3.4, then you must install a `pip` version associated with Python 3.4+. In the following instructions, we call it `pip3` but you may be able to use `pip` if that refers to the same thing. See [the `pip` installation instructions](https://pip.pypa.io/en/stable/installing/).

On Ubuntu 16.04, we found that this works:
```text
sudo apt-get install python3-pip
```

That should install a Python 3 version of `pip` named `pip3`. If that didn't work, then another way to get `pip3` is to do `sudo apt-get install python3-setuptools` followed by `sudo easy_install3 pip`.

You can upgrade `pip` (`pip3`) and `setuptools` to the latest versions using:
```text
pip3 install --upgrade pip setuptools
pip3 -V
```

Now you can install BigchainDB Server (and officially-supported BigchainDB drivers) using:
```text
pip3 install bigchaindb
```

(If you're not in a virtualenv and you want to install bigchaindb system-wide, then put `sudo` in front.)

Note: You can use `pip3` to upgrade the `bigchaindb` package to the latest version using `pip3 install --upgrade bigchaindb`.


### How to Install BigchainDB from Source

If you want to install BigchainDB from source because you want to use the very latest bleeding-edge code, clone the public repository:
```text
git clone git@github.com:bigchaindb/bigchaindb.git
cd bigchaindb
python setup.py install
```


## Configure BigchainDB Server

Start by creating a default BigchainDB config file:
```text
bigchaindb -y configure rethinkdb
```

(Documentation for the `bigchaindb` command is in the section on [the BigchainDB Command Line Interface (CLI)](bigchaindb-cli.html).)

Edit the created config file:

* Open `$HOME/.bigchaindb` (the created config file) in your text editor.
* Change `"server": {"bind": "localhost:9984", ... }` to `"server": {"bind": "0.0.0.0:9984", ... }`. This makes it so traffic can come from any IP address to port 9984 (the HTTP Client-Server API port).
* Change `"keyring": []` to `"keyring": ["public_key_of_other_node_A", "public_key_of_other_node_B", "..."]` i.e. a list of the public keys of all the other nodes in the cluster. The keyring should _not_ include your node's public key.

For more information about the BigchainDB config file, see [Configuring a BigchainDB Node](configuration.html).
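After those edits, the relevant part of `$HOME/.bigchaindb` would look something like the abridged sketch below. The keyring entries are placeholders for real public keys, and the actual file contains other settings (e.g. your node's keypair and the database settings) that are omitted here:

```text
{
  "server": {"bind": "0.0.0.0:9984"},
  "keyring": [
    "public_key_of_other_node_A",
    "public_key_of_other_node_B"
  ]
}
```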

## Run RethinkDB Server

Start RethinkDB using:
```text
rethinkdb --config-file path/to/instance1.conf
```

except replace the path with the actual path to `instance1.conf`.

Note: It's possible to [make RethinkDB start at system startup](https://www.rethinkdb.com/docs/start-on-startup/).

You can verify that RethinkDB is running by opening the RethinkDB web interface in your web browser. It should be at `http://rethinkdb-hostname:8080/`. If you're running RethinkDB on localhost, that would be [http://localhost:8080/](http://localhost:8080/).


## Run BigchainDB Server

After all node operators have started RethinkDB, but before they start BigchainDB, one designated node operator must configure the RethinkDB database by running the following commands:
```text
bigchaindb init
bigchaindb set-shards numshards
bigchaindb set-replicas numreplicas
```

where:

* `bigchaindb init` creates the database within RethinkDB, the tables, the indexes, and the genesis block.
* `numshards` should be set to the number of nodes in the initial cluster.
* `numreplicas` should be set to the database replication factor decided by the consortium. It must be 3 or more for [RethinkDB failover](https://rethinkdb.com/docs/failover/) to work.

Once the RethinkDB database is configured, every node operator can start BigchainDB using:
```text
bigchaindb start
```
@@ -0,0 +1,12 @@
Production Nodes
================

.. toctree::
   :maxdepth: 1

   node-assumptions
   node-components
   node-requirements
   setup-run-node
   reverse-proxy-notes
@@ -0,0 +1,16 @@
# Production Node Assumptions

Be sure you know the key BigchainDB terminology:

* [BigchainDB node, BigchainDB cluster and BigchainDB consortium](https://docs.bigchaindb.com/en/latest/terminology.html)
* [dev/test node, bare-bones node and production node](../introduction.html)

We make some assumptions about production nodes:

1. Production nodes use MongoDB, not RethinkDB.
1. Each production node is set up and managed by an experienced professional system administrator or a team of them.
1. Each production node in a cluster is managed by a different person or team.

You can use RethinkDB when building prototypes, but we don't advise or support using it in production.

We don't provide a detailed cookbook explaining how to secure a server, or other things that a sysadmin should know. (We do provide some [templates](../cloud-deployment-templates/index.html), but those are just a starting point.)
@@ -0,0 +1,22 @@
# Production Node Components

A production BigchainDB node must include:

* BigchainDB Server
* MongoDB Server 3.4+ (mongod)
* Scalable storage for MongoDB

It could also include several other components, including:

* NGINX or similar, to provide authentication, rate limiting, etc.
* An NTP daemon running on all machines running BigchainDB Server or mongod, and possibly other machines
* **Not** MongoDB Automation Agent. It's for automating the deployment of an entire MongoDB cluster, not just one MongoDB node within a cluster.
* MongoDB Monitoring Agent
* MongoDB Backup Agent
* Log aggregation software
* Monitoring software
* Maybe more

The relationship between the main components is illustrated below. Note that BigchainDB Server must be able to communicate with the _primary_ MongoDB instance, and any of the MongoDB instances might be the primary, so BigchainDB Server must be able to communicate with all the MongoDB instances. Also, all MongoDB instances must be able to communicate with each other.

![Components of a production node](../_static/Node-components.png)
@@ -0,0 +1,17 @@
# Production Node Requirements

**This page is about the requirements of BigchainDB Server.** You can find the requirements of MongoDB, NGINX, your NTP daemon, your monitoring software, and other [production node components](node-components.html) in the documentation for that software.


## OS Requirements

BigchainDB Server requires Python 3.4+ and Python 3.4+ [will run on any modern OS](https://docs.python.org/3.4/using/index.html), but we recommend using an LTS version of [Ubuntu Server](https://www.ubuntu.com/server) or a similarly server-grade Linux distribution.

_Don't use macOS_ (formerly OS X, formerly Mac OS X), because it's not a server-grade operating system. Also, BigchainDB Server uses the Python multiprocessing package and [some functionality in the multiprocessing package doesn't work on Mac OS X](https://docs.python.org/3.4/library/multiprocessing.html#multiprocessing.Queue.qsize).
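You can observe that limitation directly: the one-liner below prints the queue size on Linux, but raises `NotImplementedError` on macOS, where `multiprocessing.Queue.qsize` isn't implemented:

```shell
# Succeeds on Linux; raises NotImplementedError on macOS.
python3 -c 'import multiprocessing; q = multiprocessing.Queue(); print(q.qsize())'
```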

## General Considerations

BigchainDB Server runs many concurrent processes, so more RAM and more CPU cores are better.

As mentioned on the page about [production node components](node-components.html), every machine running BigchainDB Server should be running an NTP daemon.
@@ -0,0 +1,72 @@
# Using a Reverse Proxy

You may want to:

* rate limit inbound HTTP requests,
* authenticate/authorize inbound HTTP requests,
* block requests with an HTTP request body that's too large, or
* enable HTTPS (TLS) between your users and your node.

While we could have built all that into BigchainDB Server,
we didn't, because you can do all that (and more)
using a reverse proxy such as NGINX or HAProxy.
(You would put it in front of your BigchainDB Server,
so that all inbound HTTP requests would arrive
at the reverse proxy before *maybe* being proxied
onwards to your BigchainDB Server.)
For detailed instructions, see the documentation
for your reverse proxy.

Below, we note how a reverse proxy can be used
to do some BigchainDB-specific things.

You may also be interested in
[our NGINX configuration file template](https://github.com/bigchaindb/nginx_3scale/blob/master/nginx.conf.template)
(open source, on GitHub).
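As a minimal illustration (not a substitute for the template linked above), an NGINX reverse-proxy configuration could look like the sketch below. The server name is a placeholder, and it assumes BigchainDB Server is listening on `localhost:9984`:

```text
http {
    server {
        listen 80;
        server_name bdb.example.com;  # placeholder

        location / {
            # Forward all inbound HTTP requests to the local BigchainDB Server
            proxy_pass http://localhost:9984;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```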

## Enforcing a Max Transaction Size

The BigchainDB HTTP API has several endpoints,
but only one of them, the `POST /transactions` endpoint,
expects a non-empty HTTP request body:
the transaction (JSON) being submitted by the user.

If you want to enforce a maximum-allowed transaction size
(discarding any that are larger),
then you can do so by configuring a maximum request body size
in your reverse proxy.
For example, NGINX has the `client_max_body_size`
configuration setting. You could set it to 15 kB
with the following line in your NGINX config file:

```text
client_max_body_size 15k;
```

For more information, see
[the NGINX docs about client_max_body_size](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size).

Note: By enforcing a maximum transaction size, you
[indirectly enforce a maximum crypto-conditions complexity](https://github.com/bigchaindb/bigchaindb/issues/356#issuecomment-288085251).


**Aside: Why 15 kB?**

Both [RethinkDB](https://rethinkdb.com/limitations/) and
[MongoDB have a maximum document size of 16 MB](https://docs.mongodb.com/manual/reference/limits/#limit-bson-document-size).
In BigchainDB, the biggest documents are the blocks.
A BigchainDB block can contain up to 1000 transactions,
plus some other data (e.g. the timestamp).
If we ignore the other data as negligible relative to all the transactions,
then a block of size 16 MB
will have an average transaction size of (16 MB)/1000 = 16 kB.
Therefore by limiting the max transaction size to 15 kB,
you can be fairly sure that no blocks will ever be
bigger than 16 MB.
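That arithmetic, spelled out (using decimal units for simplicity):

```shell
max_doc_bytes=16000000   # 16 MB maximum document (block) size
txs_per_block=1000       # maximum transactions per BigchainDB block
avg_tx_bytes=$(( max_doc_bytes / txs_per_block ))
echo "Average per-transaction budget: ${avg_tx_bytes} bytes"
# Capping transactions at 15 kB leaves roughly 1 kB per transaction
# of headroom for the block's other data.
```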

Note: Technically, the documents that MongoDB stores aren't the JSON
that BigchainDB users think of; they're JSON converted to BSON.
Moreover, [one can use GridFS with MongoDB to store larger documents](https://docs.mongodb.com/manual/core/gridfs/).
Therefore the above calculation should be seen as a rough guide,
not the last word.
@@ -0,0 +1,137 @@
# Set Up and Run a Cluster Node

This is a page of general guidelines for setting up a production BigchainDB node. Before continuing, make sure you've read the pages about production node [assumptions](node-assumptions.html), [components](node-components.html) and [requirements](node-requirements.html).

Note: These are just guidelines. You can modify them to suit your needs. For example, if you want to initialize the MongoDB replica set before installing BigchainDB, you _can_ do that. If you'd prefer to use Docker and Kubernetes, you can (and [we have a template](../cloud-deployment-templates/node-on-kubernetes.html)). We don't cover all possible setup procedures here.


## Security Guidelines

There are many articles, websites and books about securing servers, virtual machines, networks, etc. Consult those.
There are some [notes on BigchainDB-specific firewall setup](../appendices/firewall-notes.html) in the Appendices.


## Sync Your System Clock

A BigchainDB node uses its system clock to generate timestamps for blocks and votes, so that clock should be kept in sync with some standard clock(s). The standard way to do that is to run an NTP daemon (Network Time Protocol daemon) on the node.

MongoDB also recommends having an NTP daemon running on all MongoDB nodes.

NTP is a standard protocol. There are many NTP daemons implementing it. We don't recommend a particular one. On the contrary, we recommend that different nodes in a cluster run different NTP daemons, so that a problem with one daemon won't affect all nodes.

Please see the [notes on NTP daemon setup](../appendices/ntp-notes.html) in the Appendices.


## Set Up Storage for MongoDB

We suggest you set up a separate storage device (partition, RAID array, or logical volume) to store the data in the MongoDB database. Here are some questions to ask:

* How easy will it be to add storage in the future? Will I have to shut down my server?
* How big can the storage get? (Remember that [RAID](https://en.wikipedia.org/wiki/RAID) can be used to make several physical drives look like one.)
* How fast can it read & write data? How many input/output operations per second (IOPS)?
* How does IOPS scale as more physical hard drives are added?
* What's the latency?
* What's the reliability? Is there replication?
* What's in the Service Level Agreement (SLA), if applicable?
* What's the cost?

There are many options and tradeoffs.

Consult the MongoDB documentation for its recommendations regarding storage hardware, software and settings, e.g. in the [MongoDB Production Notes](https://docs.mongodb.com/manual/administration/production-notes/).
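For example, if you created a dedicated device for the MongoDB data, you could mount it via an `/etc/fstab` entry like the sketch below. The device name and mount point are hypothetical; adjust both, and check the MongoDB Production Notes for recommended filesystem and mount options:

```text
# /etc/fstab entry (sketch; adjust device, mount point and options)
/dev/xvdf   /data/db   ext4   defaults,noatime   0   2
```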

## Install and Run MongoDB

* [Install MongoDB 3.4+](https://docs.mongodb.com/manual/installation/). (BigchainDB only works with MongoDB 3.4+.)
* [Run MongoDB (mongod)](https://docs.mongodb.com/manual/reference/program/mongod/)
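Because the MongoDB instances in a BigchainDB cluster form a replica set, each `mongod` should be started as a replica-set member. A minimal `/etc/mongod.conf` sketch is below; the replica-set name `bigchain-rs` and the `dbPath` are assumptions you'd agree on with the other node operators:

```text
# /etc/mongod.conf (sketch)
storage:
  dbPath: /data/db
net:
  bindIp: 0.0.0.0
  port: 27017
replication:
  replSetName: bigchain-rs
```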
|
||||
|
||||
|
||||
## Install BigchainDB Server

### Install BigchainDB Server Dependencies

Before you can install BigchainDB Server, you must [install its OS-level dependencies](../appendices/install-os-level-deps.html) and you may have to [install Python 3.4+](https://www.python.org/downloads/).

### How to Install BigchainDB Server with pip

BigchainDB is distributed as a Python package on PyPI, so you can install it using `pip`. First, make sure you have an up-to-date, Python 3.4+ version of `pip` installed:
```text
pip -V
```

If it says that `pip` isn't installed, or it says `pip` is associated with a Python version less than 3.4, then you must install a `pip` version associated with Python 3.4+. In the following instructions, we call it `pip3`, but you may be able to use `pip` if that refers to the same thing. See [the `pip` installation instructions](https://pip.pypa.io/en/stable/installing/).

On Ubuntu 16.04, we found that this works:
```text
sudo apt-get install python3-pip
```

That should install a Python 3 version of `pip` named `pip3`. If that didn't work, then another way to get `pip3` is to do `sudo apt-get install python3-setuptools` followed by `sudo easy_install3 pip`.

You can upgrade `pip` (`pip3`) and `setuptools` to the latest versions using:
```text
pip3 install --upgrade pip setuptools
pip3 -V
```

Now you can install BigchainDB Server using:
```text
pip3 install bigchaindb
```

(If you're not in a virtualenv and you want to install bigchaindb system-wide, then put `sudo` in front.)

Note: You can use `pip3` to upgrade the `bigchaindb` package to the latest version using `pip3 install --upgrade bigchaindb`.


### How to Install BigchainDB Server from Source

If you want to install BigchainDB from source because you want to use the very latest bleeding-edge code, clone the public repository:
```text
git clone git@github.com:bigchaindb/bigchaindb.git
cd bigchaindb
python setup.py install
```


## Configure BigchainDB Server

Start by creating a default BigchainDB config file for a MongoDB backend:
```text
bigchaindb -y configure mongodb
```

(Documentation for the `bigchaindb` command is in the section on [the BigchainDB Command Line Interface (CLI)](../server-reference/bigchaindb-cli.html).)

Edit the created config file by opening `$HOME/.bigchaindb` in your text editor:

* Change `"server": {"bind": "localhost:9984", ... }` to `"server": {"bind": "0.0.0.0:9984", ... }`. This allows traffic from any IP address to reach port 9984 (the HTTP Client-Server API port).
* Change `"keyring": []` to `"keyring": ["public_key_of_other_node_A", "public_key_of_other_node_B", "..."]`, i.e. a list of the public keys of all the other nodes in the cluster. The keyring should _not_ include your node's public key.
* Ensure that `database.host` and `database.port` are set to the hostname and port of your MongoDB instance. (The port is usually 27017, unless you changed it.)

For more information about the BigchainDB config file, see the page about the [BigchainDB configuration settings](../server-reference/configuration.html).

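After those edits, the relevant parts of `$HOME/.bigchaindb` might look like the following. This is a sketch only: the public keys are placeholders, and all other settings are omitted:
```js
"server": {
    "bind": "0.0.0.0:9984"
},
"keyring": [
    "public_key_of_other_node_A",
    "public_key_of_other_node_B"
],
"database": {
    "host": "localhost",
    "port": 27017
}
```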
## Get All Other Nodes to Update Their Keyring

All other BigchainDB nodes in the cluster must add your new node's public key to their BigchainDB keyring. Currently, the only way to get BigchainDB Server to "notice" a changed keyring is to shut it down and start it back up again (with the new keyring).


## Maybe Update the MongoDB Replica Set

**If this isn't the first node in the BigchainDB cluster**, then someone with an existing BigchainDB node (not you) must add your MongoDB instance to the MongoDB replica set. They can do so (on their node) using:
```text
bigchaindb add-replicas your-mongod-hostname:27017
```

where they must replace `your-mongod-hostname` with the actual hostname of your MongoDB instance, and they may have to replace `27017` with the actual port.


## Start BigchainDB

**Warning: If you're not deploying the first node in the BigchainDB cluster, then don't start BigchainDB before your MongoDB instance has been added to the MongoDB replica set (as outlined above).**

```text
# See warning above
bigchaindb start
```
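Once BigchainDB Server is running, a quick sanity check (not part of the official instructions) is to request the HTTP API root; this assumes the server is reachable on port 9984 of localhost:
```text
curl http://localhost:9984
```
If the node is up, the API root should respond with a JSON document describing the available endpoints.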
@ -16,14 +16,19 @@ For convenience, here's a list of all the relevant environment variables (docume
|
|||
`BIGCHAINDB_DATABASE_PORT`<br>
|
||||
`BIGCHAINDB_DATABASE_NAME`<br>
|
||||
`BIGCHAINDB_DATABASE_REPLICASET`<br>
|
||||
`BIGCHAINDB_DATABASE_CONNECTION_TIMEOUT`<br>
|
||||
`BIGCHAINDB_DATABASE_MAX_TRIES`<br>
|
||||
`BIGCHAINDB_SERVER_BIND`<br>
|
||||
`BIGCHAINDB_SERVER_LOGLEVEL`<br>
|
||||
`BIGCHAINDB_SERVER_WORKERS`<br>
|
||||
`BIGCHAINDB_SERVER_THREADS`<br>
|
||||
`BIGCHAINDB_WSSERVER_HOST`<br>
|
||||
`BIGCHAINDB_WSSERVER_PORT`<br>
|
||||
`BIGCHAINDB_CONFIG_PATH`<br>
|
||||
`BIGCHAINDB_BACKLOG_REASSIGN_DELAY`<br>
|
||||
`BIGCHAINDB_CONSENSUS_PLUGIN`<br>
|
||||
`BIGCHAINDB_LOG`<br>
|
||||
`BIGCHAINDB_LOG_FILE`<br>
|
||||
`BIGCHAINDB_LOG_ERROR_FILE`<br>
|
||||
`BIGCHAINDB_LOG_LEVEL_CONSOLE`<br>
|
||||
`BIGCHAINDB_LOG_LEVEL_LOGFILE`<br>
|
||||
`BIGCHAINDB_LOG_DATEFMT_CONSOLE`<br>
|
||||
|
@ -85,9 +90,18 @@ Note how the keys in the list are separated by colons.
|
|||
```
|
||||
|
||||
|
||||
## database.backend, database.host, database.port, database.name & database.replicaset
|
||||
## database.*
|
||||
|
||||
The database backend to use (`rethinkdb` or `mongodb`) and its hostname, port and name. If the database backend is `mongodb`, then there's a fifth setting: the name of the replica set. If the database backend is `rethinkdb`, you *can* set the name of the replica set, but it won't be used for anything.
|
||||
The settings with names of the form `database.*` are for the database backend
|
||||
(currently either RethinkDB or MongoDB). They are:
|
||||
|
||||
* `database.backend` is either `rethinkdb` or `mongodb`.
|
||||
* `database.host` is the hostname (FQDN) of the backend database.
|
||||
* `database.port` is self-explanatory.
|
||||
* `database.name` is a user-chosen name for the database inside RethinkDB or MongoDB, e.g. `bigchain`.
|
||||
* `database.replicaset` is only relevant if using MongoDB; it's the name of the MongoDB replica set, e.g. `bigchain-rs`.
|
||||
* `database.connection_timeout` is the maximum number of milliseconds that BigchainDB will wait before giving up on one attempt to connect to the database backend. Note: At the time of writing, this setting was only used by MongoDB; there was an open [issue to make RethinkDB use it as well](https://github.com/bigchaindb/bigchaindb/issues/1337).
|
||||
* `database.max_tries` is the maximum number of times that BigchainDB will try to establish a connection with the database backend. If 0, then it will try forever.
|
||||
|
||||
**Example using environment variables**
|
||||
```text
|
||||
|
@ -96,6 +110,8 @@ export BIGCHAINDB_DATABASE_HOST=localhost
|
|||
export BIGCHAINDB_DATABASE_PORT=27017
|
||||
export BIGCHAINDB_DATABASE_NAME=bigchain
|
||||
export BIGCHAINDB_DATABASE_REPLICASET=bigchain-rs
|
||||
export BIGCHAINDB_DATABASE_CONNECTION_TIMEOUT=5000
|
||||
export BIGCHAINDB_DATABASE_MAX_TRIES=3
|
||||
```
|
||||
|
||||
**Default values**
|
||||
|
@ -105,8 +121,10 @@ If (no environment variables were set and there's no local config file), or you
|
|||
"database": {
|
||||
"backend": "rethinkdb",
|
||||
"host": "localhost",
|
||||
"port": 28015,
|
||||
"name": "bigchain",
|
||||
"port": 28015
|
||||
"connection_timeout": 5000,
|
||||
"max_tries": 3
|
||||
}
|
||||
```
|
||||
|
||||
|
@ -115,24 +133,31 @@ If you used `bigchaindb -y configure mongodb` to create a default local config f
|
|||
"database": {
|
||||
"backend": "mongodb",
|
||||
"host": "localhost",
|
||||
"name": "bigchain",
|
||||
"port": 27017,
|
||||
"replicaset": "bigchain-rs"
|
||||
"name": "bigchain",
|
||||
"replicaset": "bigchain-rs",
|
||||
"connection_timeout": 5000,
|
||||
"max_tries": 3
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
## server.bind, server.workers & server.threads
|
||||
## server.bind, server.loglevel, server.workers & server.threads
|
||||
|
||||
These settings are for the [Gunicorn HTTP server](http://gunicorn.org/), which is used to serve the [HTTP client-server API](../drivers-clients/http-client-server-api.html).
|
||||
These settings are for the [Gunicorn HTTP server](http://gunicorn.org/), which is used to serve the [HTTP client-server API](../http-client-server-api.html).
|
||||
|
||||
`server.bind` is where to bind the Gunicorn HTTP server socket. It's a string. It can be any valid value for [Gunicorn's bind setting](http://docs.gunicorn.org/en/stable/settings.html#bind). If you want to allow IPv4 connections from anyone, on port 9984, use '0.0.0.0:9984'. In a production setting, we recommend you use Gunicorn behind a reverse proxy server. If Gunicorn and the reverse proxy are running on the same machine, then use 'localhost:PORT' where PORT is _not_ 9984 (because the reverse proxy needs to listen on port 9984). Maybe use PORT=9983 in that case because we know 9983 isn't used. If Gunicorn and the reverse proxy are running on different machines, then use 'A.B.C.D:9984' where A.B.C.D is the IP address of the reverse proxy. There's [more information about deploying behind a reverse proxy in the Gunicorn documentation](http://docs.gunicorn.org/en/stable/deploy.html). (They call it a proxy.)
|
||||
|
||||
`server.loglevel` sets the log level of Gunicorn's Error log outputs. See
|
||||
[Gunicorn's documentation](http://docs.gunicorn.org/en/latest/settings.html#loglevel)
|
||||
for more information.
|
||||
|
||||
`server.workers` is [the number of worker processes](http://docs.gunicorn.org/en/stable/settings.html#workers) for handling requests. If `None` (the default), the value will be (cpu_count * 2 + 1). `server.threads` is [the number of threads-per-worker](http://docs.gunicorn.org/en/stable/settings.html#threads) for handling requests. If `None` (the default), the value will be (cpu_count * 2 + 1). The HTTP server will be able to handle `server.workers` * `server.threads` requests simultaneously.
|
||||
|
||||
**Example using environment variables**
|
||||
```text
|
||||
export BIGCHAINDB_SERVER_BIND=0.0.0.0:9984
|
||||
export BIGCHAINDB_SERVER_LOGLEVEL=debug
|
||||
export BIGCHAINDB_SERVER_WORKERS=5
|
||||
export BIGCHAINDB_SERVER_THREADS=5
|
||||
```
|
||||
|
@ -141,6 +166,7 @@ export BIGCHAINDB_SERVER_THREADS=5
|
|||
```js
|
||||
"server": {
|
||||
"bind": "0.0.0.0:9984",
|
||||
"loglevel": "debug",
|
||||
"workers": 5,
|
||||
"threads": 5
|
||||
}
|
||||
|
@ -150,11 +176,46 @@ export BIGCHAINDB_SERVER_THREADS=5
|
|||
```js
|
||||
"server": {
|
||||
"bind": "localhost:9984",
|
||||
"loglevel": "info",
|
||||
"workers": null,
|
||||
"threads": null
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
## wsserver.host and wsserver.port
|
||||
|
||||
These settings are for the
|
||||
[aiohttp server](https://aiohttp.readthedocs.io/en/stable/index.html),
|
||||
which is used to serve the
|
||||
[WebSocket Event Stream API](../websocket-event-stream-api.html).
|
||||
`wsserver.host` is where to bind the aiohttp server socket and
|
||||
`wsserver.port` is the corresponding port.
|
||||
If you want to allow connections from anyone, on port 9985,
|
||||
set `wsserver.host` to 0.0.0.0 and `wsserver.port` to 9985.
|
||||
|
||||
**Example using environment variables**
|
||||
```text
|
||||
export BIGCHAINDB_WSSERVER_HOST=0.0.0.0
|
||||
export BIGCHAINDB_WSSERVER_PORT=9985
|
||||
```
|
||||
|
||||
**Example config file snippet**
|
||||
```js
|
||||
"wsserver": {
|
||||
"host": "0.0.0.0",
|
||||
"port": 65000
|
||||
}
|
||||
```
|
||||
|
||||
**Default values (from a config file)**
|
||||
```js
|
||||
"wsserver": {
|
||||
"host": "localhost",
|
||||
"port": 9985
|
||||
}
|
||||
```
|
||||
|
||||
## backlog_reassign_delay
|
||||
|
||||
Specifies how long, in seconds, transactions can remain in the backlog before being reassigned. Long-waiting transactions must be reassigned because the assigned node may no longer be responsive. The default duration is 120 seconds.
|
||||
|
@ -169,21 +230,9 @@ export BIGCHAINDB_BACKLOG_REASSIGN_DELAY=30
|
|||
"backlog_reassign_delay": 120
|
||||
```
|
||||
|
||||
## consensus_plugin
|
||||
|
||||
The [consensus plugin](../appendices/consensus.html) to use.
|
||||
|
||||
**Example using an environment variable**
|
||||
```text
|
||||
export BIGCHAINDB_CONSENSUS_PLUGIN=default
|
||||
```
|
||||
|
||||
**Example config file snippet: the default**
|
||||
```js
|
||||
"consensus_plugin": "default"
|
||||
```
|
||||
|
||||
## log
|
||||
|
||||
The `log` key is expected to point to a mapping (set of key/value pairs)
|
||||
holding the logging configuration.
|
||||
|
||||
|
@ -193,6 +242,7 @@ holding the logging configuration.
|
|||
{
|
||||
"log": {
|
||||
"file": "/var/log/bigchaindb.log",
|
||||
"error_file": "/var/log/bigchaindb-errors.log",
|
||||
"level_console": "info",
|
||||
"level_logfile": "info",
|
||||
"datefmt_console": "%Y-%m-%d %H:%M:%S",
|
||||
|
@ -206,21 +256,19 @@ holding the logging configuration.
|
|||
}
|
||||
```
|
||||
|
||||
**Defaults to**: `"{}"`.
|
||||
|
||||
Please note that although the default is `"{}"` as per the configuration file,
|
||||
internal defaults are used, such that the actual operational default is:
|
||||
**Defaults to**:
|
||||
|
||||
```
|
||||
{
|
||||
"log": {
|
||||
"file": "~/bigchaindb.log",
|
||||
"error_file": "~/bigchaindb-errors.log",
|
||||
"level_console": "info",
|
||||
"level_logfile": "info",
|
||||
"datefmt_console": "%Y-%m-%d %H:%M:%S",
|
||||
"datefmt_logfile": "%Y-%m-%d %H:%M:%S",
|
||||
"fmt_console": "%(asctime)s [%(levelname)s] (%(name)s) %(message)s",
|
||||
"fmt_logfile": "%(asctime)s [%(levelname)s] (%(name)s) %(message)s",
|
||||
"fmt_logfile": "[%(asctime)s] [%(levelname)s] (%(name)s) %(message)s (%(processName)-10s - pid: %(process)d)",
|
||||
"fmt_console": "[%(asctime)s] [%(levelname)s] (%(name)s) %(message)s (%(processName)-10s - pid: %(process)d)",
|
||||
"granular_levels": {}
|
||||
}
|
||||
```
|
||||
|
@ -228,8 +276,8 @@ internal defaults are used, such that the actual operational default is:
|
|||
The next subsections explain each field of the `log` configuration.
|
||||
|
||||
|
||||
### log.file
|
||||
The full path to the file where logs should be written to.
|
||||
### log.file & log.error_file
|
||||
The full paths to the files where logs and error logs should be written to.
|
||||
|
||||
**Example**:
|
||||
|
||||
|
@ -237,15 +285,41 @@ The full path to the file where logs should be written to.
|
|||
{
|
||||
"log": {
|
||||
"file": "/var/log/bigchaindb/bigchaindb.log"
|
||||
"error_file": "/var/log/bigchaindb/bigchaindb-errors.log"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Defaults to**: `"~/bigchaindb.log"`.
|
||||
**Defaults to**:
|
||||
|
||||
* `"~/bigchaindb.log"`
|
||||
* `"~/bigchaindb-errors.log"`
|
||||
|
||||
Please note that the user running `bigchaindb` must have write access to the
|
||||
location.
|
||||
|
||||
locations.
|
||||
|
||||
#### Log rotation
|
||||
|
||||
Log files have a size limit of 200 MB and will be rotated up to five times.
|
||||
|
||||
For example if we consider the log file setting:
|
||||
|
||||
```
|
||||
{
|
||||
"log": {
|
||||
"file": "~/bigchain.log"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
logs would always be written to `bigchain.log`. Each time the file
|
||||
`bigchain.log` reaches 200 MB it would be closed and renamed
|
||||
`bigchain.log.1`. If `bigchain.log.1` and `bigchain.log.2` already exist they
|
||||
would be renamed `bigchain.log.2` and `bigchain.log.3`. This pattern would be
|
||||
applied up to `bigchain.log.5` after which `bigchain.log.5` would be
|
||||
overwritten by `bigchain.log.4`, thus ending the rotation cycle of whatever
|
||||
logs were in `bigchain.log.5`.
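
The rotation behavior described above matches Python's standard rotating file handler; here is a minimal standalone sketch of the same policy (illustrative only — the file path and logger name are arbitrary, and BigchainDB wires this up internally):

```python
import logging
from logging.handlers import RotatingFileHandler

# Rotate at 200 MB, keeping up to five old files:
# bigchain.log.1 ... bigchain.log.5 (the oldest is discarded).
handler = RotatingFileHandler(
    "/tmp/bigchain.log",
    maxBytes=200 * 1024 * 1024,
    backupCount=5,
)
logger = logging.getLogger("rotation-demo")
logger.addHandler(handler)
logger.warning("this line goes to /tmp/bigchain.log")
```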
|
||||
|
||||
|
||||
### log.level_console
|
||||
The log level used to log to the console. Possible allowed values are the ones
|
||||
|
|
|
@ -2,7 +2,9 @@ The WebSocket Event Stream API
|
|||
==============================
|
||||
|
||||
.. important::
|
||||
This is currently scheduled to be implemented in BigchainDB Server 0.10.
|
||||
The WebSocket Event Stream runs on a different port than the Web API. The
|
||||
default port for the Web API is `9984`, while the one for the Event Stream
|
||||
is `9985`.
|
||||
|
||||
BigchainDB provides real-time event streams over the WebSocket protocol with
|
||||
the Event Stream API.
|
||||
|
@ -28,7 +30,7 @@ response contains a ``streams_<version>`` property in ``_links``::
|
|||
|
||||
{
|
||||
"_links": {
|
||||
"streams_v1": "ws://example.com:9984/api/v1/streams/"
|
||||
"streams_v1": "ws://example.com:9985/api/v1/streams/"
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -80,9 +82,9 @@ the transaction's ID, associated asset ID, and containing block's ID.
|
|||
Example message::
|
||||
|
||||
{
|
||||
"txid": "<sha3-256 hash>",
|
||||
"assetid": "<sha3-256 hash>",
|
||||
"blockid": "<sha3-256 hash>"
|
||||
"tx_id": "<sha3-256 hash>",
|
||||
"asset_id": "<sha3-256 hash>",
|
||||
"block_id": "<sha3-256 hash>"
|
||||
}
|
||||
|
||||
|
|
@ -1,49 +1,31 @@
|
|||
###############################################################
|
||||
# This config file runs bigchaindb:master as a k8s Deployment #
|
||||
# This config file runs bigchaindb:0.10.1 as a k8s Deployment #
|
||||
# and it connects to the mongodb backend running as a #
|
||||
# separate pod #
|
||||
###############################################################
|
||||
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: bdb-svc
|
||||
namespace: default
|
||||
labels:
|
||||
name: bdb-svc
|
||||
spec:
|
||||
selector:
|
||||
app: bdb-dep
|
||||
ports:
|
||||
- port: 9984
|
||||
targetPort: 9984
|
||||
name: bdb-port
|
||||
type: ClusterIP
|
||||
clusterIP: None
|
||||
---
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: bdb-dep
|
||||
name: bdb-instance-0-dep
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: bdb-dep
|
||||
app: bdb-instance-0-dep
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 10
|
||||
containers:
|
||||
- name: bigchaindb
|
||||
image: bigchaindb/bigchaindb:master
|
||||
image: bigchaindb/bigchaindb:0.10.1
|
||||
imagePullPolicy: IfNotPresent
|
||||
args:
|
||||
- start
|
||||
env:
|
||||
- name: BIGCHAINDB_DATABASE_HOST
|
||||
value: mdb-svc
|
||||
value: mdb-instance-0
|
||||
- name: BIGCHAINDB_DATABASE_PORT
|
||||
# TODO(Krish): remove hardcoded port
|
||||
value: "27017"
|
||||
- name: BIGCHAINDB_DATABASE_REPLICASET
|
||||
value: bigchain-rs
|
||||
|
@ -54,13 +36,20 @@ spec:
|
|||
- name: BIGCHAINDB_SERVER_BIND
|
||||
value: 0.0.0.0:9984
|
||||
- name: BIGCHAINDB_KEYPAIR_PUBLIC
|
||||
value: EEWUAhsk94ZUHhVw7qx9oZiXYDAWc9cRz93eMrsTG4kZ
|
||||
value: "<public key here>"
|
||||
- name: BIGCHAINDB_KEYPAIR_PRIVATE
|
||||
value: 3CjmRhu718gT1Wkba3LfdqX5pfYuBdaMPLd7ENUga5dm
|
||||
value: "<private key here>"
|
||||
- name: BIGCHAINDB_BACKLOG_REASSIGN_DELAY
|
||||
value: "120"
|
||||
- name: BIGCHAINDB_KEYRING
|
||||
value: ""
|
||||
- name: BIGCHAINDB_DATABASE_MAXTRIES
|
||||
value: "3"
|
||||
- name: BIGCHAINDB_DATABASE_CONNECTION_TIMEOUT
|
||||
value: "120"
|
||||
- name: BIGCHAINDB_LOG_LEVEL_CONSOLE
|
||||
value: debug
|
||||
# The following env var is not required for the bootstrap/first node
|
||||
#- name: BIGCHAINDB_KEYRING
|
||||
# value: ""
|
||||
ports:
|
||||
- containerPort: 9984
|
||||
hostPort: 9984
|
||||
|
|
|
@ -0,0 +1,16 @@
|
|||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: bdb-instance-0
|
||||
namespace: default
|
||||
labels:
|
||||
name: bdb-instance-0
|
||||
spec:
|
||||
selector:
|
||||
app: bdb-instance-0-dep
|
||||
ports:
|
||||
- port: 9984
|
||||
targetPort: 9984
|
||||
name: bdb-port
|
||||
type: ClusterIP
|
||||
clusterIP: None
|
|
@ -0,0 +1,36 @@
|
|||
########################################################
# This YAML file describes a ConfigMap for the cluster #
########################################################
|
||||
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: mdb-mon
|
||||
namespace: default
|
||||
data:
|
||||
api-key: "<api key here>"
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: mdb-backup
|
||||
namespace: default
|
||||
data:
|
||||
api-key: "<api key here>"
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: mdb-fqdn
|
||||
namespace: default
|
||||
data:
|
||||
fqdn: mdb-instance-0
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: mongodb-whitelist
|
||||
namespace: default
|
||||
data:
|
||||
allowed-hosts: "all"
|
||||
|
|
@ -0,0 +1,49 @@
|
|||
{
|
||||
"$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
|
||||
"contentVersion": "1.0.0.0",
|
||||
"parameters": {
|
||||
"sku": {
|
||||
"type": "String"
|
||||
},
|
||||
"workspaceName": {
|
||||
"type": "String"
|
||||
},
|
||||
"solutionType": {
|
||||
"type": "String"
|
||||
}
|
||||
},
|
||||
"resources": [
|
||||
{
|
||||
"apiVersion": "2015-03-20",
|
||||
"type": "Microsoft.OperationalInsights/workspaces",
|
||||
"name": "[parameters('workspaceName')]",
|
||||
"location": "[resourceGroup().location]",
|
||||
"properties": {
|
||||
"sku": {
|
||||
"name": "[parameters('sku')]"
|
||||
}
|
||||
},
|
||||
"resources": [
|
||||
{
|
||||
"apiVersion": "2015-11-01-preview",
|
||||
"location": "[resourceGroup().location]",
|
||||
"name": "[Concat(parameters('solutionType'), '(', parameters('workspaceName'), ')')]",
|
||||
"type": "Microsoft.OperationsManagement/solutions",
|
||||
"id": "[Concat(resourceGroup().id, '/providers/Microsoft.OperationsManagement/solutions/', parameters('solutionType'), '(', parameters('workspaceName'), ')')]",
|
||||
"dependsOn": [
|
||||
"[concat('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
|
||||
],
|
||||
"properties": {
|
||||
"workspaceResourceId": "[resourceId('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
|
||||
},
|
||||
"plan": {
|
||||
"publisher": "Microsoft",
|
||||
"product": "[Concat('OMSGallery/', parameters('solutionType'))]",
|
||||
"name": "[Concat(parameters('solutionType'), '(', parameters('workspaceName'), ')')]",
|
||||
"promotionCode": ""
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
|
@ -0,0 +1,15 @@
|
|||
{
|
||||
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
|
||||
"contentVersion": "1.0.0.0",
|
||||
"parameters": {
|
||||
"sku": {
|
||||
"value": "Free"
|
||||
},
|
||||
"workspaceName": {
|
||||
"value": "rg-abc-logs"
|
||||
},
|
||||
"solutionType": {
|
||||
"value": "Containers"
|
||||
}
|
||||
}
|
||||
}
|
|
@ -0,0 +1,30 @@
|
|||
apiVersion: extensions/v1beta1
|
||||
kind: DaemonSet
|
||||
metadata:
|
||||
name: omsagent
|
||||
spec:
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: omsagent
|
||||
spec:
|
||||
containers:
|
||||
- env:
|
||||
- name: WSID
|
||||
value: <insert-workspace-id-here>
|
||||
- name: KEY
|
||||
value: <insert-workspace-key-here>
|
||||
image: microsoft/oms
|
||||
name: omsagent
|
||||
ports:
|
||||
- containerPort: 25225
|
||||
protocol: TCP
|
||||
securityContext:
|
||||
privileged: true
|
||||
volumeMounts:
|
||||
- mountPath: /var/run/docker.sock
|
||||
name: docker-sock
|
||||
volumes:
|
||||
- name: docker-sock
|
||||
hostPath:
|
||||
path: /var/run/docker.sock
|
|
@ -0,0 +1,19 @@
|
|||
FROM ubuntu:xenial
|
||||
LABEL maintainer "dev@bigchaindb.com"
|
||||
ARG DEBIAN_FRONTEND=noninteractive
|
||||
ARG DEB_FILE=mongodb-mms-backup-agent_latest_amd64.ubuntu1604.deb
|
||||
ARG FILE_URL="https://cloud.mongodb.com/download/agent/backup/"$DEB_FILE
|
||||
WORKDIR /
|
||||
RUN apt update \
|
||||
&& apt -y upgrade \
|
||||
&& apt -y install --no-install-recommends curl ca-certificates logrotate \
|
||||
libsasl2-2 \
|
||||
&& curl -OL $FILE_URL \
|
||||
&& dpkg -i $DEB_FILE \
|
||||
&& rm -f $DEB_FILE \
|
||||
&& apt -y purge curl \
|
||||
&& apt -y autoremove \
|
||||
&& apt clean
|
||||
COPY mongodb_backup_agent_entrypoint.bash /
|
||||
RUN chown -R mongodb-mms-agent:mongodb-mms-agent /etc/mongodb-mms/
|
||||
ENTRYPOINT ["/mongodb_backup_agent_entrypoint.bash"]
|
|
@ -0,0 +1,5 @@
|
|||
#!/bin/bash
|
||||
|
||||
docker build -t bigchaindb/mongodb-backup-agent:1.0 .
|
||||
|
||||
docker push bigchaindb/mongodb-backup-agent:1.0
|
|
@ -0,0 +1,21 @@
|
|||
#!/bin/bash
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
MONGODB_BACKUP_CONF_FILE=/etc/mongodb-mms/backup-agent.config
|
||||
|
||||
mms_api_key=`printenv MMS_API_KEY`
|
||||
|
||||
if [[ -z "${mms_api_key}" ]]; then
|
||||
echo "Invalid environment settings detected. Exiting!"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
sed -i '/mmsApiKey/d' $MONGODB_BACKUP_CONF_FILE
|
||||
sed -i '/mothership/d' $MONGODB_BACKUP_CONF_FILE
|
||||
|
||||
echo "mmsApiKey="${mms_api_key} >> $MONGODB_BACKUP_CONF_FILE
|
||||
echo "mothership=api-backup.eu-west-1.mongodb.com" >> $MONGODB_BACKUP_CONF_FILE
|
||||
|
||||
echo "INFO: starting mdb backup..."
|
||||
exec mongodb-mms-backup-agent -c $MONGODB_BACKUP_CONF_FILE
|
|
@ -0,0 +1,27 @@
|
|||
apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: mdb-backup-instance-0-dep
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: mdb-backup-instance-0-dep
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 10
|
||||
containers:
|
||||
- name: mdb-backup
|
||||
image: bigchaindb/mongodb-backup-agent:1.0
|
||||
imagePullPolicy: Always
|
||||
env:
|
||||
- name: MMS_API_KEY
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: mdb-backup
|
||||
key: api-key
|
||||
resources:
|
||||
limits:
|
||||
cpu: 200m
|
||||
memory: 768Mi
|
||||
restartPolicy: Always
|
|
@ -0,0 +1,54 @@
|
|||
# Dockerfile for MongoDB Monitoring Agent
|
||||
# Use it to create bigchaindb/mongodb-monitoring-agent
|
||||
# on Docker Hub.
|
||||
|
||||
# "Never install the Monitoring Agent on the same server as a data bearing mongod instance."
|
||||
# More help:
|
||||
# https://docs.cloudmanager.mongodb.com/tutorial/install-monitoring-agent-with-deb-package/
|
||||
|
||||
FROM ubuntu:xenial
|
||||
LABEL maintainer "dev@bigchaindb.com"
|
||||
# Using ARG, one can set DEBIAN_FRONTEND=noninteractive and others
|
||||
# just for the duration of the build:
|
||||
ARG DEBIAN_FRONTEND=noninteractive
|
||||
ARG DEB_FILE=mongodb-mms-monitoring-agent_latest_amd64.ubuntu1604.deb
|
||||
ARG FILE_URL="https://cloud.mongodb.com/download/agent/monitoring/"$DEB_FILE
|
||||
|
||||
# Download the Monitoring Agent as a .deb package and install it
|
||||
WORKDIR /
|
||||
RUN apt update \
|
||||
&& apt -y upgrade \
|
||||
&& apt -y install --no-install-recommends curl ca-certificates logrotate \
|
||||
libsasl2-2 \
|
||||
&& curl -OL $FILE_URL \
|
||||
&& dpkg -i $DEB_FILE \
|
||||
&& rm -f $DEB_FILE \
|
||||
&& apt -y purge curl \
|
||||
&& apt -y autoremove \
|
||||
&& apt clean
|
||||
|
||||
# The above installation puts a default config file in
|
||||
# /etc/mongodb-mms/monitoring-agent.config
|
||||
# It should contain a line like: "mmsApiKey="
|
||||
# i.e. with no value specified.
|
||||
# We need to set that value to the "agent API key" value from Cloud Manager,
|
||||
# but of course that value varies from user to user,
|
||||
# so we can't hard-code it into the Docker image.
|
||||
|
||||
# Kubernetes can set an MMS_API_KEY environment variable
|
||||
# in the container
|
||||
# (including from Secrets or ConfigMaps)
|
||||
# An entrypoint bash script can then use the value of MMS_API_KEY
|
||||
# to write the mmsApiKey value in the config file
|
||||
# /etc/mongodb-mms/monitoring-agent.config
|
||||
# before running the MongoDB Monitoring Agent.
|
||||
|
||||
# The MongoDB Monitoring Agent has other
|
||||
# config settings besides mmsApiKey,
|
||||
# but it's the only one that *must* be set. See:
|
||||
# https://docs.cloudmanager.mongodb.com/reference/monitoring-agent/
|
||||
|
||||
COPY mongodb_mon_agent_entrypoint.bash /
|
||||
RUN chown -R mongodb-mms-agent:mongodb-mms-agent /etc/mongodb-mms/
|
||||
#USER mongodb-mms-agent - BUG(Krish) Uncomment after tests are complete
|
||||
ENTRYPOINT ["/mongodb_mon_agent_entrypoint.bash"]
|
|
@ -0,0 +1,5 @@
|
|||
#!/bin/bash
|
||||
|
||||
docker build -t bigchaindb/mongodb-monitoring-agent:1.0 .
|
||||
|
||||
docker push bigchaindb/mongodb-monitoring-agent:1.0
|
|
@ -0,0 +1,30 @@
|
|||
#!/bin/bash
|
||||
|
||||
set -euo pipefail
|
||||
# -e Abort at the first failed line (i.e. if exit status is not 0)
|
||||
# -u Abort when undefined variable is used
|
||||
# -o pipefail (Bash-only) Piped commands return the status
|
||||
# of the last failed command, rather than the status of the last command
|
||||
|
||||
MONGODB_MON_CONF_FILE=/etc/mongodb-mms/monitoring-agent.config
|
||||
|
||||
mms_api_key=`printenv MMS_API_KEY`
|
||||
|
||||
if [[ -z "${mms_api_key}" ]]; then
|
||||
echo "Invalid environment settings detected. Exiting!"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Delete all lines containing "mmsApiKey" in the MongoDB Monitoring Agent
|
||||
# config file /etc/mongodb-mms/monitoring-agent.config
|
||||
sed -i '/mmsApiKey/d' $MONGODB_MON_CONF_FILE
|
||||
|
||||
# Append a new line of the form
|
||||
# mmsApiKey=value_of_MMS_API_KEY
|
||||
echo "mmsApiKey="${mms_api_key} >> $MONGODB_MON_CONF_FILE
|
||||
|
||||
# start mdb monitoring agent
|
||||
echo "INFO: starting mdb monitor..."
|
||||
exec mongodb-mms-monitoring-agent \
|
||||
--conf $MONGODB_MON_CONF_FILE \
|
||||
--loglevel debug
|
|
@ -0,0 +1,38 @@
|
|||
############################################################
|
||||
# This config file defines a k8s Deployment for the #
|
||||
# bigchaindb/mongodb-monitoring-agent:latest Docker image #
|
||||
# #
|
||||
# It connects to a MongoDB instance in a separate pod, #
|
||||
# all remote MongoDB instances in the cluster, #
|
||||
# and also to MongoDB Cloud Manager (an external service). #
|
||||
# Notes: #
|
||||
# MongoDB agents connect to Cloud Manager on port 443. #
|
||||
############################################################
|
||||
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: mdb-mon-instance-0-dep
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: mdb-mon-instance-0-dep
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 10
|
||||
containers:
|
||||
- name: mdb-mon
|
||||
image: bigchaindb/mongodb-monitoring-agent:1.0
|
||||
imagePullPolicy: Always
|
||||
env:
|
||||
- name: MMS_API_KEY
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: mdb-mon
|
||||
key: api-key
|
||||
resources:
|
||||
limits:
|
||||
cpu: 200m
|
||||
memory: 768Mi
|
||||
restartPolicy: Always
|
|
@ -1,4 +1,4 @@
|
|||
FROM mongo:3.4.2
|
||||
FROM mongo:3.4.3
|
||||
LABEL maintainer "dev@bigchaindb.com"
|
||||
WORKDIR /
|
||||
RUN apt-get update \
|
||||
|
|
|
@ -12,7 +12,7 @@ GOINSTALL=$(GOCMD) install
|
|||
GOFMT=gofmt -s -w
|
||||
|
||||
DOCKER_IMAGE_NAME?=bigchaindb/mongodb
|
||||
DOCKER_IMAGE_TAG?=latest
|
||||
DOCKER_IMAGE_TAG?=3.4.3
|
||||
|
||||
PWD=$(shell pwd)
|
||||
BINARY_PATH=$(PWD)/mongod_entrypoint/
|
||||
|
|
|
@@ -1,13 +0,0 @@
-#####################################################################
-# This YAML file describes a ConfigMap with the FQDN of the mongo   #
-# instance to be started. MongoDB instance uses the value from this #
-# ConfigMap to bootstrap itself during startup.                     #
-#####################################################################
-
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: mdb-fqdn
-  namespace: default
-data:
-  fqdn: mdb-instance-0.westeurope.cloudapp.azure.com
@@ -4,45 +4,25 @@
# It depends on the configdb and db k8s pvc.                           #
########################################################################

-apiVersion: v1
-kind: Service
-metadata:
-  name: mdb-svc
-  namespace: default
-  labels:
-    name: mdb-svc
-spec:
-  selector:
-    app: mdb-ss
-  ports:
-  - port: 27017
-    targetPort: 27017
-    name: mdb-port
-  type: ClusterIP
-  clusterIP: None
----
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
-  name: mdb-ss
+  name: mdb-instance-0-ss
  namespace: default
spec:
-  serviceName: mdb-svc
+  serviceName: mdb-instance-0
  replicas: 1
  template:
    metadata:
-      name: mdb-ss
+      name: mdb-instance-0-ss
      labels:
-        app: mdb-ss
+        app: mdb-instance-0-ss
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongodb
-        # TODO(FIXME): Do not use latest in production as it is harder to track
-        # versions during updates and rollbacks. Also, once fixed, change the
-        # imagePullPolicy to IfNotPresent for faster bootup
-        image: bigchaindb/mongodb:latest
-        imagePullPolicy: Always
+        image: bigchaindb/mongodb:3.4.3
+        imagePullPolicy: IfNotPresent
        env:
        - name: MONGODB_FQDN
          valueFrom:
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: mdb-instance-0
+  namespace: default
+  labels:
+    name: mdb-instance-0
+spec:
+  selector:
+    app: mdb-instance-0-ss
+  ports:
+  - port: 27017
+    targetPort: 27017
+    name: mdb-port
+  type: ClusterIP
+  clusterIP: None
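Because `clusterIP: None` makes this a headless Service fronting the StatefulSet, each pod behind it gets a stable DNS name of the form `<pod-name>.<service-name>.<namespace>.svc.cluster.local` (standard Kubernetes StatefulSet behavior). As an illustration only, with the names taken from the files above (the client pod itself is hypothetical), another pod in the cluster could address this MongoDB instance like so:

```yaml
# Illustrative fragment of a hypothetical client pod spec: it reaches the
# single StatefulSet replica (pod mdb-instance-0-ss-0) through the headless
# service mdb-instance-0 using the stable per-pod DNS name.
env:
- name: MONGODB_FQDN
  value: "mdb-instance-0-ss-0.mdb-instance-0.default.svc.cluster.local"
- name: MONGODB_PORT
  value: "27017"
```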
Some files were not shown because too many files have changed in this diff.