Merge remote-tracking branch 'origin/1.3' into kyber-master

gautam 2017-12-21 14:08:53 +01:00
commit 44328d7626
181 changed files with 4519 additions and 976 deletions


@ -7,11 +7,14 @@ if [[ -n ${TOXENV} ]]; then
elif [[ "${BIGCHAINDB_DATABASE_BACKEND}" == mongodb && \
-z "${BIGCHAINDB_DATABASE_SSL}" ]]; then
# Run the full suite of tests for MongoDB over an insecure connection
pytest -sv --database-backend=mongodb --cov=bigchaindb
pytest -sv --database-backend=mongodb -m "serial"
pytest -sv --database-backend=mongodb --cov=bigchaindb -m "not serial"
elif [[ "${BIGCHAINDB_DATABASE_BACKEND}" == mongodb && \
"${BIGCHAINDB_DATABASE_SSL}" == true ]]; then
# Run a subset of tests over SSL: those marked as 'pytest.mark.bdb_ssl'.
pytest -sv --database-backend=mongodb-ssl --cov=bigchaindb -m bdb_ssl
else
pytest -sv -n auto --cov=bigchaindb
# Run the full suite of tests for RethinkDB (the default backend when testing)
pytest -sv -m "serial"
pytest -sv --cov=bigchaindb -m "not serial"
fi

.github/issue_template.md

@ -0,0 +1,26 @@
* BigchainDB version:
* Operating System:
* Deployment Type: `[Docker|Host|IPDB|Other]`
* If you are using IPDB, please specify your network type `[test, prod]`
and the `bdb_url` (BigchainDB URL) you are using.
* For every other type of deployment, please specify the documentation/instructions
you are following.
* BigchainDB driver: `[yes|no]`
* If using a driver, please specify the driver type `[python|js|java]`
and version.
### Description
Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.
### Steps to Reproduce
If you have the precise steps to reproduce, please specify them. If it reproduces only
occasionally, please provide additional information, e.g. screenshots, commands, logs, etc.
### What I Did
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback/error message here.
```

.github/pull_request_template.md

@ -0,0 +1,34 @@
## Description
A few sentences describing the overall goals of the pull request's commits.
## Issues This PR Fixes
Fixes #NNNN
Fixes #NNNN
## Related PRs
List related PRs against other branches, e.g. for backporting features/bugfixes
to previous release branches:
Repo/Branch | PR
------ | ------
some_other_PR | [link]()
## Todos
- [ ] Tested and working on development environment
- [ ] Unit tests (if appropriate)
- [ ] Added/Updated all related documentation. Add [link]() if different from this PR
- [ ] DevOps support needed, e.g. create a Runscope API test if a new endpoint was added, or
update deployment docs. Create a ticket and add [link]()
## Deployment Notes
Notes about how to deploy this work. For example, running a migration against the production DB.
## How to QA
Outline the steps to test or reproduce the PR here.
## Impacted Areas in Application
List general components of the application that this PR will affect:
- Scale
- Performance
- Security etc.

.gitignore

@ -77,7 +77,6 @@ ntools/one-m/ansible/hosts
ntools/one-m/ansible/ansible.cfg
# Just in time documentation
docs/server/source/schema
docs/server/source/http-samples
# Terraform state files


@ -32,6 +32,62 @@ For reference, the possible headings are:
* **External Contributors** to list contributors outside of BigchainDB GmbH.
* **Notes**
## [1.3] - 2017-11-21
Tag name: v1.3.0
### Added
* Metadata full-text search. [Pull request #1812](https://github.com/bigchaindb/bigchaindb/pull/1812)
### Notes
* Improved documentation about blocks and votes. [Pull request #1855](https://github.com/bigchaindb/bigchaindb/pull/1855)
## [1.2] - 2017-11-13
Tag name: v1.2.0
### Added
* New and improved installation setup docs and code. Pull requests [#1775](https://github.com/bigchaindb/bigchaindb/pull/1775) and [#1785](https://github.com/bigchaindb/bigchaindb/pull/1785)
* New BigchainDB configuration setting to set the port number of the log server: `log.port`. [Pull request #1796](https://github.com/bigchaindb/bigchaindb/pull/1796)
* New secondary index on `id` in the bigchain table. That will make some queries execute faster. [Pull request #1803](https://github.com/bigchaindb/bigchaindb/pull/1803)
* When using MongoDB, there are some restrictions on allowed names for keys (JSON keys). Those restrictions were always there but now BigchainDB checks key names explicitly, rather than leaving that to MongoDB. Pull requests [#1807](https://github.com/bigchaindb/bigchaindb/pull/1807) and [#1811](https://github.com/bigchaindb/bigchaindb/pull/1811)
* When using MongoDB, there are some restrictions on the allowed values of "language" (if that key is used in the values of `metadata` or `asset.data`). Those restrictions were always there but now BigchainDB checks the values explicitly, rather than leaving that to MongoDB. Pull requests [#1806](https://github.com/bigchaindb/bigchaindb/pull/1806) and [#1811](https://github.com/bigchaindb/bigchaindb/pull/1811)
* There's a new page in the root docs about permissions in BigchainDB. [Pull request #1788](https://github.com/bigchaindb/bigchaindb/pull/1788)
* There's a new option in the `bigchaindb start` command: `bigchaindb start --no-init` will avoid doing `bigchaindb init` if it wasn't done already. [Pull request #1814](https://github.com/bigchaindb/bigchaindb/pull/1814)
### Fixed
* Fixed a bug where setting the log level in a BigchainDB config file didn't have any effect. It does now. [Pull request #1797](https://github.com/bigchaindb/bigchaindb/pull/1797)
* The docs were wrong about there being no Ping/Pong support in the Events API. There is, so the docs were fixed. [Pull request #1799](https://github.com/bigchaindb/bigchaindb/pull/1799)
* Fixed an issue with closing WebSocket connections properly. [Pull request #1819](https://github.com/bigchaindb/bigchaindb/pull/1819)
### Notes
* Many changes were made to the Kubernetes-based production deployment template and code.
## [1.1] - 2017-09-26
Tag name: v1.1.0
### Added
* Support for server-side plugins that can add support for alternate event consumers (other than the WebSocket API). [Pull request #1707](https://github.com/bigchaindb/bigchaindb/pull/1707)
* New configuration settings to set the *advertised* wsserver scheme, host and port. (The *advertised* ones are the ones that external users use to connect to the WebSocket API.) [Pull request #1703](https://github.com/bigchaindb/bigchaindb/pull/1703)
* Support for secure (TLS) WebSocket connections. [Pull request #1619](https://github.com/bigchaindb/bigchaindb/pull/1619)
* A new page of documentation about the contents of a condition (inside a transaction). [Pull request #1668](https://github.com/bigchaindb/bigchaindb/pull/1668)
### Changed
* We updated our definition of the **public API** (at the top of this document). [Pull request #1700](https://github.com/bigchaindb/bigchaindb/pull/1700)
* The HTTP API Logger now logs the request path and method as well. [Pull request #1644](https://github.com/bigchaindb/bigchaindb/pull/1644)
### External Contributors
* @carchrae - [Pull request #1731](https://github.com/bigchaindb/bigchaindb/pull/1731)
* @ivanbakel - [Pull request #1706](https://github.com/bigchaindb/bigchaindb/pull/1706)
* @ketanbhatt - Pull requests [#1643](https://github.com/bigchaindb/bigchaindb/pull/1643) and [#1644](https://github.com/bigchaindb/bigchaindb/pull/1644)
### Notes
* New drivers & tools from our community:
* [Java driver](https://github.com/authenteq/java-bigchaindb-driver), by [Authenteq](https://authenteq.com/)
* [Ruby library](https://rubygems.org/gems/bigchaindb), by @nileshtrivedi
* Many improvements to our production deployment template (which uses Kubernetes).
* The production deployment template for the multi-node case was out of date. We updated that and verified it. [Pull request #1713](https://github.com/bigchaindb/bigchaindb/pull/1713)
## [1.0.1] - 2017-07-13
Tag name: v1.0.1


@ -145,6 +145,20 @@ Once you accept and submit the CLA, we'll email you with further instructions. (
Someone will then merge your branch or suggest changes. If we suggest changes, you won't have to open a new pull request, you can just push new code to the same branch (on `origin`) as you did before creating the pull request.
### Pull Request Guidelines
Before you submit a pull request, check that it meets these guidelines:
1. The pull request should include tests.
2. If the pull request adds functionality, the docs should be updated. Put
your new functionality into a function with a docstring, and add the
feature to the list in README.rst.
3. The pull request should work for Python 3.5, and pass the flake8 check.
Check https://travis-ci.org/bigchaindb/bigchaindb-driver/pull_requests
and make sure that the tests pass for all supported Python versions.
4. Follow the pull request template when creating new PRs; the template will
be visible to you when you create a new pull request.
### Tip: Upgrading All BigchainDB Dependencies
Over time, your versions of the Python packages used by BigchainDB will get out of date. You can upgrade them using:


@ -1,23 +1,27 @@
# Code Licenses
For all code in this repository, BigchainDB GmbH ("We") either:
Except as noted in the **Exceptions** section below, for all code in this repository, BigchainDB GmbH ("We") either:
1. owns the copyright, or
2. owns the right to sublicense it (because all external contributors must agree to a Contributor License Agreement).
2. owns the right to sublicense it under any license (because all external contributors must agree to a Contributor License Agreement).
Therefore We can choose how to license all the code in this repository. We can license it to Joe Xname under one license and Company Yname under a different license.
Therefore We can choose how to license all the code in this repository (except for the Exceptions). We can license it to Joe Xname under one license and Company Yname under a different license.
The two general options are:
1. You can get it under a commercial license for a fee. We can negotiate the terms of that license. It's not like we have some standard take-it-or-leave it commercial license. If you want to modify it and keep your modifications private, then that's certainly possible. Just ask.
2. You can get it under the AGPLv3 license for free. You don't even have to ask us. That's because all code in _this_ repository is licensed under the GNU Affero General Public License version 3 (AGPLv3), the full text of which can be found at [http://www.gnu.org/licenses/agpl.html](http://www.gnu.org/licenses/agpl.html).
If you don't like the AGPL license, then just ignore it. It doesn't affect any other license.
If you don't like the AGPL license, then contact us to get a different license.
All short code snippets embedded in the official BigchainDB _documentation_ are also licensed under the Apache License, Version 2.0, the full text of which can be found at [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0).
All short code snippets embedded in the official BigchainDB _documentation_ are licensed under the Apache License, Version 2.0, the full text of which can be found at [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0).
For the licenses on all other BigchainDB-related code, see the LICENSE file in the associated repository.
# Documentation Licenses
The official BigchainDB documentation, _except for the short code snippets embedded within it_, is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license, the full text of which can be found at [http://creativecommons.org/licenses/by-sa/4.0/legalcode](http://creativecommons.org/licenses/by-sa/4.0/legalcode).
# Exceptions
The contents of the `k8s/nginx-openresty/` directory are licensed as described in the `LICENSE.md` file in that directory.


@ -13,8 +13,10 @@ BigchainDB is a scalable blockchain database. [The whitepaper](https://www.bigch
## Get Started with BigchainDB Server
### [Quickstart](https://docs.bigchaindb.com/projects/server/en/latest/quickstart.html)
### [Set Up & Run a Dev/Test Node](https://docs.bigchaindb.com/projects/server/en/latest/dev-and-test/setup-run-node.html)
### [Set Up & Run a Dev/Test Node](https://docs.bigchaindb.com/projects/server/en/latest/dev-and-test/index.html)
### [Run BigchainDB Server with Docker](https://docs.bigchaindb.com/projects/server/en/latest/appendices/run-with-docker.html)
### [Run BigchainDB Server with Vagrant](https://docs.bigchaindb.com/projects/server/en/latest/appendices/run-with-vagrant.html)
### [Run BigchainDB Server with Ansible](https://docs.bigchaindb.com/projects/server/en/latest/appendices/run-with-ansible.html)
## Links for Everyone


@ -16,13 +16,13 @@ except release candidates are labelled like
A minor release is preceded by a feature freeze and created from the 'master' branch. This is a summary of the steps we go through to release a new minor version of BigchainDB Server.
1. Update the `CHANGELOG.md` file in master
1. In `k8s/bigchaindb/bigchaindb-dep.yaml`, find the line of the form `image: bigchaindb/bigchaindb:0.8.1` and change the version number to the new version number, e.g. `0.9.0`. (This is the Docker image that Kubernetes should pull from Docker Hub.) Commit that change to master
1. In `k8s/bigchaindb/bigchaindb-dep.yaml` AND in `k8s/dev-setup/bigchaindb.yaml`, find the line of the form `image: bigchaindb/bigchaindb:0.8.1` and change the version number to the new version number, e.g. `0.9.0`. (This is the Docker image that Kubernetes should pull from Docker Hub.) Commit that change to master
1. Create and check out a new branch for the minor release, named after the minor version, without a preceding 'v', e.g. `git checkout -b 0.9` (*not* 0.9.0; this new branch will be for e.g. 0.9.0, 0.9.1, 0.9.2, etc., each of which will be identified by a tagged commit)
1. Push the new branch to GitHub, e.g. `git push origin 0.9`
1. Create and checkout a new branch off of the 0.9 branch. Let's call it branch T for now
1. In `bigchaindb/version.py`, update `__version__` and `__short_version__`, e.g. to `0.9.0` and `0.9` (with no `.dev` on the end)
1. Commit those changes, push the new branch T to GitHub, and use the pushed branch T to create a new pull request merging the T branch into the 0.9 branch.
1. Wait for all the tests to pass!
1. Wait for all the tests to pass! Then merge T into 0.9.
1. Follow steps outlined in [Common Steps](#common-steps)
1. In the 'master' branch, edit `bigchaindb/version.py`: increment the minor version to the next planned release, e.g. `0.10.0.dev`. (Exception: If you just released `X.Y.Zrc1` then increment the minor version to `X.Y.Zrc2`.) This step is so people reading the latest docs will know that they're for the latest (master branch) version of BigchainDB Server, not the docs at the time of the most recent release (which are also available).
1. Go to [Docker Hub](https://hub.docker.com/), sign in, go to bigchaindb/bigchaindb, go to Settings - Build Settings, and under the build with Docker Tag Name equal to `latest`, change the Name to the number of the new release, e.g. `0.9`
@ -38,7 +38,7 @@ A patch release is similar to a minor release, but piggybacks on an existing min
1. Update the `CHANGELOG.md` file
1. Increment the patch version in `bigchaindb/version.py`, e.g. `0.9.1`
1. Commit that change
1. In `k8s/bigchaindb/bigchaindb-dep.yaml`, find the line of the form `image: bigchaindb/bigchaindb:0.9.0` and change the version number to the new version number, e.g. `0.9.1`. (This is the Docker image that Kubernetes should pull from Docker Hub.)
1. In `k8s/bigchaindb/bigchaindb-dep.yaml` AND in `k8s/dev-setup/bigchaindb.yaml`, find the line of the form `image: bigchaindb/bigchaindb:0.9.0` and change the version number to the new version number, e.g. `0.9.1`. (This is the Docker image that Kubernetes should pull from Docker Hub.)
1. Commit that change
1. Push the updated minor release branch to GitHub
1. Follow steps outlined in [Common Steps](#common-steps)
@ -59,8 +59,7 @@ These steps are common between minor and patch releases:
1. Make sure your local Git is in the same state as the release: e.g. `git fetch <remote-name>` and `git checkout v0.9.1`
1. Make sure you have a `~/.pypirc` file containing credentials for PyPI
1. Do a `make release` to build and publish the new `bigchaindb` package on PyPI
1. [Login to readthedocs.org](https://readthedocs.org/accounts/login/)
as a maintainer of the BigchainDB Server docs, and:
1. [Login to readthedocs.org](https://readthedocs.org/accounts/login/) and go to the **BigchainDB Server** project (*not* **BigchainDB**), then:
- Go to Admin --> Advanced Settings
and make sure that "Default branch:" (i.e. what "latest" points to)
is set to the new release's tag, e.g. `v0.9.1`.


@ -97,6 +97,7 @@ config = {
'fmt_console': log_config['formatters']['console']['format'],
'fmt_logfile': log_config['formatters']['file']['format'],
'granular_levels': {},
'port': log_config['root']['port']
},
'graphite': {
'host': os.environ.get('BIGCHAINDB_GRAPHITE_HOST', 'localhost'),


@ -265,6 +265,16 @@ def write_assets(conn, assets):
return
@register_query(MongoDBConnection)
def write_metadata(conn, metadata):
try:
return conn.run(
conn.collection('metadata')
.insert_many(metadata, ordered=False))
except OperationError:
return
@register_query(MongoDBConnection)
def get_assets(conn, asset_ids):
return conn.run(
@ -273,6 +283,14 @@ def get_assets(conn, asset_ids):
projection={'_id': False}))
@register_query(MongoDBConnection)
def get_metadata(conn, txn_ids):
return conn.run(
conn.collection('metadata')
.find({'id': {'$in': txn_ids}},
projection={'_id': False}))
@register_query(MongoDBConnection)
def count_blocks(conn):
return conn.run(
@ -348,9 +366,9 @@ def get_new_blocks_feed(conn, start_block_id):
@register_query(MongoDBConnection)
def text_search(conn, search, *, language='english', case_sensitive=False,
diacritic_sensitive=False, text_score=False, limit=0):
diacritic_sensitive=False, text_score=False, limit=0, table='assets'):
cursor = conn.run(
conn.collection('assets')
conn.collection(table)
.find({'$text': {
'$search': search,
'$language': language,
@ -363,7 +381,7 @@ def text_search(conn, search, *, language='english', case_sensitive=False,
if text_score:
return cursor
return (_remove_text_score(asset) for asset in cursor)
return (_remove_text_score(obj) for obj in cursor)
def _remove_text_score(asset):


@ -27,7 +27,7 @@ def create_database(conn, dbname):
@register_schema(MongoDBConnection)
def create_tables(conn, dbname):
for table_name in ['bigchain', 'backlog', 'votes', 'assets']:
for table_name in ['bigchain', 'backlog', 'votes', 'assets', 'metadata']:
logger.info('Create `%s` table.', table_name)
# create the table
# TODO: read and write concerns can be declared here
@ -40,6 +40,7 @@ def create_indexes(conn, dbname):
create_backlog_secondary_index(conn, dbname)
create_votes_secondary_index(conn, dbname)
create_assets_secondary_index(conn, dbname)
create_metadata_secondary_index(conn, dbname)
@register_schema(MongoDBConnection)
@ -50,6 +51,11 @@ def drop_database(conn, dbname):
def create_bigchain_secondary_index(conn, dbname):
logger.info('Create `bigchain` secondary index.')
# secondary index on block id which should be unique
conn.conn[dbname]['bigchain'].create_index('id',
name='block_id',
unique=True)
# to order blocks by timestamp
conn.conn[dbname]['bigchain'].create_index([('block.timestamp',
ASCENDING)],
@ -116,3 +122,17 @@ def create_assets_secondary_index(conn, dbname):
# full text search index
conn.conn[dbname]['assets'].create_index([('$**', TEXT)], name='text')
def create_metadata_secondary_index(conn, dbname):
logger.info('Create `metadata` secondary index.')
# unique index on the id of the metadata.
# the id is the txid of the transaction for which the metadata
# was specified
conn.conn[dbname]['metadata'].create_index('id',
name='transaction_id',
unique=True)
# full text search index
conn.conn[dbname]['metadata'].create_index([('$**', TEXT)], name='text')
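As an illustration of what the new wildcard text index enables, a direct pymongo query against the `metadata` collection might look like this sketch (the host, port, and the `bigchain` database name are assumptions based on BigchainDB's defaults):

from pymongo import MongoClient

client = MongoClient('localhost', 27017)
metadata = client['bigchain']['metadata']

# The wildcard index named 'text' (created above) backs $text queries.
cursor = metadata.find(
    {'$text': {'$search': 'marmot lake', '$language': 'english'}},
    projection={'_id': False})
print(list(cursor))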


@ -254,6 +254,19 @@ def write_assets(connection, assets):
raise NotImplementedError
@singledispatch
def write_metadata(connection, metadata):
"""Write a list of metadata to the metadata table.
Args:
metadata (list): a list of metadata to write.
Returns:
The database response.
"""
raise NotImplementedError
@singledispatch
def get_assets(connection, asset_ids):
"""Get a list of assets from the assets table.
@ -268,6 +281,20 @@ def get_assets(connection, asset_ids):
raise NotImplementedError
@singledispatch
def get_metadata(connection, txn_ids):
"""Get a list of metadata from the metadata table.
Args:
txn_ids (list): a list of ids for the metadata to be retrieved from
the database.
Returns:
metadata (list): the list of returned metadata.
"""
raise NotImplementedError
@singledispatch
def count_blocks(connection):
"""Count the number of blocks in the bigchain table.
@ -360,7 +387,7 @@ def get_new_blocks_feed(connection, start_block_id):
@singledispatch
def text_search(conn, search, *, language='english', case_sensitive=False,
diacritic_sensitive=False, text_score=False, limit=0):
diacritic_sensitive=False, text_score=False, limit=0, table=None):
"""Return all the assets that match the text search.
The results are sorted by text score.


@ -173,6 +173,13 @@ def write_assets(connection, assets):
.insert(assets, durability=WRITE_DURABILITY))
@register_query(RethinkDBConnection)
def write_metadata(connection, metadata):
return connection.run(
r.table('metadata')
.insert(metadata, durability=WRITE_DURABILITY))
@register_query(RethinkDBConnection)
def get_assets(connection, asset_ids):
return connection.run(
@ -180,6 +187,13 @@ def get_assets(connection, asset_ids):
.get_all(*asset_ids))
@register_query(RethinkDBConnection)
def get_metadata(connection, txn_ids):
return connection.run(
r.table('metadata', read_mode=READ_MODE)
.get_all(*txn_ids))
@register_query(RethinkDBConnection)
def count_blocks(connection):
return connection.run(


@ -23,7 +23,7 @@ def create_database(connection, dbname):
@register_schema(RethinkDBConnection)
def create_tables(connection, dbname):
for table_name in ['bigchain', 'backlog', 'votes', 'assets']:
for table_name in ['bigchain', 'backlog', 'votes', 'assets', 'metadata']:
logger.info('Create `%s` table.', table_name)
connection.run(r.db(dbname).table_create(table_name))


@ -16,10 +16,17 @@ import logging
import bigchaindb
from bigchaindb.backend.connection import connect
from bigchaindb.common.exceptions import ValidationError
from bigchaindb.common.utils import validate_all_values_for_key
logger = logging.getLogger(__name__)
TABLES = ('bigchain', 'backlog', 'votes', 'assets')
TABLES = ('bigchain', 'backlog', 'votes', 'assets', 'metadata')
VALID_LANGUAGES = ('danish', 'dutch', 'english', 'finnish', 'french', 'german',
'hungarian', 'italian', 'norwegian', 'portuguese', 'romanian',
'russian', 'spanish', 'swedish', 'turkish', 'none',
'da', 'nl', 'en', 'fi', 'fr', 'de', 'hu', 'it', 'nb', 'pt',
'ro', 'ru', 'es', 'sv', 'tr')
@singledispatch
@ -99,3 +106,44 @@ def init_database(connection=None, dbname=None):
create_database(connection, dbname)
create_tables(connection, dbname)
create_indexes(connection, dbname)
def validate_language_key(obj, key):
"""Validate all nested "language" key in `obj`.
Args:
obj (dict): dictionary whose "language" key is to be validated.
Returns:
None: validation successful
Raises:
ValidationError: will raise exception in case language is not valid.
"""
backend = bigchaindb.config['database']['backend']
if backend == 'mongodb':
data = obj.get(key, {})
if isinstance(data, dict):
validate_all_values_for_key(data, 'language', validate_language)
def validate_language(value):
"""Check if `value` is a valid language.
https://docs.mongodb.com/manual/reference/text-search-languages/
Args:
value (str): language to validated
Returns:
None: validation successful
Raises:
ValidationError: will raise exception in case language is not valid.
"""
if value not in VALID_LANGUAGES:
error_str = ('MongoDB does not support text search for the '
'language "{}". If you do not understand this error '
'message then please rename key/field "language" to '
'something else like "lang".').format(value)
raise ValidationError(error_str)
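A quick sketch of how this validator behaves, using the function and the `VALID_LANGUAGES` tuple defined above (the sample values are illustrative):

from bigchaindb.backend.schema import validate_language
from bigchaindb.common.exceptions import ValidationError

validate_language('english')        # in VALID_LANGUAGES, so it returns None
try:
    validate_language('klingon')    # not a MongoDB text-search language
except ValidationError as err:
    print(err)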


@ -196,7 +196,9 @@ def run_start(args):
logger.info('RethinkDB started with PID %s' % proc.pid)
try:
_run_init()
if not args.skip_initialize_database:
logger.info('Initializing database')
_run_init()
except DatabaseAlreadyExists:
pass
except KeypairNotFoundException:
@ -300,6 +302,12 @@ def create_parser():
action='store_true',
help='Run RethinkDB on start')
start_parser.add_argument('--no-init',
dest='skip_initialize_database',
default=False,
action='store_true',
help='Skip database initialization')
# parser for configuring the number of shards
sharding_parser = subparsers.add_parser('set-shards',
help='Configure number of shards')


@ -34,16 +34,18 @@ def configure_bigchaindb(command):
"""
@functools.wraps(command)
def configure(args):
config_from_cmdline = None
try:
config_from_cmdline = {
'log': {
'level_console': args.log_level,
'level_logfile': args.log_level,
},
'server': {'loglevel': args.log_level},
}
if args.log_level is not None:
config_from_cmdline = {
'log': {
'level_console': args.log_level,
'level_logfile': args.log_level,
},
'server': {'loglevel': args.log_level},
}
except AttributeError:
config_from_cmdline = None
pass
bigchaindb.config_utils.autoconfigure(
filename=args.config, config=config_from_cmdline, force=True)
command(args)
@ -238,10 +240,11 @@ base_parser.add_argument('-c', '--config',
help='Specify the location of the configuration file '
'(use "-" for stdout)')
# NOTE: this flag should not have any default value because that will override
# the environment variables provided to configure the logger.
base_parser.add_argument('-l', '--log-level',
type=str.upper, # convert to uppercase for comparison to choices
choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'],
default='INFO',
help='Log level')
base_parser.add_argument('-y', '--yes', '--yes-please',


@ -3,11 +3,12 @@
This directory contains the schemas for the different JSON documents BigchainDB uses.
The aim is to provide:
- a strict definition/documentation of the data structures used in BigchainDB
- a language independent tool to validate the structure of incoming/outgoing
data (there are several ready to use
[implementations](http://json-schema.org/implementations.html) written in
different languages)
- a strict definition of the data structures used in BigchainDB
- a language independent tool to validate the structure of incoming/outgoing
data (there are several ready to use
[implementations](http://json-schema.org/implementations.html) written in
different languages)
## Learn about JSON Schema


@ -13,24 +13,11 @@ from bigchaindb.common.exceptions import SchemaValidationError
logger = logging.getLogger(__name__)
def drop_schema_descriptions(node):
""" Drop descriptions from schema, since they clutter log output """
if 'description' in node:
del node['description']
for n in node.get('properties', {}).values():
drop_schema_descriptions(n)
for n in node.get('definitions', {}).values():
drop_schema_descriptions(n)
for n in node.get('anyOf', []):
drop_schema_descriptions(n)
def _load_schema(name):
""" Load a schema from disk """
path = os.path.join(os.path.dirname(__file__), name + '.yaml')
with open(path) as handle:
schema = yaml.safe_load(handle)
drop_schema_descriptions(schema)
fast_schema = rapidjson_schema.loads(rapidjson.dumps(schema))
return path, (schema, fast_schema)


@ -4,8 +4,6 @@ id: "http://www.bigchaindb.com/schema/transaction.json"
type: object
additionalProperties: false
title: Transaction Schema
description: |
A transaction represents the creation or transfer of assets in BigchainDB.
required:
- id
- inputs
@ -17,48 +15,24 @@ required:
properties:
id:
"$ref": "#/definitions/sha3_hexdigest"
description: |
A sha3 digest of the transaction. The ID is calculated by removing all
derived hashes and signatures from the transaction, serializing it to
JSON with keys in sorted order and then hashing the resulting string
with sha3.
operation:
"$ref": "#/definitions/operation"
asset:
"$ref": "#/definitions/asset"
description: |
Description of the asset being transacted.
See: `Asset`_.
inputs:
type: array
title: "Transaction inputs"
description: |
Array of the inputs of a transaction.
See: Input_.
items:
"$ref": "#/definitions/input"
outputs:
type: array
description: |
Array of outputs provided by this transaction.
See: Output_.
items:
"$ref": "#/definitions/output"
metadata:
"$ref": "#/definitions/metadata"
description: |
User provided transaction metadata. This field may be ``null`` or may
contain an id and an object with freeform metadata.
See: `Metadata`_.
version:
type: string
pattern: "^1\\.0$"
description: |
BigchainDB transaction schema version.
definitions:
offset:
type: integer
@ -78,53 +52,25 @@ definitions:
uuid4:
pattern: "[a-f0-9]{8}-[a-f0-9]{4}-4[a-f0-9]{3}-[89ab][a-f0-9]{3}-[a-f0-9]{12}"
type: string
description: |
A `UUID <https://tools.ietf.org/html/rfc4122.html>`_
of type 4 (random).
operation:
type: string
description: |
Type of the transaction:
A ``CREATE`` transaction creates an asset in BigchainDB. This
transaction has outputs but no inputs, so a dummy input is created.
A ``TRANSFER`` transaction transfers ownership of an asset, by providing
an input that meets the conditions of an earlier transaction's outputs.
A ``GENESIS`` transaction is a special case transaction used as the
sole member of the first block in a BigchainDB ledger.
enum:
- CREATE
- TRANSFER
- GENESIS
asset:
type: object
description: |
Description of the asset being transacted. In the case of a ``TRANSFER``
transaction, this field contains only the ID of asset. In the case
of a ``CREATE`` transaction, this field contains only the user-defined
payload.
additionalProperties: false
properties:
id:
"$ref": "#/definitions/sha3_hexdigest"
description: |
ID of the transaction that created the asset.
data:
description: |
User provided metadata associated with the asset. May also be ``null``.
anyOf:
- type: object
additionalProperties: true
- type: 'null'
output:
type: object
description: |
A transaction output. Describes the quantity of an asset and the
requirements that must be met to spend the output.
See also: Input_.
additionalProperties: false
required:
- amount
@ -134,15 +80,7 @@ definitions:
amount:
type: string
pattern: "^[0-9]{1,20}$"
description: |
Integral amount of the asset represented by this output.
In the case of a non divisible asset, this will always be 1.
condition:
description: |
Describes the condition that needs to be met to spend the output. Has the properties:
- **details**: Details of the condition.
- **uri**: Condition encoded as an ASCII string.
type: object
additionalProperties: false
required:
@ -158,13 +96,8 @@ definitions:
subtypes=ed25519-sha-256(&)?){2,3}$"
public_keys:
"$ref": "#/definitions/public_keys"
description: |
List of public keys associated with the conditions on an output.
input:
type: "object"
description:
An input spends a previous output, by providing one or more fulfillments
that fulfill the conditions of the previous output.
additionalProperties: false
required:
- owners_before
@ -172,13 +105,7 @@ definitions:
properties:
owners_before:
"$ref": "#/definitions/public_keys"
description: |
List of public keys of the previous owners of the asset.
fulfillment:
description: |
Fulfillment of an `Output.condition`_, or, put a different way, a payload
that satisfies the condition of a previous output to prove that the
creator(s) of this transaction have control over the listed asset.
anyOf:
- type: string
pattern: "^[a-zA-Z0-9_-]*$"
@ -186,8 +113,6 @@ definitions:
fulfills:
anyOf:
- type: 'object'
description: |
Reference to the output that is being spent.
additionalProperties: false
required:
- output_index
@ -195,26 +120,16 @@ definitions:
properties:
output_index:
"$ref": "#/definitions/offset"
description: |
Index of the output containing the condition being fulfilled
transaction_id:
"$ref": "#/definitions/sha3_hexdigest"
description: |
Transaction ID containing the output to spend
- type: 'null'
metadata:
anyOf:
- type: object
description: |
User provided transaction metadata. This field may be ``null`` or may
contain an non empty object with freeform metadata.
additionalProperties: true
minProperties: 1
- type: 'null'
condition_details:
description: |
Details needed to reconstruct the condition associated with an output.
Currently, BigchainDB only supports ed25519 and threshold condition types.
anyOf:
- type: object
additionalProperties: false


@ -10,8 +10,6 @@ properties:
properties:
id:
"$ref": "#/definitions/sha3_hexdigest"
description: |
ID of the transaction that created the asset.
required:
- id
inputs:


@ -4,13 +4,6 @@ id: "http://www.bigchaindb.com/schema/vote.json"
type: object
additionalProperties: false
title: Vote Schema
description: |
A Vote is an endorsement of a Block (identified by a hash) by
a node (identified by a public key).
The outer Vote object contains the details of the vote being made
as well as the signature and identifying information of the node
passing the vote.
required:
- node_pubkey
- signature
@ -19,18 +12,12 @@ properties:
node_pubkey:
type: "string"
pattern: "[1-9a-zA-Z^OIl]{43,44}"
description: |
Ed25519 public key identifying the voting node.
signature:
type: "string"
pattern: "[1-9a-zA-Z^OIl]{86,88}"
description:
Ed25519 signature of the `Vote Details`_ object.
vote:
type: "object"
additionalProperties: false
description: |
`Vote Details`_ to be signed.
required:
- invalid_reason
- is_block_valid
@ -40,33 +27,17 @@ properties:
properties:
previous_block:
"$ref": "#/definitions/sha3_hexdigest"
description: |
ID (SHA3 hash) of the block that precedes the block being voted on.
The notion of a "previous" block is subject to vote.
voting_for_block:
"$ref": "#/definitions/sha3_hexdigest"
description: |
ID (SHA3 hash) of the block being voted on.
is_block_valid:
type: "boolean"
description: |
This field is ``true`` if the block was deemed valid by the node.
invalid_reason:
anyOf:
- type: "string"
description: |
Reason the block is voted invalid, or ``null``.
.. container:: notice
**Note**: The invalid_reason was not being used and may be dropped in a future version of BigchainDB. See Issue `#217 <https://github.com/bigchaindb/bigchaindb/issues/217>`_ on GitHub.
- type: "null"
timestamp:
type: "string"
pattern: "[0-9]{10}"
description: |
Unix timestamp that the vote was created by the node, according
to the system time of the node.
definitions:
sha3_hexdigest:
pattern: "[0-9a-f]{64}"


@ -1,7 +1,10 @@
import time
import re
import rapidjson
import bigchaindb
from bigchaindb.common.exceptions import ValidationError
def gen_timestamp():
"""The Unix time, rounded to the nearest second.
@ -46,3 +49,90 @@ def deserialize(data):
string.
"""
return rapidjson.loads(data)
def validate_txn_obj(obj_name, obj, key, validation_fun):
"""Validate value of `key` in `obj` using `validation_fun`.
Args:
obj_name (str): name for `obj` being validated.
obj (dict): dictionary object.
key (str): key to be validated in `obj`.
validation_fun (function): function used to validate the value
of `key`.
Returns:
None: indicates validation successful
Raises:
ValidationError: `validation_fun` will raise exception on failure
"""
backend = bigchaindb.config['database']['backend']
if backend == 'mongodb':
data = obj.get(key, {})
if isinstance(data, dict):
validate_all_keys(obj_name, data, validation_fun)
def validate_all_keys(obj_name, obj, validation_fun):
"""Validate all (nested) keys in `obj` by using `validation_fun`.
Args:
obj_name (str): name for `obj` being validated.
obj (dict): dictionary object.
validation_fun (function): function used to validate the value
of `key`.
Returns:
None: indicates validation successful
Raises:
ValidationError: `validation_fun` will raise this error on failure
"""
for key, value in obj.items():
validation_fun(obj_name, key)
if isinstance(value, dict):
validate_all_keys(obj_name, value, validation_fun)
def validate_all_values_for_key(obj, key, validation_fun):
"""Validate value for all (nested) occurrence of `key` in `obj`
using `validation_fun`.
Args:
obj (dict): dictionary object.
key (str): key whose value is to be validated.
validation_fun (function): function used to validate the value
of `key`.
Raises:
ValidationError: `validation_fun` will raise this error on failure
"""
for vkey, value in obj.items():
if vkey == key:
validation_fun(value)
elif isinstance(value, dict):
validate_all_values_for_key(value, key, validation_fun)
def validate_key(obj_name, key):
"""Check if `key` contains ".", "$" or null characters.
https://docs.mongodb.com/manual/reference/limits/#Restrictions-on-Field-Names
Args:
obj_name (str): object name to use when raising exception
key (str): key to validated
Returns:
None: validation successful
Raises:
ValidationError: will raise exception in case of regex match.
"""
if re.search(r'^[$]|\.|\x00', key):
error_str = ('Invalid key name "{}" in {} object. The '
'key name cannot contain characters '
'".", "$" or null characters').format(key, obj_name)
raise ValidationError(error_str)
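For example, given the regex above, a dotted key is rejected while a plain one passes (the sample values are illustrative):

from bigchaindb.common.exceptions import ValidationError
from bigchaindb.common.utils import validate_key

validate_key('metadata', 'size')          # no ".", "$" or null bytes: passes
try:
    validate_key('metadata', 'foo.bar')   # "." breaks MongoDB field names
except ValidationError as err:
    print(err)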


@ -192,10 +192,15 @@ class Bigchain(object):
# get the asset ids from the block
if block_dict:
asset_ids = Block.get_asset_ids(block_dict)
txn_ids = Block.get_txn_ids(block_dict)
# get the assets from the database
assets = self.get_assets(asset_ids)
# get the metadata from the database
metadata = self.get_metadata(txn_ids)
# add the assets to the block transactions
block_dict = Block.couple_assets(block_dict, assets)
# add the metadata to the block transactions
block_dict = Block.couple_metadata(block_dict, metadata)
status = None
if include_status:
@ -381,8 +386,8 @@ class Bigchain(object):
for transaction in transactions:
# ignore transactions in invalid blocks
# FIXME: Isn't there a faster solution than doing I/O again?
_, status = self.get_transaction(transaction['id'],
include_status=True)
txn, status = self.get_transaction(transaction['id'],
include_status=True)
if status == self.TX_VALID:
num_valid_transactions += 1
# `txid` can only have been spent in at most one valid block.
@ -392,6 +397,7 @@ class Bigchain(object):
' with the chain'.format(txid))
# if it's not an invalid transaction
if status is not None:
transaction.update({'metadata': txn.metadata})
non_invalid_transactions.append(transaction)
if non_invalid_transactions:
@ -510,10 +516,15 @@ class Bigchain(object):
# Decouple assets from block
assets, block_dict = block.decouple_assets()
metadatas, block_dict = block.decouple_metadata(block_dict)
# write the assets
if assets:
self.write_assets(assets)
if metadatas:
self.write_metadata(metadatas)
# write the block
return backend.query.write_block(self.connection, block_dict)
@ -624,6 +635,19 @@ class Bigchain(object):
"""
return backend.query.get_assets(self.connection, asset_ids)
def get_metadata(self, txn_ids):
"""
Return a list of metadata that match the transaction ids (txn_ids)
Args:
txn_ids (:obj:`list` of :obj:`str`): A list of txn_ids to
retrieve from the database.
Returns:
list: The list of metadata returned from the database.
"""
return backend.query.get_metadata(self.connection, txn_ids)
def write_assets(self, assets):
"""
Writes a list of assets into the database.
@ -634,7 +658,17 @@ class Bigchain(object):
"""
return backend.query.write_assets(self.connection, assets)
def text_search(self, search, *, limit=0):
def write_metadata(self, metadata):
"""
Writes a list of metadata into the database.
Args:
metadata (:obj:`list` of :obj:`dict`): A list of metadata to write to
the database.
"""
return backend.query.write_metadata(self.connection, metadata)
def text_search(self, search, *, limit=0, table='assets'):
"""
Return an iterator of assets that match the text search
@ -645,12 +679,13 @@ class Bigchain(object):
Returns:
iter: An iterator of assets that match the text search.
"""
assets = backend.query.text_search(self.connection, search, limit=limit)
objects = backend.query.text_search(self.connection, search, limit=limit,
table=table)
# TODO: This is not efficient. There may be a more efficient way to
# query by storing block ids with the assets and using fastquery.
# See https://github.com/bigchaindb/bigchaindb/issues/1496
for asset in assets:
tx, status = self.get_transaction(asset['id'], True)
for obj in objects:
tx, status = self.get_transaction(obj['id'], True)
if status == self.TX_VALID:
yield asset
yield obj
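A small usage sketch of the new `table` keyword at the Python level; this assumes a configured node, and that `Bigchain` is importable from the top-level package:

from bigchaindb import Bigchain

b = Bigchain()
# Search transaction metadata instead of assets (the default table).
for obj in b.text_search('marmot lake', table='metadata'):
    print(obj)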


@ -62,6 +62,7 @@ SUBSCRIBER_LOGGING_CONFIG = {
'loggers': {},
'root': {
'level': logging.DEBUG,
'handlers': ['console', 'file', 'errors']
'handlers': ['console', 'file', 'errors'],
'port': DEFAULT_SOCKET_LOGGING_PORT
},
}


@ -22,11 +22,14 @@ class HttpServerLogger(Logger):
object. *Ignored*.
"""
log_cfg = self.cfg.env_orig.get('custom_log_config', {})
self.log_port = log_cfg.get('port', DEFAULT_SOCKET_LOGGING_PORT)
self._set_socklog_handler(self.error_log)
self._set_socklog_handler(self.access_log)
def _set_socklog_handler(self, log):
socket_handler = logging.handlers.SocketHandler(
DEFAULT_SOCKET_LOGGING_HOST, DEFAULT_SOCKET_LOGGING_PORT)
DEFAULT_SOCKET_LOGGING_HOST, self.log_port)
socket_handler._gunicorn = True
log.addHandler(socket_handler)


@ -25,17 +25,25 @@ def _normalize_log_level(level):
raise ConfigurationError('Log level must be a string!') from exc
def setup_pub_logger():
def setup_pub_logger(logging_port=None):
logging_port = logging_port or DEFAULT_SOCKET_LOGGING_PORT
dictConfig(PUBLISHER_LOGGING_CONFIG)
socket_handler = logging.handlers.SocketHandler(
DEFAULT_SOCKET_LOGGING_HOST, DEFAULT_SOCKET_LOGGING_PORT)
DEFAULT_SOCKET_LOGGING_HOST, logging_port)
socket_handler.setLevel(logging.DEBUG)
logger = logging.getLogger()
logger.addHandler(socket_handler)
def setup_sub_logger(*, user_log_config=None):
server = LogRecordSocketServer()
kwargs = {}
log_port = user_log_config.get('port') if user_log_config is not None else None
if log_port is not None:
kwargs['port'] = log_port
server = LogRecordSocketServer(**kwargs)
with server:
server_proc = Process(
target=server.serve_forever,
@ -45,7 +53,8 @@ def setup_sub_logger(*, user_log_config=None):
def setup_logging(*, user_log_config=None):
setup_pub_logger()
port = user_log_config.get('port') if user_log_config is not None else None
setup_pub_logger(logging_port=port)
setup_sub_logger(user_log_config=user_log_config)
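A hedged sketch of how the new port setting reaches both ends, assuming these functions live in `bigchaindb.log.setup` and a user log config shaped like the server's `log` section:

from bigchaindb.log.setup import setup_logging

# With {'port': 9020}, the publisher's SocketHandler and the subscriber's
# LogRecordSocketServer both use port 9020 instead of the default.
setup_logging(user_log_config={'port': 9020})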


@ -5,11 +5,12 @@ from bigchaindb.common.exceptions import (InvalidHash, InvalidSignature,
DoubleSpend, InputDoesNotExist,
TransactionNotInValidBlock,
AssetIdMismatch, AmountError,
SybilError,
DuplicateTransaction)
SybilError, DuplicateTransaction)
from bigchaindb.common.transaction import Transaction
from bigchaindb.common.utils import gen_timestamp, serialize
from bigchaindb.common.utils import (gen_timestamp, serialize,
validate_txn_obj, validate_key)
from bigchaindb.common.schema import validate_transaction_schema
from bigchaindb.backend.schema import validate_language_key
class Transaction(Transaction):
@ -96,6 +97,9 @@ class Transaction(Transaction):
@classmethod
def from_dict(cls, tx_body):
validate_transaction_schema(tx_body)
validate_txn_obj('asset', tx_body['asset'], 'data', validate_key)
validate_txn_obj('metadata', tx_body, 'metadata', validate_key)
validate_language_key(tx_body['asset'], 'data')
return super().from_dict(tx_body)
@classmethod
@ -121,6 +125,15 @@ class Transaction(Transaction):
del asset['id']
tx_dict.update({'asset': asset})
# get metadata of the transaction
metadata = list(bigchain.get_metadata([tx_dict['id']]))
if 'metadata' not in tx_dict:
metadata = metadata[0] if metadata else None
if metadata:
metadata = metadata.get('metadata')
tx_dict.update({'metadata': metadata})
return cls.from_dict(tx_dict)
@ -359,11 +372,15 @@ class Block(object):
"""
asset_ids = cls.get_asset_ids(block_dict)
assets = bigchain.get_assets(asset_ids)
txn_ids = cls.get_txn_ids(block_dict)
metadata = bigchain.get_metadata(txn_ids)
# reconstruct block
block_dict = cls.couple_assets(block_dict, assets)
block_dict = cls.couple_metadata(block_dict, metadata)
kwargs = from_dict_kwargs or {}
return cls.from_dict(block_dict, **kwargs)
def decouple_assets(self):
def decouple_assets(self, block_dict=None):
"""
Extracts the assets from the ``CREATE`` transactions in the block.
@ -372,7 +389,9 @@ class Block(object):
the block being the dict of the block with no assets in the CREATE
transactions.
"""
block_dict = deepcopy(self.to_dict())
if block_dict is None:
block_dict = deepcopy(self.to_dict())
assets = []
for transaction in block_dict['block']['transactions']:
if transaction['operation'] in [Transaction.CREATE,
@ -383,6 +402,27 @@ class Block(object):
return (assets, block_dict)
def decouple_metadata(self, block_dict=None):
"""
Extracts the metadata from transactions in the block.
Returns:
tuple: (metadatas, block) with the metadatas being a list of dict/null and
the block being the dict of the block with no metadata in any transaction.
"""
if block_dict is None:
block_dict = deepcopy(self.to_dict())
metadatas = []
for transaction in block_dict['block']['transactions']:
metadata = transaction.pop('metadata')
if metadata:
metadata_new = {'id': transaction['id'],
'metadata': metadata}
metadatas.append(metadata_new)
return (metadatas, block_dict)
@staticmethod
def couple_assets(block_dict, assets):
"""
@ -408,6 +448,34 @@ class Block(object):
transaction.update({'asset': assets.get(transaction['id'])})
return block_dict
@staticmethod
def couple_metadata(block_dict, metadatal):
"""
Given a block_dict with no metadata (as returned from a database call)
and a list of metadata, reconstruct the original block by putting the
metadata of each transaction back into its original transaction.
NOTE: Until a transaction is accepted, its `metadata` is not moved outside
of the transaction. So, if a transaction is found to already have metadata,
it should not be overridden.
Args:
block_dict (:obj:`dict`): The block dict as returned from a
database call.
metadatal (:obj:`list` of :obj:`dict`): A list of metadata returned from
a database call.
Returns:
dict: The dict of the reconstructed block.
"""
# create a dict with {'<txid>': metadata}
metadatal = {m.pop('id'): m.pop('metadata') for m in metadatal}
# add the metadata to their corresponding transactions
for transaction in block_dict['block']['transactions']:
metadata = metadatal.get(transaction['id'], None)
transaction.update({'metadata': metadata})
return block_dict
@staticmethod
def get_asset_ids(block_dict):
"""
@ -431,6 +499,25 @@ class Block(object):
return asset_ids
@staticmethod
def get_txn_ids(block_dict):
"""
Given a block_dict return all the transaction ids.
Args:
block_dict (:obj:`dict`): The block dict as returned from a
database call.
Returns:
list: The list of txn_ids in the block.
"""
txn_ids = []
for transaction in block_dict['block']['transactions']:
txn_ids.append(transaction['id'])
return txn_ids
def to_str(self):
return serialize(self.to_dict())


@ -63,7 +63,8 @@ def start():
election.start(events_queue=exchange.get_publisher_queue())
# start the web api
app_server = server.create_server(bigchaindb.config['server'])
app_server = server.create_server(settings=bigchaindb.config['server'],
log_config=bigchaindb.config['log'])
p_webapi = mp.Process(name='webapi', target=app_server.run)
p_webapi.start()


@ -1,2 +1,2 @@
__version__ = '1.1.0.dev'
__short_version__ = '1.1.dev'
__version__ = '1.3.0'
__short_version__ = '1.3'


@ -2,6 +2,7 @@
from flask_restful import Api
from bigchaindb.web.views import (
assets,
metadata,
blocks,
info,
statuses,
@ -27,6 +28,7 @@ def r(*args, **kwargs):
ROUTES_API_V1 = [
r('/', info.ApiV1Index),
r('assets/', assets.AssetListApi),
r('metadata/', metadata.MetadataApi),
r('blocks/<string:block_id>', blocks.BlockApi),
r('blocks/', blocks.BlockListApi),
r('statuses/', statuses.StatusApi),


@ -37,6 +37,11 @@ class StandaloneApplication(gunicorn.app.base.BaseApplication):
super().__init__()
def load_config(self):
# find a better way to pass this such that
# the custom logger class can access it.
custom_log_config = self.options.get('custom_log_config')
self.cfg.env_orig['custom_log_config'] = custom_log_config
config = dict((key, value) for key, value in self.options.items()
if key in self.cfg.settings and value is not None)
@ -74,7 +79,7 @@ def create_app(*, debug=False, threads=1):
return app
def create_server(settings):
def create_server(settings, log_config=None):
"""Wrap and return an application ready to be run.
Args:
@ -97,6 +102,7 @@ def create_server(settings):
settings['threads'] = 1
settings['logger_class'] = 'bigchaindb.log.loggers.HttpServerLogger'
settings['custom_log_config'] = log_config
app = create_app(debug=settings.get('debug', False),
threads=settings['threads'])
standalone = StandaloneApplication(app, options=settings)


@ -0,0 +1,50 @@
"""This module provides the blueprint for some basic API endpoints.
For more information please refer to the documentation: http://bigchaindb.com/http-api
"""
import logging
from flask_restful import reqparse, Resource
from flask import current_app
from bigchaindb.backend.exceptions import OperationError
from bigchaindb.web.views.base import make_error
logger = logging.getLogger(__name__)
class MetadataApi(Resource):
def get(self):
"""API endpoint to perform a text search on transaction metadata.
Args:
search (str): Text search string to query the text index
limit (int, optional): Limit the number of returned documents.
Return:
A list of metadata that match the query.
"""
parser = reqparse.RequestParser()
parser.add_argument('search', type=str, required=True)
parser.add_argument('limit', type=int)
args = parser.parse_args()
if not args['search']:
return make_error(400, 'text_search cannot be empty')
if not args['limit']:
del args['limit']
pool = current_app.config['bigchain_pool']
with pool() as bigchain:
args['table'] = 'metadata'
metadata = bigchain.text_search(**args)
try:
# This only works with MongoDB as the backend
return list(metadata)
except OperationError as e:
return make_error(
400,
'({}): {}'.format(type(e).__name__, e)
)
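A hedged usage sketch for the new endpoint, using `requests` against a local node (the URL is an assumption; the route matches the `metadata/` rule registered in the v1 routes):

import requests

# 'search' is required; 'limit' is optional and dropped when falsy.
resp = requests.get('http://localhost:9984/api/v1/metadata/',
                    params={'search': 'marmot lake', 'limit': 10})
print(resp.json())   # expected: a list of matching metadata documents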


@ -70,6 +70,15 @@ class Dispatcher:
self.subscribers[uuid] = websocket
def unsubscribe(self, uuid):
"""Remove a websocket from the list of subscribers.
Args:
uuid (str): a unique identifier for the websocket.
"""
del self.subscribers[uuid]
@asyncio.coroutine
def publish(self):
"""Publish new events to the subscribers."""
@ -115,11 +124,16 @@ def websocket_handler(request):
msg = yield from websocket.receive()
except RuntimeError as e:
logger.debug('Websocket exception: %s', str(e))
return websocket
if msg.type == aiohttp.WSMsgType.ERROR:
break
if msg.type == aiohttp.WSMsgType.CLOSED:
logger.debug('Websocket closed')
break
elif msg.type == aiohttp.WSMsgType.ERROR:
logger.debug('Websocket exception: %s', websocket.exception())
return websocket
break
request.app['dispatcher'].unsubscribe(uuid)
return websocket
def init_app(event_source, *, loop=None):


@ -6,7 +6,7 @@ services:
ports:
- "27017"
command: mongod --replSet=bigchain-rs
bdb:
build:
context: .
@ -22,7 +22,6 @@ services:
- ./setup.cfg:/usr/src/app/setup.cfg
- ./pytest.ini:/usr/src/app/pytest.ini
- ./tox.ini:/usr/src/app/tox.ini
- ../cryptoconditions:/usr/src/app/cryptoconditions
environment:
BIGCHAINDB_DATABASE_BACKEND: mongodb
BIGCHAINDB_DATABASE_HOST: mdb


@ -1,6 +1,8 @@
# BigchainDB and Byzantine Fault Tolerance
While BigchainDB is not currently [Byzantine fault tolerant (BFT)](https://en.wikipedia.org/wiki/Byzantine_fault_tolerance), we plan to offer it as an option.
We anticipate that turning it on will cause a severe dropoff in performance. See [Issue #293](https://github.com/bigchaindb/bigchaindb/issues/293).
While BigchainDB is not currently [Byzantine fault tolerant (BFT)](https://en.wikipedia.org/wiki/Byzantine_fault_tolerance), we plan to offer it as an option.
Update (Nov 2017): we're actively working on this; the next release or two will likely include support. More details to come in blog posts and GitHub issues.
Related issue: [Issue #293](https://github.com/bigchaindb/bigchaindb/issues/293). We anticipate that turning on BFT will cause a dropoff in performance (for a gain in security).
In the meantime, there are practical things that one can do to increase security (e.g. firewalls, key management, and access controls).


@ -34,7 +34,9 @@ from recommonmark.parser import CommonMarkParser
# ones.
import sphinx_rtd_theme
extensions = []
extensions = [
'sphinx.ext.autosectionlabel',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']


@ -58,6 +58,9 @@ At a high level, one can communicate with a BigchainDB cluster (set of nodes) us
<div class="buttondiv">
<a class="button" href="http://docs.bigchaindb.com/projects/py-driver/en/latest/index.html">Python Driver Docs</a>
</div>
<div class="buttondiv">
<a class="button" href="https://docs.bigchaindb.com/projects/js-driver/en/latest/index.html">JavaScript Driver Docs</a>
</div>
<div class="buttondiv">
<a class="button" href="https://docs.bigchaindb.com/projects/cli/en/latest/">Command Line Transaction Tool</a>
</div>
@ -85,5 +88,6 @@ More About BigchainDB
assets
smart-contracts
transaction-concepts
permissions
timestamps
Data Models <https://docs.bigchaindb.com/projects/server/en/latest/data-models/index.html>


@ -0,0 +1,74 @@
Permissions in BigchainDB
-------------------------
BigchainDB lets users control what other users can do, to some extent. That ability resembles "permissions" in the \*nix world, "privileges" in the SQL world, and "access control" in the security world.
Permission to Spend/Transfer an Output
======================================
In BigchainDB, every output has an associated condition (crypto-condition).
To spend/transfer an unspent output, a user (or group of users) must fulfill the condition. Another way to say that is that only certain users have permission to spend the output. The simplest condition is of the form, "Only someone with the private key corresponding to this public key can spend this output." Much more elaborate conditions are possible, e.g. "To spend this output, …"
- "…anyone in the Accounting Group can sign."
- "…three of these four people must sign."
- "…either Bob must sign, or both Tom and Sylvia must sign."
For details, see `the documentation about conditions in BigchainDB <https://docs.bigchaindb.com/projects/server/en/latest/data-models/conditions.html>`_.
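As a rough sketch of such a condition, "two of these three people must sign" could be composed with the ``cryptoconditions`` library that BigchainDB builds on. The class and method names below assume that library's crypto-conditions v2 API, and the keypairs are freshly generated placeholders:

.. code-block:: python

    import base58
    from cryptoconditions import Ed25519Sha256, ThresholdSha256
    from bigchaindb.common.crypto import generate_key_pair

    # Three placeholder keypairs; each is a (private, public) pair of
    # base58-encoded strings.
    keypairs = [generate_key_pair() for _ in range(3)]

    # "Two of these three people must sign."
    threshold = ThresholdSha256(threshold=2)
    for _, public_key in keypairs:
        threshold.add_subfulfillment(
            Ed25519Sha256(public_key=base58.b58decode(public_key)))

    # The ASCII condition URI is what gets stored in a transaction output.
    print(threshold.condition_uri)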
Once an output has been spent, it can't be spent again: *nobody* has permission to do that. That is, BigchainDB doesn't permit anyone to "double spend" an output.
Write Permissions
=================
When someone builds a TRANSFER transaction, they can put an arbitrary JSON object in the ``metadata`` field (within reason; real BigchainDB networks put a limit on the size of transactions). That is, they can write just about anything they want in a TRANSFER transaction.
Does that mean there are no "write permissions" in BigchainDB? Not at all!
A TRANSFER transaction will only be valid (allowed) if its inputs fulfill some previous outputs. The conditions on those outputs will control who can build valid TRANSFER transactions. In other words, one can interpret the condition on an output as giving "write permissions" to certain users to write something into the history of the associated asset.
As a concrete example, you could use BigchainDB to write a public journal where only you have write permissions. Here's how: First you'd build a CREATE transaction with the ``asset.data`` being something like ``{"title": "The Journal of John Doe"}``, with one output. That output would have an amount 1 and a condition that only you (who has your private key) can spend that output.
Each time you want to append something to your journal, you'd build a new TRANSFER transaction with your latest entry in the ``metadata`` field, e.g.
.. code-block:: json
{"timestamp": "1508319582",
"entry": "I visited Marmot Lake with Jane."}
The TRANSFER transaction would have one output. That output would have an amount 1 and a condition that only you (who has your private key) can spend that output. And so on. Only you would be able to append to the history of that asset (your journal).
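Sketched with the Python driver (``bigchaindb_driver``), the journal pattern might look like the following. This is a hedged sketch: the node URL is a placeholder, and the ``prepare``/``fulfill``/``send`` calls assume the driver's transaction API:

.. code-block:: python

    from bigchaindb_driver import BigchainDB
    from bigchaindb_driver.crypto import generate_keypair

    bdb = BigchainDB('http://localhost:9984')  # placeholder node URL
    me = generate_keypair()

    # CREATE the journal; the single output is spendable only by my key.
    create = bdb.transactions.prepare(
        operation='CREATE',
        signers=me.public_key,
        asset={'data': {'title': 'The Journal of John Doe'}})
    create = bdb.transactions.fulfill(create, private_keys=me.private_key)
    bdb.transactions.send(create)

    # Append an entry: spend the latest output, locking it to my key again.
    transfer = bdb.transactions.prepare(
        operation='TRANSFER',
        asset={'id': create['id']},
        inputs={'fulfillment': create['outputs'][0]['condition']['details'],
                'fulfills': {'output_index': 0,
                             'transaction_id': create['id']},
                'owners_before': [me.public_key]},
        recipients=me.public_key,
        metadata={'timestamp': '1508319582',
                  'entry': 'I visited Marmot Lake with Jane.'})
    transfer = bdb.transactions.fulfill(transfer, private_keys=me.private_key)
    bdb.transactions.send(transfer)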
The same technique could be used for scientific notebooks, supply-chain records, government meeting minutes, and so on.
You could do more elaborate things too. As one example, each time someone writes a TRANSFER transaction, they give *someone else* permission to spend it, setting up a sort of writers-relay or chain letter.
.. note::
Anyone can write any JSON (again, within reason) in the ``asset.data`` field of a CREATE transaction. They don't need permission.
Read Permissions
================
All the data stored in a BigchainDB network can be read by anyone with access to that network. One *can* store encrypted data, but if the decryption key ever leaks out, then the encrypted data can be read, decrypted, and leak out too. (Deleting the encrypted data is :doc:`not an option <immutable>`.)
The permission to read some specific information (e.g. a music file) can be thought of as an *asset*. (In many countries, that permission or "right" is a kind of intellectual property.)
BigchainDB can be used to register that asset and transfer it from owner to owner.
Today, BigchainDB does not have a way to restrict read access of data stored in a BigchainDB network, but many third-party services do offer that (e.g. Google Docs, Dropbox).
In principle, a third party service could ask a BigchainDB network to determine if a particular user has permission to read some particular data. Indeed they could use BigchainDB to keep track of *all* the rights a user has for some data (not just the right to read it).
That third party could also use BigchainDB to store audit logs, i.e. records of every read, write or other operation on stored data.
BigchainDB can be used in other ways to help parties exchange private data:
- It can be used to publicly disclose the *availability* of some private data (stored elsewhere). For example, there might be a description of the data and a price.
- It can be used to record the TLS handshakes which two parties sent to each other to establish an encrypted and authenticated TLS connection, which they could use to exchange private data with each other. (The stored handshake information wouldn't be enough, by itself, to decrypt the data.) It would be a "proof of TLS handshake."
- See the BigchainDB `Privacy Protocols repository <https://github.com/bigchaindb/privacy-protocols>`_ for more techniques.
Role-Based Access Control (RBAC)
================================
In September 2017, we published a `blog post about how one can define an RBAC sub-system on top of BigchainDB <https://blog.bigchaindb.com/role-based-access-control-for-bigchaindb-assets-b7cada491997>`_.
At the time of writing (October 2017), doing so required the use of a plugin, so it's not possible using standard BigchainDB (which is what's available on `IPDB <https://ipdb.io/>`_). That may change in the future.
If you're interested, `contact BigchainDB <https://www.bigchaindb.com/contact/>`_.

View File

@ -1,241 +0,0 @@
""" Script to render transaction schema into .rst document """
from collections import OrderedDict
import os.path
import yaml
from bigchaindb.common.schema import TX_SCHEMA_PATH, VOTE_SCHEMA_PATH
TPL_PROP = """\
%(title)s
%(underline)s
**type:** %(type)s
%(description)s
"""
TPL_STYLES = """
.. raw:: html
<style>
#%(container)s h2 {
border-top: solid 3px #6ab0de;
background-color: #e7f2fa;
padding: 5px;
}
#%(container)s h3 {
background: #f0f0f0;
border-left: solid 3px #ccc;
font-weight: bold;
padding: 6px;
font-size: 100%%;
font-family: monospace;
}
.document .section p {
margin-bottom: 16px;
}
.notice {
margin: 0px 16px 16px 16px;
background-color: white;
border: 1px solid gold;
padding: 3px 6px;
}
</style>
"""
TPL_TRANSACTION = TPL_STYLES + """\
.. This file was auto generated by %(file)s
==================
Transaction Schema
==================
* `Transaction`_
* Input_
* Output_
* Asset_
* Metadata_
Transaction
-----------
%(transaction)s
Input
-----
%(input)s
Output
------
%(output)s
Asset
-----
%(asset)s
Metadata
--------
%(metadata)s
"""
def generate_transaction_docs():
schema = load_schema(TX_SCHEMA_PATH)
defs = schema['definitions']
doc = TPL_TRANSACTION % {
'transaction': render_section('Transaction', schema),
'output': render_section('Output', defs['output']),
'input': render_section('Input', defs['input']),
'asset': render_section('Asset', defs['asset']),
'metadata': render_section('Metadata', defs['metadata']['anyOf'][0]),
'container': 'transaction-schema',
'file': os.path.basename(__file__),
}
write_schema_doc('transaction', doc)
TPL_VOTE = TPL_STYLES + """\
.. This file was auto generated by %(file)s
===========
Vote Schema
===========
Vote
----
%(vote)s
Vote Details
------------
%(vote_details)s
"""
def generate_vote_docs():
schema = load_schema(VOTE_SCHEMA_PATH)
doc = TPL_VOTE % {
'vote': render_section('Vote', schema),
'vote_details': render_section('Vote', schema['properties']['vote']),
'container': 'vote-schema',
'file': os.path.basename(__file__),
}
write_schema_doc('vote', doc)
def ordered_load_yaml(path):
""" Custom YAML loader to preserve key order """
class OrderedLoader(yaml.SafeLoader):
pass
def construct_mapping(loader, node):
loader.flatten_mapping(node)
return OrderedDict(loader.construct_pairs(node))
OrderedLoader.add_constructor(
yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG,
construct_mapping)
with open(path) as handle:
return yaml.load(handle, OrderedLoader)
def load_schema(path):
global DEFS
schema = ordered_load_yaml(path)
DEFS = schema['definitions']
return schema
def write_schema_doc(name, doc):
# Check base path exists
base_path = os.path.join(os.path.dirname(__file__), 'source/schema')
if not os.path.exists(base_path):
os.makedirs(base_path)
# Write doc
path = os.path.join(base_path, '%s.rst' % name)
with open(path, 'w') as handle:
handle.write(doc)
def render_section(section_name, obj):
""" Render a domain object and it's properties """
out = [obj['description']]
for name, prop in obj.get('properties', {}).items():
try:
title = '%s.%s' % (section_name, name)
out += [TPL_PROP % {
'title': title,
'underline': '^' * len(title),
'description': property_description(prop),
'type': property_type(prop),
}]
except Exception as exc:
raise ValueError('Error rendering property: %s' % name, exc)
return '\n\n'.join(out + [''])
def property_description(prop):
""" Get description of property """
if 'description' in prop:
return prop['description']
if '$ref' in prop:
return property_description(resolve_ref(prop['$ref']))
if 'anyOf' in prop:
return property_description(prop['anyOf'][0])
raise KeyError('description')
def property_type(prop):
""" Resolve a string representing the type of a property """
if 'type' in prop:
if prop['type'] == 'array':
return 'array (%s)' % property_type(prop['items'])
return prop['type']
if 'anyOf' in prop:
return ' or '.join(property_type(p) for p in prop['anyOf'])
if '$ref' in prop:
return property_type(resolve_ref(prop['$ref']))
raise ValueError('Could not resolve property type')
DEFINITION_BASE_PATH = '#/definitions/'
def resolve_ref(ref):
""" Resolve definition reference """
assert ref.startswith(DEFINITION_BASE_PATH)
return DEFS[ref[len(DEFINITION_BASE_PATH):]]
def main():
""" Main function """
generate_transaction_docs()
generate_vote_docs()
def setup(*_):
""" Fool sphinx into think it's an extension muahaha """
main()
if __name__ == '__main__':
main()

View File

@ -3,3 +3,5 @@ recommonmark>=0.4.0
sphinx-rtd-theme>=0.1.9
sphinxcontrib-napoleon>=0.4.4
sphinxcontrib-httpdomain>=1.5.0
pyyaml>=3.12
bigchaindb

Binary file not shown (new image, 52 KiB)

View File

@ -1,101 +0,0 @@
# Run BigchainDB with Docker On Mac
**NOT for Production Use**
Those developing on Mac can follow this document to run BigchainDB in docker
containers for a quick dev setup.
Running BigchainDB on Mac (Docker or otherwise) is not officially supported.
Support is very much limited as there are certain things that work differently
in Docker for Mac than Docker for other platforms.
Also, we do not use Mac for our development and testing. :)
This page may not always be up to date with the various settings and Docker updates.
These steps work as of this writing (2017.Mar.09) and might break in the
future with updates to Docker for Mac.
Community contribution to make BigchainDB run on Docker for Mac will always be
welcome.
## Prerequisite
Install Docker for Mac.
## (Optional) For a clean start
1. Stop all BigchainDB and RethinkDB/MongoDB containers.
2. Delete all BigchainDB docker images.
3. Delete the ~/bigchaindb_docker folder.
## Pull the images
Pull the bigchaindb and other required Docker images from Docker Hub.
```text
docker pull bigchaindb/bigchaindb:master
docker pull [rethinkdb:2.3|mongo:3.4.1]
```
## Create the BigchainDB configuration file on Mac
```text
docker run \
--rm \
--volume $HOME/bigchaindb_docker:/data \
bigchaindb/bigchaindb:master \
-y configure \
[mongodb|rethinkdb]
```
To ensure that BigchainDB connects to the backend database bound to the virtual
interface `172.17.0.1`, you must edit the BigchainDB configuration file
(`~/bigchaindb_docker/.bigchaindb`) and change database.host from `localhost`
to `172.17.0.1`.
## Run the backend database on Mac
From v0.9 onwards, you can run RethinkDB or MongoDB.
We use the virtual interface created by the Docker daemon to allow
communication between the BigchainDB and database containers.
It has an IP address of 172.17.0.1 by default.
You can also use docker host networking or bind to your primary (eth)
interface, if needed.
### For RethinkDB backend
```text
docker run \
--name=rethinkdb \
--publish=28015:28015 \
--publish=8080:8080 \
--restart=always \
--volume $HOME/bigchaindb_docker:/data \
rethinkdb:2.3
```
### For MongoDB backend
```text
docker run \
--name=mongodb \
--publish=27017:27017 \
--restart=always \
--volume=$HOME/bigchaindb_docker/db:/data/db \
--volume=$HOME/bigchaindb_docker/configdb:/data/configdb \
mongo:3.4.1 --replSet=bigchain-rs
```
### Run BigchainDB on Mac
```text
docker run \
--name=bigchaindb \
--publish=9984:9984 \
--restart=always \
--volume=$HOME/bigchaindb_docker:/data \
bigchaindb/bigchaindb \
start
```

View File

@ -10,7 +10,6 @@ Appendices
install-os-level-deps
install-latest-pip
run-with-docker
docker-on-mac
json-serialization
cryptography
the-Bigchain-class
@ -27,3 +26,7 @@ Appendices
rethinkdb-backup
licenses
install-with-lxd
run-with-vagrant
run-with-ansible
tx-yaml-files
vote-yaml

View File

@ -22,8 +22,6 @@ That's just one possible way of setting up the file system so as to provide extr
Another way to get similar reliability would be to mount the RethinkDB data directory on an [Amazon EBS](https://aws.amazon.com/ebs/) volume. Each Amazon EBS volume is, "automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability."
See [the section on setting up storage for RethinkDB](../dev-and-test/setup-run-node.html#set-up-storage-for-rethinkdb-data) for more details.
As with shard replication, live file-system replication protects against many failure modes, but it doesn't protect against them all. You should still consider having normal, "cold" backups.
@ -108,7 +106,7 @@ Considerations for BigchainDB:
Although it's not advertised as such, RethinkDB's built-in replication feature is similar to continuous backup, except the "backup" (i.e. the set of replica shards) is spread across all the nodes. One could take that idea a bit farther by creating a set of backup-only servers with one full backup:
* Give all the original BigchainDB nodes (RethinkDB nodes) the server tag `original`. This is the default if you used the RethinkDB config file suggested in the section titled [Configure RethinkDB Server](../dev-and-test/setup-run-node.html#configure-rethinkdb-server).
* Give all the original BigchainDB nodes (RethinkDB nodes) the server tag `original`.
* Set up a group of servers running RethinkDB only, and give them the server tag `backup`. The `backup` servers could be geographically separated from all the `original` nodes (or not; it's up to the consortium to decide).
* Clients shouldn't be able to read from or write to servers in the `backup` set.
* Send a RethinkDB reconfigure command to the RethinkDB cluster to make it so that the `original` set has the same number of replicas as before (or maybe one less), and the `backup` set has one replica. Also, make sure the `primary_replica_tag='original'` so that all primary shards live on the `original` nodes.

View File

@ -0,0 +1,167 @@
# Run BigchainDB with Ansible
**NOT for Production Use**
You can use the following instructions to deploy a single or multi-node
BigchainDB setup for dev/test using Ansible. Ansible will set up BigchainDB node(s) along with
[Docker](https://www.docker.com/), [Docker Compose](https://docs.docker.com/compose/),
[MongoDB](https://www.mongodb.com/), and the [BigchainDB Python driver](https://docs.bigchaindb.com/projects/py-driver/en/latest/).
Currently, this workflow is only supported for the following distributions:
- Ubuntu >= 16.04
- CentOS >= 7
- Fedora >= 24
## Minimum Requirements | Ansible
Minimum resource requirements for a single-node BigchainDB dev setup. **The more, the better**:
- Memory >= 512MB
- VCPUs >= 1
## Clone the BigchainDB repository | Ansible
```text
$ git clone https://github.com/bigchaindb/bigchaindb.git
```
## Install dependencies | Ansible
- [Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html)
You can also install `ansible` and other dependencies, if any, using the `bootstrap.sh` script
inside the BigchainDB repository.
Navigate to `bigchaindb/pkg/scripts` and run the `bootstrap.sh` script to install the dependencies
for your OS. The script also checks whether the OS you are running is compatible with the
supported versions.
**Note**: `bootstrap.sh` only supports Ubuntu >= 16.04, CentOS >= 7 and Fedora >= 24.
```text
$ cd bigchaindb/pkg/scripts/
$ sudo ./bootstrap.sh
```
### BigchainDB Setup Configuration(s) | Ansible
#### Local Setup | Ansible
You can run the Ansible playbook `bdb-deploy.yml` on your local dev machine to set up a BigchainDB node where
BigchainDB runs as a process or inside Docker container(s), depending on your configuration.
Before running the playbook locally, you need to update the `hosts` and `bdb-config.yml` configuration files, which tell Ansible to run the play locally.
##### Update Hosts | Local
Navigate to `bigchaindb/pkg/configuration/hosts` inside the BigchainDB repository.
```text
$ cd bigchaindb/pkg/configuration/hosts
```
Edit `all` configuration file:
```text
# Delete any existing configuration in this file and insert
# Hostname of dev machine
<HOSTNAME> ansible_connection=local
```
##### Update Configuration | Local
Navigate to `bigchaindb/pkg/configuration/vars` inside the BigchainDB repository.
```text
$ cd bigchaindb/pkg/configuration/vars
```
Edit `bdb-config.yml` configuration file as per your requirements, sample configuration file(s):
```text
---
deploy_docker: false #[true, false]
docker_cluster_size: 1 # Only needed if `deploy_docker` is true
bdb_hosts:
- name: "<HOSTNAME>" # Hostname of dev machine
```
**Note**: You can also orchestrate a multi-node BigchainDB cluster on a local dev host using Docker containers.
Here is a sample `bdb-config.yml`:
```text
---
deploy_docker: true #[true, false]
docker_cluster_size: 3
bdb_hosts:
- name: "<LOCAL_DEV_HOST_HOSTNAME>"
```
### BigchainDB Setup | Ansible
Now you can safely run the `bdb-deploy.yml` playbook and everything will be taken care of by Ansible. To run the playbook, navigate to the `bigchaindb/pkg/configuration` directory inside the BigchainDB repository and run the `bdb-deploy.yml` playbook.
```text
$ cd bigchaindb/pkg/configuration/
$ sudo ansible-playbook bdb-deploy.yml -i hosts/all
```
After successful execution of the playbook, you can verify that the BigchainDB Docker container(s) or process(es) are running.
Verify BigchainDB process(es):
```text
$ ps -ef | grep bigchaindb
```
OR
Verify BigchainDB Docker(s):
```text
$ docker ps | grep bigchaindb
```
The playbook also installs the BigchainDB Python Driver,
so you can use it to make transactions
and verify the functionality of your BigchainDB node.
See the [BigchainDB Python Driver documentation](https://docs.bigchaindb.com/projects/py-driver/en/latest/index.html)
for details on how to use it.
**Note**: The `bdb_root_url` can be one of the following:
```text
# BigchainDB is running as a process
bdb_root_url = http://<HOST-IP>:9984
OR
# BigchainDB is running inside a docker container
bdb_root_url = http://<HOST-IP>:<DOCKER-PUBLISHED-PORT>
```
**Note**: BigchainDB has [other drivers as well](../drivers-clients/index.html).
### Experimental: Running Ansible on a Remote Dev Host
#### Remote Setup | Ansible
You can also run the Ansible playbook `bdb-deploy.yml` on remote machine(s) to set up a BigchainDB node where
BigchainDB runs as a process or inside Docker container(s), depending on your configuration.
Before running the playbook on a remote host, you need to update the `hosts` and `bdb-config.yml` configuration files, which tell Ansible to
run the play on a remote host.
##### Update Hosts | Remote
Navigate to `bigchaindb/pkg/configuration/hosts` inside the BigchainDB repository.
```text
$ cd bigchaindb/pkg/configuration/hosts
```
Edit `all` configuration file:
```text
# Delete any existing configuration in this file and insert
<Remote_Host_IP/Hostname> ansible_ssh_user=<USERNAME> ansible_sudo_pass=<ROOT_PASSWORD>
```
**Note**: You can add multiple hosts to the `all` configuration file. The root password is needed because Ansible
will run some tasks that require root permissions.
**Note**: You can also use methods other than password-based SSH to get into the remote machines. For other methods,
please consult the [Ansible documentation](http://docs.ansible.com/ansible/latest/intro_getting_started.html).
##### Update Configuration | Remote
Navigate to `bigchaindb/pkg/configuration/vars` inside the BigchainDB repository.
```text
$ cd bigchaindb/pkg/configuration/vars
```
Edit `bdb-config.yml` configuration file as per your requirements, sample configuration file(s):
```text
---
deploy_docker: false #[true, false]
docker_cluster_size: 1 # Only needed if `deploy_docker` is true
bdb_hosts:
- name: "<REMOTE_MACHINE_HOSTNAME>"
```
After configuring the remote hosts, [run the Ansible playbook and verify your deployment](#bigchaindb-setup-ansible).

View File

@ -6,9 +6,12 @@ For those who like using Docker and wish to experiment with BigchainDB in
non-production environments, we currently maintain a Docker image and a
`Dockerfile` that can be used to build an image for `bigchaindb`.
## Prerequisite(s)
- [Docker](https://docs.docker.com/engine/installation/)
## Pull and Run the Image from Docker Hub
Assuming you have Docker installed, you would proceed as follows.
With Docker installed, you can proceed as follows.
In a terminal shell, pull the latest version of the BigchainDB Docker image using:
```text
@ -26,6 +29,7 @@ docker run \
--rm \
--tty \
--volume $HOME/bigchaindb_docker:/data \
--env BIGCHAINDB_DATABASE_HOST=172.17.0.1 \
bigchaindb/bigchaindb \
-y configure \
[mongodb|rethinkdb]
@ -46,24 +50,18 @@ Let's analyze that command:
this allows us to have the data persisted on the host machine,
you can read more in the [official Docker
documentation](https://docs.docker.com/engine/tutorials/dockervolumes)
* `--env BIGCHAINDB_DATABASE_HOST=172.17.0.1`, `172.17.0.1` is the default `docker0` bridge
IP address, for fresh Docker installations. It is used for the communication between BigchainDB and database
containers.
* `bigchaindb/bigchaindb` the image to use. All the options after the container name are passed on to the entrypoint inside the container.
* `-y configure` execute the `configure` sub-command (of the `bigchaindb`
command) inside the container, with the `-y` option to automatically use all the default config values
* `mongodb` or `rethinkdb` specifies the database backend to use with bigchaindb
To ensure that BigchainDB connects to the backend database bound to the virtual
interface `172.17.0.1`, you must edit the BigchainDB configuration file
(`~/bigchaindb_docker/.bigchaindb`) and change database.host from `localhost`
to `172.17.0.1`.
### Run the backend database
From v0.9 onwards, you can run either RethinkDB or MongoDB.
We use the virtual interface created by the Docker daemon to allow
communication between the BigchainDB and database containers.
It has an IP address of 172.17.0.1 by default.
You can also use docker host networking or bind to your primary (eth)
interface, if needed.
@ -73,8 +71,8 @@ You can also use docker host networking or bind to your primary (eth)
docker run \
--detach \
--name=rethinkdb \
--publish=172.17.0.1:28015:28015 \
--publish=172.17.0.1:58080:8080 \
--publish=28015:28015 \
--publish=58080:8080 \
--restart=always \
--volume $HOME/bigchaindb_docker:/data \
rethinkdb:2.3
@ -102,11 +100,11 @@ group.
docker run \
--detach \
--name=mongodb \
--publish=172.17.0.1:27017:27017 \
--publish=27017:27017 \
--restart=always \
--volume=$HOME/mongodb_docker/db:/data/db \
--volume=$HOME/mongodb_docker/configdb:/data/configdb \
mongo:3.4.1 --replSet=bigchain-rs
mongo:3.4.9 --replSet=bigchain-rs
```
### Run BigchainDB

View File

@ -0,0 +1,170 @@
# Run BigchainDB with Vagrant
**NOT for Production Use**
You can use the following instructions to deploy a single or multi-node
BigchainDB setup for dev/test using Vagrant. Vagrant will set up the BigchainDB node(s)
with all the dependencies, along with MongoDB and the BigchainDB Python driver. You
can also tweak the following configurations for the BigchainDB node(s).
- Vagrant Box
- Currently, we support the following boxes:
- `ubuntu/xenial64 # >=16.04`
- `centos/7 # >=7`
- `fedora/24 # >=24`
- **NOTE**: You can choose any other Vagrant box of your choice, but these are
the minimum version requirements.
- Resources and specs for your box.
- RAM
- VCPUs
- Network Type
- Currently, only `private_network` is supported.
- IP Address
- Deploy node with Docker
- Deploy all the services in Docker containers or as processes.
- Number of BigchainDB nodes
- If you want to deploy the services inside Docker containers, you
can specify number of member(s) in the BigchainDB cluster.
- Upstart Script
- Vagrant Provider
- Virtualbox
- VMware
## Minimum Requirements | Vagrant
Minimum resource requirements for a single-node BigchainDB dev setup. **The more, the better**:
- Memory >= 512MB
- VCPUs >= 1
## Install dependencies | Vagrant
1. [VirtualBox](https://www.virtualbox.org/wiki/Downloads) >= 5.0.0
2. [Vagrant](https://www.vagrantup.com/downloads.html) >= 1.16.0
## Clone the BigchainDB repository | Vagrant
```text
$ git clone https://github.com/bigchaindb/bigchaindb.git
```
## Configuration | Vagrant
Navigate to `bigchaindb/pkg/configuration/vars/` inside the BigchainDB repository.
```text
$ cd bigchaindb/pkg/configuration/vars/
```
Edit `bdb-config.yml` as per your requirements. Sample `bdb-config.yml`:
```text
---
deploy_docker: false #[true, false]
docker_cluster_size: 1
upstart: "/bigchaindb/scripts/bootstrap.sh"
bdb_hosts:
  - name: "bdb-node-01"
    box:
      name: "ubuntu/xenial64"
    ram: "2048"
    vcpus: "2"
    network:
      ip: "10.20.30.40"
      type: "private_network"
```
**Note**: You can spawn multiple instances to orchestrate a multi-node BigchainDB cluster.
Here is a sample `bdb-config.yml`:
```text
---
deploy_docker: false #[true, false]
docker_cluster_size: 1
upstart: "/bigchaindb/scripts/bootstrap.sh"
bdb_hosts:
  - name: "bdb-node-01"
    box:
      name: "ubuntu/xenial64"
    ram: "2048"
    vcpus: "2"
    network:
      ip: "10.20.30.40"
      type: "private_network"
  - name: "bdb-node-02"
    box:
      name: "ubuntu/xenial64"
    ram: "2048"
    vcpus: "2"
    network:
      ip: "10.20.30.50"
      type: "private_network"
```
**Note**: You can also orchestrate a multi-node BigchainDB cluster on a single dev host using Docker containers.
Here is a sample `bdb-config.yml`:
```text
---
deploy_docker: true #[true, false]
docker_cluster_size: 3
upstart: "/bigchaindb/scripts/bootstrap.sh"
bdb_hosts:
  - name: "bdb-node-01"
    box:
      name: "ubuntu/xenial64"
    ram: "8192"
    vcpus: "4"
    network:
      ip: "10.20.30.40"
      type: "private_network"
```
The above-mentioned configuration will deploy a 3-node BigchainDB cluster with Docker containers
on your specified host.
## BigchainDB Setup | Vagrant
**Note**: Some Vagrant plugins are required for the installation; you
will be prompted to install them if they are not present. To install
the required plugins, run the following command:
```text
$ vagrant plugin install vagrant-cachier vagrant-vbguest vagrant-hosts
```
To bring up the BigchainDB node(s), run the following command:
```text
$ vagrant up
```
After successful execution of Vagrant, you can log in to your fresh BigchainDB node.
```text
$ vagrant ssh <instance-name>
```
## Make your first transaction
Once you are inside the BigchainDB node, you can verify that the BigchainDB
Docker container(s) or process(es) are running.
Verify BigchainDB process(es):
```text
$ ps -ef | grep bigchaindb
```
OR
Verify BigchainDB Docker(s):
```text
$ docker ps | grep bigchaindb
```
The BigchainDB Python Driver is pre-installed in the instance,
so you can use it to make transactions
and verify the functionality of your BigchainDB node.
See the [BigchainDB Python Driver documentation](https://docs.bigchaindb.com/projects/py-driver/en/latest/index.html)
for details on how to use it.
Note 1: The `bdb_root_url` can be one of the following:
```text
# BigchainDB is running as a process
bdb_root_url = http://<HOST-IP>:9984
OR
# BigchainDB is running inside a docker container
bdb_root_url = http://<HOST-IP>:<DOCKER-PUBLISHED-HOST-PORT>
```
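For example, a quick smoke test with the pre-installed Python driver could look like this (a sketch; the keypair and asset are placeholders, and the URL uses the IP from the sample `bdb-config.yml` above):

```python
from bigchaindb_driver import BigchainDB
from bigchaindb_driver.crypto import generate_keypair

bdb = BigchainDB('http://10.20.30.40:9984')  # your bdb_root_url
alice = generate_keypair()

# Prepare, sign and send a tiny CREATE transaction.
tx = bdb.transactions.prepare(operation='CREATE', signers=alice.public_key,
                              asset={'data': {'hello': 'BigchainDB'}})
tx = bdb.transactions.fulfill(tx, private_keys=alice.private_key)
print(bdb.transactions.send(tx)['id'])
```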
Note 2: BigchainDB has [other drivers as well](../drivers-clients/index.html).

View File

@ -0,0 +1,37 @@
The Transaction Schema Files
============================
BigchainDB checks all :ref:`transactions <The Transaction Model>`
(JSON documents) against a formal schema
defined in some JSON Schema files named
transaction.yaml,
transaction_create.yaml and
transaction_transfer.yaml.
The contents of those files are copied below.
To understand those contents
(i.e. JSON Schema), check out
`"Understanding JSON Schema"
<https://spacetelescope.github.io/understanding-json-schema/index.html>`_
by Michael Droettboom or
`json-schema.org <http://json-schema.org/>`_.
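As an illustration, a transaction (as a Python dict) can be checked against those files with the ``pyyaml`` and ``jsonschema`` packages. This is a sketch, roughly what BigchainDB does internally; the schema path is relative to the BigchainDB repository root, and ``tx.json`` is a placeholder file name:

.. code-block:: python

    import json

    import jsonschema  # pip install jsonschema
    import yaml        # pip install pyyaml

    # Load the formal schema.
    with open('bigchaindb/common/schema/transaction.yaml') as handle:
        tx_schema = yaml.safe_load(handle)

    # Load a transaction to check.
    with open('tx.json') as handle:
        tx = json.load(handle)

    jsonschema.validate(tx, tx_schema)  # raises ValidationError if invalid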
transaction.yaml
----------------
.. literalinclude:: ../../../../bigchaindb/common/schema/transaction.yaml
:language: yaml
transaction_create.yaml
-----------------------
.. literalinclude:: ../../../../bigchaindb/common/schema/transaction_create.yaml
:language: yaml
transaction_transfer.yaml
-------------------------
.. literalinclude:: ../../../../bigchaindb/common/schema/transaction_transfer.yaml
:language: yaml

View File

@ -0,0 +1,20 @@
The Vote Schema File
====================
BigchainDB checks all :ref:`votes <The Vote Model>`
(JSON documents) against a formal schema
defined in a JSON Schema file named vote.yaml.
The contents of that file are copied below.
To understand those contents
(i.e. JSON Schema), check out
`"Understanding JSON Schema"
<https://spacetelescope.github.io/understanding-json-schema/index.html>`_
by Michael Droettboom or
`json-schema.org <http://json-schema.org/>`_.
vote.yaml
---------
.. literalinclude:: ../../../../bigchaindb/common/schema/vote.yaml
:language: yaml

View File

@ -51,7 +51,6 @@ extensions = [
'sphinx.ext.autosectionlabel',
# Below are actually build steps made to look like sphinx extensions.
# It was the easiest way to get it running with ReadTheDocs.
'generate_schema_documentation',
'generate_http_server_api_documentation',
]

View File

@ -2,6 +2,8 @@
To avoid redundant data in transactions, the asset model is different for `CREATE` and `TRANSFER` transactions.
## In CREATE Transactions
In a `CREATE` transaction, the `"asset"` must contain exactly one key-value pair. The key must be `"data"` and the value can be any valid JSON document, or `null`. For example:
```json
{
@ -12,6 +14,15 @@ In a `CREATE` transaction, the `"asset"` must contain exactly one key-value pair
}
```
When using MongoDB for storage, certain restrictions apply to all (including nested) keys of the `"data"` JSON document:
* Keys (i.e. key names, not values) must **not** begin with the `$` character.
* Keys must not contain `.` or the null character (Unicode code point 0000).
* The key `"language"` (at any level in the hierarchy) is a special key and used for specifying text search language. Its value must be one of the allowed values; see the valid [Text Search Languages](https://docs.mongodb.com/manual/reference/text-search-languages/) in the MongoDB Docs. In BigchainDB, only the languages supported by _MongoDB community edition_ are allowed.
## In TRANSFER Transactions
In a `TRANSFER` transaction, the `"asset"` must contain exactly one key-value pair. The key must be `"id"` and the value must contain a transaction ID (i.e. a SHA3-256 hash: the ID of the `CREATE` transaction which created the asset, which also serves as the asset ID). For example:
```json
{

View File

@ -1,36 +1,90 @@
The Block Model
===============
A block has the following structure:
A block is a JSON object with a particular schema,
as outlined on this page.
A block must contain the following JSON keys
(also called names or fields):
.. code-block:: json
{
"id": "<hash of block>",
"id": "<ID of the block>",
"block": {
"timestamp": "<block-creation timestamp>",
"transactions": ["<list of transactions>"],
"node_pubkey": "<public key of the node creating the block>",
"voters": ["<list of public keys of all nodes in the cluster>"]
"timestamp": "<Block-creation timestamp>",
"transactions": ["<List of transactions>"],
"node_pubkey": "<Public key of the node which created the block>",
"voters": ["<List of public keys of all nodes in the cluster>"]
},
"signature": "<signature of block>"
"signature": "<Signature of inner block object>"
}
- ``id``: The :ref:`hash <Hashes>` of the serialized inner ``block`` (i.e. the ``timestamp``, ``transactions``, ``node_pubkey``, and ``voters``). It's used as a unique index in the database backend (e.g. RethinkDB or MongoDB).
The JSON Keys in a Block
------------------------
- ``block``:
- ``timestamp``: The Unix time when the block was created. It's provided by the node that created the block.
- ``transactions``: A list of the transactions included in the block.
- ``node_pubkey``: The public key of the node that created the block.
- ``voters``: A list of the public keys of all cluster nodes at the time the block was created.
It's the list of nodes which can cast a vote on this block.
This list can change from block to block, as nodes join and leave the cluster.
**id**
- ``signature``: :ref:`Cryptographic signature <Signature Algorithm and Keys>` of the block by the node that created the block (i.e. the node with public key ``node_pubkey``). To generate the signature, the node signs the serialized inner ``block`` (the same thing that was hashed to determine the ``id``) using the private key corresponding to ``node_pubkey``.
The block ID and also the SHA3-256 hash
of the inner ``block`` object, loosely speaking.
It's a string.
To compute it: 1) construct an :term:`associative array` ``d`` containing
``block.timestamp``, ``block.transactions``, ``block.node_pubkey``,
``block.voters``, and their values; 2) compute ``id = hash_of_aa(d)``.
There's pseudocode for the ``hash_of_aa()`` function
in the `IPDB Protocol documentation page about cryptographic hashes
<https://the-ipdb-protocol.readthedocs.io/en/latest/crypto-hashes.html#computing-the-hash-of-an-associative-array>`_.
The result (``id``) is a string: the block ID.
An example is ``"b60adf655932bf47ef58c0bfb2dd276d4795b94346b36cbb477e10d7eb02cea8"``.
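As an illustration, here is a minimal Python sketch of ``hash_of_aa()``, assuming Python 3.6+ (whose ``hashlib`` includes SHA3) and the serialization rules used by BigchainDB (UTF-8, sorted keys, compact separators):

.. code-block:: python

    import hashlib
    import json

    def hash_of_aa(d):
        """Serialize the associative array d deterministically,
        then return the SHA3-256 hex digest of the resulting bytes."""
        d_bytes = json.dumps(d, sort_keys=True, separators=(',', ':'),
                             ensure_ascii=False).encode('utf-8')
        return hashlib.sha3_256(d_bytes).hexdigest()

    # block_id = hash_of_aa(d), with d as constructed in step 1 above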
Working with Blocks
-------------------
**block.timestamp**
There's a **Block** class for creating and working with Block objects; look in `/bigchaindb/models.py <https://github.com/bigchaindb/bigchaindb/blob/master/bigchaindb/models.py>`_. (The link is to the latest version on the master branch on GitHub.)
The `Unix time <https://en.wikipedia.org/wiki/Unix_time>`_
when the block was created, according to the node which created it.
It's a string representation of an integer.
An example is ``"1507294217"``.
**block.transactions**
A list of the :ref:`transactions <The Transaction Model>` included in the block.
(Each transaction is a JSON object.)
**block.node_pubkey**
The public key of the node that created the block.
It's a string.
See the `IPDB Protocol documentation page about cryptographic keys & signatures
<https://the-ipdb-protocol.readthedocs.io/en/latest/crypto-keys-and-sigs.html>`_.
**block.voters**
A list of the public keys of all cluster nodes at the time the block was created.
It's a list of strings.
This list can change from block to block, as nodes join and leave the cluster.
**signature**
The cryptographic signature of the inner ``block``
by the node that created the block
(i.e. the node with public key ``node_pubkey``).
To compute that:
#. Construct an :term:`associative array` ``d`` containing the contents
of the inner ``block``
(i.e. ``block.timestamp``, ``block.transactions``, ``block.node_pubkey``,
``block.voters``, and their values).
#. Compute ``signature = sig_of_aa(d, private_key)``,
where ``private_key`` is the node's private key
(i.e. ``node_pubkey`` and ``private_key`` are a key pair). There's pseudocode
for the ``sig_of_aa()`` function
on `the IPDB Protocol documentation page about cryptographic keys and signatures
<https://the-ipdb-protocol.readthedocs.io/en/latest/crypto-keys-and-sigs.html#computing-the-signature-of-an-associative-array>`_.
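A corresponding sketch of ``sig_of_aa()``, assuming the ``PyNaCl`` and ``base58`` packages and that keys and signatures are base58-encoded Ed25519 values (as they are in BigchainDB):

.. code-block:: python

    import json

    import base58
    from nacl.signing import SigningKey

    def sig_of_aa(d, private_key_b58):
        """Sign the deterministic serialization of d with an Ed25519 key;
        return the signature as a base58 string."""
        d_bytes = json.dumps(d, sort_keys=True, separators=(',', ':'),
                             ensure_ascii=False).encode('utf-8')
        raw_sig = SigningKey(base58.b58decode(private_key_b58)).sign(d_bytes).signature
        return base58.b58encode(raw_sig).decode()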
.. note::
The ``d_bytes`` computed when computing the block ID will be the *same* as the ``d_bytes`` computed when computing the block signature. This can be used to avoid redundant calculations.

View File

@ -26,7 +26,6 @@ An input has the following structure:
You can think of the ``fulfills`` object as a pointer to an output on another transaction: the output that this input is spending/transferring.
A CREATE transaction should have exactly one input. That input can contain one or more ``owners_before``, a ``fulfillment`` (with one signature from each of the owners-before), and the value of ``fulfills`` should be ``null``. A TRANSFER transaction should have at least one input, and the value of ``fulfills`` should not be ``null``.
See the reference on :ref:`inputs <Input>` for more description about the meaning of each field.
The ``fulfillment`` string fulfills the condition in the output that is being spent (transferred).
To calculate it:
@ -62,7 +61,6 @@ An output has the following structure:
The :ref:`page about conditions <Conditions>` explains the contents of a ``condition``.
The list of ``public_keys`` is always the "owners" of the asset at the time the transaction completed, but before the next transaction started.
See the reference on :ref:`outputs <Output>` for more description about the meaning of each field.
Note that ``amount`` must be a string (e.g. ``"7"``).
In a TRANSFER transaction, the sum of the output amounts must be the same as the sum of the outputs that it transfers (i.e. the sum of the input amounts). For example, if a TRANSFER transaction has two outputs, one with ``"amount": "2"`` and one with ``"amount": "3"``, then the sum of the outputs is 5 and so the sum of the outputs-being-transferred must also be 5.

View File

@ -19,20 +19,18 @@ Here's some explanation of the contents:
- **id**: The ID of the transaction and also the hash of the transaction (loosely speaking). See below for an explanation of how it's computed. It's also the database primary key.
- **version**: The version-number of :ref:`the transaction schema <Transaction Schema>`. As of BigchainDB Server 1.0.0, the only allowed value is ``"1.0"``.
- **version**: The version-number of the transaction schema. As of BigchainDB Server 1.0.0, the only allowed value is ``"1.0"``.
- **inputs**: List of inputs.
Each input spends/transfers a previous output by satisfying/fulfilling
the crypto-conditions on that output.
A CREATE transaction should have exactly one input.
A TRANSFER transaction should have at least one input (i.e. ≥1).
For more details, see the subsection about :ref:`inputs <Inputs>`.
- **outputs**: List of outputs.
Each output indicates the crypto-conditions which must be satisfied
by anyone wishing to spend/transfer that output.
It also indicates the number of shares of the asset tied to that output.
For more details, see the subsection about :ref:`outputs <Outputs>`.
- **operation**: A string indicating what kind of transaction this is,
and how it should be validated.
@ -46,6 +44,10 @@ Here's some explanation of the contents:
- **metadata**: User-provided transaction metadata.
It can be any valid JSON document, or ``null``.
**NOTE:** When using MongoDB for storage, certain restrictions apply
to all (including nested) keys of the ``"data"`` JSON document:
1) keys (i.e. key names, not values) must **not** begin with the ``$`` character, and
2) keys must not contain ``.`` or the null character (Unicode code point 0000).
**How the transaction ID is computed.**
1) Build a Python dictionary containing ``version``, ``inputs``, ``outputs``, ``operation``, ``asset``, ``metadata`` and their values,
@ -60,3 +62,13 @@ There are example BigchainDB transactions in
:ref:`the HTTP API documentation <The HTTP Client-Server API>`
and
`the Python Driver documentation <https://docs.bigchaindb.com/projects/py-driver/en/latest/usage.html>`_.
The Transaction Schema
----------------------
BigchainDB checks all transactions (JSON documents)
against a formal schema defined in :ref:`some JSON Schema files named
transaction.yaml,
transaction_create.yaml and
transaction_transfer.yaml <The Transaction Schema Files>`.

View File

@ -1,27 +0,0 @@
# The Vote Model
A vote has the following structure:
```json
{
"node_pubkey": "<The public key of the voting node>",
"vote": {
"voting_for_block": "<ID of the block the node is voting on>",
"previous_block": "<ID of the block previous to the block being voted on>",
"is_block_valid": "<true OR false>",
"invalid_reason": null,
"timestamp": "<Unix time when the vote was generated, provided by the voting node>"
},
"signature": "<Cryptographic signature of vote>"
}
```
**Notes**
* Votes have no ID (or `"id"`), as far as users are concerned. (The backend database uses one internally, but it's of no concern to users and it's never reported to them via BigchainDB APIs.)
* At the time of writing, the value of `"invalid_reason"` was always `null`. In other words, it wasn't being used. It may be used or dropped in a future version of BigchainDB. See [Issue #217](https://github.com/bigchaindb/bigchaindb/issues/217) on GitHub.
* For more information about the vote `"timestamp"`, see [the page about timestamps in BigchainDB](https://docs.bigchaindb.com/en/latest/timestamps.html).
* For more information about how the `"signature"` is calculated, see [the page about cryptography in BigchainDB](../appendices/cryptography.html).

View File

@ -0,0 +1,121 @@
The Vote Model
==============
A vote is a JSON object with a particular schema,
as outlined on this page.
A vote must contain the following JSON keys
(also called names or fields):
.. code-block:: json
{
"node_pubkey": "<The public key of the voting node>",
"vote": {
"voting_for_block": "<ID of the block the node is voting on>",
"previous_block": "<ID of the block previous to the block being voted on>",
"is_block_valid": "<true OR false>",
"invalid_reason": null,
"timestamp": "<Vote-creation timestamp>"
},
"signature": "<Signature of inner vote object>"
}
.. note::
Votes have no ID (or ``"id"``), as far as users are concerned.
The backend database may use one internally,
but it's of no concern to users and it's never reported to them via APIs.
The JSON Keys in a Vote
-----------------------
**node_pubkey**
The public key of the node which cast this vote.
It's a string.
For more information about public keys,
see the `IPDB Protocol documentation page about cryptographic keys and signatures
<https://the-ipdb-protocol.readthedocs.io/en/latest/crypto-keys-and-sigs.html>`_.
**vote.voting_for_block**
The block ID that this vote is for.
It's a string.
For more information about block IDs,
see the page about :ref:`blocks <The Block Model>`.
**vote.previous_block**
The block ID of the block "before" the block that this vote is for,
according to the node which cast this vote.
It's a string.
(It's possible for different nodes to see different block orders.)
For more information about block IDs,
see the page about :ref:`blocks <The Block Model>`.
**vote.is_block_valid**
``true`` if the node which cast this vote considered the block in question to be valid,
and ``false`` otherwise.
Note that it's a *boolean* (i.e. ``true`` or ``false``), not a string.
**vote.invalid_reason**
Always ``null``; that is, it's not being used.
It may be used or dropped in a future version.
See `bigchaindb/bigchaindb issue #217
<https://github.com/bigchaindb/bigchaindb/issues/217>`_ on GitHub.
**vote.timestamp**
The `Unix time <https://en.wikipedia.org/wiki/Unix_time>`_
when the vote was created, according to the node which created it.
It's a string representation of an integer.
**signature**
The cryptographic signature of the inner ``vote``
by the node that created the vote
(i.e. the node with public key ``node_pubkey``).
To compute that:
#. Construct an :term:`associative array` ``d`` containing the contents of the inner ``vote``
(i.e. ``vote.voting_for_block``, ``vote.previous_block``, ``vote.is_block_valid``,
``vote.invalid_reason``, ``vote.timestamp``, and their values).
#. Compute ``signature = sig_of_aa(d, private_key)``, where ``private_key``
is the node's private key (i.e. ``node_pubkey`` and ``private_key`` are a key pair).
There's pseudocode for the ``sig_of_aa()`` function
on `the IPDB Protocol documentation page about cryptographic keys and signatures
<https://the-ipdb-protocol.readthedocs.io/en/latest/crypto-keys-and-sigs.html#computing-the-signature-of-an-associative-array>`_.
The Vote Schema
---------------
BigchainDB checks all votes (JSON documents) against a formal schema
defined in a :ref:`JSON Schema file named vote.yaml <The Vote Schema File>`.
An Example Vote
---------------
.. code-block:: json
{
"node_pubkey": "3ZCsVWPAhPTqHx9wZVxp9Se54pcNeeM5mQvnozDWyDR9",
"vote": {
"voting_for_block": "11c3a3fcc9efa4fc4332a0849fc39b58e403ff37794a7d1fdfb9e7703a94a274",
"previous_block": "3dd1441018b782a50607dc4c7f83a0f0a23eb257f4b6a8d99330dfff41271e0d",
"is_block_valid": true,
"invalid_reason": null,
"timestamp": "1509977988"
},
"signature": "3tW2EBVgxaZTE6nixVd9QEQf1vUxqPmQaNAMdCHc7zHik5KEosdkwScGYt4VhiHDTB6BCxTUzmqu3P7oP93tRWfj"
}
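To check an example like that, a client could verify the signature by reversing the signing procedure. Below is a minimal sketch (not part of BigchainDB itself) using the ``PyNaCl`` and ``base58`` packages; it assumes the signed payload is the inner ``vote`` object, serialized as described above (UTF-8, sorted keys, compact separators):

.. code-block:: python

    import json

    import base58
    from nacl.exceptions import BadSignatureError
    from nacl.signing import VerifyKey

    def is_vote_signature_valid(vote_doc):
        """Return True if vote_doc['signature'] is a valid signature
        of vote_doc['vote'] by vote_doc['node_pubkey']."""
        d_bytes = json.dumps(vote_doc['vote'], sort_keys=True,
                             separators=(',', ':'), ensure_ascii=False).encode('utf-8')
        verify_key = VerifyKey(base58.b58decode(vote_doc['node_pubkey']))
        try:
            verify_key.verify(d_bytes, base58.b58decode(vote_doc['signature']))
            return True
        except BadSignatureError:
            return False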

View File

@ -1,8 +1,13 @@
Develop & Test BigchainDB Server
================================
This section outlines some ways that you could set up a minimal BigchainDB node for development and testing purposes. For additional guidance on how you could help develop BigchainDB, see the `CONTRIBUTING.md file on GitHub <https://github.com/bigchaindb/bigchaindb/blob/master/CONTRIBUTING.md>`_.
.. toctree::
:maxdepth: 1
setup-run-node
running-all-tests
Using a Local Dev Machine <setup-bdb-host>
Using a Local Dev Machine and Docker <../appendices/run-with-docker>
Using Vagrant <../appendices/run-with-vagrant>
Using Ansible <../appendices/run-with-ansible>
running-all-tests

View File

@ -0,0 +1,61 @@
# Set Up BigchainDB Node on Local Dev Machine
The BigchainDB core dev team develops BigchainDB on recent Ubuntu, Fedora and CentOS distributions, so we recommend you use one of those. BigchainDB Server doesn't work on Windows or macOS (unless you use a VM or containers).
## With MongoDB
First read the BigchainDB [CONTRIBUTING.md file](https://github.com/bigchaindb/bigchaindb/blob/master/CONTRIBUTING.md). It outlines the steps to set up a machine for developing and testing BigchainDB.
Create a default BigchainDB config file (in `$HOME/.bigchaindb`):
```text
$ bigchaindb -y configure mongodb
```
Note: [The BigchainDB CLI](../server-reference/bigchaindb-cli.html) and the [BigchainDB Configuration Settings](../server-reference/configuration.html) are documented elsewhere. (Click the links.)
Start MongoDB __3.4+__ using:
```text
$ mongod --replSet=bigchain-rs
```
You can verify that MongoDB is running correctly by checking the output of the
previous command for the line:
```text
waiting for connections on port 27017
```
To run BigchainDB Server, do:
```text
$ bigchaindb start
```
You can [run all the unit tests](running-all-tests.html) to test your installation.
## With RethinkDB
First read the BigchainDB [CONTRIBUTING.md file](https://github.com/bigchaindb/bigchaindb/blob/master/CONTRIBUTING.md). It outlines the steps to set up a machine for developing and testing BigchainDB.
Create a default BigchainDB config file (in `$HOME/.bigchaindb`):
```text
$ bigchaindb -y configure rethinkdb
```
Note: [The BigchainDB CLI](../server-reference/bigchaindb-cli.html) and the [BigchainDB Configuration Settings](../server-reference/configuration.html) are documented elsewhere. (Click the links.)
Start RethinkDB using:
```text
$ rethinkdb
```
You can verify that RethinkDB is running by opening the RethinkDB web interface in your web browser. It should be at http://localhost:8080/
<!-- Don't hyperlink http://localhost:8080/ because Sphinx will fail when you do "make linkcheck" -->
To run BigchainDB Server, do:
```text
$ bigchaindb start
```
You can [run all the unit tests](running-all-tests.html) to test your installation.

View File

@ -1,189 +0,0 @@
# Set Up & Run a Dev/Test Node
This page explains how to set up a minimal local BigchainDB node for development and testing purposes.
The BigchainDB core dev team develops BigchainDB on recent Ubuntu and Fedora distributions, so we recommend you use one of those. BigchainDB Server doesn't work on Windows and Mac OS X (unless you use a VM or containers).
## Option A: Using a Local Dev Machine
Read through the BigchainDB [CONTRIBUTING.md file](https://github.com/bigchaindb/bigchaindb/blob/master/CONTRIBUTING.md). It outlines the steps to set up a machine for developing and testing BigchainDB.
### With RethinkDB
Create a default BigchainDB config file (in `$HOME/.bigchaindb`):
```text
$ bigchaindb -y configure rethinkdb
```
Note: [The BigchainDB CLI](../server-reference/bigchaindb-cli.html) and the [BigchainDB Configuration Settings](../server-reference/configuration.html) are documented elsewhere. (Click the links.)
Start RethinkDB using:
```text
$ rethinkdb
```
You can verify that RethinkDB is running by opening the RethinkDB web interface in your web browser. It should be at http://localhost:8080/
<!-- Don't hyperlink http://localhost:8080/ because Sphinx will fail when you do "make linkcheck" -->
To run BigchainDB Server, do:
```text
$ bigchaindb start
```
You can [run all the unit tests](running-unit-tests.html) to test your installation.
The BigchainDB [CONTRIBUTING.md file](https://github.com/bigchaindb/bigchaindb/blob/master/CONTRIBUTING.md) has more details about how to contribute.
### With MongoDB
Create a default BigchainDB config file (in `$HOME/.bigchaindb`):
```text
$ bigchaindb -y configure mongodb
```
Note: [The BigchainDB CLI](../server-reference/bigchaindb-cli.html) and the [BigchainDB Configuration Settings](../server-reference/configuration.html) are documented elsewhere. (Click the links.)
Start MongoDB __3.4+__ using:
```text
$ mongod --replSet=bigchain-rs
```
You can verify that MongoDB is running correctly by checking the output of the
previous command for the line:
```text
waiting for connections on port 27017
```
To run BigchainDB Server, do:
```text
$ bigchaindb start
```
You can [run all the unit tests](running-unit-tests.html) to test your installation.
The BigchainDB [CONTRIBUTING.md file](https://github.com/bigchaindb/bigchaindb/blob/master/CONTRIBUTING.md) has more details about how to contribute.
## Option B: Using a Local Dev Machine and Docker
You need to have recent versions of [Docker Engine](https://docs.docker.com/engine/installation/)
and (Docker) [Compose](https://docs.docker.com/compose/install/).
Build the images:
```bash
docker-compose build
```
### Docker with RethinkDB
**Note**: If you're upgrading BigchainDB and have previously built the images, you may need
to rebuild them after the upgrade to install any new dependencies.
Start RethinkDB:
```bash
docker-compose -f docker-compose.rdb.yml up -d rdb
```
The RethinkDB web interface should be accessible at http://localhost:58080/.
Depending on your platform and how you are running Docker, you may need
to replace `localhost` with the IP of the machine that is running Docker. As a
dummy example, if the IP of that machine was `0.0.0.0`, you would access the
web interface at: http://0.0.0.0:58080/.
Start a BigchainDB node:
```bash
docker-compose -f docker-compose.rdb.yml up -d bdb-rdb
```
You can monitor the logs:
```bash
docker-compose -f docker-compose.rdb.yml logs -f bdb-rdb
```
If you wish to run the tests:
```bash
docker-compose -f docker-compose.rdb.yml run --rm bdb-rdb pytest -v -n auto
```
### Docker with MongoDB
Start MongoDB:
```bash
docker-compose up -d mdb
```
MongoDB should now be up and running. You can check the port binding for the
MongoDB driver port using:
```bash
$ docker-compose port mdb 27017
```
Start a BigchainDB node:
```bash
docker-compose up -d bdb
```
You can monitor the logs:
```bash
docker-compose logs -f bdb
```
If you wish to run the tests:
```bash
docker-compose run --rm bdb py.test -v --database-backend=mongodb
```
### Accessing the HTTP API
You can do a quick check to make sure that the BigchainDB server API is operational:
```bash
curl $(docker-compose port bdb 9984)
```
The result should be a JSON object (inside braces like { })
containing the name of the software ("BigchainDB"),
the version of BigchainDB, the node's public key, and other information.
How does the above curl command work? Inside the Docker container, BigchainDB
exposes the HTTP API on port `9984`. First we get the public port where that
port is bound:
```bash
docker-compose port bdb 9984
```
The port binding will change whenever you stop/restart the `bdb` service. You
should get an output similar to:
```bash
0.0.0.0:32772
```
but with a port different from `32772`.
Knowing the public port, we can now perform a simple `GET` operation against the
root:
```bash
curl 0.0.0.0:32772
```
## Option C: Using a Dev Machine on Cloud9
Ian Worrall of [Encrypted Labs](http://www.encryptedlabs.com/) wrote a document (PDF) explaining how to set up a BigchainDB (Server) dev machine on Cloud9:
[Download that document from GitHub](https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/docs/server/source/_static/cloud9.pdf)

View File

@ -23,6 +23,6 @@ Community-Driven Libraries and Tools
* `Haskell transaction builder <https://github.com/bigchaindb/bigchaindb-hs>`_
* `Go driver <https://github.com/zbo14/envoke/blob/master/bigchain/bigchain.go>`_
* `Java driver <https://github.com/mgrand/bigchaindb-java-driver>`_
* `Java driver <https://github.com/authenteq/java-bigchaindb-driver>`_
* `Ruby driver <https://github.com/LicenseRocks/bigchaindb_ruby>`_
* `Ruby library for preparing/signing transactions and submitting them or querying a BigchainDB/IPDB node (MIT licensed) <https://rubygems.org/gems/bigchaindb>`_

View File

@ -40,11 +40,14 @@ response contains a ``streams`` property:
Connection Keep-Alive
---------------------
The Event Stream API initially does not provide any mechanisms for connection
keep-alive other than enabling TCP keepalive on each open WebSocket connection.
In the future, we may add additional functionality to handle ping/pong frames
or payloads designed for keep-alive.
The Event Stream API supports Ping/Pong frames as described in
`RFC 6455 <https://tools.ietf.org/html/rfc6455#section-5.5.2>`_.
.. note::
   It might not be possible to send Ping/Pong frames from web browsers,
   because browsers do not expose a JavaScript API for sending them.
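For example, with the ``websockets`` Python package, a client can send its own Ping frames while listening to a stream. This is a sketch, assuming a local node and its ``valid_transactions`` stream:

.. code-block:: python

    import asyncio

    import websockets  # pip install websockets

    async def listen(uri):
        async with websockets.connect(uri) as ws:
            pong_waiter = await ws.ping()  # send a Ping frame (RFC 6455)
            await pong_waiter              # resolves when the Pong arrives
            print(await ws.recv())         # then read the next event

    asyncio.get_event_loop().run_until_complete(
        listen('ws://localhost:9985/api/v1/streams/valid_transactions'))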
Streams
-------

View File

@ -0,0 +1,19 @@
Glossary
========
.. glossary::
:sorted:
associative array
A collection of key/value (or name/value) pairs
such that each possible key appears at most once
in the collection.
In JavaScript (and JSON), all objects behave as associative arrays
with string-valued keys.
In Python and .NET, associative arrays are called *dictionaries*.
In Java and Go, they are called *maps*.
In Ruby, they are called *hashes*.
See also: Wikipedia's articles for
`Associative array <https://en.wikipedia.org/wiki/Associative_array>`_
and
`Comparison of programming languages (associative array) <https://en.wikipedia.org/wiki/Comparison_of_programming_languages_(associative_array)>`_

View File

@ -452,6 +452,118 @@ Assets
text search.
Transaction Metadata
--------------------------------
.. http:get:: /api/v1/metadata
Return all the metadata that match a given text search.
:query string text search: Text search string to query.
:query int limit: (Optional) Limit the number of returned metadata objects. Defaults
to ``0``, meaning return all matching objects.
.. note::
Currently this endpoint is only supported if the server is running
MongoDB as the backend.
.. http:get:: /api/v1/metadata/?search={text_search}
Return all metadata that match a given text search. The ``id`` of the metadata
is the same as the ``id`` of the transaction where it was defined.
If no metadata match the text search, it returns an empty list.
If the text string is empty or the server does not support text search,
a ``400`` is returned.
The results are sorted by text score.
For more information about the behavior of text search, see the `MongoDB text
search behavior <https://docs.mongodb.com/manual/reference/operator/query/text/#behavior>`_ documentation.
**Example request**:
.. sourcecode:: http
GET /api/v1/metadata/?search=bigchaindb HTTP/1.1
Host: example.com
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-type: application/json
[
{
"metadata": {"metakey1": "Hello BigchainDB 1!"},
"id": "51ce82a14ca274d43e4992bbce41f6fdeb755f846e48e710a3bbb3b0cf8e4204"
},
{
"metadata": {"metakey2": "Hello BigchainDB 2!"},
"id": "b4e9005fa494d20e503d916fa87b74fe61c079afccd6e084260674159795ee31"
},
{
"metadata": {"metakey3": "Hello BigchainDB 3!"},
"id": "fa6bcb6a8fdea3dc2a860fcdc0e0c63c9cf5b25da8b02a4db4fb6a2d36d27791"
}
]
:resheader Content-Type: ``application/json``
:statuscode 200: The query was executed successfully.
:statuscode 400: The query was not executed successfully. Returned if the
text string is empty or the server does not support
text search.
.. http:get:: /api/v1/metadata/?search={text_search}&limit={n_documents}
Return at most ``n`` metadata objects that match a given text search.
If no metadata match the text search, it returns an empty list.
If the text string is empty or the server does not support text search,
a ``400`` is returned.
The results are sorted by text score.
For more information about the behavior of text search, see the `MongoDB text
search behavior <https://docs.mongodb.com/manual/reference/operator/query/text/#behavior>`_ documentation.
**Example request**:
.. sourcecode:: http
GET /api/v1/metadata/?search=bigchaindb&limit=2 HTTP/1.1
Host: example.com
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-type: application/json
[
{
"metadata": {"msg": "Hello BigchainDB 1!"},
"id": "51ce82a14ca274d43e4992bbce41f6fdeb755f846e48e710a3bbb3b0cf8e4204"
},
{
"metadata": {"msg": "Hello BigchainDB 2!"},
"id": "b4e9005fa494d20e503d916fa87b74fe61c079afccd6e084260674159795ee31"
}
]
:resheader Content-Type: ``application/json``
:statuscode 200: The query was executed successfully.
:statuscode 400: The query was not executed successfully. Returned if the
text string is empty or the server does not support
text search.
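The same query can, of course, be issued programmatically. A sketch with the Python ``requests`` package, assuming a node at ``http://localhost:9984``:

.. code-block:: python

    import requests

    resp = requests.get('http://localhost:9984/api/v1/metadata',
                        params={'search': 'bigchaindb', 'limit': 2})
    resp.raise_for_status()  # raises on a 400 (e.g. empty search string)
    for item in resp.json():
        print(item['id'], item['metadata'])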
Advanced Usage
--------------------------------

View File

@ -16,7 +16,6 @@ BigchainDB Server Documentation
events/index
drivers-clients/index
data-models/index
schema/transaction
schema/vote
release-notes
glossary
appendices/index

View File

@ -15,14 +15,13 @@ Note that there are a few kinds of nodes:
## Setup Instructions for Various Cases
* [Set up a local stand-alone BigchainDB node for learning and experimenting: Quickstart](quickstart.html)
* [Set up and run a local dev/test node for developing and testing BigchainDB Server](dev-and-test/setup-run-node.html)
* [Quickstart](quickstart.html)
* [Set up a local BigchainDB node for development, experimenting and testing](dev-and-test/index.html)
* [Set up and run a BigchainDB cluster](clusters.html)
There are some old RethinkDB-based deployment instructions as well:
* [Deploy a bare-bones RethinkDB-based node on Azure](appendices/azure-quickstart-template.html)
* [Deploy a bare-bones RethinkDB-based node on any Ubuntu machine with Ansible](appendices/template-ansible.html)
* [Deploy a RethinkDB-based testing cluster on AWS](appendices/aws-testing-cluster.html)
Instructions for setting up a client will be provided once there's a public test net.

View File

@ -0,0 +1,75 @@
Architecture of an IPDB Node
============================
An IPDB Production deployment is hosted on a Kubernetes cluster and includes:
* NGINX, OpenResty, BigchainDB and MongoDB
`Kubernetes Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.
* NGINX, OpenResty, BigchainDB, Monitoring Agent and Backup Agent
`Kubernetes Deployments <https://kubernetes.io/docs/concepts/workloads/controllers/deployment/>`_.
* MongoDB `Kubernetes StatefulSet <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/>`_.
* Third party services like `3scale <https://3scale.net>`_,
`MongoDB Cloud Manager <https://cloud.mongodb.com>`_ and the
`Azure Operations Management Suite
<https://docs.microsoft.com/en-us/azure/operations-management-suite/>`_.
.. image:: ../_static/arch.jpg
.. note::
The arrows in the diagram represent the client-server communication. For
example, A-->B implies that A initiates the connection to B.
They do not represent the flow of data; the communication channel is always
full duplex.
NGINX
-----
We use NGINX as an HTTP proxy on port 443 (configurable) at the cloud
entrypoint for:
#. Rate Limiting: We configure NGINX to allow only a certain number of requests
(configurable) which prevents DoS attacks.
#. HTTPS Termination: The HTTPS connection does not carry through all the way
to BigchainDB and terminates at NGINX for now.
#. Request Routing: For HTTPS connections on port 443 (or the configured BigchainDB public api port),
the connection is proxied to:
#. OpenResty Service if it is a POST request.
#. BigchainDB Service if it is a GET request.
We use an NGINX TCP proxy on port 27017 (configurable) at the cloud
entrypoint for:
#. Rate Limiting: We configure NGINX to allow only a certain number of requests
(configurable) which prevents DoS attacks.
#. Request Routing: For connections on port 27017 (or the configured MongoDB
public api port), the connection is proxied to the MongoDB Service.
OpenResty
---------
We use `OpenResty <https://openresty.org/>`_ to perform authorization checks
with 3scale using the ``app_id`` and ``app_key`` headers in the HTTP request.
OpenResty is NGINX plus a bunch of other
`components <https://openresty.org/en/components.html>`_. We primarily depend
on the LuaJIT compiler to execute the functions to authenticate the ``app_id``
and ``app_key`` with the 3scale backend.
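For example, a write request routed through OpenResty carries the 3scale
credentials as HTTP headers; a rough sketch, where the host, credentials and
payload file are placeholders:
.. code:: bash
# POSTs are proxied to OpenResty, which validates app_id/app_key with 3scale
curl -s -X POST "https://example.com/api/v1/transactions/" \
-H "app_id: <your-app-id>" \
-H "app_key: <your-app-key>" \
-H "Content-Type: application/json" \
-d @signed_tx.json  # placeholder for a signed transaction payload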
MongoDB
-------
We use MongoDB as the backend database for BigchainDB.
In a multi-node deployment, MongoDB members communicate with each other via the
public port exposed by the NGINX Service.
We achieve security by mitigating DoS attacks at the NGINX proxy layer and by
ensuring that MongoDB has TLS enabled for all its connections.

View File

@ -92,7 +92,7 @@ consolidated file containing both the public and private keys.
.. code:: bash
cat /path/to/mdb-instance-0.crt /path/to/mdb-instance-0.key > mdb-instance-0.pem
cat /path/to/bdb-instance-0.crt /path/to/bdb-instance-0.key > bdb-instance-0.pem
OR

View File

@ -16,7 +16,7 @@ Configure MongoDB Cloud Manager for Monitoring
* Select the group from the dropdown box on the page.
* Go to Settings, Group Settings and add a ``Preferred Hostnames`` entry as
* Go to Settings and add a ``Preferred Hostnames`` entry as
a regexp based on the ``mdb-instance-name`` of the nodes in your cluster.
It may take up to 5 minutes for this setting to take effect.
You may refresh the browser window and verify whether the changes have

View File

@ -28,3 +28,5 @@ Feel free change things to suit your needs or preferences.
add-node-on-kubernetes
restore-from-mongodb-cloud-manager
tectonic-azure
troubleshoot
architecture

View File

@ -322,6 +322,18 @@ Step 9.1: Vanilla NGINX
``cluster-health-check-port``. Set them to the values specified in the
ConfigMap.
* The configuration uses the following values set in the ConfigMap:
- ``cluster-frontend-port``
- ``cluster-health-check-port``
- ``cluster-dns-server-ip``
- ``mongodb-frontend-port``
- ``ngx-mdb-instance-name``
- ``mongodb-backend-port``
- ``ngx-bdb-instance-name``
- ``bigchaindb-api-port``
- ``bigchaindb-ws-port``
* Start the Kubernetes Deployment:
.. code:: bash
@ -346,6 +358,25 @@ Step 9.2: NGINX with HTTPS
``cluster-health-check-port``. Set them to the values specified in the
ConfigMap.
* The configuration uses the following values set in the ConfigMap:
- ``cluster-frontend-port``
- ``cluster-health-check-port``
- ``cluster-fqdn``
- ``cluster-dns-server-ip``
- ``mongodb-frontend-port``
- ``ngx-mdb-instance-name``
- ``mongodb-backend-port``
- ``openresty-backend-port``
- ``ngx-openresty-instance-name``
- ``ngx-bdb-instance-name``
- ``bigchaindb-api-port``
- ``bigchaindb-ws-port``
* The configuration uses the following values set in the Secret:
- ``https-certs``
* Start the Kubernetes Deployment:
.. code:: bash
@ -383,8 +414,8 @@ First, you need an Azure storage account.
If you deployed your Kubernetes cluster on Azure
using the Azure CLI 2.0
(as per :doc:`our template <template-kubernetes-azure>`),
then the `az acs create` command already created two
storage accounts in the same location and resource group
then the `az acs create` command already created a
storage account in the same location and resource group
as your Kubernetes cluster.
Both should have the same "storage account SKU": ``Standard_LRS``.
Standard storage is lower-cost and lower-performance.
@ -393,13 +424,14 @@ LRS means locally-redundant storage: three replicas
in the same data center.
Premium storage is higher-cost and higher-performance.
It uses solid state drives (SSD).
At the time of writing,
when we created a storage account with SKU ``Premium_LRS``
and tried to use that,
the PersistentVolumeClaim would get stuck in a "Pending" state.
You can create a `storage account <https://docs.microsoft.com/en-us/azure/storage/common/storage-create-storage-account>`_
for Premium storage and associate it with your Azure resource group.
For future reference, the command to create a storage account is
`az storage account create <https://docs.microsoft.com/en-us/cli/azure/storage/account#create>`_.
.. Note::
Please refer to `Azure documentation <https://docs.microsoft.com/en-us/azure/virtual-machines/windows/premium-storage>`_
for the list of VMs that are supported by Premium Storage.
The Kubernetes template for configuration of Storage Class is located in the
file ``mongodb/mongo-sc.yaml``.
@ -407,6 +439,10 @@ file ``mongodb/mongo-sc.yaml``.
You may have to update the ``parameters.location`` field in the file to
specify the location you are using in Azure.
If you want to use a custom storage account with the Storage Class, you
can also update `parameters.storageAccount` and provide the Azure storage
account name.
Create the required storage classes using:
.. code:: bash
@ -416,15 +452,6 @@ Create the required storage classes using:
You can check if it worked using ``kubectl get storageclasses``.
**Azure.** Note that there is no line of the form
``storageAccount: <azure storage account name>``
under ``parameters:``. When we included one
and then created a PersistentVolumeClaim based on it,
the PersistentVolumeClaim would get stuck
in a "Pending" state.
Kubernetes just looks for a storageAccount
with the specified skuName and location.
Step 11: Create Kubernetes Persistent Volume Claims
---------------------------------------------------
@ -457,6 +484,27 @@ You can check its status using: ``kubectl get pvc -w``
Initially, the status of persistent volume claims might be "Pending"
but it should become "Bound" fairly quickly.
.. Note::
The default Reclaim Policy for dynamically created persistent volumes is ``Delete``,
which means the PV and its associated Azure storage resource will be automatically
deleted when the PVC or PV is deleted. To prevent this from happening, take
the following steps to change the default reclaim policy of dynamically created PVs
from ``Delete`` to ``Retain``:
* Run the following command to list existing PVs
.. Code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 get pv
* Run the following command to update a PV's reclaim policy to ``Retain``:
.. Code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
For notes on recreating a persistent volume from a released Azure disk resource, consult
:ref:`the page about cluster troubleshooting <Cluster Troubleshooting>`.
Step 12: Start a Kubernetes StatefulSet for MongoDB
---------------------------------------------------
@ -500,6 +548,30 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB
backend port. Set it to the value specified for ``mongodb-backend-port``
in the ConfigMap.
* The configuration uses the following values set in the ConfigMap:
- ``mdb-instance-name``
- ``mongodb-replicaset-name``
- ``mongodb-backend-port``
* The configuration uses the following values set in the Secret:
- ``mdb-certs``
- ``ca-auth``
* **Optional**: You can change the value for ``STORAGE_ENGINE_CACHE_SIZE`` in the ConfigMap ``storage-engine-cache-size``. For more information
regarding this configuration, please consult the `MongoDB Official
Documentation <https://docs.mongodb.com/manual/reference/configuration-options/#storage.wiredTiger.engineConfig.cacheSizeGB>`_.
* **Optional**: If you are not using the **Standard_D2_v2** virtual machines for Kubernetes agents as per the guide,
please update the ``resources`` for ``mongo-ss``. We suggest allocating ``memory`` using the following scheme
for a MongoDB StatefulSet:
.. code:: bash
memory = (Total_Memory_Agent_VM_GB - 2GB)
STORAGE_ENGINE_CACHE_SIZE = memory / 2
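A worked example, assuming **Standard_D2_v2** agents with roughly 7GB of RAM:
.. code:: bash
# Illustrative arithmetic only; plug in your agent VM's actual RAM.
# memory = 7GB - 2GB = 5G
# STORAGE_ENGINE_CACHE_SIZE = 5G / 2 = 2.5G, i.e. "2560MB"
# (the value must be an integer with an MB/GB/TB suffix)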
* Create the MongoDB StatefulSet using:
.. code:: bash
@ -661,6 +733,12 @@ Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent
``mdb-mon-instance-name`` is ``mdb-mon-instance-0``, set the fields to the
value ``mdb-mon-instance-0-dep``.
* The configuration uses the following values set in the Secret:
- ``mdb-mon-certs``
- ``ca-auth``
- ``cloud-manager-credentials``
* Start the Kubernetes Deployment using:
.. code:: bash
@ -682,6 +760,12 @@ Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent
``mdb-bak-instance-name`` is ``mdb-bak-instance-0``, set the fields to the
value ``mdb-bak-instance-0-dep``.
* The configuration uses the following values set in the Secret:
- ``mdb-bak-certs``
- ``ca-auth``
- ``cloud-manager-credentials``
* Start the Kubernetes Deployment using:
.. code:: bash
@ -714,10 +798,34 @@ Step 16: Start a Kubernetes Deployment for BigchainDB
richer monitoring and probing becomes available in BigchainDB, we will
tweak the ``livenessProbe`` and ``readinessProbe`` parameters.
* Set the ports to be exposed from the pod in the
``spec.containers[0].ports`` section. We currently expose 2 ports -
``bigchaindb-api-port`` and ``bigchaindb-ws-port``. Set them to the
values specified in the ConfigMap.
* Set the ports to be exposed from the pod in the
``spec.containers[0].ports`` section. We currently expose 2 ports -
``bigchaindb-api-port`` and ``bigchaindb-ws-port``. Set them to the
values specified in the ConfigMap.
* The configuration uses the following values set in the ConfigMap:
- ``mdb-instance-name``
- ``mongodb-backend-port``
- ``mongodb-replicaset-name``
- ``bigchaindb-database-name``
- ``bigchaindb-server-bind``
- ``bigchaindb-ws-interface``
- ``cluster-fqdn``
- ``bigchaindb-ws-port``
- ``cluster-frontend-port``
- ``bigchaindb-wsserver-advertised-scheme``
- ``bdb-public-key``
- ``bigchaindb-backlog-reassign-delay``
- ``bigchaindb-database-maxtries``
- ``bigchaindb-database-connection-timeout``
- ``bigchaindb-log-level``
- ``bdb-user``
* The configuration uses the following values set in the Secret:
- ``bdb-certs``
- ``ca-auth``
* Create the BigchainDB Deployment using:
@ -747,6 +855,17 @@ Step 17: Start a Kubernetes Deployment for OpenResty
which OpenResty is listening for requests, ``openresty-backend-port`` in
the above ConfigMap.
* The configuration uses the following values set in the Secret:
- ``threescale-credentials``
* The configuration uses the following values set in the ConfigMap:
- ``cluster-dns-server-ip``
- ``openresty-backend-port``
- ``ngx-bdb-instance-name``
- ``bigchaindb-api-port``
* Create the OpenResty Deployment using:
.. code:: bash

View File

@ -47,7 +47,9 @@ when following the steps above:
``tectonic-cluster-CLUSTER``.
#. Set the ``tectonic_base_domain`` to ``""`` if you want to use Azure managed
DNS. You will be assigned a ``cloudapp.azure.com`` sub-domain by default.
DNS. You will be assigned a ``cloudapp.azure.com`` sub-domain by default and
you can skip the ``Configuring Azure DNS`` section from the Tectonic installation
guide.
#. Set the ``tectonic_cl_channel`` to ``"stable"`` unless you want to
experiment or test with the latest release.
@ -76,6 +78,14 @@ when following the steps above:
#. Set the ``tectonic_azure_ssh_key`` to the path of the public key created in
the previous step.
#. We recommend setting up or using a CA (Certificate Authority) to generate the Tectonic
Console's server certificate(s) and adding it to your trusted authorities on the client
side accessing the Tectonic Console, i.e. the browser. If you already have a CA (self-signed or otherwise),
set the ``tectonic_ca_cert`` and ``tectonic_ca_key`` configurations with the content
of the PEM-encoded certificate and key files, respectively. For more information about
how to set up a self-signed CA, please refer to
:doc:`How to Set up self-signed CA <ca-installation>` (a minimal sketch follows this list).
#. Note that the ``tectonic_azure_client_secret`` is the same as the
``ARM_CLIENT_SECRET``.
@ -85,6 +95,10 @@ when following the steps above:
``test-cluster`` and specified the datacenter as ``westeurope``, the Tectonic
console will be available at ``test-cluster.westeurope.cloudapp.azure.com``.
#. Note that if you do not specify ``tectonic_ca_cert``, a CA certificate will
be generated automatically and you will encounter an untrusted certificate
message on your client (browser) when accessing the Tectonic Console.
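A minimal sketch of generating such a self-signed CA with ``openssl`` (file
names and the subject are illustrative; the linked guide above is authoritative):
.. code:: bash
# Create a self-signed CA certificate and key (illustrative values)
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
-keyout tectonic-ca.key -out tectonic-ca.crt \
-subj "/CN=Tectonic Test CA"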
Step 4: Configure kubectl
-------------------------

View File

@ -105,6 +105,21 @@ Finally, you can deploy an ACS using something like:
--orchestrator-type kubernetes \
--debug --output json
.. Note::
Please refer to `Azure documentation <https://docs.microsoft.com/en-us/cli/azure/acs?view=azure-cli-latest#az_acs_create>`_
for a comprehensive list of options available for `az acs create`.
Please tune the following parameters as per your requirements (see the sketch after this list):
* Master count.
* Agent count.
* Agent VM size.
* **Optional**: Master storage profile.
* **Optional**: Agent storage profile.
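For example, a hedged sketch of these flags (counts, sizes and names are
illustrative placeholders, not recommendations):
.. code:: bash
az acs create --name <cluster-name> \
--resource-group <resource-group> \
--master-count 3 \
--agent-count 4 \
--agent-vm-size Standard_D2_v2 \
--orchestrator-type kubernetes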
There are more options. For help understanding all the options, use the built-in help:

View File

@ -0,0 +1,139 @@
Cluster Troubleshooting
=======================
This page describes some basic issues we have faced while deploying and
operating the cluster.
1. MongoDB Restarts
-------------------
We define the following in the ``mongo-ss.yaml`` file:
.. code:: bash
resources:
limits:
cpu: 200m
memory: 5G
When the MongoDB cache occupies more than 5GB of memory, the container is
terminated by the ``kubelet``.
This can usually be verified by logging in to the worker node running the MongoDB
container and looking at the syslog (the ``journalctl`` command should usually
work).
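A rough sketch of what to look for (run on the affected worker node; the exact
log text varies):
.. code:: bash
# The kernel log usually records the OOM kill of the MongoDB container
journalctl -k | grep -i -e "out of memory" -e "killed process"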
This issue is resolved in
`PR #1757 <https://github.com/bigchaindb/bigchaindb/pull/1757>`_.
2. 502 Bad Gateway Error on Runscope Tests
------------------------------------------
It means that NGINX could not find the appropriate backend to forward the
requests to. This typically happens when:
#. MongoDB goes down (as described above) and BigchainDB, after trying for
``BIGCHAINDB_DATABASE_MAXTRIES`` times, gives up. The Kubernetes BigchainDB
Deployment then restarts the BigchainDB pod.
#. BigchainDB crashes for some reason. We have seen this happen when updating
BigchainDB from one version to the next. This usually means the older
connections to the service get disconnected; retrying the request once
more forwards the connection to the new instance and succeeds.
3. Service Unreachable
----------------------
Communication between Kubernetes Services and Deployments fails in
Kubernetes v1.6.6 and earlier due to a trivial key lookup error for non-existent
services in the ``kubelet``.
This error can be reproduced by restarting any public-facing Kubernetes Service
(that is, one using the cloud load balancer) and watching the
``kube-proxy`` failures in its logs.
The solution to this problem is to restart ``kube-proxy`` on the affected
worker/agent node. Log in to the worker node and run:
.. code:: bash
docker stop `docker ps | grep k8s_kube-proxy | cut -d" " -f1`
docker logs -f `docker ps | grep k8s_kube-proxy | cut -d" " -f1`
`This issue <https://github.com/kubernetes/kubernetes/issues/48705>`_ is
`fixed in Kubernetes v1.7 <https://github.com/kubernetes/kubernetes/commit/41c4e965c353187889f9b86c3e541b775656dc18>`_.
4. Single Disk Attached to Multiple Mountpoints in a Container
--------------------------------------------------------------
This issue is currently being faced in one of the clusters and is being debugged
by the support team at Microsoft.
The issue was first seen on August 29, 2017 on the Test Network and has been
logged in the `Azure/acs-engine repo on GitHub <https://github.com/Azure/acs-engine/issues/1364>`_.
This is apparently fixed in Kubernetes v1.7.2, which includes a new disk driver,
but we have yet to test it.
5. MongoDB Monitoring Agent throws a dial error while connecting to MongoDB
---------------------------------------------------------------------------
You might see something similar to this in the MongoDB Monitoring Agent logs:
.. code:: bash
Failure dialing host without auth. Err: `no reachable servers`
at monitoring-agent/components/dialing.go:278
at monitoring-agent/components/dialing.go:116
at monitoring-agent/components/dialing.go:213
at src/runtime/asm_amd64.s:2086
The first thing to check is whether the networking is set up correctly; you can
check this from inside the cluster (maybe using the `toolbox` container).
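A rough sketch of such a check, where the host name is a placeholder for your
MongoDB instance:
.. code:: bash
# From a pod inside the cluster, e.g. a toolbox container
ping -c 3 <mdb-instance-name>
# or probe the MongoDB backend port directly
nc -zv <mdb-instance-name> 27017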
If everything looks fine, it might be a problem with the ``Preferred
Hostnames`` setting in MongoDB Cloud Manager. If you do need to change the
regular expression, ensure that it is correct and saved properly (maybe try
refreshing the MongoDB Cloud Manager web page to see if the setting sticks).
Once you update the regular expression, you will need to remove the deployment
and add it again for the Monitoring Agent to discover and connect to the
MongoDB instance correctly.
More information about this configuration is provided in
:doc:`this document <cloud-manager>`.
6. Create a Persistent Volume from existing Azure disk storage Resource
---------------------------------------------------------------------------
When deleting a k8s cluster, all dynamically created PVs are deleted, along with
the underlying Azure storage disks, so those disks cannot be used in a new
cluster. This workflow will preserve the Azure storage disks while deleting the
k8s cluster and re-use the same disks on a new cluster for MongoDB persistent
storage without losing any data.
The template to create two PVs for the MongoDB StatefulSet (one for the MongoDB
data store and the other for the MongoDB config store) is located at ``mongodb/mongo-pv.yaml``.
You need to configure ``diskName`` and ``diskURI`` in the ``mongodb/mongo-pv.yaml`` file. You can get
these values by logging in to your Azure portal, going to ``Resource Groups`` and clicking on your
relevant resource group. From the list of resources, click on the storage account resource and
open the container (usually named ``vhds``) that holds the storage disk blobs available
for PVs. Click on the storage disk file that you wish to use for your PV; you will see its
``NAME`` and ``URL`` parameters, which you can use as the ``diskName`` and ``diskURI`` values in
your template, respectively. Then run the following command to create the PVs:
.. code:: bash
$ kubectl --context <context-name> apply -f mongodb/mongo-pv.yaml
.. note::
Please make sure the storage disks you are using are not already in use by any
other PVs. To check the existing PVs in your cluster, run the following command
to get the mapping between PVs and storage disk files.
.. code:: bash
$ kubectl --context <context-name> get pv --output yaml

View File

@ -110,13 +110,13 @@ secret token, service ID, version header and API service token.
☐ If the cluster uses MongoDB Cloud Manager for monitoring and backup,
you must ask the managing organization for the ``Group ID`` and the
you must ask the managing organization for the ``Project ID`` and the
``Agent API Key``.
(Each Cloud Manager "group" has its own ``Group ID``. A ``Group ID`` can
(Each Cloud Manager "Project" has its own ``Project ID``. A ``Project ID`` can
contain a number of ``Agent API Key`` s. It can be found under
**Settings - Group Settings**. It was recently added to the Cloud Manager to
**Settings**. It was recently added to the Cloud Manager to
allow easier periodic rotation of the ``Agent API Key`` with a constant
``Group ID``)
``Project ID``)
:doc:`Deploy a Kubernetes cluster on Azure <template-kubernetes-azure>`.

View File

@ -1,6 +1,6 @@
# Quickstart
This page has instructions to set up a single stand-alone BigchainDB node for learning or experimenting. Instructions for other cases are [elsewhere](introduction.html). We will assume you're using Ubuntu 16.04 or similar. If you're not using Linux, then you might try [running BigchainDB with Docker](appendices/run-with-docker.html).
This page has instructions to set up a single stand-alone BigchainDB node for learning or experimenting. Instructions for other cases are [elsewhere](introduction.html). We will assume you're using Ubuntu 16.04 or similar. You can also try [running BigchainDB with Docker](appendices/run-with-docker.html).
A. Install MongoDB as the database backend. (There are other options but you can ignore them for now.)
@ -58,9 +58,8 @@ $ bigchaindb start
```
J. Verify BigchainDB Server setup by visiting the BigchainDB Root URL in your browser:
```text
$ http://127.0.0.1:9984/
```
[http://127.0.0.1:9984/](http://127.0.0.1:9984/)
A correct installation will show you a JSON object with information about the API, docs, version and your public key.
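Alternatively, you can fetch the same JSON object from a terminal (assuming the server is running locally on the default port):
```text
$ curl http://127.0.0.1:9984/
```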

View File

@ -51,8 +51,6 @@ all database tables/collections,
various backend database indexes,
and the genesis block.
Note: The `bigchaindb start` command (see below) always starts by trying a `bigchaindb init` first. If it sees that the backend database already exists, then it doesn't re-initialize the database. One doesn't have to do `bigchaindb init` before `bigchaindb start`. `bigchaindb init` is useful if you only want to initialize (but not start).
## bigchaindb drop
@ -63,7 +61,7 @@ If you want to force-drop the database (i.e. skipping the yes/no prompt), then u
## bigchaindb start
Start BigchainDB. It always begins by trying a `bigchaindb init` first. See the note in the documentation for `bigchaindb init`.
Start BigchainDB. It always begins by trying a `bigchaindb init` first. See the note in the documentation for `bigchaindb init`. The database initialization step is optional and can be skipped by passing the `--no-init` flag, i.e. `bigchaindb start --no-init`.
You can also use the `--dev-start-rethinkdb` command line option to automatically start rethinkdb with bigchaindb if rethinkdb is not already running,
e.g. `bigchaindb --dev-start-rethinkdb start`. Note that this will also shut down rethinkdb when the bigchaindb process stops.
The option `--dev-allow-temp-keypair` will generate a keypair on the fly if no keypair is found; this is useful when you want to run a temporary instance of BigchainDB in a Docker container, for example.

View File

@ -39,6 +39,7 @@ For convenience, here's a list of all the relevant environment variables (docume
`BIGCHAINDB_LOG_FMT_CONSOLE`<br>
`BIGCHAINDB_LOG_FMT_LOGFILE`<br>
`BIGCHAINDB_LOG_GRANULAR_LEVELS`<br>
`BIGCHAINDB_LOG_PORT`<br>
`BIGCHAINDB_DATABASE_SSL`<br>
`BIGCHAINDB_DATABASE_LOGIN`<br>
`BIGCHAINDB_DATABASE_PASSWORD`<br>
@ -319,7 +320,8 @@ holding the logging configuration.
"granular_levels": {
"bichaindb.backend": "info",
"bichaindb.core": "info"
}
},
"port": 7070
}
```
@ -336,7 +338,8 @@ holding the logging configuration.
"datefmt_logfile": "%Y-%m-%d %H:%M:%S",
"fmt_logfile": "[%(asctime)s] [%(levelname)s] (%(name)s) %(message)s (%(processName)-10s - pid: %(process)d)",
"fmt_console": "[%(asctime)s] [%(levelname)s] (%(name)s) %(message)s (%(processName)-10s - pid: %(process)d)",
"granular_levels": {}
"granular_levels": {},
"port": 9020
}
```
@ -530,7 +533,23 @@ logging of the `core.py` module to be more verbose, you would set the
}
```
**Defaults to**: `"{}"`
**Defaults to**: `{}`
### log.port
The port number at which the logging server should listen.
**Example**:
```
{
"log": {
"port": 7070
}
}
```
**Defaults to**: `9020`
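The same value can also be supplied via the corresponding environment variable listed earlier in this document:
```
export BIGCHAINDB_LOG_PORT=7070
```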
## graphite.host

View File

@ -12,7 +12,7 @@ spec:
terminationGracePeriodSeconds: 10
containers:
- name: bigchaindb
image: bigchaindb/bigchaindb:1.0.1
image: bigchaindb/bigchaindb:1.3.0
imagePullPolicy: IfNotPresent
args:
- start
@ -158,4 +158,4 @@ spec:
- name: ca-auth
secret:
secretName: ca-auth
defaultMode: 0400
defaultMode: 0400

View File

@ -99,6 +99,11 @@ data:
# WebSocket API in BigchainDB; can be 'ws' or 'wss' (default).
bigchaindb-wsserver-advertised-scheme: "wss"
# Optional: Optimize the storage engine (WiredTiger)
# cache size, e.g. 2048MB, 2GB, 1TB; otherwise
# it will use the default cache size, i.e. max((50% RAM - 1GB), 256MB)
storage-engine-cache-size: ""
---
apiVersion: v1
kind: ConfigMap

View File

@ -14,9 +14,9 @@ metadata:
namespace: default
type: Opaque
data:
# Base64-encoded Group ID
# Group ID used by MongoDB deployment
group-id: "<b64 encoded Group ID>"
# Base64-encoded Project ID
# Project ID used by MongoDB deployment
group-id: "<b64 encoded Project ID>"
# Base64-encoded MongoDB Agent API Key for the group
agent-api-key: "<b64 encoded Agent API Key>"
---

View File

@ -34,7 +34,7 @@ spec:
terminationGracePeriodSeconds: 10
containers:
- name: bigchaindb
image: bigchaindb/bigchaindb:1.0.0
image: bigchaindb/bigchaindb:1.3.0
imagePullPolicy: Always
args:
- start

View File

@ -1,5 +1,5 @@
#!/bin/bash
docker build -t bigchaindb/mongodb-backup-agent:3.4 .
docker build -t bigchaindb/mongodb-backup-agent:3.5 .
docker push bigchaindb/mongodb-backup-agent:3.4
docker push bigchaindb/mongodb-backup-agent:3.5

View File

@ -24,7 +24,7 @@ spec:
terminationGracePeriodSeconds: 10
containers:
- name: mdb-backup
image: bigchaindb/mongodb-backup-agent:3.4
image: bigchaindb/mongodb-backup-agent:3.5
imagePullPolicy: IfNotPresent
env:
- name: MMS_API_KEYFILE_PATH

View File

@ -1,5 +1,5 @@
#!/bin/bash
docker build -t bigchaindb/mongodb:3.1 .
docker build -t bigchaindb/mongodb:3.2 .
docker push bigchaindb/mongodb:3.1
docker push bigchaindb/mongodb:3.2

View File

@ -86,6 +86,7 @@ storage:
wiredTiger:
engineConfig:
journalCompressor: snappy
configString: cache_size=STORAGE_ENGINE_CACHE_SIZE
collectionConfig:
blockCompressor: snappy
indexConfig:
@ -98,4 +99,3 @@ operationProfiling:
replication:
replSetName: REPLICA_SET_NAME
enableMajorityReadConcern: true

View File

@ -46,6 +46,10 @@ while [[ $# -gt 1 ]]; do
MONGODB_IP="$2"
shift
;;
--storage-engine-cache-size)
STORAGE_ENGINE_CACHE_SIZE="$2"
shift
;;
*)
echo "Unknown option: $1"
exit 1
@ -61,7 +65,8 @@ if [[ -z "${REPLICA_SET_NAME:?REPLICA_SET_NAME not specified. Exiting!}" || \
-z "${MONGODB_IP:?MONGODB_IP not specified. Exiting!}" || \
-z "${MONGODB_KEY_FILE_PATH:?MONGODB_KEY_FILE_PATH not specified. Exiting!}" || \
-z "${MONGODB_CA_FILE_PATH:?MONGODB_CA_FILE_PATH not specified. Exiting!}" || \
-z "${MONGODB_CRL_FILE_PATH:?MONGODB_CRL_FILE_PATH not specified. Exiting!}" ]] ; then
-z "${MONGODB_CRL_FILE_PATH:?MONGODB_CRL_FILE_PATH not specified. Exiting!}" || \
-z "${STORAGE_ENGINE_CACHE_SIZE:=''}" ]] ; then
#-z "${MONGODB_KEY_FILE_PASSWORD:?MongoDB Key File Password not specified. Exiting!}" || \
exit 1
else
@ -72,6 +77,7 @@ else
echo MONGODB_KEY_FILE_PATH="$MONGODB_KEY_FILE_PATH"
echo MONGODB_CA_FILE_PATH="$MONGODB_CA_FILE_PATH"
echo MONGODB_CRL_FILE_PATH="$MONGODB_CRL_FILE_PATH"
echo STORAGE_ENGINE_CACHE_SIZE="$STORAGE_ENGINE_CACHE_SIZE"
fi
MONGODB_CONF_FILE_PATH=/etc/mongod.conf
@ -84,6 +90,16 @@ sed -i "s|MONGODB_KEY_FILE_PATH|${MONGODB_KEY_FILE_PATH}|g" ${MONGODB_CONF_FILE_
sed -i "s|MONGODB_CA_FILE_PATH|${MONGODB_CA_FILE_PATH}|g" ${MONGODB_CONF_FILE_PATH}
sed -i "s|MONGODB_CRL_FILE_PATH|${MONGODB_CRL_FILE_PATH}|g" ${MONGODB_CONF_FILE_PATH}
sed -i "s|REPLICA_SET_NAME|${REPLICA_SET_NAME}|g" ${MONGODB_CONF_FILE_PATH}
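# Valid cache sizes match ^[0-9]+(G|M|T)B$ below, e.g. 256MB, 2GB or 1TB.
# An empty value removes the cache_size line, so MongoDB falls back to its
# default cache size, i.e. max((50% RAM - 1GB), 256MB).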
if [ ! -z "$STORAGE_ENGINE_CACHE_SIZE" ]; then
if [[ "$STORAGE_ENGINE_CACHE_SIZE" =~ ^[0-9]+(G|M|T)B$ ]]; then
sed -i.bk "s|STORAGE_ENGINE_CACHE_SIZE|${STORAGE_ENGINE_CACHE_SIZE}|g" ${MONGODB_CONF_FILE_PATH}
else
echo "Invalid Value for storage engine cache size $STORAGE_ENGINE_CACHE_SIZE"
exit 1
fi
else
sed -i.bk "/cache_size=/d" ${MONGODB_CONF_FILE_PATH}
fi
# add the hostname and ip to hosts file
echo "${MONGODB_IP} ${MONGODB_FQDN}" >> $HOSTS_FILE_PATH

k8s/mongodb/mongo-pv.yaml Normal file (41 lines)
View File

@ -0,0 +1,41 @@
#############################################################
# This YAML section describes a k8s PV for mongodb dbPath #
#############################################################
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-mongo-db
spec:
accessModes:
- ReadWriteOnce
azureDisk:
cachingMode: None
diskName: <Azure Disk Name>
diskURI: <Azure Disk URL>
fsType: ext4
readOnly: false
capacity:
storage: 50Gi
persistentVolumeReclaimPolicy: Retain
storageClassName: slow-db
---
#############################################################
# This YAML section describes a k8s PV for mongodb configDB #
#############################################################
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-mongo-configdb
spec:
accessModes:
- ReadWriteOnce
azureDisk:
cachingMode: None
diskName: <Azure Disk Name>
diskURI: <Azure Disk URL>
fsType: ext4
readOnly: false
capacity:
storage: 2Gi
persistentVolumeReclaimPolicy: Retain
storageClassName: slow-configdb

View File

@ -7,8 +7,12 @@ metadata:
name: slow-db
provisioner: kubernetes.io/azure-disk
parameters:
skuName: Standard_LRS
skuName: Premium_LRS #[Premium_LRS, Standard_LRS]
location: westeurope
# If you have created a different storage account e.g. for Premium Storage
#storageAccount: <Storage account name>
# Use Managed Disk(s) with VMs using Managed Disks(Only used for Tectonic deployment)
#kind: Managed
---
######################################################################
# This YAML section describes a StorageClass for the mongodb configDB #
@ -19,5 +23,9 @@ metadata:
name: slow-configdb
provisioner: kubernetes.io/azure-disk
parameters:
skuName: Standard_LRS
skuName: Premium_LRS #[Premium_LRS, Standard_LRS]
location: westeurope
# If you have created a different storage account e.g. for Premium Storage
#storageAccount: <Storage account name>
# Use Managed Disk(s) with VMs using Managed Disks(Only used for Tectonic deployment)
#kind: Managed

View File

@ -21,7 +21,7 @@ spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongodb
image: bigchaindb/mongodb:3.1
image: bigchaindb/mongodb:3.2
imagePullPolicy: IfNotPresent
env:
- name: MONGODB_FQDN
@ -43,6 +43,11 @@ spec:
configMapKeyRef:
name: vars
key: mongodb-backend-port
- name: STORAGE_ENGINE_CACHE_SIZE
valueFrom:
configMapKeyRef:
name: vars
key: storage-engine-cache-size
args:
- --mongodb-port
- $(MONGODB_PORT)
@ -58,6 +63,8 @@ spec:
- $(MONGODB_FQDN)
- --mongodb-ip
- $(MONGODB_POD_IP)
- --storage-engine-cache-size
- $(STORAGE_ENGINE_CACHE_SIZE)
securityContext:
capabilities:
add:
@ -80,7 +87,7 @@ spec:
resources:
limits:
cpu: 200m
memory: 3.5G
memory: 5G
livenessProbe:
tcpSocket:
port: mdb-api-port

View File

@ -1,4 +1,4 @@
FROM nginx:1.13.1
FROM nginx:stable
LABEL maintainer "dev@bigchaindb.com"
WORKDIR /
RUN apt-get update \

View File

@ -1,5 +1,5 @@
#!/bin/bash
docker build -t bigchaindb/nginx_http:1.0 .
docker build -t bigchaindb/nginx_http:1.1 .
docker push bigchaindb/nginx_http:1.0
docker push bigchaindb/nginx_http:1.1

View File

@ -45,6 +45,12 @@ http {
keepalive_timeout 60s;
# Do not expose nginx data/version number in error response and header
server_tokens off;
# To prevent cross-site scripting
add_header X-XSS-Protection "1; mode=block";
# The following map blocks enable lazy-binding to the backend at runtime,
# rather than binding as soon as NGINX starts.
map $remote_addr $bdb_backend {
@ -54,7 +60,6 @@ http {
# Frontend server for the external clients
server {
listen CLUSTER_FRONTEND_PORT;
underscores_in_headers on;
# Forward websockets to backend BDB at 9985.
@ -86,7 +91,7 @@ http {
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
add_header 'Access-Control-Expose-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
proxy_pass http://$bdb_backend:BIGCHAINDB_API_PORT;
}
@ -100,6 +105,11 @@ http {
add_header 'Content-Length' 0;
return 204;
}
# Only return this response if request_method is not one of GET|POST|OPTIONS
if ($request_method !~ ^(GET|OPTIONS|POST)$) {
return 444;
}
}
}
@ -130,10 +140,10 @@ stream {
# Enable logging when connections are being throttled.
limit_conn_log_level notice;
# Allow 16 connections from the same IP address.
limit_conn two 16;
# DNS resolver to use for all the backend names specified in this configuration.
resolver DNS_SERVER valid=30s ipv6=off;
@ -142,7 +152,7 @@ stream {
map $remote_addr $mdb_backend {
default MONGODB_BACKEND_HOST;
}
# Frontend server to forward connections to MDB instance.
server {
listen MONGODB_FRONTEND_PORT so_keepalive=10m:1m:5;

View File

@ -12,7 +12,7 @@ spec:
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: bigchaindb/nginx_http:1.0
image: bigchaindb/nginx_http:1.1
imagePullPolicy: IfNotPresent
env:
- name: CLUSTER_FRONTEND_PORT

View File

@ -1,5 +1,5 @@
#!/bin/bash
docker build -t bigchaindb/nginx-https-web-proxy:0.10 .
docker build -t bigchaindb/nginx-https-web-proxy:0.12 .
docker push bigchaindb/nginx-https-web-proxy:0.10
docker push bigchaindb/nginx-https-web-proxy:0.12

View File

@ -90,12 +90,6 @@ http {
end
}
# check if the request originated from the required web page
# use referer header.
if ($http_referer !~ "PROXY_EXPECTED_REFERER_HEADER" ) {
return 403 'Unknown referer';
}
# check if the request has the expected origin header
if ($http_origin !~ "PROXY_EXPECTED_ORIGIN_HEADER" ) {
return 403 'Unknown origin';
@ -108,9 +102,16 @@ http {
add_header 'Access-Control-Max-Age' 43200;
add_header 'Content-Type' 'text/plain charset=UTF-8';
add_header 'Content-Length' 0;
add_header 'Referrer-Policy' "PROXY_REFERRER_POLICY";
return 204;
}
# check if the request originated from the required web page
# use referer header.
if ($http_referer !~ "PROXY_EXPECTED_REFERER_HEADER" ) {
return 403 'Unknown referer';
}
# No auth for GETs, forward directly to BDB.
if ($request_method = GET) {
proxy_pass http://$bdb_backend:BIGCHAINDB_API_PORT;

View File

@ -49,6 +49,11 @@ data:
# are available to external clients.
proxy-frontend-port: "4443"
# proxy-referrer-policy defines the expected behaviour of the
# browser when setting the Referer header in HTTP requests to the
# proxy service.
proxy-referrer-policy: "origin-when-cross-origin"
# expected-http-referer is the expected regex expression of the Referer
# header in the HTTP requests to the proxy.
# The default below accepts the referrer value to be *.bigchaindb.com

View File

@ -25,6 +25,11 @@ spec:
configMapKeyRef:
name: proxy-vars
key: proxy-frontend-port
- name: PROXY_REFERRER_POLICY
valueFrom:
configMapKeyRef:
name: proxy-vars
key: proxy-referrer-policy
- name: PROXY_EXPECTED_REFERER_HEADER
valueFrom:
configMapKeyRef:

Some files were not shown because too many files have changed in this diff.