mirror of https://github.com/bigchaindb/bigchaindb.git synced 2024-06-23 17:56:41 +02:00

Merge branch 'master' into replace-pr-762

This commit is contained in:
Troy McConaghy 2017-02-03 12:05:13 +01:00 committed by GitHub
commit b01898aced
101 changed files with 2136 additions and 1069 deletions

.gitattributes vendored
View File

@ -1,6 +1,7 @@
benchmarking-tests export-ignore
deploy-cluster-aws export-ignore
docs export-ignore export-ignore
docs export-ignore
ntools export-ignore
speed-tests export-ignore
tests export-ignore
.gitattributes export-ignore

View File

@ -16,6 +16,27 @@ For reference, the possible headings are:
* **Notes**
## [0.8.2] - 2017-01-27
Tag name: v0.8.2
### Fixed
- Fix spending the same input twice in the same transaction
(https://github.com/bigchaindb/bigchaindb/issues/1099).
## [0.8.1] - 2017-01-16
Tag name: v0.8.1
= commit:
committed:
### Changed
- Upgrade pysha3 to 1.0.0 (supports official NIST standard).
### Fixed
- Workaround for rapidjson problem with package metadata extraction
(https://github.com/kenrobbins/python-rapidjson/pull/52).
## [0.8.0] - 2016-11-29
Tag name: v0.8.0
= commit:

View File

@ -11,9 +11,12 @@ RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
ENV LANG en_US.UTF-8
RUN apt-get -y install python3 python3-pip libffi-dev
RUN pip3 install --upgrade pip
RUN pip3 install --upgrade setuptools
# The `apt-get update` command executed with the install instructions should
# not use a locally cached storage layer. Force update the cache again.
# https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#run
RUN apt-get update && apt-get -y install python3 python3-pip libffi-dev \
&& pip3 install --upgrade pip \
&& pip3 install --upgrade setuptools
RUN mkdir -p /usr/src/app

View File

@ -65,12 +65,11 @@ coverage: ## check code coverage quickly with the default Python
$(BROWSER) htmlcov/index.html
docs: ## generate Sphinx HTML documentation, including API docs
rm -f docs/bigchaindb.rst
rm -f docs/modules.rst
sphinx-apidoc -o docs/ bigchaindb
$(MAKE) -C docs clean
$(MAKE) -C docs html
$(BROWSER) docs/_build/html/index.html
$(MAKE) -C docs/root clean
$(MAKE) -C docs/root html
$(MAKE) -C docs/server clean
$(MAKE) -C docs/server html
$(BROWSER) docs/root/_build/html/index.html
servedocs: docs ## compile the docs watching for changes
watchmedo shell-command -p '*.rst' -c '$(MAKE) -C docs html' -R -D .

View File

@ -1,23 +1,52 @@
# Our Release Process
This is a summary of the steps we go through to release a new version of BigchainDB Server.
The release process for BigchainDB Server differs slightly depending on whether it's a minor or a patch release.
1. Update the `CHANGELOG.md` file
1. Update the version numbers in `bigchaindb/version.py`. Note that we try to use [semantic versioning](http://semver.org/) (i.e. MAJOR.MINOR.PATCH)
1. Go to the [bigchaindb/bigchaindb Releases page on GitHub](https://github.com/bigchaindb/bigchaindb/releases)
and click the "Draft a new release" button
1. Name the tag something like v0.7.0
1. The target should be a specific commit: the one when the update of `bigchaindb/version.py` got merged into master
1. The release title should be something like v0.7.0
1. The description should be copied from the `CHANGELOG.md` file updated above
1. Generate and send the latest `bigchaindb` package to PyPI. Dimi and Sylvain can do this, maybe others
1. Login to readthedocs.org as a maintainer of the BigchainDB Server docs.
Go to Admin --> Versions and under **Choose Active Versions**, make sure that the new version's tag is
"Active" and "Public"
BigchainDB follows [semantic versioning](http://semver.org/) (i.e. MAJOR.MINOR.PATCH), taking into account
that [major version 0.x does not export a stable API](http://semver.org/#spec-item-4).
After the release:
## Minor release
1. Update `bigchaindb/version.py` again, to be something like 0.8.0.dev (with a dev on the end).
A minor release is preceded by a feature freeze and created from the 'master' branch. This is a summary of the steps we go through to release a new minor version of BigchainDB Server.
1. Update the `CHANGELOG.md` file in master
1. Create and check out a new branch for the release, named after the minor version without a preceding 'v', e.g. `git checkout -b 0.9`
1. Commit the changes and push the new branch to GitHub
1. Follow steps outlined in [Common Steps](#common-steps)
1. In the 'master' branch, edit `bigchaindb/version.py`, incrementing the minor version to the next planned release, e.g. `0.10.0.dev`.
This is so people reading the latest docs will know that they're for the latest (master branch)
version of BigchainDB Server, not the docs at the time of the most recent release (which are also
available).
Congratulations, you have released BigchainDB!
## Patch release
A patch release is similar to a minor release, but piggybacks on an existing minor release branch:
1. Check out the minor release branch
1. Apply the changes you want, e.g. using `git cherry-pick`.
1. Update the `CHANGELOG.md` file
1. Increment the patch version in `bigchaindb/version.py`, e.g. `0.9.1` (see the version-file sketch after the common steps)
1. Follow steps outlined in [Common Steps](#common-steps)
## Common steps
These steps are common between minor and patch releases:
1. Go to the [bigchaindb/bigchaindb Releases page on GitHub](https://github.com/bigchaindb/bigchaindb/releases)
and click the "Draft a new release" button
1. Fill in the details:
- Tag version: version number preceded by 'v', e.g. "v0.9.1"
- Target: the release branch that was just pushed
- Title: Same as tag name
- Description: The body of the changelog entry (Added, Changed etc)
1. Publish the release on GitHub
1. Generate the release tarball with `python setup.py sdist`. Upload the release to PyPI.
1. Login to readthedocs.org as a maintainer of the BigchainDB Server docs.
Go to Admin --> Versions and under **Choose Active Versions**, make sure that the new version's tag is
"Active" and "Public", and make sure the new version's branch
(without the 'v' in front) is _not_ active
1. Also in readthedocs.org, go to Admin --> Advanced Settings
and make sure that "Default branch:" (i.e. what "latest" points to)
is set to the new release's tag, e.g. `v0.9.1`. (Don't miss the 'v' in front.)
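
For orientation, here is a minimal sketch of what `bigchaindb/version.py` might contain after the 0.9.1 bump. The exact contents are an assumption, but both variable names appear elsewhere in this commit (`Transaction.VERSION` is derived from `__short_version__`):

```python
# bigchaindb/version.py -- hypothetical contents after a 0.9.1 patch release
__version__ = '0.9.1'
__short_version__ = '0.9.dev'  # Transaction.VERSION strips the trailing '.dev'
```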

View File

@ -1,13 +1,11 @@
import multiprocessing as mp
import uuid
import json
import argparse
import csv
import time
import logging
import rethinkdb as r
from os.path import expanduser
from bigchaindb.common.transaction import Transaction
from bigchaindb import Bigchain
@ -49,15 +47,6 @@ def run_add_backlog(args):
workers.start()
def run_set_statsd_host(args):
with open(expanduser('~') + '/.bigchaindb', 'r') as f:
conf = json.load(f)
conf['statsd']['host'] = args.statsd_host
with open(expanduser('~') + '/.bigchaindb', 'w') as f:
json.dump(conf, f)
def run_gather_metrics(args):
# setup a rethinkdb connection
conn = r.connect(args.bigchaindb_host, 28015, 'bigchain')
@ -145,14 +134,6 @@ def main():
default='minimal',
help='Payload size')
# set statsd host
statsd_parser = subparsers.add_parser('set-statsd-host',
help='Set statsd host')
statsd_parser.add_argument('statsd_host',
metavar='statsd_host',
default='localhost',
help='Hostname of the statsd server')
# metrics
metrics_parser = subparsers.add_parser('gather-metrics',
help='Gather metrics to a csv file')

View File

@ -28,14 +28,6 @@ def put_benchmark_utils():
put('benchmark_utils.py')
@task
@parallel
def set_statsd_host(statsd_host='localhost'):
run('python3 benchmark_utils.py set-statsd-host {}'.format(statsd_host))
print('update configuration')
run('bigchaindb show-config')
@task
@parallel
def prepare_backlog(num_transactions=10000):

View File

@ -15,7 +15,6 @@ Then:
```bash
fab put_benchmark_utils
fab set_statsd_host:<hostname of the statsd server>
fab prepare_backlog:<num txs per node> # wait for process to finish
fab start_bigchaindb
```

View File

@ -26,10 +26,6 @@ Entry point for the BigchainDB process, after initialization. All subprocesses
Methods for managing the configuration, including loading configuration files, automatically generating the configuration, and keeping the configuration consistent across BigchainDB instances.
### [`monitor.py`](./monitor.py)
Code for monitoring speed of various processes in BigchainDB via `statsd` and Grafana. [See documentation.](https://docs.bigchaindb.com/projects/server/en/latest/clusters-feds/monitoring.html)
## Folders
### [`pipelines`](./pipelines)

View File

@ -5,6 +5,25 @@ import os
# PORT_NUMBER = reduce(lambda x, y: x * y, map(ord, 'BigchainDB')) % 2**16
# basically, the port number is 9984
_database_rethinkdb = {
'backend': os.environ.get('BIGCHAINDB_DATABASE_BACKEND', 'rethinkdb'),
'host': os.environ.get('BIGCHAINDB_DATABASE_HOST', 'localhost'),
'port': int(os.environ.get('BIGCHAINDB_DATABASE_PORT', 28015)),
'name': os.environ.get('BIGCHAINDB_DATABASE_NAME', 'bigchain'),
}
_database_mongodb = {
'backend': os.environ.get('BIGCHAINDB_DATABASE_BACKEND', 'mongodb'),
'host': os.environ.get('BIGCHAINDB_DATABASE_HOST', 'localhost'),
'port': int(os.environ.get('BIGCHAINDB_DATABASE_PORT', 27017)),
'name': os.environ.get('BIGCHAINDB_DATABASE_NAME', 'bigchain'),
'replicaset': os.environ.get('BIGCHAINDB_DATABASE_REPLICASET', 'bigchain-rs'),
}
_database_map = {
'mongodb': _database_mongodb,
'rethinkdb': _database_rethinkdb
}
config = {
'server': {
@ -14,23 +33,14 @@ config = {
'workers': None, # if none, the value will be cpu_count * 2 + 1
'threads': None, # if none, the value will be cpu_count * 2 + 1
},
'database': {
'backend': os.environ.get('BIGCHAINDB_DATABASE_BACKEND', 'rethinkdb'),
'host': os.environ.get('BIGCHAINDB_DATABASE_HOST', 'localhost'),
'port': int(os.environ.get('BIGCHAINDB_DATABASE_PORT', 28015)),
'name': os.environ.get('BIGCHAINDB_DATABASE_NAME', 'bigchain'),
'replicaset': os.environ.get('BIGCHAINDB_DATABASE_REPLICASET', 'bigchain-rs'),
},
'database': _database_map[
os.environ.get('BIGCHAINDB_DATABASE_BACKEND', 'rethinkdb')
],
'keypair': {
'public': None,
'private': None,
},
'keyring': [],
'statsd': {
'host': 'localhost',
'port': 8125,
'rate': 0.01,
},
'backlog_reassign_delay': 120
}
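
The upshot of this refactor is that a single environment variable now selects an entire defaults block. A runnable sketch of just the selection logic, with the dicts trimmed to the relevant keys:

```python
import os

_database_map = {
    'mongodb': {'port': 27017, 'replicaset': 'bigchain-rs'},
    'rethinkdb': {'port': 28015},
}

# One env var picks the whole defaults block; note the rethinkdb block
# carries no 'replicaset' key at all, so .get() returns None for it.
backend = os.environ.get('BIGCHAINDB_DATABASE_BACKEND', 'rethinkdb')
database = _database_map[backend]
print(database['port'], database.get('replicaset'))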

View File

@ -20,3 +20,15 @@ def set_shards(connection, *, shards):
@singledispatch
def set_replicas(connection, *, replicas):
raise NotImplementedError
@singledispatch
def add_replicas(connection, replicas):
raise NotImplementedError('This command is specific to the '
'MongoDB backend.')
@singledispatch
def remove_replicas(connection, replicas):
raise NotImplementedError('This command is specific to the '
'MongoDB backend.')
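
These stubs rely on `functools.singledispatch`: the generic function raises until a backend registers a concrete implementation for its connection class. A self-contained sketch of the pattern, using an illustrative stand-in class rather than the real `MongoDBConnection`:

```python
from functools import singledispatch

@singledispatch
def add_replicas(connection, replicas):
    raise NotImplementedError('This command is specific to the MongoDB backend.')

class DummyMongoConnection:  # illustrative stand-in for MongoDBConnection
    pass

@add_replicas.register(DummyMongoConnection)
def _(connection, replicas):
    return 'would reconfigure the replica set with {}'.format(replicas)

# Dispatches on the type of the first argument:
print(add_replicas(DummyMongoConnection(), ['node1:27017']))
```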

View File

@ -40,6 +40,12 @@ def connect(backend=None, host=None, port=None, name=None, replicaset=None):
host = host or bigchaindb.config['database']['host']
port = port or bigchaindb.config['database']['port']
dbname = name or bigchaindb.config['database']['name']
# Not sure how to handle this here. This setting is only relevant for
# mongodb.
# I added **kwargs for both RethinkDBConnection and MongoDBConnection
# to handle these additional args. In the case of RethinkDBConnection
# it just does not do anything with them.
replicaset = replicaset or bigchaindb.config['database'].get('replicaset')
try:
module_name, _, class_name = BACKENDS[backend].rpartition('.')
@ -51,7 +57,7 @@ def connect(backend=None, host=None, port=None, name=None, replicaset=None):
raise ConfigurationError('Error loading backend `{}`'.format(backend)) from exc
logger.debug('Connection: {}'.format(Class))
return Class(host, port, dbname)
return Class(host, port, dbname, replicaset=replicaset)
class Connection:
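
A hedged usage sketch of the updated `connect()` behavior implied by this hunk: `replicaset` is forwarded to whichever connection class gets loaded, and `RethinkDBConnection` accepts but ignores it via `**kwargs`:

```python
from bigchaindb import backend

# MongoDB: the replicaset keyword is meaningful
conn = backend.connect(backend='mongodb', host='localhost', port=27017,
                       name='bigchain', replicaset='bigchain-rs')

# RethinkDB: the same keyword is looked up in config but silently discarded
conn = backend.connect(backend='rethinkdb', host='localhost', port=28015,
                       name='bigchain')
```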

View File

@ -16,7 +16,7 @@ generic backend interfaces to the implementations in this module.
"""
# Register the single dispatched modules on import.
from bigchaindb.backend.mongodb import schema, query, changefeed # noqa
from bigchaindb.backend.mongodb import admin, schema, query, changefeed # noqa
# MongoDBConnection should always be accessed via
# ``bigchaindb.backend.connect()``.

View File

@ -0,0 +1,86 @@
"""Database configuration functions."""
import logging
from pymongo.errors import OperationFailure
from bigchaindb.backend import admin
from bigchaindb.backend.utils import module_dispatch_registrar
from bigchaindb.backend.exceptions import DatabaseOpFailedError
from bigchaindb.backend.mongodb.connection import MongoDBConnection
logger = logging.getLogger(__name__)
register_admin = module_dispatch_registrar(admin)
@register_admin(MongoDBConnection)
def add_replicas(connection, replicas):
"""Add a set of replicas to the replicaset
Args:
connection (:class:`~bigchaindb.backend.connection.Connection`):
A connection to the database.
replicas (:obj:`list` of :obj:`str`): replica addresses in the
form "hostname:port".
Raises:
DatabaseOpFailedError: If the reconfiguration fails due to a MongoDB
:exc:`OperationFailure`
"""
# get current configuration
conf = connection.conn.admin.command('replSetGetConfig')
# MongoDB does not automatically add an id for the members so we need
# to choose one that does not exist yet. The safest way is to use
# incrementing ids, so we first check what is the highest id already in
# the set and continue from there.
cur_id = max([member['_id'] for member in conf['config']['members']])
# add the nodes to the members list of the replica set
for replica in replicas:
cur_id += 1
conf['config']['members'].append({'_id': cur_id, 'host': replica})
# increase the configuration version number
# when reconfiguring, mongodb expects a version number higher than the one
# it currently has
conf['config']['version'] += 1
# apply new configuration
try:
connection.conn.admin.command('replSetReconfig', conf['config'])
except OperationFailure as exc:
raise DatabaseOpFailedError(exc.details['errmsg'])
@register_admin(MongoDBConnection)
def remove_replicas(connection, replicas):
"""Remove a set of replicas from the replicaset
Args:
connection (:class:`~bigchaindb.backend.connection.Connection`):
A connection to the database.
replicas (:obj:`list` of :obj:`str`): replica addresses in the
form "hostname:port".
Raises:
DatabaseOpFailedError: If the reconfiguration fails due to a MongoDB
:exc:`OperationFailure`
"""
# get the current configuration
conf = connection.conn.admin.command('replSetGetConfig')
# remove the nodes from the members list in the replica set
conf['config']['members'] = list(
filter(lambda member: member['host'] not in replicas,
conf['config']['members'])
)
# increase the configuration version number
conf['config']['version'] += 1
# apply new configuration
try:
connection.conn.admin.command('replSetReconfig', conf['config'])
except OperationFailure as exc:
raise DatabaseOpFailedError(exc.details['errmsg'])
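
A usage sketch, assuming a reachable MongoDB replica set and the imports this commit adds to the commands module; the host names are placeholders:

```python
from bigchaindb import backend
from bigchaindb.backend.admin import add_replicas, remove_replicas
from bigchaindb.backend.exceptions import DatabaseOpFailedError

conn = backend.connect(backend='mongodb')
try:
    add_replicas(conn, ['node3:27017'])     # appended with the next free member _id
    remove_replicas(conn, ['node2:27017'])  # filtered out of the members list
except DatabaseOpFailedError as exc:
    print('replSetReconfig rejected:', exc)
```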

View File

@ -14,7 +14,7 @@ logger = logging.getLogger(__name__)
class MongoDBConnection(Connection):
def __init__(self, host=None, port=None, dbname=None, max_tries=3,
replicaset=None):
replicaset=None, **kwargs):
"""Create a new Connection instance.
Args:

View File

@ -7,6 +7,7 @@ from pymongo import errors
from bigchaindb import backend
from bigchaindb.common.exceptions import CyclicBlockchainError
from bigchaindb.common.transaction import Transaction
from bigchaindb.backend.utils import module_dispatch_registrar
from bigchaindb.backend.mongodb.connection import MongoDBConnection
@ -83,17 +84,30 @@ def get_blocks_status_from_transaction(conn, transaction_id):
@register_query(MongoDBConnection)
def get_txids_by_asset_id(conn, asset_id):
cursor = conn.db['bigchain'].aggregate([
{'$match': {
'block.transactions.asset.id': asset_id
}},
def get_txids_filtered(conn, asset_id, operation=None):
match_create = {
'block.transactions.operation': 'CREATE',
'block.transactions.id': asset_id
}
match_transfer = {
'block.transactions.operation': 'TRANSFER',
'block.transactions.asset.id': asset_id
}
if operation == Transaction.CREATE:
match = match_create
elif operation == Transaction.TRANSFER:
match = match_transfer
else:
match = {'$or': [match_create, match_transfer]}
pipeline = [
{'$match': match},
{'$unwind': '$block.transactions'},
{'$match': {
'block.transactions.asset.id': asset_id
}},
{'$match': match},
{'$project': {'block.transactions.id': True}}
])
]
cursor = conn.db['bigchain'].aggregate(pipeline)
return (elem['block']['transactions']['id'] for elem in cursor)
@ -119,6 +133,10 @@ def get_asset_by_id(conn, asset_id):
@register_query(MongoDBConnection)
def get_spent(conn, transaction_id, output):
cursor = conn.db['bigchain'].aggregate([
{'$match': {
'block.transactions.inputs.fulfills.txid': transaction_id,
'block.transactions.inputs.fulfills.output': output
}},
{'$unwind': '$block.transactions'},
{'$match': {
'block.transactions.inputs.fulfills.txid': transaction_id,
@ -133,12 +151,9 @@ def get_spent(conn, transaction_id, output):
@register_query(MongoDBConnection)
def get_owned_ids(conn, owner):
cursor = conn.db['bigchain'].aggregate([
{'$match': {'block.transactions.outputs.public_keys': owner}},
{'$unwind': '$block.transactions'},
{'$match': {
'block.transactions.outputs.public_keys': {
'$elemMatch': {'$eq': owner}
}
}}
{'$match': {'block.transactions.outputs.public_keys': owner}}
])
# we need to access some nested fields before returning, so let's use a
# generator to avoid having to read all records on the cursor at this point

View File

@ -60,8 +60,19 @@ def create_bigchain_secondary_index(conn, dbname):
# secondary index for asset uuid, this field is unique
conn.conn[dbname]['bigchain']\
.create_index('block.transactions.transaction.asset.id',
name='asset_id')
.create_index('block.transactions.asset.id', name='asset_id')
# secondary index on the public keys of outputs
conn.conn[dbname]['bigchain']\
.create_index('block.transactions.outputs.public_keys',
name='outputs')
# secondary index on inputs/transaction links (txid, output)
conn.conn[dbname]['bigchain']\
.create_index([
('block.transactions.inputs.fulfills.txid', ASCENDING),
('block.transactions.inputs.fulfills.output', ASCENDING),
], name='inputs')
def create_backlog_secondary_index(conn, dbname):

View File

@ -107,25 +107,6 @@ def get_blocks_status_from_transaction(connection, transaction_id):
raise NotImplementedError
@singledispatch
def get_txids_by_asset_id(connection, asset_id):
"""Retrieves transactions ids related to a particular asset.
A digital asset in bigchaindb is identified by its ``CREATE``
transaction's ID. Knowing this ID allows us to query all the
transactions related to a particular digital asset.
Args:
asset_id (str): the ID of the asset.
Returns:
A list of transactions ids related to the asset. If no transaction
exists for that asset it returns an empty list ``[]``
"""
raise NotImplementedError
@singledispatch
def get_asset_by_id(connection, asset_id):
"""Returns the asset associated with an asset_id.
@ -318,3 +299,16 @@ def get_unvoted_blocks(connection, node_pubkey):
"""
raise NotImplementedError
@singledispatch
def get_txids_filtered(connection, asset_id, operation=None):
"""
Return all transactions for a particular asset id and optional operation.
Args:
asset_id (str): ID of transaction that defined the asset
operation (str) (optional): Operation to filter on
"""
raise NotImplementedError

View File

@ -17,7 +17,7 @@ class RethinkDBConnection(Connection):
more times to run the query or open a connection.
"""
def __init__(self, host, port, dbname, max_tries=3):
def __init__(self, host, port, dbname, max_tries=3, **kwargs):
"""Create a new :class:`~.RethinkDBConnection` instance.
See :meth:`.Connection.__init__` for
@ -77,3 +77,5 @@ class RethinkDBConnection(Connection):
wait_time = 2**i
logging.debug('Error connecting to database, waiting %ss', wait_time)
time.sleep(wait_time)
else:
break
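
The `else: break` added here completes a retry loop with exponential backoff. A standalone sketch of the control flow, with a stubbed `attempt` callable standing in for the real connection logic:

```python
import time

def connect_with_backoff(attempt, max_tries=3):
    """Sketch of the retry loop this hunk completes (illustrative, not the real class)."""
    for i in range(max_tries):
        try:
            conn = attempt()
        except OSError:
            wait_time = 2 ** i  # 1s, 2s, 4s, ...
            print('Error connecting to database, waiting {}s'.format(wait_time))
            time.sleep(wait_time)
        else:
            break  # connected: the newly added `else: break` stops the retries
    else:
        raise RuntimeError('no connection after {} tries'.format(max_tries))
    return conn
```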

View File

@ -1,9 +1,11 @@
from itertools import chain
from time import time
import rethinkdb as r
from bigchaindb import backend, utils
from bigchaindb.common import exceptions
from bigchaindb.common.transaction import Transaction
from bigchaindb.backend.utils import module_dispatch_registrar
from bigchaindb.backend.rethinkdb.connection import RethinkDBConnection
@ -72,19 +74,27 @@ def get_blocks_status_from_transaction(connection, transaction_id):
@register_query(RethinkDBConnection)
def get_txids_by_asset_id(connection, asset_id):
def get_txids_filtered(connection, asset_id, operation=None):
# here we only want to return the transaction ids since later on when
# we are going to retrieve the transaction with status validation
# Then find any TRANSFER transactions related to the asset
tx_cursor = connection.run(
r.table('bigchain')
.get_all(asset_id, index='asset_id')
.concat_map(lambda block: block['block']['transactions'])
.filter(lambda transaction: transaction['asset']['id'] == asset_id)
.get_field('id'))
parts = []
return tx_cursor
if operation in (Transaction.CREATE, None):
# First find the asset's CREATE transaction
parts.append(connection.run(
_get_asset_create_tx_query(asset_id).get_field('id')))
if operation in (Transaction.TRANSFER, None):
# Then find any TRANSFER transactions related to the asset
parts.append(connection.run(
r.table('bigchain')
.get_all(asset_id, index='asset_id')
.concat_map(lambda block: block['block']['transactions'])
.filter(lambda transaction: transaction['asset']['id'] == asset_id)
.get_field('id')))
return chain(*parts)
@register_query(RethinkDBConnection)
@ -101,21 +111,22 @@ def _get_asset_create_tx_query(asset_id):
@register_query(RethinkDBConnection)
def get_spent(connection, transaction_id, output):
# TODO: use index!
return connection.run(
r.table('bigchain', read_mode=READ_MODE)
.concat_map(lambda doc: doc['block']['transactions'])
.filter(lambda transaction: transaction['inputs'].contains(
lambda input: input['fulfills'] == {'txid': transaction_id, 'output': output})))
.get_all([transaction_id, output], index='inputs')
.concat_map(lambda doc: doc['block']['transactions'])
.filter(lambda transaction: transaction['inputs'].contains(
lambda input_: input_['fulfills'] == {'txid': transaction_id, 'output': output})))
@register_query(RethinkDBConnection)
def get_owned_ids(connection, owner):
# TODO: use index!
return connection.run(
r.table('bigchain', read_mode=READ_MODE)
.concat_map(lambda doc: doc['block']['transactions'])
.filter(lambda tx: tx['outputs'].contains(
.get_all(owner, index='outputs')
.distinct()
.concat_map(lambda doc: doc['block']['transactions'])
.filter(lambda tx: tx['outputs'].contains(
lambda c: c['public_keys'].contains(owner))))

View File

@ -66,6 +66,31 @@ def create_bigchain_secondary_index(connection, dbname):
.table('bigchain')
.index_create('asset_id', r.row['block']['transactions']['asset']['id'], multi=True))
# secondary index on the public keys of outputs
# the last reduce operation is to return a flattened list of public_keys
# without it we would need to match exactly the public_keys list.
# For instance querying for `pk1` would not match documents with
# `public_keys: [pk1, pk2, pk3]`
connection.run(
r.db(dbname)
.table('bigchain')
.index_create('outputs',
r.row['block']['transactions']
.concat_map(lambda tx: tx['outputs']['public_keys'])
.reduce(lambda l, r: l + r), multi=True))
# secondary index on inputs/transaction links (txid, output)
connection.run(
r.db(dbname)
.table('bigchain')
.index_create('inputs',
r.row['block']['transactions']
.concat_map(lambda tx: tx['inputs']['fulfills'])
.with_fields('txid', 'output')
.map(lambda fulfills: [fulfills['txid'],
fulfills['output']]),
multi=True))
# wait for rethinkdb to finish creating secondary indexes
connection.run(
r.db(dbname)

View File

@ -12,9 +12,9 @@ def module_dispatch_registrar(module):
return dispatch_registrar.register(obj_type)(func)
except AttributeError as ex:
raise ModuleDispatchRegistrationError(
("`{module}` does not contain a single-dispatchable "
"function named `{func}`. The module being registered "
"was not implemented correctly!").format(
('`{module}` does not contain a single-dispatchable '
'function named `{func}`. The module being registered '
'was not implemented correctly!').format(
func=func_name, module=module.__name__)) from ex
return wrapper
return dispatch_wrapper

View File

@ -22,7 +22,8 @@ from bigchaindb.models import Transaction
from bigchaindb.utils import ProcessGroup
from bigchaindb import backend
from bigchaindb.backend import schema
from bigchaindb.backend.admin import set_replicas, set_shards
from bigchaindb.backend.admin import (set_replicas, set_shards, add_replicas,
remove_replicas)
from bigchaindb.backend.exceptions import DatabaseOpFailedError
from bigchaindb.commands import utils
from bigchaindb import processes
@ -86,6 +87,11 @@ def run_configure(args, skip_if_exists=False):
conf['keypair']['private'], conf['keypair']['public'] = \
crypto.generate_key_pair()
# select the correct config defaults based on the backend
print('Generating default configuration for backend {}'
.format(args.backend))
conf['database'] = bigchaindb._database_map[args.backend]
if not args.yes:
for key in ('bind', ):
val = conf['server'][key]
@ -99,12 +105,6 @@ def run_configure(args, skip_if_exists=False):
input_on_stderr('Database {}? (default `{}`): '.format(key, val)) \
or val
for key in ('host', 'port', 'rate'):
val = conf['statsd'][key]
conf['statsd'][key] = \
input_on_stderr('Statsd {}? (default `{}`): '.format(key, val)) \
or val
val = conf['backlog_reassign_delay']
conf['backlog_reassign_delay'] = \
input_on_stderr(('Stale transaction reassignment delay (in '
@ -259,6 +259,32 @@ def run_set_replicas(args):
logger.warn(e)
def run_add_replicas(args):
# Note: This command is specific to MongoDB
bigchaindb.config_utils.autoconfigure(filename=args.config, force=True)
conn = backend.connect()
try:
add_replicas(conn, args.replicas)
except (DatabaseOpFailedError, NotImplementedError) as e:
logger.warn(e)
else:
logger.info('Added {} to the replicaset.'.format(args.replicas))
def run_remove_replicas(args):
# Note: This command is specific to MongoDB
bigchaindb.config_utils.autoconfigure(filename=args.config, force=True)
conn = backend.connect()
try:
remove_replicas(conn, args.replicas)
except (DatabaseOpFailedError, NotImplementedError) as e:
logger.warn(e)
else:
logger.info('Removed {} from the replicaset.'.format(args.replicas))
def create_parser():
parser = argparse.ArgumentParser(
description='Control your BigchainDB node.',
@ -282,9 +308,13 @@ def create_parser():
dest='command')
# parser for writing a config file
subparsers.add_parser('configure',
help='Prepare the config file '
'and create the node keypair')
config_parser = subparsers.add_parser('configure',
help='Prepare the config file '
'and create the node keypair')
config_parser.add_argument('backend',
choices=['rethinkdb', 'mongodb'],
help='The backend to use. It can be either '
'rethinkdb or mongodb.')
# parsers for showing/exporting config values
subparsers.add_parser('show-config',
@ -320,6 +350,32 @@ def create_parser():
type=int, default=1,
help='Number of replicas (i.e. the replication factor)')
# parser for adding nodes to the replica set
add_replicas_parser = subparsers.add_parser('add-replicas',
help='Add a set of nodes to the '
'replica set. This command '
'is specific to the MongoDB'
' backend.')
add_replicas_parser.add_argument('replicas', nargs='+',
type=utils.mongodb_host,
help='A list of space separated hosts to '
'add to the replicaset. Each host '
'should be in the form `host:port`.')
# parser for removing nodes from the replica set
rm_replicas_parser = subparsers.add_parser('remove-replicas',
help='Remove a set of nodes from the '
'replica set. This command '
'is specific to the MongoDB'
' backend.')
rm_replicas_parser.add_argument('replicas', nargs='+',
type=utils.mongodb_host,
help='A list of space separated hosts to '
'remove from the replicaset. Each host '
'should be in the form `host:port`.')
load_parser = subparsers.add_parser('load',
help='Write transactions to the backlog')

View File

@ -3,14 +3,15 @@ for ``argparse.ArgumentParser``.
"""
import argparse
from bigchaindb.common.exceptions import StartupError
import multiprocessing as mp
import subprocess
import rethinkdb as r
from pymongo import uri_parser
import bigchaindb
from bigchaindb import backend
from bigchaindb.common.exceptions import StartupError
from bigchaindb.version import __version__
@ -95,6 +96,34 @@ def start(parser, argv, scope):
return func(args)
def mongodb_host(host):
"""Utility function that works as a type for mongodb ``host`` args.
This function validates the ``host`` args provided to the
``add-replicas`` and ``remove-replicas`` commands and checks if each arg
is in the form "host:port"
Args:
host (str): A string containing hostname and port (e.g. "host:port")
Raises:
ArgumentTypeError: if it fails to parse the argument
"""
# check if mongodb can parse the host
try:
hostname, port = uri_parser.parse_host(host, default_port=None)
except ValueError as exc:
raise argparse.ArgumentTypeError(exc.args[0])
# we do require the port to be provided.
if port is None or hostname == '':
raise argparse.ArgumentTypeError('expected host in the form '
'`host:port`. Got `{}` instead.'
.format(host))
return host
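
A quick illustration of the validator's behavior, assuming the `mongodb_host` defined above (values are made up):

```python
import argparse

print(mongodb_host('node1:27017'))  # -> 'node1:27017' (returned unchanged on success)

try:
    mongodb_host('node1')           # missing the required port
except argparse.ArgumentTypeError as exc:
    print(exc)  # expected host in the form `host:port`. Got `node1` instead.
```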
base_parser = argparse.ArgumentParser(add_help=False, prog='bigchaindb')
base_parser.add_argument('-c', '--config',

View File

@ -103,8 +103,8 @@ definitions:
description: |
Description of the asset being transacted. In the case of a ``TRANSFER``
transaction, this field contains only the ID of the asset. In the case
of a ``CREATE`` transaction, this field contains the user-defined
payload and the asset ID (duplicated from the Transaction ID).
of a ``CREATE`` transaction, this field contains only the user-defined
payload.
additionalProperties: false
properties:
id:

View File

@ -159,7 +159,7 @@ class TransactionLink(object):
def __eq__(self, other):
# TODO: If `other !== TransactionLink` return `False`
return self.to_dict() == self.to_dict()
return self.to_dict() == other.to_dict()
@classmethod
def from_dict(cls, link):
@ -410,7 +410,7 @@ class Transaction(object):
TRANSFER = 'TRANSFER'
GENESIS = 'GENESIS'
ALLOWED_OPERATIONS = (CREATE, TRANSFER, GENESIS)
VERSION = bigchaindb.version.__version__
VERSION = bigchaindb.version.__short_version__[:-4] # 0.9, 0.10 etc
def __init__(self, operation, asset, inputs=None, outputs=None,
metadata=None, version=None):
@ -444,7 +444,6 @@ class Transaction(object):
asset is not None and not (isinstance(asset, dict) and 'data' in asset)):
raise TypeError(('`asset` must be None or a dict holding a `data` '
" property instance for '{}' Transactions".format(operation)))
asset.pop('id', None) # Remove duplicated asset ID if there is one
elif (operation == Transaction.TRANSFER and
not (isinstance(asset, dict) and 'id' in asset)):
raise TypeError(('`asset` must be a dict holding an `id` property '
@ -483,8 +482,8 @@ class Transaction(object):
Args:
tx_signers (:obj:`list` of :obj:`str`): A list of keys that
represent the signers of the CREATE Transaction.
recipients (:obj:`list` of :obj:`str`): A list of keys that
represent the recipients of the outputs of this
recipients (:obj:`list` of :obj:`tuple`): A list of
([keys],amount) that represent the recipients of this
Transaction.
metadata (dict): The metadata to be stored along with the
Transaction.
@ -550,7 +549,7 @@ class Transaction(object):
inputs (:obj:`list` of :class:`~bigchaindb.common.transaction.
Input`): Converted `Output`s, intended to
be used as inputs in the transfer to generate.
recipients (:obj:`list` of :obj:`str`): A list of
recipients (:obj:`list` of :obj:`tuple`): A list of
([keys],amount) that represent the recipients of this
Transaction.
asset_id (str): The asset ID of the asset to be transferred in
@ -927,11 +926,9 @@ class Transaction(object):
tx_no_signatures = Transaction._remove_signatures(tx)
tx_serialized = Transaction._to_str(tx_no_signatures)
tx['id'] = Transaction._to_hash(tx_serialized)
if self.operation == Transaction.CREATE:
# Duplicate asset into asset for consistency with TRANSFER
# transactions
tx['asset']['id'] = tx['id']
tx_id = Transaction._to_hash(tx_serialized)
tx['id'] = tx_id
return tx
@staticmethod
@ -955,9 +952,6 @@ class Transaction(object):
# case could yield incorrect signatures. This is why we only
# set it to `None` if it's set in the dict.
input_['fulfillment'] = None
# Pop duplicated asset_id from CREATE tx
if tx_dict['operation'] == Transaction.CREATE:
tx_dict['asset'].pop('id', None)
return tx_dict
@staticmethod
@ -1037,10 +1031,6 @@ class Transaction(object):
"the hash of its body, i.e. it's not valid.")
raise InvalidHash(err_msg.format(proposed_tx_id))
if tx_body.get('operation') == Transaction.CREATE:
if proposed_tx_id != tx_body['asset'].get('id'):
raise InvalidHash("CREATE tx has wrong asset_id")
@classmethod
def from_dict(cls, tx):
"""Transforms a Python dictionary to a Transaction object.

View File

@ -39,6 +39,6 @@ class BaseConsensusRules():
except SchemaValidationError as exc:
logger.warning(exc)
else:
logger.warning("Vote failed signature verification: "
"%s with voters: %s", signed_vote, voters)
logger.warning('Vote failed signature verification: '
'%s with voters: %s', signed_vote, voters)
return False

View File

@ -317,30 +317,6 @@ class Bigchain(object):
else:
return None
def get_transactions_by_asset_id(self, asset_id):
"""Retrieves valid or undecided transactions related to a particular
asset.
A digital asset in bigchaindb is identified by a uuid. This allows us
to query all the transactions related to a particular digital asset,
knowing the id.
Args:
asset_id (str): the id for this particular asset.
Returns:
A list of valid or undecided transactions related to the asset.
If no transaction exists for that asset it returns an empty list
`[]`
"""
txids = backend.query.get_txids_by_asset_id(self.connection, asset_id)
transactions = []
for txid in txids:
tx = self.get_transaction(txid)
if tx:
transactions.append(tx)
return transactions
def get_asset_by_id(self, asset_id):
"""Returns the asset associated with an asset_id.
@ -397,8 +373,9 @@ class Bigchain(object):
else:
return None
def get_owned_ids(self, owner):
"""Retrieve a list of ``txid`` s that can be used as inputs.
def get_outputs(self, owner):
"""Retrieve a list of links to transaction outputs for a given public
key.
Args:
owner (str): base58 encoded public key.
@ -407,10 +384,9 @@ class Bigchain(object):
:obj:`list` of TransactionLink: list of ``txid`` s and ``output`` s
pointing to another transaction's condition
"""
# get all transactions in which owner is in the `owners_after` list
response = backend.query.get_owned_ids(self.connection, owner)
owned = []
links = []
for tx in response:
# disregard transactions from invalid blocks
@ -435,11 +411,41 @@ class Bigchain(object):
# subfulfillment for `owner`
if utils.condition_details_has_owner(output['condition']['details'], owner):
tx_link = TransactionLink(tx['id'], index)
# check if input was already spent
if not self.get_spent(tx_link.txid, tx_link.output):
owned.append(tx_link)
links.append(tx_link)
return links
return owned
def get_owned_ids(self, owner):
"""Retrieve a list of ``txid`` s that can be used as inputs.
Args:
owner (str): base58 encoded public key.
Returns:
:obj:`list` of TransactionLink: list of ``txid`` s and ``output`` s
pointing to another transaction's condition
"""
return self.get_outputs_filtered(owner, include_spent=False)
def get_outputs_filtered(self, owner, include_spent=True):
"""
Get a list of output links for a given public key, optionally excluding outputs that have already been spent
"""
outputs = self.get_outputs(owner)
if not include_spent:
outputs = [o for o in outputs
if not self.get_spent(o.txid, o.output)]
return outputs
def get_transactions_filtered(self, asset_id, operation=None):
"""
Get all valid transactions for an asset ID, optionally filtered by operation (CREATE or TRANSFER)
"""
txids = backend.query.get_txids_filtered(self.connection, asset_id,
operation)
for txid in txids:
tx, status = self.get_transaction(txid, True)
if status == self.TX_VALID:
yield tx
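
A hedged usage sketch of the two new helpers (the replacements for `get_owned_ids` and the removed `get_transactions_by_asset_id`); the key and asset ID are placeholders:

```python
b = Bigchain()
owner_public_key = '...'  # placeholder: a base58-encoded public key
asset_id = '...'          # placeholder: the asset's CREATE transaction id

# Unspent outputs only -- this is exactly what get_owned_ids() now does
unspent = b.get_outputs_filtered(owner_public_key, include_spent=False)

# Lazily yields only transactions from valid blocks
transfers = list(b.get_transactions_filtered(asset_id, operation='TRANSFER'))
```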
def create_block(self, validated_transactions):
"""Creates a block given a list of `validated_transactions`.

View File

@ -88,6 +88,11 @@ class Transaction(Transaction):
if output.amount < 1:
raise AmountError('`amount` needs to be greater than zero')
# Validate that all inputs are distinct
links = [i.fulfills.to_uri() for i in self.inputs]
if len(links) != len(set(links)):
raise DoubleSpend('tx "{}" spends inputs twice'.format(self.id))
# validate asset id
asset_id = Transaction.get_asset_id(input_txs)
if asset_id != self.asset['id']:
@ -197,11 +202,6 @@ class Block(object):
OperationError: If a non-federation node signed the Block.
InvalidSignature: If a Block's signature is invalid.
"""
# First, make sure this node hasn't already voted on this block
if bigchain.has_previous_vote(self.id, self.voters):
return self
# Check if the block was created by a federation node
possible_voters = (bigchain.nodes_except_me + [bigchain.me])
if self.node_pubkey not in possible_voters:

View File

@ -1,32 +0,0 @@
from platform import node
import statsd
import bigchaindb
from bigchaindb import config_utils
class Monitor(statsd.StatsClient):
"""Set up statsd monitoring."""
def __init__(self, *args, **kwargs):
"""Overrides statsd client, fixing prefix to messages and loading configuration
Args:
*args: arguments (identical to Statsclient)
**kwargs: keyword arguments (identical to Statsclient)
"""
config_utils.autoconfigure()
if not kwargs:
kwargs = {}
# set prefix, parameters from configuration file
if 'prefix' not in kwargs:
kwargs['prefix'] = '{hostname}.'.format(hostname=node())
if 'host' not in kwargs:
kwargs['host'] = bigchaindb.config['statsd']['host']
if 'port' not in kwargs:
kwargs['port'] = bigchaindb.config['statsd']['port']
super().__init__(*args, **kwargs)

View File

@ -5,7 +5,7 @@ from bigchaindb.web.views import (
info,
statuses,
transactions as tx,
unspents,
outputs,
votes,
)
@ -30,7 +30,7 @@ ROUTES_API_V1 = [
r('statuses/', statuses.StatusApi),
r('transactions/<string:tx_id>', tx.TransactionApi),
r('transactions', tx.TransactionListApi),
r('unspents/', unspents.UnspentListApi),
r('outputs/', outputs.OutputListApi),
r('votes/', votes.VotesApi),
]

View File

@ -13,8 +13,6 @@ from bigchaindb import utils
from bigchaindb import Bigchain
from bigchaindb.web.routes import add_routes
from bigchaindb.monitor import Monitor
# TODO: Figure out if we need all this boilerplate.
class StandaloneApplication(gunicorn.app.base.BaseApplication):
@ -65,7 +63,6 @@ def create_app(*, debug=False, threads=4):
app.debug = debug
app.config['bigchain_pool'] = utils.pool(Bigchain, size=threads)
app.config['monitor'] = Monitor()
add_routes(app)

View File

@ -36,10 +36,10 @@ class ApiV1Index(Resource):
'/drivers-clients/http-client-server-api.html',
]
return {
"_links": {
"docs": ''.join(docs_url),
"self": api_root,
"statuses": api_root + "statuses/",
"transactions": api_root + "transactions/",
'_links': {
'docs': ''.join(docs_url),
'self': api_root,
'statuses': api_root + 'statuses/',
'transactions': api_root + 'transactions/',
},
}

View File

@ -0,0 +1,28 @@
from flask import current_app
from flask_restful import reqparse, Resource
from bigchaindb.web.views import parameters
class OutputListApi(Resource):
def get(self):
"""API endpoint to retrieve a list of links to transaction
outputs.
Returns:
A :obj:`list` of :cls:`str` of links to outputs.
"""
parser = reqparse.RequestParser()
parser.add_argument('public_key', type=parameters.valid_ed25519,
required=True)
parser.add_argument('unspent', type=parameters.valid_bool)
args = parser.parse_args()
pool = current_app.config['bigchain_pool']
include_spent = not args['unspent']
with pool() as bigchain:
outputs = bigchain.get_outputs_filtered(args['public_key'],
include_spent)
# NOTE: We pass '..' as a path to create a valid relative URI
return [u.to_uri('..') for u in outputs]
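
A sketch of exercising the new endpoint through Flask's test client, the same approach the docs-generation script in this commit uses; the public key is the sample value reused from that script:

```python
from bigchaindb.web import server

client = server.create_app().test_client()
public_key = 'NC8c8rYcAhyKVpx1PCV65CBmyq4YUbLysy3Rqrg8L8mz'  # sample key from the docs script
# unspent=true makes the view call get_outputs_filtered(..., include_spent=False)
res = client.get('/api/v1/outputs/?public_key={}&unspent=true'.format(public_key))
print(res.status_code, res.data)  # 200 and a JSON list of relative output links
```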

View File

@ -0,0 +1,32 @@
import re
def valid_txid(txid):
if re.match('^[a-fA-F0-9]{64}$', txid):
return txid.lower()
raise ValueError("Invalid hash")
def valid_bool(val):
val = val.lower()
if val == 'true':
return True
if val == 'false':
return False
raise ValueError('Boolean value must be "true" or "false" (lowercase)')
def valid_ed25519(key):
if (re.match('^[1-9a-zA-Z]{43,44}$', key) and not
re.match('.*[Il0O]', key)):
return key
raise ValueError("Invalid base58 ed25519 key")
def valid_operation(op):
op = op.upper()
if op == 'CREATE':
return 'CREATE'
if op == 'TRANSFER':
return 'TRANSFER'
raise ValueError('Operation must be "CREATE" or "TRANSFER"')
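
The validators normalize as well as validate; a few illustrative calls against the functions above (the inputs are made up):

```python
print(valid_bool('true'))          # -> True
print(valid_operation('create'))   # -> 'CREATE' (normalized to upper case)
print(valid_txid('ABCD' * 16))     # -> 'abcd' * 16 (64-char hex digest, lower-cased)

try:
    valid_ed25519('not-a-key')     # fails the base58 length/alphabet pattern
except ValueError as exc:
    print(exc)                     # Invalid base58 ed25519 key
```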

View File

@ -28,7 +28,7 @@ class StatusApi(Resource):
# logical xor - exactly one query argument required
if bool(tx_id) == bool(block_id):
return make_error(400, "Provide exactly one query parameter. Choices are: block_id, tx_id")
return make_error(400, 'Provide exactly one query parameter. Choices are: block_id, tx_id')
pool = current_app.config['bigchain_pool']
status, links = None, None
@ -37,7 +37,7 @@ class StatusApi(Resource):
if tx_id:
status = bigchain.get_status(tx_id)
links = {
"tx": "/transactions/{}".format(tx_id)
'tx': '/transactions/{}'.format(tx_id)
}
elif block_id:
@ -56,7 +56,7 @@ class StatusApi(Resource):
if links:
response.update({
"_links": links
'_links': links
})
return response

View File

@ -7,7 +7,8 @@ For more information please refer to the documentation on ReadTheDocs:
import logging
from flask import current_app, request
from flask_restful import Resource
from flask_restful import Resource, reqparse
from bigchaindb.common.exceptions import (
AmountError,
@ -22,9 +23,9 @@ from bigchaindb.common.exceptions import (
ValidationError,
)
import bigchaindb
from bigchaindb.models import Transaction
from bigchaindb.web.views.base import make_error
from bigchaindb.web.views import parameters
logger = logging.getLogger(__name__)
@ -51,6 +52,18 @@ class TransactionApi(Resource):
class TransactionListApi(Resource):
def get(self):
parser = reqparse.RequestParser()
parser.add_argument('operation', type=parameters.valid_operation)
parser.add_argument('asset_id', type=parameters.valid_txid,
required=True)
args = parser.parse_args()
with current_app.config['bigchain_pool']() as bigchain:
txs = bigchain.get_transactions_filtered(**args)
return [tx.to_dict() for tx in txs]
def post(self):
"""API endpoint to push transactions to the Federation.
@ -58,7 +71,6 @@ class TransactionListApi(Resource):
A ``dict`` containing the data about the transaction.
"""
pool = current_app.config['bigchain_pool']
monitor = current_app.config['monitor']
# `force` will try to format the body of the POST request even if the
# `content-type` header is not set to `application/json`
@ -95,8 +107,6 @@ class TransactionListApi(Resource):
'Invalid transaction ({}): {}'.format(type(e).__name__, e)
)
else:
rate = bigchaindb.config['statsd']['rate']
with monitor.timer('write_transaction', rate=rate):
bigchain.write_transaction(tx_obj)
bigchain.write_transaction(tx_obj)
return tx
return tx, 202

View File

@ -1,23 +0,0 @@
from flask import current_app
from flask_restful import reqparse, Resource
class UnspentListApi(Resource):
def get(self):
"""API endpoint to retrieve a list of links to transactions's
conditions that have not been used in any previous transaction.
Returns:
A :obj:`list` of :cls:`str` of links to unfulfilled conditions.
"""
parser = reqparse.RequestParser()
parser.add_argument('public_key', type=str, location='args',
required=True)
args = parser.parse_args()
pool = current_app.config['bigchain_pool']
with pool() as bigchain:
unspents = bigchain.get_owned_ids(args['public_key'])
# NOTE: We pass '..' as a path to create a valid relative URI
return [u.to_uri('..') for u in unspents]

View File

@ -1,8 +1,10 @@
#! /bin/bash
#!/bin/bash
# The set -e option instructs bash to immediately exit
# if any command has a non-zero exit status
set -e
set -euo pipefail
# -e Abort at the first failed line (i.e. if exit status is not 0)
# -u Abort when undefined variable is used
# -o pipefail (Bash-only) Piped commands return the status
# of the last failed command, rather than the status of the last command
# Check for the first command-line argument
# (the name of the AWS deployment config file)

View File

@ -1,89 +0,0 @@
# -*- coding: utf-8 -*-
"""A Fabric fabfile with functionality to install Docker,
install Docker Compose, and run a BigchainDB monitoring server
(using the docker-compose-monitor.yml file)
"""
from __future__ import with_statement, unicode_literals
from fabric.api import sudo, env
from fabric.api import task
from fabric.operations import put, run
from ssh_key import ssh_key_path
# Ignore known_hosts
# http://docs.fabfile.org/en/1.10/usage/env.html#disable-known-hosts
env.disable_known_hosts = True
env.user = 'ubuntu'
env.key_filename = ssh_key_path
@task
def install_docker_engine():
"""Install Docker on an EC2 Ubuntu 14.04 instance
Example:
fab --fabfile=fabfile-monitor.py \
--hosts=ec2-52-58-106-17.eu-central-1.compute.amazonaws.com \
install_docker_engine
"""
# install prerequisites
sudo('apt-get update')
sudo('apt-get -y install apt-transport-https ca-certificates linux-image-extra-$(uname -r) apparmor')
# install docker repositories
sudo('apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 \
--recv-keys 58118E89F3A912897C070ADBF76221572C52609D')
sudo("echo 'deb https://apt.dockerproject.org/repo ubuntu-trusty main' | \
sudo tee /etc/apt/sources.list.d/docker.list")
# install docker engine
sudo('apt-get update')
sudo('apt-get -y install docker-engine')
# add ubuntu user to the docker group
sudo('usermod -aG docker ubuntu')
@task
def install_docker_compose():
"""Install Docker Compose on an EC2 Ubuntu 14.04 instance
Example:
fab --fabfile=fabfile-monitor.py \
--hosts=ec2-52-58-106-17.eu-central-1.compute.amazonaws.com \
install_docker_compose
"""
sudo('curl -L https://github.com/docker/compose/releases/download/1.7.0/docker-compose-`uname \
-s`-`uname -m` > /usr/local/bin/docker-compose')
sudo('chmod +x /usr/local/bin/docker-compose')
@task
def install_docker():
"""Install Docker and Docker Compose on an EC2 Ubuntu 14.04 instance
Example:
fab --fabfile=fabfile-monitor.py \
--hosts=ec2-52-58-106-17.eu-central-1.compute.amazonaws.com \
install_docker
"""
install_docker_engine()
install_docker_compose()
@task
def run_monitor():
"""Run BigchainDB monitor on an EC2 Ubuntu 14.04 instance
Example:
fab --fabfile=fabfile-monitor.py \
--hosts=ec2-52-58-106-17.eu-central-1.compute.amazonaws.com \
run_monitor
"""
# copy docker-compose-monitor to the ec2 instance
put('../docker-compose-monitor.yml')
run('INFLUXDB_DATA=/influxdb-data docker-compose -f docker-compose-monitor.yml up -d')

View File

@ -221,7 +221,7 @@ def install_bigchaindb_from_git_archive():
@task
@parallel
def configure_bigchaindb():
run('bigchaindb -y configure', pty=False)
run('bigchaindb -y configure rethinkdb', pty=False)
# Send the specified configuration file to

View File

@ -1,8 +1,6 @@
#! /bin/bash
# The set -e option instructs bash to immediately exit
# if any command has a non-zero exit status
set -e
set -euo pipefail
function printErr()
{
@ -36,5 +34,5 @@ mkdir $CONFDIR
for (( i=0; i<$NUMFILES; i++ )); do
CONPATH=$CONFDIR"/bcdb_conf"$i
echo "Writing "$CONPATH
bigchaindb -y -c $CONPATH configure
bigchaindb -y -c $CONPATH configure rethinkdb
done

View File

@ -1,28 +0,0 @@
version: '2'
services:
influxdb:
image: tutum/influxdb
ports:
- "8083:8083"
- "8086:8086"
- "8090"
- "8099"
environment:
PRE_CREATE_DB: "telegraf"
volumes:
- $INFLUXDB_DATA:/data
grafana:
image: bigchaindb/grafana-bigchaindb-docker
tty: true
ports:
- "3000:3000"
environment:
INFLUXDB_HOST: "influxdb"
statsd:
image: bigchaindb/docker-telegraf-statsd
ports:
- "8125:8125/udp"
environment:
INFLUXDB_HOST: "influxdb"

View File

@ -5,7 +5,7 @@ services:
image: mongo:3.4.1
ports:
- "27017"
command: mongod --replSet=rs0
command: mongod --replSet=bigchain-rs
rdb:
image: rethinkdb

View File

@ -4,13 +4,61 @@ import json
import os
import os.path
from bigchaindb.common.transaction import Transaction
from bigchaindb.common.transaction import Transaction, Input, TransactionLink
from bigchaindb.core import Bigchain
from bigchaindb.models import Block
from bigchaindb.web import server
TPLS = {}
TPLS['index-response'] = """\
HTTP/1.1 200 OK
Content-Type: application/json
%(index)s
"""
TPLS['api-index-response'] = """\
HTTP/1.1 200 OK
Content-Type: application/json
%(api_index)s
"""
TPLS['get-tx-id-request'] = """\
GET /api/v1/transactions/%(txid)s HTTP/1.1
Host: example.com
"""
TPLS['get-tx-id-response'] = """\
HTTP/1.1 200 OK
Content-Type: application/json
%(tx)s
"""
TPLS['get-tx-by-asset-request'] = """\
GET /api/v1/transactions?operation=TRANSFER&asset_id=%(txid)s HTTP/1.1
Host: example.com
"""
TPLS['get-tx-by-asset-response'] = """\
HTTP/1.1 200 OK
Content-Type: application/json
[%(tx_transfer)s,
%(tx_transfer_last)s]
"""
TPLS['post-tx-request'] = """\
POST /transactions/ HTTP/1.1
POST /api/v1/transactions/ HTTP/1.1
Host: example.com
Content-Type: application/json
@ -19,62 +67,215 @@ Content-Type: application/json
TPLS['post-tx-response'] = """\
HTTP/1.1 201 Created
HTTP/1.1 202 Accepted
Content-Type: application/json
%(tx)s
"""
TPLS['get-tx-status-request'] = """\
GET /transactions/%(txid)s/status HTTP/1.1
TPLS['get-statuses-tx-request'] = """\
GET /statuses?tx_id=%(txid)s HTTP/1.1
Host: example.com
"""
TPLS['get-tx-status-response'] = """\
TPLS['get-statuses-tx-invalid-response'] = """\
HTTP/1.1 200 OK
Content-Type: application/json
{
"status": "valid"
"status": "invalid"
}
"""
TPLS['get-tx-request'] = """\
GET /transactions/%(txid)s HTTP/1.1
TPLS['get-statuses-tx-valid-response'] = """\
HTTP/1.1 200 OK
Content-Type: application/json
{
"status": "valid",
"_links": {
"tx": "/transactions/%(txid)s"
}
}
"""
TPLS['get-statuses-block-request'] = """\
GET /api/v1/statuses?block_id=%(blockid)s HTTP/1.1
Host: example.com
"""
TPLS['get-tx-response'] = """\
TPLS['get-statuses-block-invalid-response'] = """\
HTTP/1.1 200 OK
Content-Type: application/json
%(tx)s
{
"status": "invalid"
}
"""
TPLS['get-statuses-block-valid-response'] = """\
HTTP/1.1 200 OK
Content-Type: application/json
{
"status": "valid",
"_links": {
"block": "/blocks/%(blockid)s"
}
}
"""
TPLS['get-block-request'] = """\
GET /api/v1/blocks/%(blockid)s HTTP/1.1
Host: example.com
"""
TPLS['get-block-response'] = """\
HTTP/1.1 200 OK
Content-Type: application/json
%(block)s
"""
TPLS['get-block-txid-request'] = """\
GET /api/v1/blocks?tx_id=%(txid)s HTTP/1.1
Host: example.com
"""
TPLS['get-block-txid-response'] = """\
HTTP/1.1 200 OK
Content-Type: application/json
%(block_list)s
"""
TPLS['get-vote-request'] = """\
GET /api/v1/votes?block_id=%(blockid)s HTTP/1.1
Host: example.com
"""
TPLS['get-vote-response'] = """\
HTTP/1.1 200 OK
Content-Type: application/json
[%(vote)s]
"""
def main():
""" Main function """
ctx = {}
def pretty_json(data):
return json.dumps(data, indent=2, sort_keys=True)
client = server.create_app().test_client()
host = 'example.com:9984'
# HTTP Index
res = client.get('/', environ_overrides={'HTTP_HOST': host})
res_data = json.loads(res.data.decode())
res_data['keyring'] = [
"6qHyZew94NMmUTYyHnkZsB8cxJYuRNEiEpXHe1ih9QX3",
"AdDuyrTyjrDt935YnFu4VBCVDhHtY2Y6rcy7x2TFeiRi"
]
res_data['public_key'] = 'NC8c8rYcAhyKVpx1PCV65CBmyq4YUbLysy3Rqrg8L8mz'
ctx['index'] = pretty_json(res_data)
# API index
res = client.get('/api/v1/', environ_overrides={'HTTP_HOST': host})
ctx['api_index'] = pretty_json(json.loads(res.data.decode()))
# tx create
privkey = 'CfdqtD7sS7FgkMoGPXw55MVGGFwQLAoHYTcBhZDtF99Z'
pubkey = '4K9sWUMFwTgaDGPfdynrbxWqWS6sWmKbZoTjxLtVUibD'
tx = Transaction.create([pubkey], [([pubkey], 1)])
asset = {'msg': 'Hello BigchainDB!'}
tx = Transaction.create([pubkey], [([pubkey], 1)], asset=asset, metadata={'sequence': 0})
tx = tx.sign([privkey])
tx_json = json.dumps(tx.to_dict(), indent=2, sort_keys=True)
ctx['tx'] = pretty_json(tx.to_dict())
ctx['public_keys'] = tx.outputs[0].public_keys[0]
ctx['txid'] = tx.id
# tx transfer
privkey_transfer = '3AeWpPdhEZzWLYfkfYHBfMFC2r1f8HEaGS9NtbbKssya'
pubkey_transfer = '3yfQPHeWAa1MxTX9Zf9176QqcpcnWcanVZZbaHb8B3h9'
cid = 0
input_ = Input(fulfillment=tx.outputs[cid].fulfillment,
fulfills=TransactionLink(txid=tx.id, output=cid),
owners_before=tx.outputs[cid].public_keys)
tx_transfer = Transaction.transfer([input_], [([pubkey_transfer], 1)], asset_id=tx.id, metadata={'sequence': 1})
tx_transfer = tx_transfer.sign([privkey])
ctx['tx_transfer'] = pretty_json(tx_transfer.to_dict())
ctx['public_keys_transfer'] = tx_transfer.outputs[0].public_keys[0]
ctx['tx_transfer_id'] = tx_transfer.id
# privkey_transfer_last = 'sG3jWDtdTXUidBJK53ucSTrosktG616U3tQHBk81eQe'
pubkey_transfer_last = '3Af3fhhjU6d9WecEM9Uw5hfom9kNEwE7YuDWdqAUssqm'
cid = 0
input_ = Input(fulfillment=tx_transfer.outputs[cid].fulfillment,
fulfills=TransactionLink(txid=tx_transfer.id, output=cid),
owners_before=tx_transfer.outputs[cid].public_keys)
tx_transfer_last = Transaction.transfer([input_], [([pubkey_transfer_last], 1)],
asset_id=tx.id, metadata={'sequence': 2})
tx_transfer_last = tx_transfer_last.sign([privkey_transfer])
ctx['tx_transfer_last'] = pretty_json(tx_transfer_last.to_dict())
ctx['tx_transfer_last_id'] = tx_transfer_last.id
ctx['public_keys_transfer_last'] = tx_transfer_last.outputs[0].public_keys[0]
# block
node_private = "5G2kE1zJAgTajkVSbPAQWo4c2izvtwqaNHYsaNpbbvxX"
node_public = "DngBurxfeNVKZWCEcDnLj1eMPAS7focUZTE5FndFGuHT"
signature = "53wxrEQDYk1dXzmvNSytbCfmNVnPqPkDQaTnAe8Jf43s6ssejPxezkCvUnGTnduNUmaLjhaan1iRLi3peu6s5DzA"
block = Block(transactions=[tx], node_pubkey=node_public, voters=[node_public], signature=signature)
ctx['block'] = pretty_json(block.to_dict())
ctx['blockid'] = block.id
block_transfer = Block(transactions=[tx_transfer], node_pubkey=node_public,
voters=[node_public], signature=signature)
ctx['block_transfer'] = pretty_json(block_transfer.to_dict())
# vote
DUMMY_SHA3 = '0123456789abcdef' * 4
b = Bigchain(public_key=node_public, private_key=node_private)
vote = b.vote(block.id, DUMMY_SHA3, True)
ctx['vote'] = pretty_json(vote)
# block status
block_list = [
block_transfer.id,
block.id
]
ctx['block_list'] = pretty_json(block_list)
base_path = os.path.join(os.path.dirname(__file__),
'source/drivers-clients/samples')
if not os.path.exists(base_path):
os.makedirs(base_path)
for name, tpl in TPLS.items():
path = os.path.join(base_path, name + '.http')
code = tpl % {'tx': tx_json, 'txid': tx.id}
code = tpl % ctx
with open(path, 'w') as handle:
handle.write(code)

View File

@ -189,7 +189,7 @@ def render_section(section_name, obj):
'type': property_type(prop),
}]
except Exception as exc:
raise ValueError("Error rendering property: %s" % name, exc)
raise ValueError('Error rendering property: %s' % name, exc)
return '\n\n'.join(out + [''])
@ -201,7 +201,7 @@ def property_description(prop):
return property_description(resolve_ref(prop['$ref']))
if 'anyOf' in prop:
return property_description(prop['anyOf'][0])
raise KeyError("description")
raise KeyError('description')
def property_type(prop):
@ -214,7 +214,7 @@ def property_type(prop):
return ' or '.join(property_type(p) for p in prop['anyOf'])
if '$ref' in prop:
return property_type(resolve_ref(prop['$ref']))
raise ValueError("Could not resolve property type")
raise ValueError('Could not resolve property type')
DEFINITION_BASE_PATH = '#/definitions/'

View File

@ -44,11 +44,6 @@ Port 161 is the default SNMP port (usually UDP, sometimes TCP). SNMP is used, fo
Port 443 is the default HTTPS port (TCP). You may need to open it up for outbound requests (and inbound responses) temporarily because some RethinkDB installation instructions use wget over HTTPS to get the RethinkDB GPG key. Package managers might also get some packages using HTTPS.
## Port 8125
If you set up a [cluster-monitoring server](../clusters-feds/monitoring.html), then StatsD will send UDP packets to Telegraf (on the monitoring server) via port 8125.
## Port 8080
Port 8080 is the default port used by RethinkDB for its administrative web (HTTP) interface (TCP). While you _can_, you shouldn't allow traffic from arbitrary external sources. You can still use the RethinkDB web interface by binding it to localhost and then accessing it via a SOCKS proxy or reverse proxy; see "Binding the web interface port" on [the RethinkDB page about securing your cluster](https://rethinkdb.com/docs/security/).
@ -76,8 +71,3 @@ Port 29015 is the default port for RethinkDB intracluster connections (TCP). It
## Other Ports
On Linux, you can use commands such as `netstat -tunlp` or `lsof -i` to get a sense of currently open/listening ports and connections, and the associated processes.
## Cluster-Monitoring Server
If you set up a [cluster-monitoring server](../clusters-feds/monitoring.html) (running Telegraf, InfluxDB & Grafana), Telegraf will listen on port 8125 for UDP packets from StatsD, and the Grafana web dashboard will use port 3000. (Those are the default ports.)

View File

@ -7,7 +7,7 @@ pip -V
If it says that `pip` isn't installed, or it says `pip` is associated with a Python version less than 3.4, then you must install a `pip` version associated with Python 3.4+. In the following instructions, we call it `pip3` but you may be able to use `pip` if that refers to the same thing. See [the `pip` installation instructions](https://pip.pypa.io/en/stable/installing/).
On Ubuntu 14.04, we found that this works:
On Ubuntu 16.04, we found that this works:
```text
sudo apt-get install python3-pip
```

View File

@ -2,13 +2,13 @@
BigchainDB Server has some OS-level dependencies that must be installed.
On Ubuntu 14.04 and 16.04, we found that the following was enough:
On Ubuntu 16.04, we found that the following was enough:
```text
sudo apt-get update
sudo apt-get install g++ python3-dev libffi-dev
```
On Fedora 23 and 24, we found that the following was enough:
On Fedora 23–25, we found that the following was enough:
```text
sudo dnf update
sudo dnf install gcc-c++ redhat-rpm-config python3-devel libffi-devel

View File

@ -1,5 +1,7 @@
# Installing BigchainDB on LXC containers using LXD
**Note: This page was contributed by an external contributor and is not actively maintained. We include it in case someone is interested.**
To install LXD, follow the instructions here: [LXD Install](https://linuxcontainers.org/lxd/getting-started-cli/)
(This assumes you are using Ubuntu 14.04 for the host and containers.)

View File

@ -23,9 +23,9 @@ If your BigchainDB node is running on an Amazon Linux instance (i.e. a Linux ins
That said, you should check _which_ NTP daemon is installed. Is it recent? Is it configured securely?
## Ubuntu's ntp Package
## The Ubuntu ntp Packages
The [Ubuntu 14.04 (Trusty Tahr) package `ntp`](https://launchpad.net/ubuntu/trusty/+source/ntp) is based on the reference implementation of an NTP daemon (i.e. `ntpd`).
The [Ubuntu `ntp` packages](https://launchpad.net/ubuntu/+source/ntp) are based on the reference implementation of NTP.
The following commands will uninstall the `ntp` and `ntpdate` packages, install the latest `ntp` package (which _might not be based on the latest ntpd code_), and start the NTP daemon (a local NTP server). (`ntpdate` is not reinstalled because it's [deprecated](https://askubuntu.com/questions/297560/ntpd-vs-ntpdate-pros-and-cons) and you shouldn't use it.)
```text

View File

@ -21,7 +21,7 @@ be stored in a file on your host machine at `~/bigchaindb_docker/.bigchaindb`:
```text
docker run --rm -v "$HOME/bigchaindb_docker:/data" -ti \
bigchaindb/bigchaindb -y configure
bigchaindb/bigchaindb -y configure rethinkdb
Generating keypair
Configuration written to /data/.bigchaindb
Ready to go!

View File

@ -2,12 +2,12 @@
If you didn't read the introduction to the [cloud deployment starter templates](index.html), please do that now. The main point is that they're not for deploying a production node; they can be used as a starting point.
This page explains how to use [Ansible](https://www.ansible.com/) to install, configure and run all the software needed to run a one-machine BigchainDB node on a server running Ubuntu 14.04.
This page explains how to use [Ansible](https://www.ansible.com/) to install, configure and run all the software needed to run a one-machine BigchainDB node on a server running Ubuntu 16.04.
## Install Ansible
The Ansible documentation has [installation instructions](https://docs.ansible.com/ansible/intro_installation.html). Note the control machine requirements: at the time of writing, Ansible required Python 2.6 or 2.7. (Support for Python 3 [is a goal of Ansible 2.2](https://github.com/ansible/ansible/issues/15976#issuecomment-221264089).)
The Ansible documentation has [installation instructions](https://docs.ansible.com/ansible/intro_installation.html). Note the control machine requirements: at the time of writing, Ansible required Python 2.6 or 2.7. ([Python 3 support is coming](https://docs.ansible.com/ansible/python_3_support.html): "Ansible 2.2 features a tech preview of Python 3 support." and the latest version, as of January 31, 2017, was 2.2.1.0. For now, it's probably best to use it with Python 2.)
For example, you could create a special Python 2.x virtualenv named `ansenv` and then install Ansible in it:
```text
@ -19,9 +19,9 @@ pip install ansible
## About Our Example Ansible Playbook
Our example Ansible playbook installs, configures and runs a basic BigchainDB node on an Ubuntu 14.04 machine. That playbook is in `.../bigchaindb/ntools/one-m/ansible/one-m-node.yml`.
Our example Ansible playbook installs, configures and runs a basic BigchainDB node on an Ubuntu 16.04 machine. That playbook is in `.../bigchaindb/ntools/one-m/ansible/one-m-node.yml`.
When you run the playbook (as per the instructions below), it ensures all the necessary software is installed, configured and running. It can be used to get a BigchainDB node set up on a bare Ubuntu 14.04 machine, but it can also be used to ensure that everything is okay on a running BigchainDB node. (If you run the playbook against a host where everything is okay, then it won't change anything on that host.)
When you run the playbook (as per the instructions below), it ensures all the necessary software is installed, configured and running. It can be used to get a BigchainDB node set up on a bare Ubuntu 16.04 machine, but it can also be used to ensure that everything is okay on a running BigchainDB node. (If you run the playbook against a host where everything is okay, then it won't change anything on that host.)
## Create an Ansible Inventory File
@ -39,7 +39,15 @@ echo "192.0.2.128" > hosts
but replace `192.0.2.128` with the IP address of the host.
## Run the Ansible Playbook
## Run the Ansible Playbook(s)
The latest Ubuntu 16.04 AMIs from Canonical don't include Python 2 (which is required by Ansible), so the first step is to run a small Ansible playbook to install Python 2 on the managed node:
```text
# cd to the directory .../bigchaindb/ntools/one-m/ansible
ansible-playbook -i hosts --private-key ~/.ssh/<key-name> install-python2.yml
```
where `<key-name>` should be replaced by the name of the SSH private key you created earlier (for SSHing to the host machine at your cloud hosting provider).
The next step is to run the Ansible playbook named `one-m-node.yml`:
```text
@ -47,14 +55,12 @@ The next step is to run the Ansible playbook named `one-m-node.yml`:
ansible-playbook -i hosts --private-key ~/.ssh/<key-name> one-m-node.yml
```
where `<key-name>` should be replaced by the name of the SSH private key you created earlier (for SSHing to the host machine at your cloud hosting provider).
What did you just do? Running that playbook ensures all the software necessary for a one-machine BigchainDB node is installed, configured, and running properly. You can run that playbook on a regular schedule to ensure that the system stays properly configured. If something is okay, it does nothing; it only takes action when something is not as desired.
## Some Notes on the One-Machine Node You Just Got Running
* It ensures that the installed version of RethinkDB is `2.3.4~0trusty`. You can change that by changing the installation task.
* It ensures that the installed version of RethinkDB is the latest. You can change that by changing the installation task.
* It uses a very basic RethinkDB configuration file based on `bigchaindb/ntools/one-m/ansible/roles/rethinkdb/templates/rethinkdb.conf.j2`.
* If you edit the RethinkDB configuration file, then running the Ansible playbook will **not** restart RethinkDB for you. You must do that manually. (You can stop RethinkDB using `sudo /etc/init.d/rethinkdb stop`; run the playbook to get RethinkDB started again. This assumes you're using init.d, which is what the Ansible playbook assumes. If you want to use systemd, you'll have to edit the playbook accordingly, and stop RethinkDB using `sudo systemctl stop rethinkdb@<name_instance>`.)
* It generates and uses a default BigchainDB configuration file, which it stores in `~/.bigchaindb` (the default location).

View File

@ -2,7 +2,7 @@
If you didn't read the introduction to the [cloud deployment starter templates](index.html), please do that now. The main point is that they're not for deploying a production node; they can be used as a starting point.
This page explains a way to use [Terraform](https://www.terraform.io/) to provision an Ubuntu machine (i.e. an EC2 instance with Ubuntu 14.04) and other resources on [AWS](https://aws.amazon.com/). That machine can then be used to host a one-machine BigchainDB node.
This page explains a way to use [Terraform](https://www.terraform.io/) to provision an Ubuntu machine (i.e. an EC2 instance with Ubuntu 16.04) and other resources on [AWS](https://aws.amazon.com/). That machine can then be used to host a one-machine BigchainDB node.
## Install Terraform
@ -65,7 +65,7 @@ terraform apply
Terraform will report its progress as it provisions all the resources. Once it's done, you can go to the Amazon EC2 web console and see the instance, its security group, its elastic IP, and its attached storage volumes (one for the root directory and one for RethinkDB storage).
At this point, there is no software installed on the instance except for Ubuntu 14.04 and whatever else came with the Amazon Machine Image (AMI) specified in the Terraform configuration (files).
At this point, there is no software installed on the instance except for Ubuntu 16.04 and whatever else came with the Amazon Machine Image (AMI) specified in the Terraform configuration (files).
The next step is to install, configure and run all the necessary software for a BigchainDB node. You could use [our example Ansible playbook](template-ansible.html) to do that.

View File

@ -14,10 +14,17 @@ We use some Bash and Python scripts to launch several instances (virtual servers
## Python Setup
The instructions that follow have been tested on Ubuntu 14.04, but may also work on similar distros or operating systems.
The instructions that follow have been tested on Ubuntu 16.04. Similar instructions should work on similar Linux distros.
**Note: Our Python scripts for deploying to AWS use Python 2 because Fabric doesn't work with Python 3.**
You must install the Python package named `fabric`, but it depends on the `cryptography` package, and that depends on some OS-level packages. On Ubuntu 16.04, you can install those OS-level packages using:
```text
sudo apt-get install build-essential libssl-dev libffi-dev python-dev
```
For other operating systems, see [the installation instructions for the `cryptography` package](https://cryptography.io/en/latest/installation/).
Maybe create a Python 2 virtual environment and activate it. Then install the following Python packages (in that virtual environment):
```text
pip install fabric fabtools requests boto3 awscli
@ -57,50 +64,6 @@ For a super lax, somewhat risky, anything-can-enter security group, add these ru
If you want to set up a more secure security group, see the [Notes for Firewall Setup](../appendices/firewall-notes.html).
## Deploy a BigchainDB Monitor
This step is optional.
One way to monitor a BigchainDB cluster is to use the monitoring setup described in the [Monitoring](monitoring.html) section of this documentation. If you want to do that, then you may want to deploy the monitoring server first, so you can tell your BigchainDB nodes where to send their monitoring data.
You can deploy a monitoring server on AWS. To do that, go to the AWS EC2 Console and launch an instance:
1. Choose an AMI: select Ubuntu Server 14.04 LTS.
2. Choose an Instance Type: a t2.micro will suffice.
3. Configure Instance Details: you can accept the defaults, but feel free to change them.
4. Add Storage: A "Root" volume type should already be included. You _could_ store monitoring data there (e.g. in a folder named `/influxdb-data`) but we will attach another volume and store the monitoring data there instead. Select "Add New Volume" and an EBS volume type.
5. Tag Instance: give your instance a memorable name.
6. Configure Security Group: choose your bigchaindb security group.
7. Review and launch your instance.
When it asks, choose an existing key pair: the one you created earlier (named `bigchaindb`).
Give your instance some time to launch and become able to accept SSH connections. You can see its current status in the AWS EC2 Console (in the "Instances" section). SSH into your instance using something like:
```text
cd deploy-cluster-aws
ssh -i pem/bigchaindb.pem ubuntu@ec2-52-58-157-229.eu-central-1.compute.amazonaws.com
```
where `ec2-52-58-157-229.eu-central-1.compute.amazonaws.com` should be replaced by your new instance's EC2 hostname. (To get that, go to the AWS EC2 Console, select Instances, click on your newly-launched instance, and copy its "Public DNS" name.)
Next, create a file system on the attached volume, make a directory named `/influxdb-data`, and set the attached volume's mount point to be `/influxdb-data`. For detailed instructions on how to do that, see the AWS documentation for [Making an Amazon EBS Volume Available for Use](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html).
Then install Docker and Docker Compose:
```text
# in a Python 2.5-2.7 virtual environment where fabric, boto3, etc. are installed
fab --fabfile=fabfile-monitor.py --hosts=<EC2 hostname> install_docker
```
After Docker is installed, we can run the monitor with:
```text
fab --fabfile=fabfile-monitor.py --hosts=<EC2 hostname> run_monitor
```
For more information about monitoring (e.g. how to view the Grafana dashboard in your web browser), see the [Monitoring](monitoring.html) section of this documentation.
To configure a BigchainDB node to send monitoring data to the monitoring server, change the statsd host in the configuration of the BigchainDB node. The section on [Configuring a BigchainDB Node](../server-reference/configuration.html) explains how you can do that. (For example, you can change the statsd host in `$HOME/.bigchaindb`.)
## Deploy a BigchainDB Cluster
### Step 1

View File

@ -7,5 +7,4 @@ Clusters & Federations
set-up-a-federation
backup
aws-testing-cluster
monitoring

View File

@ -1,40 +0,0 @@
# Cluster Monitoring
BigchainDB uses [StatsD](https://github.com/etsy/statsd) for cluster monitoring. We require some additional infrastructure to take full advantage of its functionality:
* an agent to listen for metrics: [Telegraf](https://github.com/influxdata/telegraf),
* a time-series database: [InfluxDB](https://www.influxdata.com/time-series-platform/influxdb/), and
* a frontend to display analytics: [Grafana](http://grafana.org/).
We put each of those inside its own Docker container. The whole system is illustrated below.
![BigchainDB monitoring system diagram: Application metrics flow from servers running BigchainDB to Telegraf to InfluxDB to Grafana](../_static/monitoring_system_diagram.png)
For ease of use, we've created a Docker [_Compose file_](https://docs.docker.com/compose/compose-file/) (named `docker-compose-monitor.yml`) to define the monitoring system setup. To use it, just go to the top `bigchaindb` directory and run:
```text
$ docker-compose -f docker-compose-monitor.yml build
$ docker-compose -f docker-compose-monitor.yml up
```
It is also possible to mount a host directory as a data volume for InfluxDB
by setting the `INFLUXDB_DATA` environment variable:
```text
$ INFLUXDB_DATA=/data docker-compose -f docker-compose-monitor.yml up
```
You can view the Grafana dashboard in your web browser at:
[http://localhost:3000/dashboard/script/bigchaindb_dashboard.js](http://localhost:3000/dashboard/script/bigchaindb_dashboard.js)
(You may want to replace `localhost` with another hostname in that URL, e.g. the hostname of a remote monitoring server.)
The login and password are `admin` by default. If BigchainDB is running and processing transactions, you should see analytics—if not, [start BigchainDB](../dev-and-test/setup-run-node.html#run-bigchaindb) and load some test transactions:
```text
$ bigchaindb load
```
then refresh the page after a few seconds.
If you're not interested in monitoring, don't worry: BigchainDB will function just fine without any monitoring setup.
Feel free to modify the [custom Grafana dashboard](https://github.com/rhsimplex/grafana-bigchaindb-docker/blob/master/bigchaindb_dashboard.js) to your liking!

View File

@ -1,11 +1,10 @@
# The Digital Asset Model
The asset ID is the same as the ID of the CREATE transaction that defined the asset.
To avoid redundant data in transactions, the digital asset model is different for `CREATE` and `TRANSFER` transactions.
In the case of a CREATE transaction, the transaction ID is duplicated into the asset object for clarity and consistency in the database. The CREATE transaction also contains a user-definable payload to describe the asset:
A digital asset's properties are defined in a `CREATE` transaction with the following model:
```json
{
"id": "<same as transaction ID (sha3-256 hash)>",
"data": "<json document>"
}
```
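
For illustration only (the variable names here are hypothetical), such an asset is just a JSON-serializable object, so in Python it could be built like this:

```python
# Hypothetical sketch of a CREATE transaction's asset payload.
# create_tx_id is assumed to hold the CREATE transaction's ID (a sha3-256 hash).
asset = {
    'id': create_tx_id,                   # same as the transaction ID
    'data': {'serial_number': 'ABC123'},  # any user-defined JSON document
}
```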

View File

@ -7,25 +7,27 @@ The BigchainDB core dev team develops BigchainDB on recent Ubuntu and Fedora dis
## Option A: Using a Local Dev Machine
First, read through the BigchainDB [CONTRIBUTING.md file](https://github.com/bigchaindb/bigchaindb/blob/master/CONTRIBUTING.md). It outlines the steps to setup a machine for developing and testing BigchainDB.
Read through the BigchainDB [CONTRIBUTING.md file](https://github.com/bigchaindb/bigchaindb/blob/master/CONTRIBUTING.md). It outlines the steps to set up a machine for developing and testing BigchainDB.
Next, create a default BigchainDB config file (in `$HOME/.bigchaindb`):
### With RethinkDB
Create a default BigchainDB config file (in `$HOME/.bigchaindb`):
```text
bigchaindb -y configure
$ bigchaindb -y configure rethinkdb
```
Note: [The BigchainDB CLI](../server-reference/bigchaindb-cli.html) and the [BigchainDB Configuration Settings](../server-reference/configuration.html) are documented elsewhere. (Click the links.)
Start RethinkDB using:
```text
rethinkdb
$ rethinkdb
```
You can verify that RethinkDB is running by opening the RethinkDB web interface in your web browser. It should be at [http://localhost:8080/](http://localhost:8080/).
To run BigchainDB Server, do:
```text
bigchaindb start
$ bigchaindb start
```
You can [run all the unit tests](running-unit-tests.html) to test your installation.
@ -33,13 +35,37 @@ You can [run all the unit tests](running-unit-tests.html) to test your installat
The BigchainDB [CONTRIBUTING.md file](https://github.com/bigchaindb/bigchaindb/blob/master/CONTRIBUTING.md) has more details about how to contribute.
## Option B: Using a Dev Machine on Cloud9
### With MongoDB
Ian Worrall of [Encrypted Labs](http://www.encryptedlabs.com/) wrote a document (PDF) explaining how to set up a BigchainDB (Server) dev machine on Cloud9:
Create a default BigchainDB config file (in `$HOME/.bigchaindb`):
```text
$ bigchaindb -y configure mongodb
```
[Download that document from GitHub](https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/docs/server/source/_static/cloud9.pdf)
Note: [The BigchainDB CLI](../server-reference/bigchaindb-cli.html) and the [BigchainDB Configuration Settings](../server-reference/configuration.html) are documented elsewhere. (Click the links.)
## Option C: Using a Local Dev Machine and Docker
Start MongoDB __3.4+__ using:
```text
$ mongod --replSet=bigchain-rs
```
You can verify that MongoDB is running correctly by checking the output of the
previous command for the line:
```text
waiting for connections on port 27017
```
To run BigchainDB Server, do:
```text
$ bigchaindb start
```
You can [run all the unit tests](running-unit-tests.html) to test your installation.
The BigchainDB [CONTRIBUTING.md file](https://github.com/bigchaindb/bigchaindb/blob/master/CONTRIBUTING.md) has more details about how to contribute.
## Option B: Using a Local Dev Machine and Docker
You need to have recent versions of [Docker Engine](https://docs.docker.com/engine/installation/)
and (Docker) [Compose](https://docs.docker.com/compose/install/).
@ -50,6 +76,8 @@ Build the images:
docker-compose build
```
### Docker with RethinkDB
**Note**: If you're upgrading BigchainDB and have previously already built the images, you may need
to rebuild them after the upgrade to install any new dependencies.
@ -62,7 +90,7 @@ docker-compose up -d rdb
The RethinkDB web interface should be accessible at <http://localhost:58080/>.
Depending on which platform, and/or how you are running docker, you may need
to change `localhost` for the `ip` of the machine that is running docker. As a
dummy example, if the `ip` of that machine was `0.0.0.0`, you would accees the
dummy example, if the `ip` of that machine was `0.0.0.0`, you would access the
web interface at: <http://0.0.0.0:58080/>.
Start a BigchainDB node:
@ -83,6 +111,40 @@ If you wish to run the tests:
docker-compose run --rm bdb py.test -v -n auto
```
### Docker with MongoDB
Start MongoDB:
```bash
docker-compose up -d mdb
```
MongoDB should now be up and running. You can check the port binding for the
MongoDB driver port using:
```bash
$ docker-compose port mdb 27017
```
Start a BigchainDB node:
```bash
docker-compose up -d bdb-mdb
```
You can monitor the logs:
```bash
docker-compose logs -f bdb-mdb
```
If you wish to run the tests:
```bash
docker-compose run --rm bdb-mdb py.test -v --database-backend=mongodb
```
### Accessing the HTTP API
A quick check to make sure that the BigchainDB server API is operational:
```bash
@ -123,3 +185,9 @@ root:
```bash
curl 0.0.0.0:32772
```
## Option C: Using a Dev Machine on Cloud9
Ian Worrall of [Encrypted Labs](http://www.encryptedlabs.com/) wrote a document (PDF) explaining how to set up a BigchainDB (Server) dev machine on Cloud9:
[Download that document from GitHub](https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/docs/server/source/_static/cloud9.pdf)

View File

@ -1,10 +0,0 @@
Example Apps
============
.. warning::
There are some example BigchainDB apps (i.e. apps which use BigchainDB) in the GitHub repository named `bigchaindb-examples <https://github.com/bigchaindb/bigchaindb-examples>`_. They were created before there was much of an HTTP API, so instead of communicating with a BigchainDB node via the HTTP API, they communicate directly with the node using the BigchainDB Python server API and the RethinkDB Python Driver. That's not how a real production app would work. The HTTP API is getting better, and we recommend using it to communicate with BigchainDB nodes.
Moreover, because of changes to the BigchainDB Server code, some of the examples in the bigchaindb-examples repo might not work anymore, or they might not work as expected.
In the future, we hope to create a set of examples using the HTTP API (or wrappers of it, such as the Python Driver API).

View File

@ -1,58 +1,136 @@
The HTTP Client-Server API
==========================
.. note::
The HTTP client-server API is currently quite rudimentary. For example,
there is no ability to do complex queries using the HTTP API. We plan to add
more querying capabilities in the future.
This page assumes you already know an API Root URL
for a BigchainDB node or reverse proxy.
It should be something like ``http://apihosting4u.net:9984``
or ``http://12.34.56.78:9984``.
It should be something like ``https://example.com:9984``
or ``https://12.34.56.78:9984``.
If you set up a BigchainDB node or reverse proxy yourself,
and you're not sure what the API Root URL is,
then see the last section of this page for help.
API Root URL
------------
If you send an HTTP GET request to the API Root URL
e.g. ``http://localhost:9984``
or ``http://apihosting4u.net:9984``
(with no ``/api/v1/`` on the end),
then you should get an HTTP response
with something like the following in the body:
.. code-block:: json
{
"keyring": [
"6qHyZew94NMmUTYyHnkZsB8cxJYuRNEiEpXHe1ih9QX3",
"AdDuyrTyjrDt935YnFu4VBCVDhHtY2Y6rcy7x2TFeiRi"
],
"public_key": "AiygKSRhZWTxxYT4AfgKoTG4TZAoPsWoEt6C6bLq4jJR",
"software": "BigchainDB",
"version": "0.6.0"
}
POST /transactions/
BigchainDB Root URL
-------------------
.. http:post:: /transactions/
If you send an HTTP GET request to the BigchainDB Root URL
e.g. ``http://localhost:9984``
or ``https://example.com:9984``
(with no ``/api/v1/`` on the end),
then you should get an HTTP response
with something like the following in the body:
.. literalinclude:: samples/index-response.http
:language: http
API Root Endpoint
-------------------
If you send an HTTP GET request to the API Root Endpoint
e.g. ``http://localhost:9984/api/v1/``
or ``https://example.com:9984/api/v1/``,
then you should get an HTTP response
that allows you to discover the BigchainDB API endpoints:
.. literalinclude:: samples/api-index-response.http
:language: http
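
As a quick sketch (assuming a node listening on ``localhost:9984``), both
roots can be fetched with the Python ``requests`` library:

.. code-block:: python

    # Sketch: discover a node's API, assuming the default port 9984.
    import requests

    root = requests.get('http://localhost:9984').json()
    api_root = requests.get('http://localhost:9984/api/v1/').json()
    print(root['software'], root['version'])  # keys shown in the root response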
Transactions
-------------------
.. http:get:: /api/v1/transactions/{tx_id}
Get the transaction with the ID ``tx_id``.
This endpoint returns the transaction only if it is in a ``VALID`` block on ``bigchain``.
:param tx_id: transaction ID
:type tx_id: hex string
**Example request**:
.. literalinclude:: samples/get-tx-id-request.http
:language: http
**Example response**:
.. literalinclude:: samples/get-tx-id-response.http
:language: http
:resheader Content-Type: ``application/json``
:statuscode 200: A transaction with that ID was found.
:statuscode 404: A transaction with that ID was not found.
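
For example, here's a non-normative sketch using the Python ``requests``
library; the node URL and the transaction ID are assumptions for illustration:

.. code-block:: python

    # Sketch: fetch a transaction by ID (404 means it isn't in a VALID block).
    import requests

    NODE = 'http://localhost:9984'  # assumed node URL
    tx_id = '2d431073e1477f3073a4693ac7ff9be5634751de1b8abaa1f4e19548ef0b4b0e'

    resp = requests.get('{}/api/v1/transactions/{}'.format(NODE, tx_id))
    tx = resp.json() if resp.status_code == 200 else None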
.. http:get:: /api/v1/transactions
The unfiltered ``/api/v1/transactions`` endpoint without any query parameters
returns a status code `400`. For valid filters, see the sections below.
There are, however, filtered requests that may be of use, provided the endpoint is queried correctly. For example, you can retrieve:
* `Transactions related to a specific asset <#get--transactions?asset_id=asset_id&operation=CREATE|TRANSFER>`_
In this section, we've listed those particular requests, as they will likely be very handy when implementing your application on top of BigchainDB.
.. note::
Looking up transactions with a specific ``metadata`` field is currently not supported; however, providing a way to query based on ``metadata`` is on our roadmap.
A generalization of those parameters follows:
:query string asset_id: The ID of the asset.
:query string operation: (Optional) One of the two supported operations of a transaction: ``CREATE``, ``TRANSFER``.
.. http:get:: /api/v1/transactions?asset_id={asset_id}&operation={CREATE|TRANSFER}
Get a list of transactions that use an asset with the ID ``asset_id``.
Every ``TRANSFER`` transaction that originates from a ``CREATE`` transaction
with ``asset_id`` will be included. This allows users to query the entire history or
provenance of an asset.
This endpoint returns transactions only if they are decided ``VALID`` by the server.
:query string operation: (Optional) One of the two supported operations of a transaction: ``CREATE``, ``TRANSFER``.
:query string asset_id: asset ID.
**Example request**:
.. literalinclude:: samples/get-tx-by-asset-request.http
:language: http
**Example response**:
.. literalinclude:: samples/get-tx-by-asset-response.http
:language: http
:resheader Content-Type: ``application/json``
:statuscode 200: A list of transactions containing an asset with ID ``asset_id`` was found and returned.
:statuscode 400: The request wasn't understood by the server, e.g. the ``asset_id`` querystring was not included in the request.
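
A rough sketch of querying an asset's provenance from Python follows;
``NODE`` and ``asset_id`` are placeholders:

.. code-block:: python

    # Sketch: fetch all VALID transactions that use a given asset.
    import requests

    NODE = 'http://localhost:9984'   # assumed node URL
    asset_id = '<asset_id>'          # placeholder: ID of the asset's CREATE tx

    history = requests.get(NODE + '/api/v1/transactions',
                           params={'asset_id': asset_id}).json()
    transfers = requests.get(NODE + '/api/v1/transactions',
                             params={'asset_id': asset_id,
                                     'operation': 'TRANSFER'}).json()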
.. http:post:: /api/v1/transactions
Push a new transaction.
Note: The posted transaction should be a valid and signed :doc:`transaction <../data-models/transaction-model>`.
The steps to build a valid transaction are beyond the scope of this page.
One would normally use a driver such as the `BigchainDB Python Driver
<https://docs.bigchaindb.com/projects/py-driver/en/latest/index.html>`_ to
build a valid transaction. The exact contents of a valid transaction depend
on the associated public/private keypairs.
.. note::
The posted `transaction
<https://docs.bigchaindb.com/projects/server/en/latest/data-models/transaction-model.html>`_
should be structurally valid and not spending an already spent output.
The steps to build a valid transaction are beyond the scope of this page.
One would normally use a driver such as the `BigchainDB Python Driver
<https://docs.bigchaindb.com/projects/py-driver/en/latest/index.html>`_
to build a valid transaction.
**Example request**:
@ -64,110 +142,255 @@ POST /transactions/
.. literalinclude:: samples/post-tx-response.http
:language: http
:statuscode 201: A new transaction was created.
:statuscode 400: The transaction was invalid and not created.
:resheader Content-Type: ``application/json``
:statuscode 202: The pushed transaction was accepted in the ``BACKLOG``, but the processing has not been completed.
:statuscode 400: The transaction was malformed and not accepted in the ``BACKLOG``.
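
As a hedged sketch, pushing a transaction from Python might look like the
following; ``signed_tx`` stands for a transaction already built and signed
(e.g. with the Python driver), which is out of scope here:

.. code-block:: python

    # Sketch: push a signed transaction; 202 means it entered the BACKLOG.
    import requests

    NODE = 'http://localhost:9984'   # assumed node URL
    signed_tx = {}                   # placeholder: a valid, signed transaction dict

    resp = requests.post(NODE + '/api/v1/transactions', json=signed_tx)
    if resp.status_code == 202:
        print('accepted into the BACKLOG, not yet processed')
    elif resp.status_code == 400:
        print('malformed transaction, rejected')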
GET /transactions/{tx_id}/status
--------------------------------
Transaction Outputs
-------------------
.. http:get:: /transactions/{tx_id}/status
Get the status of the transaction with the ID ``tx_id``, if a transaction
with that ``tx_id`` exists.
The possible status values are ``backlog``, ``undecided``, ``valid`` or
``invalid``.
:param tx_id: transaction ID
:type tx_id: hex string
**Example request**:
.. literalinclude:: samples/get-tx-status-request.http
:language: http
**Example response**:
.. literalinclude:: samples/get-tx-status-response.http
:language: http
:statuscode 200: A transaction with that ID was found and the status is returned.
:statuscode 404: A transaction with that ID was not found.
The ``/api/v1/outputs`` endpoint returns transaction outputs filtered by a
given public key, and optionally filtered to only include outputs that have
not already been spent.
GET /transactions/{tx_id}
-------------------------
.. http:get:: /api/v1/outputs?public_key={public_key}
.. http:get:: /transactions/{tx_id}
Get transaction outputs by public key. The `public_key` parameter must be
a base58 encoded ed25519 public key associated with transaction output
ownership.
Get the transaction with the ID ``tx_id``.
Returns a list of links to transaction outputs.
This endpoint returns a transaction only from a ``VALID`` or ``UNDECIDED`` block on ``bigchain``, if one exists.
:param public_key: Base58 encoded public key associated with output ownership. This parameter is mandatory and without it the endpoint will return a ``400`` response code.
:param unspent: Boolean value ("true" or "false") indicating if the result set should be limited to outputs that are available to spend. Defaults to "false".
:param tx_id: transaction ID
:type tx_id: hex string
**Example request**:
.. literalinclude:: samples/get-tx-request.http
:language: http
**Example response**:
.. literalinclude:: samples/get-tx-response.http
:language: http
:statuscode 200: A transaction with that ID was found.
:statuscode 404: A transaction with that ID was not found.
GET /unspents/
-------------------------
.. note::
This endpoint (unspents) is not yet implemented. We published it here for preview and comment.
.. http:get:: /unspents?owner_after={owner_after}
Get a list of links to transactions' outputs that have not been used in
a previous transaction and could hence be called unspent outputs
(or simply: unspents).
This endpoint will return an ``HTTP 400 Bad Request`` if the querystring ``owner_after`` is not defined in the request.
Note that if unspents for a certain ``public_key`` have not been found by
the server, this will result in the server returning a 200 OK HTTP status
code and an empty list in the response's body.
:param owner_after: A public key, able to validly spend an output of a transaction, assuming the user also has the corresponding private key.
:type owner_after: base58 encoded string
**Example request**:
.. sourcecode:: http
GET /unspents?owner_after=1AAAbbb...ccc HTTP/1.1
GET /api/v1/outputs?public_key=1AAAbbb...ccc HTTP/1.1
Host: example.com
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json
[
"../transactions/2d431073e1477f3073a4693ac7ff9be5634751de1b8abaa1f4e19548ef0b4b0e/outputs/0",
"../transactions/2d431073e1477f3073a4693ac7ff9be5634751de1b8abaa1f4e19548ef0b4b0e/outputs/1"
]
:statuscode 200: A list of outputs were found and returned in the body of the response.
:statuscode 400: The request wasn't understood by the server, e.g. the ``public_key`` querystring was not included in the request.
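
To sketch this in Python (``public_key`` is a placeholder for a
base58-encoded key):

.. code-block:: python

    # Sketch: list outputs owned by a public key, restricted to unspent ones.
    import requests

    NODE = 'http://localhost:9984'        # assumed node URL
    public_key = '<base58 public key>'    # placeholder

    links = requests.get(NODE + '/api/v1/outputs',
                         params={'public_key': public_key,
                                 'unspent': 'true'}).json()
    # e.g. ['../transactions/<tx_id>/outputs/0', ...]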
Statuses
--------------------------------
.. http:get:: /api/v1/statuses
Get the status of an asynchronously written transaction or block by its ID.
A link to the resource is also provided in the returned payload under
``_links``.
:query string tx_id: transaction ID
:query string block_id: block ID
.. note::
Exactly one of the ``tx_id`` or ``block_id`` query parameters must be
used together with this endpoint (see below for getting `transaction
statuses <#get--statuses?tx_id=tx_id>`_ and `block statuses
<#get--statuses?block_id=block_id>`_).
.. http:get:: /api/v1/statuses?tx_id={tx_id}
Get the status of a transaction.
The possible status values are ``undecided``, ``valid`` or ``backlog``.
If a transaction in neither of those states is found, a ``404 Not Found``
HTTP status code is returned. `We're currently looking into ways to unambiguously let the user know about the status of a transaction that was included in an invalid block. <https://github.com/bigchaindb/bigchaindb/issues/1039>`_
**Example request**:
.. literalinclude:: samples/get-statuses-tx-request.http
:language: http
**Example response**:
.. literalinclude:: samples/get-statuses-tx-valid-response.http
:language: http
:resheader Content-Type: ``application/json``
:resheader Location: Once the transaction has been persisted, this header will link to the actual resource.
:statuscode 200: A transaction with that ID was found.
:statuscode 404: A transaction with that ID was not found.
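
Since writes are asynchronous, clients often poll this endpoint until a
transaction settles. A sketch, assuming the response body carries a
``status`` field as the possible values above suggest:

.. code-block:: python

    # Sketch: poll until a transaction reaches the 'valid' status, or give up.
    import time

    import requests

    def wait_until_valid(node, tx_id, tries=30, delay=1):
        for _ in range(tries):
            resp = requests.get(node + '/api/v1/statuses',
                                params={'tx_id': tx_id})
            if resp.status_code == 200 and resp.json().get('status') == 'valid':
                return True
            time.sleep(delay)
        return False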
.. http:get:: /api/v1/statuses?block_id={block_id}
Get the status of a block.
The possible status values are ``undecided``, ``valid`` or ``invalid``.
**Example request**:
.. literalinclude:: samples/get-statuses-block-request.http
:language: http
**Example response**:
.. literalinclude:: samples/get-statuses-block-invalid-response.http
:language: http
**Example response**:
.. literalinclude:: samples/get-statuses-block-valid-response.http
:language: http
:resheader Content-Type: ``application/json``
:resheader Location: Once the block has been persisted, this header will link to the actual resource.
:statuscode 200: A block with that ID was found.
:statuscode 404: A block with that ID was not found.
Advanced Usage
--------------------------------
The following endpoints are more advanced and meant for debugging and transparency purposes.
More precisely, the `blocks endpoint <#blocks>`_ allows you to retrieve a block by ``block_id``, as well as the list of blocks that
a certain transaction with ``tx_id`` occurred in (a transaction can occur in multiple ``invalid`` blocks until it
either gets rejected or validated by the system). This endpoint gives you the ability to drill down on the lifecycle of a
transaction.
The `votes endpoint <#votes>`_ contains all the voting information for a specific block. So after retrieving the
``block_id`` for a given ``tx_id``, one can then inspect the votes cast on that block.
Blocks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. http:get:: /api/v1/blocks/{block_id}
Get the block with the ID ``block_id``. Any block, whether ``VALID``, ``UNDECIDED`` or ``INVALID``, will be
returned. To check a block's status independently, use the `Statuses endpoint <#status>`_.
To check the votes on a block, have a look at the `votes endpoint <#votes>`_.
:param block_id: block ID
:type block_id: hex string
**Example request**:
.. literalinclude:: samples/get-block-request.http
:language: http
**Example response**:
.. literalinclude:: samples/get-block-response.http
:language: http
:resheader Content-Type: ``application/json``
:statuscode 200: A block with that ID was found.
:statuscode 400: The request wasn't understood by the server, e.g. just requesting ``/blocks`` without the ``block_id``.
:statuscode 404: A block with that ID was not found.
.. http:get:: /api/v1/blocks
The unfiltered ``/blocks`` endpoint without any query parameters returns a `400` status code.
The list endpoint should be filtered with a ``tx_id`` query parameter,
see the ``/blocks?tx_id={tx_id}&status={UNDECIDED|VALID|INVALID}``
`endpoint <#get--blocks?tx_id=tx_id&status=UNDECIDED|VALID|INVALID>`_.
**Example request**:
.. sourcecode:: http
GET /api/v1/blocks HTTP/1.1
Host: example.com
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json
HTTP/1.1 400 Bad Request
[
"../transactions/2d431073e1477f3073a4693ac7ff9be5634751de1b8abaa1f4e19548ef0b4b0e/outputs/0",
"../transactions/2d431073e1477f3073a4693ac7ff9be5634751de1b8abaa1f4e19548ef0b4b0e/outputs/1"
]
:statuscode 400: The request wasn't understood by the server, e.g. just requesting ``/blocks`` without the ``block_id``.
:statuscode 200: A list of outputs were found and returned in the body of the response.
:statuscode 400: The request wasn't understood by the server, e.g. the ``owner_after`` querystring was not included in the request.
.. http:get:: /api/v1/blocks?tx_id={tx_id}&status={UNDECIDED|VALID|INVALID}
Retrieve a list of ``block_id`` with their corresponding status that contain a transaction with the ID ``tx_id``.
Any block, whether ``UNDECIDED``, ``VALID`` or ``INVALID``, will be
returned if no status filter is provided.
.. note::
In case no block was found, an empty list and an HTTP status code
``200 OK`` are returned, as the request was still successful.
:query string tx_id: transaction ID *(required)*
:query string status: Filter blocks by their status. One of ``VALID``, ``UNDECIDED`` or ``INVALID``.
**Example request**:
.. literalinclude:: samples/get-block-txid-request.http
:language: http
**Example response**:
.. literalinclude:: samples/get-block-txid-response.http
:language: http
:resheader Content-Type: ``application/json``
:statuscode 200: A list of blocks containing a transaction with ID ``tx_id`` was found and returned.
:statuscode 400: The request wasn't understood by the server, e.g. just requesting ``/blocks``, without defining ``tx_id``.
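
In Python, drilling down on a transaction's lifecycle might look like this
sketch (placeholders as before):

.. code-block:: python

    # Sketch: find which blocks contain a transaction, and with which status.
    import requests

    NODE = 'http://localhost:9984'   # assumed node URL
    tx_id = '<tx_id>'                # placeholder

    blocks = requests.get(NODE + '/api/v1/blocks',
                          params={'tx_id': tx_id}).json()
    # An empty list (with 200 OK) means no block contains tx_id.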
Votes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. http:get:: /api/v1/votes?block_id={block_id}
Retrieve a list of votes for a certain block with ID ``block_id``.
To check for the validity of a vote, a user of this endpoint needs to
perform the `following steps: <https://github.com/bigchaindb/bigchaindb/blob/8ebd93ed3273e983f5770b1617292aadf9f1462b/bigchaindb/util.py#L119>`_
1. Check if the vote's ``node_pubkey`` is allowed to vote.
2. Verify the vote's signature against the vote's body (``vote.vote``) and ``node_pubkey``.
:query string block_id: The block ID to filter the votes.
**Example request**:
.. literalinclude:: samples/get-vote-request.http
:language: http
**Example response**:
.. literalinclude:: samples/get-vote-response.http
:language: http
:resheader Content-Type: ``application/json``
:statuscode 200: A list of votes voting for a block with ID ``block_id`` was found and returned.
:statuscode 400: The request wasn't understood by the server, e.g. just requesting ``/votes``, without defining ``block_id``.
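
A sketch of fetching and tallying votes in Python; the ``is_block_valid``
field inside ``vote.vote`` is assumed from the vote model, and the signature
verification of step 2 above is omitted:

.. code-block:: python

    # Sketch: tally the votes cast on a block (signature checks omitted).
    import requests

    NODE = 'http://localhost:9984'   # assumed node URL
    block_id = '<block_id>'          # placeholder

    votes = requests.get(NODE + '/api/v1/votes',
                         params={'block_id': block_id}).json()
    valid = sum(1 for v in votes if v['vote']['is_block_valid'])
    print('{} of {} votes say the block is valid'.format(valid, len(votes)))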
Determining the API Root URL
@ -190,18 +413,18 @@ the HTTP API publicly accessible.
If the API endpoint is publicly accessible,
then the public API Root URL is determined as follows:
- The public IP address (like 12.34.56.78)
is the public IP address of the machine exposing
the HTTP API to the public internet (e.g. either the machine hosting
Gunicorn or the machine running the reverse proxy such as Nginx).
- The public IP address (like 12.34.56.78)
is the public IP address of the machine exposing
the HTTP API to the public internet (e.g. either the machine hosting
Gunicorn or the machine running the reverse proxy such as Nginx).
It's determined by AWS, Azure, Rackspace, or whoever is hosting the machine.
- The DNS hostname (like apihosting4u.net) is determined by DNS records,
such as an "A Record" associating apihosting4u.net with 12.34.56.78
- The DNS hostname (like example.com) is determined by DNS records,
such as an "A Record" associating example.com with 12.34.56.78
- The port (like 9984) is determined by the ``server.bind`` setting
if Gunicorn is exposed directly to the public Internet.
If a reverse proxy (like Nginx) is exposed directly to the public Internet
instead, then it could expose the HTTP API on whatever port it wants to.
(It should expose the HTTP API on port 9984, but it's not bound to do
- The port (like 9984) is determined by the ``server.bind`` setting
if Gunicorn is exposed directly to the public Internet.
If a reverse proxy (like Nginx) is exposed directly to the public Internet
instead, then it could expose the HTTP API on whatever port it wants to.
(It should expose the HTTP API on port 9984, but it's not bound to do
that by anything other than convention.)

View File

@ -14,4 +14,3 @@ your choice, and then use the HTTP API directly to post transactions.
http-client-server-api
The Python Driver <https://docs.bigchaindb.com/projects/py-driver/en/latest/index.html>
Transaction CLI <https://docs.bigchaindb.com/projects/cli/en/latest/>
example-apps

View File

@ -9,7 +9,7 @@ Note: This section will be broken apart into several pages, e.g. NTP requirement
* BigchainDB Server requires Python 3.4+ and Python 3.4+ [will run on any modern OS](https://docs.python.org/3.4/using/index.html).
* BigchainDB Server uses the Python `multiprocessing` package and [some functionality in the `multiprocessing` package doesn't work on OS X](https://docs.python.org/3.4/library/multiprocessing.html#multiprocessing.Queue.qsize). You can still use Mac OS X if you use Docker or a virtual machine.
The BigchainDB core dev team uses Ubuntu 14.04, Ubuntu 16.04, Fedora 23, and Fedora 24.
The BigchainDB core dev team uses recent LTS versions of Ubuntu and recent versions of Fedora.
We don't test BigchainDB on Windows or Mac OS X, but you can try.

View File

@ -94,21 +94,7 @@ If you're testing or developing BigchainDB on a stand-alone node, then you shoul
## Install BigchainDB Server
BigchainDB Server has some OS-level dependencies that must be installed.
On Ubuntu 14.04, we found that the following was enough:
```text
sudo apt-get update
sudo apt-get install g++ python3-dev libffi-dev
```
On Fedora 23, we found that the following was enough (tested in February 2015):
```text
sudo dnf update
sudo dnf install gcc-c++ redhat-rpm-config python3-devel libffi-devel
```
(If you're using a version of Fedora before version 22, you may have to use `yum` instead of `dnf`.)
First, [install the OS-level dependencies of BigchainDB Server (link)](../appendices/install-os-level-deps.html).
With OS-level dependencies installed, you can install BigchainDB Server with `pip` or from source.
@ -122,7 +108,7 @@ pip -V
If it says that `pip` isn't installed, or it says `pip` is associated with a Python version less than 3.4, then you must install a `pip` version associated with Python 3.4+. In the following instructions, we call it `pip3` but you may be able to use `pip` if that refers to the same thing. See [the `pip` installation instructions](https://pip.pypa.io/en/stable/installing/).
On Ubuntu 14.04, we found that this works:
On Ubuntu 16.04, we found that this works:
```text
sudo apt-get install python3-pip
```

View File

@ -1,35 +1,56 @@
# Quickstart
This page has instructions to set up a single stand-alone BigchainDB node for learning or experimenting. Instructions for other cases are [elsewhere](introduction.html). We will assume you're using Ubuntu 14.04 or similar. If you're not using Linux, then you might try [running BigchainDB with Docker](appendices/run-with-docker.html).
This page has instructions to set up a single stand-alone BigchainDB node for learning or experimenting. Instructions for other cases are [elsewhere](introduction.html). We will assume you're using Ubuntu 16.04 or similar. If you're not using Linux, then you might try [running BigchainDB with Docker](appendices/run-with-docker.html).
A. [Install RethinkDB Server](https://rethinkdb.com/docs/install/ubuntu/)
A. Install the database backend.
B. Open a Terminal and run RethinkDB Server with the command:
[Install RethinkDB Server](https://rethinkdb.com/docs/install/ubuntu/) or
[Install MongoDB Server 3.4+](https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/)
B. Run the database backend. Open a Terminal and run the command:
with RethinkDB
```text
rethinkdb
$ rethinkdb
```
C. Ubuntu 14.04 already has Python 3.4, so you don't need to install it, but you do need to install a couple other things:
with MongoDB __3.4+__
```text
sudo apt-get update
sudo apt-get install g++ python3-dev libffi-dev
$ mongod --replSet=bigchain-rs
```
C. Ubuntu 16.04 already has Python 3.5, so you don't need to install it, but you do need to install some other things:
```text
$ sudo apt-get update
$ sudo apt-get install g++ python3-dev libffi-dev
```
D. Get the latest version of pip and setuptools:
```text
sudo apt-get install python3-pip
sudo pip3 install --upgrade pip setuptools
$ sudo apt-get install python3-pip
$ sudo pip3 install --upgrade pip setuptools
```
E. Install the `bigchaindb` Python package from PyPI:
```text
sudo pip3 install bigchaindb
$ sudo pip3 install bigchaindb
```
F. Configure and run BigchainDB Server:
F. Configure the BigchainDB Server:
with RethinkDB
```text
bigchaindb -y configure
bigchaindb start
$ bigchaindb -y configure rethinkdb
```
with MongoDB
```text
$ bigchaindb -y configure mongodb
```
G. Run the BigchainDB Server:
```text
$ bigchaindb start
```
That's it!
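
To check that the node is up (a sketch assuming the default port 9984), you can ask it to identify itself from Python:

```python
# Smoke test: a running node answers on port 9984 with its software and version.
import requests

print(requests.get('http://localhost:9984').json())
```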

View File

@ -15,18 +15,22 @@ Show the version number. `bigchaindb -v` does the same thing.
## bigchaindb configure
Generate a local config file (which can be used to set some or all [BigchainDB node configuration settings](configuration.html)). It will auto-generate a public-private keypair and then ask you for the values of other configuration settings. If you press Enter for a value, it will use the default value.
Generate a local configuration file (which can be used to set some or all [BigchainDB node configuration settings](configuration.html)). It will auto-generate a public-private keypair and then ask you for the values of other configuration settings. If you press Enter for a value, it will use the default value.
Since BigchainDB supports multiple database backends, you must always specify the
one you want to use. At this point, only two database backends
are supported: `rethinkdb` and `mongodb`.
If you use the `-c` command-line option, it will generate the file at the specified path:
```text
bigchaindb -c path/to/new_config.json configure
bigchaindb -c path/to/new_config.json configure rethinkdb
```
If you don't use the `-c` command-line option, the file will be written to `$HOME/.bigchaindb` (the default location where BigchainDB looks for a config file, if one isn't specified).
If you use the `-y` command-line option, then there won't be any interactive prompts: it will just generate a keypair and use the default values for all the other configuration settings.
```text
bigchaindb -y configure
bigchaindb -y configure rethinkdb
```
@ -83,3 +87,25 @@ Set the number of replicas (of each shard) in the underlying datastore. For exam
```text
$ bigchaindb set-replicas 3
```
## bigchaindb add-replicas
This command is specific to MongoDB so it will only run if BigchainDB is
configured with `mongodb` as the backend.
This command is used to add nodes to a BigchainDB cluster. It accepts a list of
space-separated hosts in the form _hostname:port_:
```text
$ bigchaindb add-replicas server1.com:27017 server2.com:27017 server3.com:27017
```
## bigchaindb remove-replicas
This command is specific to MongoDB so it will only run if BigchainDB is
configured with `mongodb` as the backend.
This command is used to remove nodes from a BigchainDB cluster. It accepts a
list of space-separated hosts in the form _hostname:port_:
```text
$ bigchaindb remove-replicas server1.com:27017 server2.com:27017 server3.com:27017
```

View File

@ -19,9 +19,6 @@ For convenience, here's a list of all the relevant environment variables (docume
`BIGCHAINDB_SERVER_BIND`<br>
`BIGCHAINDB_SERVER_WORKERS`<br>
`BIGCHAINDB_SERVER_THREADS`<br>
`BIGCHAINDB_STATSD_HOST`<br>
`BIGCHAINDB_STATSD_PORT`<br>
`BIGCHAINDB_STATSD_RATE`<br>
`BIGCHAINDB_CONFIG_PATH`<br>
`BIGCHAINDB_BACKLOG_REASSIGN_DELAY`<br>
@ -151,23 +148,6 @@ export BIGCHAINDB_SERVER_THREADS=5
}
```
## statsd.host, statsd.port & statsd.rate
These settings are used to configure where, and how often, [StatsD](https://github.com/etsy/statsd) should send data for [cluster monitoring](../clusters-feds/monitoring.html) purposes. `statsd.host` is the hostname of the monitoring server, where StatsD should send its data. `statsd.port` is the port. `statsd.rate` is the fraction of transaction operations that should be sampled. It's a float between 0.0 and 1.0.
**Example using environment variables**
```text
export BIGCHAINDB_STATSD_HOST="http://monitor.monitors-r-us.io"
export BIGCHAINDB_STATSD_PORT=8125
export BIGCHAINDB_STATSD_RATE=0.01
```
**Example config file snippet: the default**
```js
"statsd": {"host": "localhost", "port": 8125, "rate": 0.01}
```
## backlog_reassign_delay
Specifies how long, in seconds, transactions can remain in the backlog before being reassigned. Long-waiting transactions must be reassigned because the assigned node may no longer be responsive. The default duration is 120 seconds.

View File

@ -0,0 +1,15 @@
---
# This playbook ensures Python 2 is installed on the managed node.
# This is inspired by https://gist.github.com/gwillem/4ba393dceb55e5ae276a87300f6b8e6f
- hosts: all
gather_facts: false
remote_user: ubuntu
pre_tasks:
- name: Install Python 2
raw: test -e /usr/bin/python || (apt -y update && apt install -y python-minimal)
become: true
# action: setup will gather facts after python2 has been installed
- action: setup

View File

@ -10,22 +10,24 @@
apt: name={{item}} state=latest update_cache=yes
become: true
with_items:
- make
- git
- g++
- python3-dev
- libffi-dev
- python3-setuptools # mainly for easy_install3, which is used to get latest pip3
# This should make both pip and pip3 be pip version >=8.1.2 (python 3.4).
# See the comments about this below.
- name: Ensure the latest pip/pip3 is installed, using easy_install3
easy_install: executable=easy_install3 name=pip state=latest
become: true
- python3-dev
- python3-pip
- python3-setuptools
- name: Ensure the latest setuptools (Python package) is installed
pip: executable=pip3 name=setuptools state=latest
become: true
# This should make both pip and pip3 be pip version >=8.1.2 (python 3.4).
# See the comments about this below.
#- name: Ensure the latest pip/pip3 is installed, using easy_install3
# easy_install: executable=easy_install3 name=pip state=latest
# become: true
- name: Install BigchainDB from PyPI using sudo pip3 install bigchaindb
pip: executable=pip3 name=bigchaindb state=latest
become: true

View File

@ -12,12 +12,14 @@
# To better understand the /etc/fstab fields/columns, see:
# http://man7.org/linux/man-pages/man5/fstab.5.html
# https://tinyurl.com/jmmsyon = the source code of the mount module
# Note: It seems the "nobootwait" option is gone in Ubuntu 16.04. See
# https://askubuntu.com/questions/786928/ubuntu-16-04-fstab-fails-with-nobootwait
- name: Ensure /data dir exists and is mounted + update /etc/fstab
mount:
name=/data
src=/dev/xvdp
fstype=ext4
opts="defaults,nofail,nobootwait"
opts="defaults,nofail"
dump=0
passno=2
state=mounted

View File

@ -2,11 +2,12 @@
# ansible/roles/rethinkdb/tasks/main.yml
# Note: the .list extension will be added to the rethinkdb filename automatically
# Note: xenial is the $DISTRIB_CODENAME for Ubuntu 16.04
- name: >
Ensure RethinkDB's APT repository for Ubuntu trusty is present
Ensure RethinkDB's APT repository for Ubuntu xenial is present
in /etc/apt/sources.list.d/rethinkdb.list
apt_repository:
repo='deb http://download.rethinkdb.com/apt trusty main'
repo='deb http://download.rethinkdb.com/apt xenial main'
filename=rethinkdb
state=present
become: true
@ -15,8 +16,8 @@
apt_key: url=http://download.rethinkdb.com/apt/pubkey.gpg state=present
become: true
- name: Ensure the Ubuntu package rethinkdb 2.3.4~0trusty is installed
apt: name=rethinkdb=2.3.4~0trusty state=present update_cache=yes
- name: Ensure the latest rethinkdb package is installed
apt: name=rethinkdb state=latest update_cache=yes
become: true
- name: Ensure the /data directory's owner and group are both 'rethinkdb'

View File

@ -2,19 +2,20 @@
# even though the contents are the same.
# This file has the mapping from region --> AMI name.
#
# These are all Ubuntu 14.04 LTS AMIs
# These are all Ubuntu 16.04 LTS AMIs
# with Arch = amd64, Instance Type = hvm:ebs-ssd
# from https://cloud-images.ubuntu.com/locator/ec2/
# as of Jan. 31, 2017
variable "amis" {
type = "map"
default = {
eu-west-1 = "ami-55452e26"
eu-central-1 = "ami-b1cf39de"
us-east-1 = "ami-8e0b9499"
us-west-2 = "ami-547b3834"
ap-northeast-1 = "ami-49d31328"
ap-southeast-1 = "ami-5e429c3d"
ap-southeast-2 = "ami-25f3c746"
sa-east-1 = "ami-97980efb"
eu-west-1 = "ami-d8f4deab"
eu-central-1 = "ami-5aee2235"
us-east-1 = "ami-6edd3078"
us-west-2 = "ami-7c803d1c"
ap-northeast-1 = "ami-eb49358c"
ap-southeast-1 = "ami-b1943fd2"
ap-southeast-2 = "ami-fe71759d"
sa-east-1 = "ami-7379e31f"
}
}

View File

@ -62,14 +62,6 @@ resource "aws_security_group" "node_sg1" {
cidr_blocks = ["0.0.0.0/0"]
}
# StatsD
ingress {
from_port = 8125
to_port = 8125
protocol = "udp"
cidr_blocks = ["0.0.0.0/0"]
}
# Future: Don't allow port 8080 for the RethinkDB web interface.
# Use a SOCKS proxy or reverse proxy instead.

View File

@ -45,6 +45,7 @@ tests_require = [
'coverage',
'pep8',
'flake8',
'flake8-quotes==0.8.1',
'pylint',
'pytest>=3.0.0',
'pytest-catchlog>=1.2.2',
@ -64,7 +65,6 @@ install_requires = [
'pymongo~=3.4',
'pysha3==1.0.0',
'cryptoconditions>=0.5.0',
'statsd>=3.2.1',
'python-rapidjson>=0.0.8',
'logstats>=0.2.1',
'flask>=0.10.1',

View File

@ -90,95 +90,6 @@ def test_asset_id_mismatch(b, user_pk):
Transaction.get_asset_id([tx1, tx2])
@pytest.mark.bdb
@pytest.mark.usefixtures('inputs')
def test_get_transactions_by_asset_id(b, user_pk, user_sk):
from bigchaindb.models import Transaction
tx_create = b.get_owned_ids(user_pk).pop()
tx_create = b.get_transaction(tx_create.txid)
asset_id = tx_create.id
txs = b.get_transactions_by_asset_id(asset_id)
assert len(txs) == 1
assert txs[0].id == tx_create.id
assert txs[0].id == asset_id
# create a transfer transaction
tx_transfer = Transaction.transfer(tx_create.to_inputs(), [([user_pk], 1)],
tx_create.id)
tx_transfer_signed = tx_transfer.sign([user_sk])
# create the block
block = b.create_block([tx_transfer_signed])
b.write_block(block)
# vote the block valid
vote = b.vote(block.id, b.get_last_voted_block().id, True)
b.write_vote(vote)
txs = b.get_transactions_by_asset_id(asset_id)
assert len(txs) == 2
assert {tx_create.id, tx_transfer.id} == set(tx.id for tx in txs)
assert asset_id == Transaction.get_asset_id(txs)
@pytest.mark.bdb
@pytest.mark.usefixtures('inputs')
def test_get_transactions_by_asset_id_with_invalid_block(b, user_pk, user_sk):
from bigchaindb.models import Transaction
tx_create = b.get_owned_ids(user_pk).pop()
tx_create = b.get_transaction(tx_create.txid)
asset_id = tx_create.id
txs = b.get_transactions_by_asset_id(asset_id)
assert len(txs) == 1
assert txs[0].id == tx_create.id
assert txs[0].id == asset_id
# create a transfer transaction
tx_transfer = Transaction.transfer(tx_create.to_inputs(), [([user_pk], 1)],
tx_create.id)
tx_transfer_signed = tx_transfer.sign([user_sk])
# create the block
block = b.create_block([tx_transfer_signed])
b.write_block(block)
# vote the block invalid
vote = b.vote(block.id, b.get_last_voted_block().id, False)
b.write_vote(vote)
txs = b.get_transactions_by_asset_id(asset_id)
assert len(txs) == 1
@pytest.mark.bdb
@pytest.mark.usefixtures('inputs')
def test_get_asset_by_id(b, user_pk, user_sk):
from bigchaindb.models import Transaction
tx_create = b.get_owned_ids(user_pk).pop()
tx_create = b.get_transaction(tx_create.txid)
# create a transfer transaction
tx_transfer = Transaction.transfer(tx_create.to_inputs(), [([user_pk], 1)],
tx_create.id)
tx_transfer_signed = tx_transfer.sign([user_sk])
# create the block
block = b.create_block([tx_transfer_signed])
b.write_block(block)
# vote the block valid
vote = b.vote(block.id, b.get_last_voted_block().id, True)
b.write_vote(vote)
asset_id = Transaction.get_asset_id([tx_create, tx_transfer])
txs = b.get_transactions_by_asset_id(asset_id)
assert len(txs) == 2
asset = b.get_asset_by_id(asset_id)
assert asset == tx_create.asset
def test_create_invalid_divisible_asset(b, user_pk, user_sk):
from bigchaindb.models import Transaction
from bigchaindb.common.exceptions import AmountError

View File

@ -0,0 +1,108 @@
"""Tests for the :mod:`bigchaindb.backend.mongodb.admin` module."""
import copy
from unittest import mock
import pytest
from pymongo.database import Database
from pymongo.errors import OperationFailure
@pytest.fixture
def mock_replicaset_config():
return {
'config': {
'_id': 'bigchain-rs',
'members': [
{
'_id': 0,
'arbiterOnly': False,
'buildIndexes': True,
'hidden': False,
'host': 'localhost:27017',
'priority': 1.0,
'slaveDelay': 0,
'tags': {},
'votes': 1
}
],
'version': 1
}
}
@pytest.fixture
def connection():
from bigchaindb.backend import connect
connection = connect()
# connection is a lazy object. It only actually creates a connection to
# the database when it's first used.
# During the setup of a MongoDBConnection some `Database.command` calls are
# executed to make sure that the replica set is correctly initialized.
# Here we force the connection setup so that all required
# `Database.command` calls are executed before we mock them in the tests.
connection._connect()
return connection
def test_add_replicas(mock_replicaset_config, connection):
from bigchaindb.backend.admin import add_replicas
expected_config = copy.deepcopy(mock_replicaset_config)
expected_config['config']['members'] += [
{'_id': 1, 'host': 'localhost:27018'},
{'_id': 2, 'host': 'localhost:27019'}
]
expected_config['config']['version'] += 1
with mock.patch.object(Database, 'command') as mock_command:
mock_command.return_value = mock_replicaset_config
add_replicas(connection, ['localhost:27018', 'localhost:27019'])
mock_command.assert_called_with('replSetReconfig',
expected_config['config'])
def test_add_replicas_raises(mock_replicaset_config, connection):
from bigchaindb.backend.admin import add_replicas
from bigchaindb.backend.exceptions import DatabaseOpFailedError
with mock.patch.object(Database, 'command') as mock_command:
mock_command.side_effect = [
mock_replicaset_config,
OperationFailure(error=1, details={'errmsg': ''})
]
with pytest.raises(DatabaseOpFailedError):
add_replicas(connection, ['localhost:27018'])
def test_remove_replicas(mock_replicaset_config, connection):
from bigchaindb.backend.admin import remove_replicas
expected_config = copy.deepcopy(mock_replicaset_config)
expected_config['config']['version'] += 1
# add some hosts to the configuration to remove
mock_replicaset_config['config']['members'] += [
{'_id': 1, 'host': 'localhost:27018'},
{'_id': 2, 'host': 'localhost:27019'}
]
with mock.patch.object(Database, 'command') as mock_command:
mock_command.return_value = mock_replicaset_config
remove_replicas(connection, ['localhost:27018', 'localhost:27019'])
mock_command.assert_called_with('replSetReconfig',
expected_config['config'])
def test_remove_replicas_raises(mock_replicaset_config, connection):
from bigchaindb.backend.admin import remove_replicas
from bigchaindb.backend.exceptions import DatabaseOpFailedError
with mock.patch.object(Database, 'command') as mock_command:
mock_command.side_effect = [
mock_replicaset_config,
OperationFailure(error=1, details={'errmsg': ''})
]
with pytest.raises(DatabaseOpFailedError):
remove_replicas(connection, ['localhost:27018'])
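The two happy-path tests above pin down the replica set reconfiguration flow: read the current config, edit the members list, bump the version, and issue replSetReconfig. A minimal sketch of that flow, assuming a pymongo Database handle; the function name and body here are illustrative, not the shipped bigchaindb.backend.admin code:

def add_replicas_sketch(db, replicas):
    # Fetch the current replica set configuration document.
    config = db.command('replSetGetConfig')['config']
    # New members need ids that don't collide with existing ones.
    next_id = max(member['_id'] for member in config['members']) + 1
    for offset, host in enumerate(replicas):
        config['members'].append({'_id': next_id + offset, 'host': host})
    # MongoDB only accepts a reconfig whose version number increased.
    config['version'] += 1
    db.command('replSetReconfig', config)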

View File

@ -0,0 +1,23 @@
import pytest
from unittest.mock import MagicMock
pytestmark = pytest.mark.bdb
def test_asset_id_index():
from bigchaindb.backend.mongodb.query import get_txids_filtered
from bigchaindb.backend import connect
# Passes a mock in place of a connection to get the query params from the
# query function, then gets the explain plan from MongoDB to test that
# it's using certain indexes.
m = MagicMock()
get_txids_filtered(m, '')
pipeline = m.db['bigchain'].aggregate.call_args[0][0]
run = connect().db.command
res = run('aggregate', 'bigchain', pipeline=pipeline, explain=True)
stages = (res['stages'][0]['$cursor']['queryPlanner']['winningPlan']
['inputStage']['inputStages'])
indexes = [s['inputStage']['indexName'] for s in stages]
assert set(indexes) == {'asset_id', 'transaction_id'}

View File

@ -125,24 +125,6 @@ def test_get_block_status_from_transaction(create_tx):
assert block_db['block']['voters'] == block.voters
def test_get_txids_by_asset_id(signed_create_tx, signed_transfer_tx):
from bigchaindb.backend import connect, query
from bigchaindb.models import Block
conn = connect()
# create and insert two blocks, one for the create and one for the
# transfer transaction
block = Block(transactions=[signed_create_tx])
conn.db.bigchain.insert_one(block.to_dict())
block = Block(transactions=[signed_transfer_tx])
conn.db.bigchain.insert_one(block.to_dict())
txids = list(query.get_txids_by_asset_id(conn, signed_create_tx.id))
assert len(txids) == 2
assert txids == [signed_create_tx.id, signed_transfer_tx.id]
def test_get_asset_by_id(create_tx):
from bigchaindb.backend import connect, query
from bigchaindb.models import Block
@ -366,3 +348,30 @@ def test_get_unvoted_blocks(signed_create_tx):
assert len(unvoted_blocks) == 1
assert unvoted_blocks[0] == block.to_dict()
def test_get_txids_filtered(signed_create_tx, signed_transfer_tx):
from bigchaindb.backend import connect, query
from bigchaindb.models import Block, Transaction
conn = connect()
# create and insert two blocks, one for the create and one for the
# transfer transaction
block = Block(transactions=[signed_create_tx])
conn.db.bigchain.insert_one(block.to_dict())
block = Block(transactions=[signed_transfer_tx])
conn.db.bigchain.insert_one(block.to_dict())
asset_id = Transaction.get_asset_id([signed_create_tx, signed_transfer_tx])
# Test get by just asset id
txids = set(query.get_txids_filtered(conn, asset_id))
assert txids == {signed_create_tx.id, signed_transfer_tx.id}
# Test get by asset and CREATE
txids = set(query.get_txids_filtered(conn, asset_id, Transaction.CREATE))
assert txids == {signed_create_tx.id}
# Test get by asset and TRANSFER
txids = set(query.get_txids_filtered(conn, asset_id, Transaction.TRANSFER))
assert txids == {signed_transfer_tx.id}
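A rough sketch of the aggregation these assertions imply; the pipeline below is hypothetical (the shipped bigchaindb.backend.mongodb.query.get_txids_filtered may differ), but it shows the shape: match blocks touching the asset, unwind their transactions, keep the ones whose id (CREATE) or asset.id (TRANSFER) equals the asset id, optionally filter by operation, and project the transaction ids.

def get_txids_filtered_sketch(conn, asset_id, operation=None):
    # A CREATE transaction's own id doubles as the asset id, so match both.
    match = {'$or': [{'block.transactions.id': asset_id},
                     {'block.transactions.asset.id': asset_id}]}
    pipeline = [{'$match': match},
                {'$unwind': '$block.transactions'},
                {'$match': match}]
    if operation:
        pipeline.append({'$match': {'block.transactions.operation': operation}})
    pipeline.append({'$project': {'block.transactions.id': True}})
    for doc in conn.db['bigchain'].aggregate(pipeline):
        yield doc['block']['transactions']['id']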

View File

@ -21,8 +21,8 @@ def test_init_creates_db_tables_and_indexes():
assert sorted(collection_names) == ['backlog', 'bigchain', 'votes']
indexes = conn.conn[dbname]['bigchain'].index_information().keys()
assert sorted(indexes) == ['_id_', 'asset_id', 'block_timestamp',
'transaction_id']
assert sorted(indexes) == ['_id_', 'asset_id', 'block_timestamp', 'inputs',
'outputs', 'transaction_id']
indexes = conn.conn[dbname]['backlog'].index_information().keys()
assert sorted(indexes) == ['_id_', 'assignee__transaction_timestamp',
@ -81,8 +81,8 @@ def test_create_secondary_indexes():
# Bigchain table
indexes = conn.conn[dbname]['bigchain'].index_information().keys()
assert sorted(indexes) == ['_id_', 'asset_id', 'block_timestamp',
'transaction_id']
assert sorted(indexes) == ['_id_', 'asset_id', 'block_timestamp', 'inputs',
'outputs', 'transaction_id']
# Backlog table
indexes = conn.conn[dbname]['backlog'].index_information().keys()

View File

@ -57,8 +57,8 @@ def test_set_shards_dry_run(rdb_conn, db_name, db_conn):
@pytest.mark.bdb
@pytest.mark.skipif(
_count_rethinkdb_servers() < 2,
reason=("Requires at least two servers. It's impossible to have"
"more replicas of the data than there are servers.")
reason=('Requires at least two servers. It\'s impossible to have '
'more replicas of the data than there are servers.')
)
def test_set_replicas(rdb_conn, db_name, db_conn):
from bigchaindb.backend.schema import TABLES
@ -85,8 +85,8 @@ def test_set_replicas(rdb_conn, db_name, db_conn):
@pytest.mark.bdb
@pytest.mark.skipif(
_count_rethinkdb_servers() < 2,
reason=("Requires at least two servers. It's impossible to have"
"more replicas of the data than there are servers.")
reason=('Requires at least two servers. It\'s impossible to have '
'more replicas of the data than there are servers.')
)
def test_set_replicas_dry_run(rdb_conn, db_name, db_conn):
from bigchaindb.backend.schema import TABLES
@ -109,8 +109,8 @@ def test_set_replicas_dry_run(rdb_conn, db_name, db_conn):
@pytest.mark.bdb
@pytest.mark.skipif(
_count_rethinkdb_servers() < 2,
reason=("Requires at least two servers. It's impossible to have"
"more replicas of the data than there are servers.")
reason=('Requires at least two servers. It\'s impossible to have '
'more replicas of the data than there are servers.')
)
def test_reconfigure(rdb_conn, db_name, db_conn):
from bigchaindb.backend.rethinkdb.admin import reconfigure

View File

@ -1,6 +1,7 @@
import time
import multiprocessing as mp
from threading import Thread
from unittest.mock import patch
import pytest
import rethinkdb as r
@ -118,3 +119,15 @@ def test_changefeed_reconnects_when_connection_lost(monkeypatch):
fact = changefeed.outqueue.get()['fact']
assert fact == 'Cats sleep 70% of their lives.'
@patch('rethinkdb.connect')
def test_connection_happens_one_time_if_successful(mock_connect):
from bigchaindb.backend import connect
query = r.expr('1')
conn = connect('rethinkdb', 'localhost', 1337, 'whatev')
conn.run(query)
mock_connect.assert_called_once_with(host='localhost',
port=1337,
db='whatev')
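The test encodes the contract that a connection object is lazy: the driver-level rethinkdb.connect runs once, on the first run(), and the handle is reused afterwards. A stripped-down sketch of that behaviour (an illustrative class, not the actual RethinkDBConnection):

import rethinkdb as r

class LazyConnectionSketch:
    def __init__(self, host, port, dbname):
        self.host, self.port, self.dbname = host, port, dbname
        self._conn = None

    def run(self, query):
        if self._conn is None:
            # Connect on first use; later run() calls reuse the handle.
            self._conn = r.connect(host=self.host, port=self.port,
                                   db=self.dbname)
        return query.run(self._conn)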

View File

@ -85,6 +85,10 @@ def test_create_secondary_indexes():
'transaction_id')) is True
assert conn.run(r.db(dbname).table('bigchain').index_list().contains(
'asset_id')) is True
assert conn.run(r.db(dbname).table('bigchain').index_list().contains(
'inputs')) is True
assert conn.run(r.db(dbname).table('bigchain').index_list().contains(
'outputs')) is True
# Backlog table
assert conn.run(r.db(dbname).table('backlog').index_list().contains(

View File

@ -1,6 +1,3 @@
from importlib import import_module
from unittest.mock import patch
from pytest import mark, raises
@ -26,7 +23,7 @@ def test_schema(schema_func_name, args_qty):
('get_stale_transactions', 1),
('get_blocks_status_from_transaction', 1),
('get_transaction_from_backlog', 1),
('get_txids_by_asset_id', 1),
('get_txids_filtered', 1),
('get_asset_by_id', 1),
('get_owned_ids', 1),
('get_votes_by_block_id', 1),
@ -69,34 +66,13 @@ def test_changefeed_class(changefeed_class_func_name, args_qty):
changefeed_class_func(None, *range(args_qty))
@mark.parametrize('db,conn_cls', (
('mongodb', 'MongoDBConnection'),
('rethinkdb', 'RethinkDBConnection'),
))
@patch('bigchaindb.backend.schema.create_indexes',
autospec=True, return_value=None)
@patch('bigchaindb.backend.schema.create_tables',
autospec=True, return_value=None)
@patch('bigchaindb.backend.schema.create_database',
autospec=True, return_value=None)
def test_init_database(mock_create_database, mock_create_tables,
mock_create_indexes, db, conn_cls):
from bigchaindb.backend.schema import init_database
conn = getattr(
import_module('bigchaindb.backend.{}.connection'.format(db)),
conn_cls,
)('host', 'port', 'dbname')
init_database(connection=conn, dbname='mickeymouse')
mock_create_database.assert_called_once_with(conn, 'mickeymouse')
mock_create_tables.assert_called_once_with(conn, 'mickeymouse')
mock_create_indexes.assert_called_once_with(conn, 'mickeymouse')
@mark.parametrize('admin_func_name,kwargs', (
('get_config', {'table': None}),
('reconfigure', {'table': None, 'shards': None, 'replicas': None}),
('set_shards', {'shards': None}),
('set_replicas', {'replicas': None}),
('add_replicas', {'replicas': None}),
('remove_replicas', {'replicas': None}),
))
def test_admin(admin_func_name, kwargs):
from bigchaindb.backend import admin

View File

@ -35,7 +35,6 @@ def mock_bigchaindb_backup_config(monkeypatch):
config = {
'keypair': {},
'database': {'host': 'host', 'port': 12345, 'name': 'adbname'},
'statsd': {'host': 'host', 'port': 12345, 'rate': 0.1},
'backlog_reassign_delay': 5
}
monkeypatch.setattr('bigchaindb._config', config)

View File

@ -1,6 +1,6 @@
import json
from unittest.mock import Mock, patch
from argparse import Namespace
from argparse import Namespace, ArgumentTypeError
import copy
import pytest
@ -12,7 +12,8 @@ def test_make_sure_we_dont_remove_any_command():
parser = create_parser()
assert parser.parse_args(['configure']).command
assert parser.parse_args(['configure', 'rethinkdb']).command
assert parser.parse_args(['configure', 'mongodb']).command
assert parser.parse_args(['show-config']).command
assert parser.parse_args(['export-my-pubkey']).command
assert parser.parse_args(['init']).command
@ -21,6 +22,8 @@ def test_make_sure_we_dont_remove_any_command():
assert parser.parse_args(['set-shards', '1']).command
assert parser.parse_args(['set-replicas', '1']).command
assert parser.parse_args(['load']).command
assert parser.parse_args(['add-replicas', 'localhost:27017']).command
assert parser.parse_args(['remove-replicas', 'localhost:27017']).command
def test_start_raises_if_command_not_implemented():
@ -31,8 +34,8 @@ def test_start_raises_if_command_not_implemented():
with pytest.raises(NotImplementedError):
# Will raise because `scope`, the third parameter,
# doesn't contain the function `run_configure`
utils.start(parser, ['configure'], {})
# doesn't contain the function `run_start`
utils.start(parser, ['start'], {})
def test_start_raises_if_no_arguments_given():
@ -204,7 +207,7 @@ def test_run_configure_when_config_does_not_exist(monkeypatch,
from bigchaindb.commands.bigchain import run_configure
monkeypatch.setattr('os.path.exists', lambda path: False)
monkeypatch.setattr('builtins.input', lambda: '\n')
args = Namespace(config='foo', yes=True)
args = Namespace(config='foo', backend='rethinkdb', yes=True)
return_value = run_configure(args)
assert return_value is None
@ -228,6 +231,36 @@ def test_run_configure_when_config_does_exist(monkeypatch,
assert value == {}
@pytest.mark.parametrize('backend', (
'rethinkdb',
'mongodb',
))
def test_run_configure_with_backend(backend, monkeypatch, mock_write_config):
import bigchaindb
from bigchaindb.commands.bigchain import run_configure
value = {}
def mock_write_config(new_config, filename=None):
value['return'] = new_config
monkeypatch.setattr('os.path.exists', lambda path: False)
monkeypatch.setattr('builtins.input', lambda: '\n')
monkeypatch.setattr('bigchaindb.config_utils.write_config',
mock_write_config)
args = Namespace(config='foo', backend=backend, yes=True)
expected_config = bigchaindb.config
run_configure(args)
# update the expected config with the correct backend and keypair
backend_conf = getattr(bigchaindb, '_database_' + backend)
expected_config.update({'database': backend_conf,
'keypair': value['return']['keypair']})
assert value['return'] == expected_config
@patch('bigchaindb.common.crypto.generate_key_pair',
return_value=('private_key', 'public_key'))
@pytest.mark.usefixtures('ignore_local_config_file')
@ -345,3 +378,73 @@ def test_calling_main(start_mock, base_parser_mock, parse_args_mock,
'distributed equally to all '
'the processes')
assert start_mock.called is True
@pytest.mark.usefixtures('ignore_local_config_file')
@patch('bigchaindb.commands.bigchain.add_replicas')
def test_run_add_replicas(mock_add_replicas):
from bigchaindb.commands.bigchain import run_add_replicas
from bigchaindb.backend.exceptions import DatabaseOpFailedError
args = Namespace(config=None, replicas=['localhost:27017'])
# test add_replicas when it does not raise
mock_add_replicas.return_value = None
assert run_add_replicas(args) is None
assert mock_add_replicas.call_count == 1
mock_add_replicas.reset_mock()
# test add_replicas with `DatabaseOpFailedError`
mock_add_replicas.side_effect = DatabaseOpFailedError()
assert run_add_replicas(args) is None
assert mock_add_replicas.call_count == 1
mock_add_replicas.reset_mock()
# test add_replicas with `NotImplementedError`
mock_add_replicas.side_effect = NotImplementedError()
assert run_add_replicas(args) is None
assert mock_add_replicas.call_count == 1
mock_add_replicas.reset_mock()
@pytest.mark.usefixtures('ignore_local_config_file')
@patch('bigchaindb.commands.bigchain.remove_replicas')
def test_run_remove_replicas(mock_remove_replicas):
from bigchaindb.commands.bigchain import run_remove_replicas
from bigchaindb.backend.exceptions import DatabaseOpFailedError
args = Namespace(config=None, replicas=['localhost:27017'])
# test remove_replicas when it does not raise
mock_remove_replicas.return_value = None
assert run_remove_replicas(args) is None
assert mock_remove_replicas.call_count == 1
mock_remove_replicas.reset_mock()
# test remove_replicas with `DatabaseOpFailedError`
mock_remove_replicas.side_effect = DatabaseOpFailedError()
assert run_remove_replicas(args) is None
assert mock_remove_replicas.call_count == 1
mock_remove_replicas.reset_mock()
# test remove_replicas with `NotImplementedError`
mock_remove_replicas.side_effect = NotImplementedError()
assert run_remove_replicas(args) is None
assert mock_remove_replicas.call_count == 1
mock_remove_replicas.reset_mock()
def test_mongodb_host_type():
from bigchaindb.commands.utils import mongodb_host
# bad port provided
with pytest.raises(ArgumentTypeError):
mongodb_host('localhost:11111111111')
# no port information provided
with pytest.raises(ArgumentTypeError):
mongodb_host('localhost')
# bad host provided
with pytest.raises(ArgumentTypeError):
mongodb_host(':27017')
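The expected behaviour is a strict host:port check that reports problems as argparse errors. A hedged sketch (the real bigchaindb.commands.utils.mongodb_host may be implemented differently):

from argparse import ArgumentTypeError

def mongodb_host_sketch(value):
    host, _, port = value.rpartition(':')
    if not host or not port:
        raise ArgumentTypeError('expected a host:port pair, got %r' % value)
    try:
        port_number = int(port)
    except ValueError:
        raise ArgumentTypeError('port must be an integer, got %r' % port)
    if not 0 < port_number < 65536:
        # Rejects e.g. 'localhost:11111111111' from the test above.
        raise ArgumentTypeError('port %r is out of range' % port)
    return value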

View File

@ -13,7 +13,7 @@ def _test_additionalproperties(node, path=''):
if isinstance(node, dict):
if node.get('type') == 'object':
assert 'additionalProperties' in node, \
("additionalProperties not set at path:" + path)
('additionalProperties not set at path:' + path)
for name, val in node.items():
_test_additionalproperties(val, path + name + '.')
@ -47,7 +47,7 @@ def test_drop_descriptions():
},
'definitions': {
'wat': {
'description': "go"
'description': 'go'
}
}
}

View File

@ -300,7 +300,6 @@ def test_transaction_serialization(user_input, user_output, data):
'operation': Transaction.CREATE,
'metadata': None,
'asset': {
'id': tx_id,
'data': data,
}
}
@ -308,7 +307,7 @@ def test_transaction_serialization(user_input, user_output, data):
tx = Transaction(Transaction.CREATE, {'data': data}, [user_input],
[user_output])
tx_dict = tx.to_dict()
tx_dict['id'] = tx_dict['asset']['id'] = tx_id
tx_dict['id'] = tx_id
assert tx_dict == expected
@ -335,7 +334,6 @@ def test_transaction_deserialization(user_input, user_output, data):
}
tx_no_signatures = Transaction._remove_signatures(tx)
tx['id'] = Transaction._to_hash(Transaction._to_str(tx_no_signatures))
tx['asset']['id'] = tx['id']
tx = Transaction.from_dict(tx)
assert tx == expected
@ -436,6 +434,15 @@ def test_cast_transaction_link_to_boolean():
assert bool(TransactionLink(False, False)) is True
def test_transaction_link_eq():
from bigchaindb.common.transaction import TransactionLink
assert TransactionLink(1, 2) == TransactionLink(1, 2)
assert TransactionLink(2, 2) != TransactionLink(1, 2)
assert TransactionLink(1, 1) != TransactionLink(1, 2)
assert TransactionLink(2, 1) != TransactionLink(1, 2)
def test_add_input_to_tx(user_input, asset_definition):
from bigchaindb.common.transaction import Transaction
@ -682,7 +689,6 @@ def test_create_create_transaction_single_io(user_output, user_pub, data):
tx_dict = tx.to_dict()
tx_dict['inputs'][0]['fulfillment'] = None
tx_dict.pop('id')
tx_dict['asset'].pop('id')
assert tx_dict == expected
@ -766,7 +772,6 @@ def test_create_create_transaction_threshold(user_pub, user2_pub, user3_pub,
metadata=data, asset=data)
tx_dict = tx.to_dict()
tx_dict.pop('id')
tx_dict['asset'].pop('id')
tx_dict['inputs'][0]['fulfillment'] = None
assert tx_dict == expected
@ -966,11 +971,13 @@ def test_cant_add_empty_input():
def test_validate_version(utx):
import re
import bigchaindb.version
from .utils import validate_transaction_model
from bigchaindb.common.exceptions import SchemaValidationError
assert utx.version == bigchaindb.version.__version__
short_ver = bigchaindb.version.__short_version__
assert utx.version == re.match(r'^(.*\d)', short_ver).group(1)
validate_transaction_model(utx)
@ -978,25 +985,3 @@ def test_validate_version(utx):
utx.version = '1.0.0'
with raises(SchemaValidationError):
validate_transaction_model(utx)
def test_create_tx_has_asset_id(tx):
tx = tx.to_dict()
assert tx['id'] == tx['asset']['id']
def test_create_tx_validates_asset_id(tx):
from bigchaindb.common.transaction import Transaction
from bigchaindb.common.exceptions import InvalidHash
tx = tx.to_dict()
# Test fails with wrong asset_id
tx['asset']['id'] = tx['asset']['id'][::-1]
with raises(InvalidHash):
Transaction.from_dict(tx)
# Test fails with no asset_id
tx['asset'].pop('id')
with raises(InvalidHash):
Transaction.from_dict(tx)

View File

@ -109,26 +109,23 @@ def _restore_dbs(request):
@pytest.fixture(scope='session')
def _configure_bigchaindb(request):
import bigchaindb
from bigchaindb import config_utils
test_db_name = TEST_DB_NAME
# Put a suffix like _gw0, _gw1 etc on xdist processes
xdist_suffix = getattr(request.config, 'slaveinput', {}).get('slaveid')
if xdist_suffix:
test_db_name = '{}_{}'.format(TEST_DB_NAME, xdist_suffix)
backend = request.config.getoption('--database-backend')
config = {
'database': {
'name': test_db_name,
'backend': request.config.getoption('--database-backend'),
},
'database': bigchaindb._database_map[backend],
'keypair': {
'private': '31Lb1ZGKTyHnmVK3LUMrAUrPNfd4sE2YyBt3UA4A25aA',
'public': '4XYfCbabAWVUCbjTmRTFEu2sc3dFEdkse4r6X498B1s8',
}
}
# FIXME
if config['database']['backend'] == 'mongodb':
# not a great way to do this
config['database']['port'] = 27017
config['database']['name'] = test_db_name
config_utils.set_config(config)

View File

@ -1,6 +1,7 @@
from time import sleep
import pytest
from unittest.mock import patch
pytestmark = pytest.mark.bdb
@ -1156,3 +1157,86 @@ class TestMultipleInputs(object):
# check that the other remain marked as unspent
for unspent in transactions[1:]:
assert b.get_spent(unspent.id, 0) is None
def test_get_owned_ids_calls_get_outputs_filtered():
from bigchaindb.core import Bigchain
with patch('bigchaindb.core.Bigchain.get_outputs_filtered') as gof:
b = Bigchain()
res = b.get_owned_ids('abc')
gof.assert_called_once_with('abc', include_spent=False)
assert res == gof()
def test_get_outputs_filtered_only_unspent():
from bigchaindb.common.transaction import TransactionLink
from bigchaindb.core import Bigchain
with patch('bigchaindb.core.Bigchain.get_outputs') as get_outputs:
get_outputs.return_value = [TransactionLink('a', 1),
TransactionLink('b', 2)]
with patch('bigchaindb.core.Bigchain.get_spent') as get_spent:
get_spent.side_effect = [True, False]
out = Bigchain().get_outputs_filtered('abc', include_spent=False)
get_outputs.assert_called_once_with('abc')
assert out == [TransactionLink('b', 2)]
def test_get_outputs_filtered():
from bigchaindb.common.transaction import TransactionLink
from bigchaindb.core import Bigchain
with patch('bigchaindb.core.Bigchain.get_outputs') as get_outputs:
get_outputs.return_value = [TransactionLink('a', 1),
TransactionLink('b', 2)]
with patch('bigchaindb.core.Bigchain.get_spent') as get_spent:
out = Bigchain().get_outputs_filtered('abc')
get_outputs.assert_called_once_with('abc')
get_spent.assert_not_called()
assert out == get_outputs.return_value
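Taken together, the tests above specify get_outputs_filtered: fetch all outputs for a public key and, only when include_spent is False, drop those that get_spent reports as spent. A sketch consistent with the mocks (not the shipped method body):

def get_outputs_filtered_sketch(bigchain, owner, include_spent=True):
    outputs = bigchain.get_outputs(owner)
    if include_spent:
        # Matches get_spent.assert_not_called() in the test above.
        return outputs
    # Keep only the outputs that have not been spent yet.
    return [o for o in outputs
            if not bigchain.get_spent(o.txid, o.output)]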
@pytest.mark.bdb
def test_cant_spend_same_input_twice_in_tx(b, genesis_block):
"""
Recreate duplicated fulfillments bug
https://github.com/bigchaindb/bigchaindb/issues/1099
"""
from bigchaindb.models import Transaction
from bigchaindb.common.exceptions import DoubleSpend
# create a divisible asset
tx_create = Transaction.create([b.me], [([b.me], 100)])
tx_create_signed = tx_create.sign([b.me_private])
assert b.validate_transaction(tx_create_signed) == tx_create_signed
# create a block and valid vote
block = b.create_block([tx_create_signed])
b.write_block(block)
vote = b.vote(block.id, genesis_block.id, True)
b.write_vote(vote)
# Create a transfer transaction with duplicated fulfillments
dup_inputs = tx_create.to_inputs() + tx_create.to_inputs()
tx_transfer = Transaction.transfer(dup_inputs, [([b.me], 200)],
asset_id=tx_create.id)
tx_transfer_signed = tx_transfer.sign([b.me_private])
assert b.is_valid_transaction(tx_transfer_signed) is False
with pytest.raises(DoubleSpend):
tx_transfer_signed.validate(b)
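The regression reduces to detecting the same output being consumed twice within one transaction. A minimal sketch of such a guard; has_duplicate_inputs is a hypothetical helper, assuming each input's fulfills is a TransactionLink with txid and output attributes:

def has_duplicate_inputs(tx):
    # Two inputs are duplicates when they point at the same output
    # of the same transaction.
    links = [(i.fulfills.txid, i.fulfills.output) for i in tx.inputs]
    return len(links) != len(set(links))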
@pytest.mark.bdb
def test_transaction_unicode(b):
from bigchaindb.common.utils import serialize
from bigchaindb.models import Transaction
# http://www.fileformat.info/info/unicode/char/1f37a/index.htm
beer_python = {'beer': '\N{BEER MUG}'}
beer_json = '{"beer":"\N{BEER MUG}"}'
tx = (Transaction.create([b.me], [([b.me], 100)], beer_python)
).sign([b.me_private])
block = b.create_block([tx])
b.write_block(block)
assert b.get_block(block.id) == block.to_dict()
assert block.validate(b) == block
assert beer_json in serialize(block.to_dict())

View File

@ -44,3 +44,38 @@ def test_double_create(b, user_pk):
last_voted_block = b.get_last_voted_block()
assert len(last_voted_block.transactions) == 1
assert count_blocks(b.connection) == 2
@pytest.mark.usefixtures('inputs')
def test_get_owned_ids_works_after_double_spend(b, user_pk, user_sk):
""" Test for #633 https://github.com/bigchaindb/bigchaindb/issues/633 """
from bigchaindb.common.exceptions import DoubleSpend
from bigchaindb.models import Transaction
input_valid = b.get_owned_ids(user_pk).pop()
input_valid = b.get_transaction(input_valid.txid)
tx_valid = Transaction.transfer(input_valid.to_inputs(),
[([user_pk], 1)],
input_valid.id,
{'1': 1}).sign([user_sk])
# write the valid tx and wait for voting/block to catch up
b.write_transaction(tx_valid)
time.sleep(2)
# doesn't throw an exception
b.get_owned_ids(user_pk)
# create another transaction with the same input
tx_double_spend = Transaction.transfer(input_valid.to_inputs(),
[([user_pk], 1)],
input_valid.id,
{'2': 2}).sign([user_sk])
# write the double spend tx
b.write_transaction(tx_double_spend)
time.sleep(2)
# still doesn't throw an exception
b.get_owned_ids(user_pk)
with pytest.raises(DoubleSpend):
b.validate_transaction(tx_double_spend)

View File

@ -10,24 +10,32 @@ ORIGINAL_CONFIG = copy.deepcopy(bigchaindb._config)
@pytest.fixture(scope='function', autouse=True)
def clean_config(monkeypatch):
monkeypatch.setattr('bigchaindb.config', copy.deepcopy(ORIGINAL_CONFIG))
def clean_config(monkeypatch, request):
import bigchaindb
original_config = copy.deepcopy(ORIGINAL_CONFIG)
backend = request.config.getoption('--database-backend')
original_config['database'] = bigchaindb._database_map[backend]
monkeypatch.setattr('bigchaindb.config', original_config)
def test_bigchain_instance_is_initialized_when_conf_provided():
def test_bigchain_instance_is_initialized_when_conf_provided(request):
import bigchaindb
from bigchaindb import config_utils
assert 'CONFIGURED' not in bigchaindb.config
config_utils.set_config({'keypair': {'public': 'a', 'private': 'b'}})
assert bigchaindb.config['CONFIGURED'] is True
b = bigchaindb.Bigchain()
assert b.me
assert b.me_private
def test_bigchain_instance_raises_when_not_configured(monkeypatch):
def test_bigchain_instance_raises_when_not_configured(request, monkeypatch):
import bigchaindb
from bigchaindb import config_utils
from bigchaindb.common import exceptions
assert 'CONFIGURED' not in bigchaindb.config
@ -101,47 +109,64 @@ def test_env_config(monkeypatch):
def test_autoconfigure_read_both_from_file_and_env(monkeypatch, request):
# constants
DATABASE_HOST = 'test-host'
DATABASE_NAME = 'test-dbname'
DATABASE_PORT = 4242
DATABASE_BACKEND = request.config.getoption('--database-backend')
SERVER_BIND = '1.2.3.4:56'
KEYRING = 'pubkey_0:pubkey_1:pubkey_2'
file_config = {
'database': {
'host': 'test-host',
'backend': request.config.getoption('--database-backend')
'host': DATABASE_HOST
},
'backlog_reassign_delay': 5
}
monkeypatch.setattr('bigchaindb.config_utils.file_config', lambda *args, **kwargs: file_config)
monkeypatch.setattr('os.environ', {'BIGCHAINDB_DATABASE_NAME': 'test-dbname',
'BIGCHAINDB_DATABASE_PORT': '4242',
'BIGCHAINDB_SERVER_BIND': '1.2.3.4:56',
'BIGCHAINDB_KEYRING': 'pubkey_0:pubkey_1:pubkey_2'})
monkeypatch.setattr('os.environ', {'BIGCHAINDB_DATABASE_NAME': DATABASE_NAME,
'BIGCHAINDB_DATABASE_PORT': str(DATABASE_PORT),
'BIGCHAINDB_DATABASE_BACKEND': DATABASE_BACKEND,
'BIGCHAINDB_SERVER_BIND': SERVER_BIND,
'BIGCHAINDB_KEYRING': KEYRING})
import bigchaindb
from bigchaindb import config_utils
config_utils.autoconfigure()
database_rethinkdb = {
'backend': 'rethinkdb',
'host': DATABASE_HOST,
'port': DATABASE_PORT,
'name': DATABASE_NAME,
}
database_mongodb = {
'backend': 'mongodb',
'host': DATABASE_HOST,
'port': DATABASE_PORT,
'name': DATABASE_NAME,
'replicaset': 'bigchain-rs',
}
database = {}
if DATABASE_BACKEND == 'mongodb':
database = database_mongodb
elif DATABASE_BACKEND == 'rethinkdb':
database = database_rethinkdb
assert bigchaindb.config == {
'CONFIGURED': True,
'server': {
'bind': '1.2.3.4:56',
'bind': SERVER_BIND,
'workers': None,
'threads': None,
},
'database': {
'backend': request.config.getoption('--database-backend'),
'host': 'test-host',
'port': 4242,
'name': 'test-dbname',
'replicaset': 'bigchain-rs'
},
'database': database,
'keypair': {
'public': None,
'private': None,
},
'keyring': ['pubkey_0', 'pubkey_1', 'pubkey_2'],
'statsd': {
'host': 'localhost',
'port': 8125,
'rate': 0.01,
},
'keyring': KEYRING.split(':'),
'backlog_reassign_delay': 5
}
@ -215,7 +240,6 @@ def test_write_config():
('BIGCHAINDB_DATABASE_HOST', 'test-host', 'host'),
('BIGCHAINDB_DATABASE_PORT', 4242, 'port'),
('BIGCHAINDB_DATABASE_NAME', 'test-db', 'name'),
('BIGCHAINDB_DATABASE_REPLICASET', 'test-replicaset', 'replicaset')
))
def test_database_envs(env_name, env_value, config_key, monkeypatch):
import bigchaindb
@ -227,3 +251,18 @@ def test_database_envs(env_name, env_value, config_key, monkeypatch):
expected_config['database'][config_key] = env_value
assert bigchaindb.config == expected_config
def test_database_envs_replicaset(monkeypatch):
# the replica set env is only used if the backend is mongodb
import bigchaindb
monkeypatch.setattr('os.environ', {'BIGCHAINDB_DATABASE_REPLICASET':
'test-replicaset'})
bigchaindb.config['database'] = bigchaindb._database_mongodb
bigchaindb.config_utils.autoconfigure()
expected_config = copy.deepcopy(bigchaindb.config)
expected_config['database']['replicaset'] = 'test-replicaset'
assert bigchaindb.config == expected_config

View File

@ -163,16 +163,3 @@ class TestBlockModel(object):
public_key = PublicKey(b.me)
assert public_key.verify(expected_block_serialized, block.signature)
def test_validate_already_voted_on_block(self, b, monkeypatch):
from unittest.mock import Mock
from bigchaindb.models import Transaction
tx = Transaction.create([b.me], [([b.me], 1)])
block = b.create_block([tx])
has_previous_vote = Mock()
has_previous_vote.return_value = True
monkeypatch.setattr(b, 'has_previous_vote', has_previous_vote)
assert block == block.validate(b)
assert has_previous_vote.called is True

View File

@ -1,14 +0,0 @@
from platform import node
def test_monitor_class_init_defaults():
import bigchaindb
from bigchaindb.monitor import Monitor
monitor = Monitor()
assert monitor
assert len(monitor._addr) == 2
# TODO get value from config
# assert monitor._addr[0] == bigchaindb.config['statsd']['host']
assert monitor._addr[0] == '127.0.0.1'
assert monitor._addr[1] == bigchaindb.config['statsd']['port']
assert monitor._prefix == node() + '.'

65
tests/test_txlist.py Normal file
View File

@ -0,0 +1,65 @@
"""
Test getting a list of transactions from the backend.
This test module defines its own fixture, which is used by all the tests.
"""
import pytest
@pytest.fixture
def txlist(b, user_pk, user2_pk, user_sk, user2_sk, genesis_block):
from bigchaindb.models import Transaction
prev_block_id = genesis_block.id
# Create first block with CREATE transactions
create1 = Transaction.create([user_pk], [([user2_pk], 6)]) \
.sign([user_sk])
create2 = Transaction.create([user2_pk],
[([user2_pk], 5), ([user_pk], 5)]) \
.sign([user2_sk])
block1 = b.create_block([create1, create2])
b.write_block(block1)
# Create second block with TRANSFER transactions
transfer1 = Transaction.transfer(create1.to_inputs(),
[([user_pk], 8)],
create1.id).sign([user2_sk])
block2 = b.create_block([transfer1])
b.write_block(block2)
# Create block with double spend
tx_doublespend = Transaction.transfer(create1.to_inputs(), [([user_pk], 9)],
create1.id).sign([user2_sk])
block_doublespend = b.create_block([tx_doublespend])
b.write_block(block_doublespend)
# Vote on all the blocks
prev_block_id = genesis_block.id
for bid in [block1.id, block2.id]:
vote = b.vote(bid, prev_block_id, True)
prev_block_id = bid
b.write_vote(vote)
# Create undecided block
untx = Transaction.create([user_pk], [([user2_pk], 7)]) \
.sign([user_sk])
block_undecided = b.create_block([untx])
b.write_block(block_undecided)
return type('', (), {
'create1': create1,
'transfer1': transfer1,
})
@pytest.mark.bdb
def test_get_txlist_by_asset(b, txlist):
res = b.get_transactions_filtered(txlist.create1.id)
assert set(tx.id for tx in res) == set([txlist.transfer1.id,
txlist.create1.id])
@pytest.mark.bdb
def test_get_txlist_by_operation(b, txlist):
res = b.get_transactions_filtered(txlist.create1.id, operation='CREATE')
assert set(tx.id for tx in res) == {txlist.create1.id}

View File

@ -41,7 +41,7 @@ def test_get_blocks_by_txid_endpoint(b, client):
block_invalid = b.create_block([tx])
b.write_block(block_invalid)
res = client.get(BLOCKS_ENDPOINT + "?tx_id=" + tx.id)
res = client.get(BLOCKS_ENDPOINT + '?tx_id=' + tx.id)
# test if block is retrieved as undecided
assert res.status_code == 200
assert block_invalid.id in res.json
@ -51,7 +51,7 @@ def test_get_blocks_by_txid_endpoint(b, client):
vote = b.vote(block_invalid.id, b.get_last_voted_block().id, False)
b.write_vote(vote)
res = client.get(BLOCKS_ENDPOINT + "?tx_id=" + tx.id)
res = client.get(BLOCKS_ENDPOINT + '?tx_id=' + tx.id)
# test if block is retrieved as invalid
assert res.status_code == 200
assert block_invalid.id in res.json
@ -61,7 +61,7 @@ def test_get_blocks_by_txid_endpoint(b, client):
block_valid = b.create_block([tx, tx2])
b.write_block(block_valid)
res = client.get(BLOCKS_ENDPOINT + "?tx_id=" + tx.id)
res = client.get(BLOCKS_ENDPOINT + '?tx_id=' + tx.id)
# test if block is retrieved as undecided
assert res.status_code == 200
assert block_valid.id in res.json
@ -71,7 +71,7 @@ def test_get_blocks_by_txid_endpoint(b, client):
vote = b.vote(block_valid.id, block_invalid.id, True)
b.write_vote(vote)
res = client.get(BLOCKS_ENDPOINT + "?tx_id=" + tx.id)
res = client.get(BLOCKS_ENDPOINT + '?tx_id=' + tx.id)
# test if block is retrieved as valid
assert res.status_code == 200
assert block_valid.id in res.json
@ -96,19 +96,19 @@ def test_get_blocks_by_txid_and_status_endpoint(b, client):
block_valid = b.create_block([tx, tx2])
b.write_block(block_valid)
res = client.get("{}?tx_id={}&status={}".format(BLOCKS_ENDPOINT, tx.id, Bigchain.BLOCK_INVALID))
res = client.get('{}?tx_id={}&status={}'.format(BLOCKS_ENDPOINT, tx.id, Bigchain.BLOCK_INVALID))
# test if no blocks are retrieved as invalid
assert res.status_code == 200
assert len(res.json) == 0
res = client.get("{}?tx_id={}&status={}".format(BLOCKS_ENDPOINT, tx.id, Bigchain.BLOCK_UNDECIDED))
res = client.get('{}?tx_id={}&status={}'.format(BLOCKS_ENDPOINT, tx.id, Bigchain.BLOCK_UNDECIDED))
# test if both blocks are retrieved as undecided
assert res.status_code == 200
assert block_valid.id in res.json
assert block_invalid.id in res.json
assert len(res.json) == 2
res = client.get("{}?tx_id={}&status={}".format(BLOCKS_ENDPOINT, tx.id, Bigchain.BLOCK_VALID))
res = client.get('{}?tx_id={}&status={}'.format(BLOCKS_ENDPOINT, tx.id, Bigchain.BLOCK_VALID))
# test if no blocks are retrieved as valid
assert res.status_code == 200
assert len(res.json) == 0
@ -121,18 +121,18 @@ def test_get_blocks_by_txid_and_status_endpoint(b, client):
vote = b.vote(block_valid.id, block_invalid.id, True)
b.write_vote(vote)
res = client.get("{}?tx_id={}&status={}".format(BLOCKS_ENDPOINT, tx.id, Bigchain.BLOCK_INVALID))
res = client.get('{}?tx_id={}&status={}'.format(BLOCKS_ENDPOINT, tx.id, Bigchain.BLOCK_INVALID))
# test if the invalid block is retrieved as invalid
assert res.status_code == 200
assert block_invalid.id in res.json
assert len(res.json) == 1
res = client.get("{}?tx_id={}&status={}".format(BLOCKS_ENDPOINT, tx.id, Bigchain.BLOCK_UNDECIDED))
res = client.get('{}?tx_id={}&status={}'.format(BLOCKS_ENDPOINT, tx.id, Bigchain.BLOCK_UNDECIDED))
# test if no blocks are retrieved as undecided
assert res.status_code == 200
assert len(res.json) == 0
res = client.get("{}?tx_id={}&status={}".format(BLOCKS_ENDPOINT, tx.id, Bigchain.BLOCK_VALID))
res = client.get('{}?tx_id={}&status={}'.format(BLOCKS_ENDPOINT, tx.id, Bigchain.BLOCK_VALID))
# test if the valid block is retrieved as valid
assert res.status_code == 200
assert block_valid.id in res.json
@ -141,11 +141,11 @@ def test_get_blocks_by_txid_and_status_endpoint(b, client):
@pytest.mark.bdb
def test_get_blocks_by_txid_endpoint_returns_empty_list_not_found(client):
res = client.get(BLOCKS_ENDPOINT + "?tx_id=")
res = client.get(BLOCKS_ENDPOINT + '?tx_id=')
assert res.status_code == 200
assert len(res.json) == 0
res = client.get(BLOCKS_ENDPOINT + "?tx_id=123")
res = client.get(BLOCKS_ENDPOINT + '?tx_id=123')
assert res.status_code == 200
assert len(res.json) == 0
@ -155,7 +155,7 @@ def test_get_blocks_by_txid_endpoint_returns_400_bad_query_params(client):
res = client.get(BLOCKS_ENDPOINT)
assert res.status_code == 400
res = client.get(BLOCKS_ENDPOINT + "?ts_id=123")
res = client.get(BLOCKS_ENDPOINT + '?ts_id=123')
assert res.status_code == 400
assert res.json == {
'message': {
@ -163,13 +163,13 @@ def test_get_blocks_by_txid_endpoint_returns_400_bad_query_params(client):
}
}
res = client.get(BLOCKS_ENDPOINT + "?tx_id=123&foo=123")
res = client.get(BLOCKS_ENDPOINT + '?tx_id=123&foo=123')
assert res.status_code == 400
assert res.json == {
'message': 'Unknown arguments: foo'
}
res = client.get(BLOCKS_ENDPOINT + "?tx_id=123&status=123")
res = client.get(BLOCKS_ENDPOINT + '?tx_id=123&status=123')
assert res.status_code == 400
assert res.json == {
'message': {

49
tests/web/test_outputs.py Normal file
View File

@ -0,0 +1,49 @@
import pytest
from unittest.mock import MagicMock, patch
pytestmark = [pytest.mark.bdb, pytest.mark.usefixtures('inputs')]
OUTPUTS_ENDPOINT = '/api/v1/outputs/'
def test_get_outputs_endpoint(client, user_pk):
m = MagicMock()
m.to_uri.side_effect = lambda s: 'a%sb' % s
with patch('bigchaindb.core.Bigchain.get_outputs_filtered') as gof:
gof.return_value = [m, m]
res = client.get(OUTPUTS_ENDPOINT + '?public_key={}'.format(user_pk))
assert res.json == ['a..b', 'a..b']
assert res.status_code == 200
gof.assert_called_once_with(user_pk, True)
def test_get_outputs_endpoint_unspent(client, user_pk):
m = MagicMock()
m.to_uri.side_effect = lambda s: 'a%sb' % s
with patch('bigchaindb.core.Bigchain.get_outputs_filtered') as gof:
gof.return_value = [m]
params = '?unspent=true&public_key={}'.format(user_pk)
res = client.get(OUTPUTS_ENDPOINT + params)
assert res.json == ['a..b']
assert res.status_code == 200
gof.assert_called_once_with(user_pk, False)
def test_get_outputs_endpoint_without_public_key(client):
res = client.get(OUTPUTS_ENDPOINT)
assert res.status_code == 400
def test_get_outputs_endpoint_with_invalid_public_key(client):
expected = {'message': {'public_key': 'Invalid base58 ed25519 key'}}
res = client.get(OUTPUTS_ENDPOINT + '?public_key=abc')
assert expected == res.json
assert res.status_code == 400
def test_get_outputs_endpoint_with_invalid_unspent(client, user_pk):
expected = {'message': {'unspent': 'Boolean value must be "true" or "false" (lowercase)'}}
params = '?unspent=tru&public_key={}'.format(user_pk)
res = client.get(OUTPUTS_ENDPOINT + params)
assert expected == res.json
assert res.status_code == 400

View File

@ -0,0 +1,75 @@
import pytest
def test_valid_txid():
from bigchaindb.web.views.parameters import valid_txid
valid = ['18ac3e7343f016890c510e93f935261169d9e3f565436429830faf0934f4f8e4',
'18AC3E7343F016890C510E93F935261169D9E3F565436429830FAF0934F4F8E4']
for h in valid:
assert valid_txid(h) == h.lower()
non = ['18ac3e7343f016890c510e93f935261169d9e3f565436429830faf0934f4f8e',
'18ac3e7343f016890c510e93f935261169d9e3f565436429830faf0934f4f8e45',
'18ac3e7343f016890c510e93f935261169d9e3f565436429830faf0934f4f8eg',
'18ac3e7343f016890c510e93f935261169d9e3f565436429830faf0934f4f8e ',
'']
for h in non:
with pytest.raises(ValueError):
valid_txid(h)
def test_valid_bool():
from bigchaindb.web.views.parameters import valid_bool
assert valid_bool('true') is True
assert valid_bool('false') is False
assert valid_bool('tRUE') is True
assert valid_bool('fALSE') is False
with pytest.raises(ValueError):
valid_bool('0')
with pytest.raises(ValueError):
valid_bool('1')
with pytest.raises(ValueError):
valid_bool('yes')
with pytest.raises(ValueError):
valid_bool('no')
def test_valid_ed25519():
from bigchaindb.web.views.parameters import valid_ed25519
valid = ['123456789abcdefghijkmnopqrstuvwxyz1111111111',
'123456789ABCDEFGHJKLMNPQRSTUVWXYZ1111111111']
for h in valid:
assert valid_ed25519(h) == h
with pytest.raises(ValueError):
valid_ed25519('1234556789abcdefghijkmnopqrstuvwxyz1111111')
with pytest.raises(ValueError):
valid_ed25519('1234556789abcdefghijkmnopqrstuvwxyz1111111111')
with pytest.raises(ValueError):
valid_ed25519('123456789abcdefghijkmnopqrstuvwxyz111111111l')
with pytest.raises(ValueError):
valid_ed25519('123456789abcdefghijkmnopqrstuvwxyz111111111I')
with pytest.raises(ValueError):
valid_ed25519('1234556789abcdefghijkmnopqrstuvwxyz11111111O')
with pytest.raises(ValueError):
valid_ed25519('1234556789abcdefghijkmnopqrstuvwxyz111111110')
def test_valid_operation():
from bigchaindb.web.views.parameters import valid_operation
assert valid_operation('create') == 'CREATE'
assert valid_operation('transfer') == 'TRANSFER'
assert valid_operation('CREATe') == 'CREATE'
assert valid_operation('TRANSFEr') == 'TRANSFER'
with pytest.raises(ValueError):
valid_operation('GENESIS')
with pytest.raises(ValueError):
valid_operation('blah')
with pytest.raises(ValueError):
valid_operation('')
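The behaviour pinned down above is compact enough to restate. A hedged sketch of two of the validators (illustrative bodies; see bigchaindb.web.views.parameters for the real ones):

def valid_bool_sketch(value):
    # Input is case-insensitive, but only 'true'/'false' are accepted.
    if value.lower() == 'true':
        return True
    if value.lower() == 'false':
        return False
    raise ValueError('Boolean value must be "true" or "false" (lowercase)')

def valid_operation_sketch(value):
    value = value.upper()
    if value not in ('CREATE', 'TRANSFER'):
        raise ValueError('Operation must be "CREATE" or "TRANSFER"')
    return value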

View File

@ -10,15 +10,15 @@ STATUSES_ENDPOINT = '/api/v1/statuses'
def test_get_transaction_status_endpoint(b, client, user_pk):
input_tx = b.get_owned_ids(user_pk).pop()
tx, status = b.get_transaction(input_tx.txid, include_status=True)
res = client.get(STATUSES_ENDPOINT + "?tx_id=" + input_tx.txid)
res = client.get(STATUSES_ENDPOINT + '?tx_id=' + input_tx.txid)
assert status == res.json['status']
assert res.json['_links']['tx'] == "/transactions/{}".format(input_tx.txid)
assert res.json['_links']['tx'] == '/transactions/{}'.format(input_tx.txid)
assert res.status_code == 200
@pytest.mark.bdb
def test_get_transaction_status_endpoint_returns_404_if_not_found(client):
res = client.get(STATUSES_ENDPOINT + "?tx_id=123")
res = client.get(STATUSES_ENDPOINT + '?tx_id=123')
assert res.status_code == 404
@ -32,7 +32,7 @@ def test_get_block_status_endpoint_undecided(b, client):
status = b.block_election_status(block.id, block.voters)
res = client.get(STATUSES_ENDPOINT + "?block_id=" + block.id)
res = client.get(STATUSES_ENDPOINT + '?block_id=' + block.id)
assert status == res.json['status']
assert '_links' not in res.json
assert res.status_code == 200
@ -53,7 +53,7 @@ def test_get_block_status_endpoint_valid(b, client):
status = b.block_election_status(block.id, block.voters)
res = client.get(STATUSES_ENDPOINT + "?block_id=" + block.id)
res = client.get(STATUSES_ENDPOINT + '?block_id=' + block.id)
assert status == res.json['status']
assert '_links' not in res.json
assert res.status_code == 200
@ -74,7 +74,7 @@ def test_get_block_status_endpoint_invalid(b, client):
status = b.block_election_status(block.id, block.voters)
res = client.get(STATUSES_ENDPOINT + "?block_id=" + block.id)
res = client.get(STATUSES_ENDPOINT + '?block_id=' + block.id)
assert status == res.json['status']
assert '_links' not in res.json
assert res.status_code == 200
@ -82,7 +82,7 @@ def test_get_block_status_endpoint_invalid(b, client):
@pytest.mark.bdb
def test_get_block_status_endpoint_returns_404_if_not_found(client):
res = client.get(STATUSES_ENDPOINT + "?block_id=123")
res = client.get(STATUSES_ENDPOINT + '?block_id=123')
assert res.status_code == 404
@ -91,8 +91,8 @@ def test_get_status_endpoint_returns_400_bad_query_params(client):
res = client.get(STATUSES_ENDPOINT)
assert res.status_code == 400
res = client.get(STATUSES_ENDPOINT + "?ts_id=123")
res = client.get(STATUSES_ENDPOINT + '?ts_id=123')
assert res.status_code == 400
res = client.get(STATUSES_ENDPOINT + "?tx_id=123&block_id=123")
res = client.get(STATUSES_ENDPOINT + '?tx_id=123&block_id=123')
assert res.status_code == 400

View File

@ -1,5 +1,6 @@
import builtins
import json
from unittest.mock import patch
import pytest
from bigchaindb.common import crypto
@ -37,6 +38,9 @@ def test_post_create_transaction_endpoint(b, client):
tx = tx.sign([user_priv])
res = client.post(TX_ENDPOINT, data=json.dumps(tx.to_dict()))
assert res.status_code == 202
assert res.json['inputs'][0]['owners_before'][0] == user_pub
assert res.json['outputs'][0]['public_keys'][0] == user_pub
@ -53,8 +57,8 @@ def test_post_create_transaction_with_invalid_id(b, client, caplog):
res = client.post(TX_ENDPOINT, data=json.dumps(tx))
expected_status_code = 400
expected_error_message = (
"Invalid transaction ({}): The transaction's id '{}' isn't equal to "
"the hash of its body, i.e. it's not valid."
'Invalid transaction ({}): The transaction\'s id \'{}\' isn\'t equal to '
'the hash of its body, i.e. it\'s not valid.'
).format(InvalidHash.__name__, tx['id'])
assert res.status_code == expected_status_code
assert res.json['message'] == expected_error_message
@ -74,8 +78,8 @@ def test_post_create_transaction_with_invalid_signature(b, client, caplog):
res = client.post(TX_ENDPOINT, data=json.dumps(tx))
expected_status_code = 400
expected_error_message = (
"Invalid transaction ({}): Fulfillment URI "
"couldn't been parsed"
'Invalid transaction ({}): Fulfillment URI '
'couldn\'t been parsed'
).format(InvalidSignature.__name__)
assert res.status_code == expected_status_code
assert res.json['message'] == expected_error_message
@ -156,6 +160,8 @@ def test_post_transfer_transaction_endpoint(b, client, user_pk, user_sk):
res = client.post(TX_ENDPOINT, data=json.dumps(transfer_tx.to_dict()))
assert res.status_code == 202
assert res.json['inputs'][0]['owners_before'][0] == user_pk
assert res.json['outputs'][0]['public_keys'][0] == user_pub
@ -180,3 +186,45 @@ def test_post_invalid_transfer_transaction_returns_400(b, client, user_pk):
InvalidSignature.__name__, 'Transaction signature is invalid.')
assert res.status_code == expected_status_code
assert res.json['message'] == expected_error_message
def test_transactions_get_list_good(client):
from functools import partial
def get_txs_patched(conn, **args):
""" Patch `get_transactions_filtered` so that rather than return an array
of transactions it returns an array of shims with a to_dict() method
that reports one of the arguments passed to `get_transactions_filtered`.
"""
return [type('', (), {'to_dict': partial(lambda a: a, arg)})
for arg in sorted(args.items())]
asset_id = '1' * 64
with patch('bigchaindb.core.Bigchain.get_transactions_filtered', get_txs_patched):
url = TX_ENDPOINT + "?asset_id=" + asset_id
assert client.get(url).json == [
['asset_id', asset_id],
['operation', None]
]
url = TX_ENDPOINT + "?asset_id=" + asset_id + "&operation=CREATE"
assert client.get(url).json == [
['asset_id', asset_id],
['operation', 'CREATE']
]
def test_transactions_get_list_bad(client):
def should_not_be_called():
assert False
with patch('bigchaindb.core.Bigchain.get_transactions_filtered',
lambda *_, **__: should_not_be_called()):
# Test asset id validated
url = TX_ENDPOINT + "?asset_id=" + '1' * 63
assert client.get(url).status_code == 400
# Test operation validated
url = TX_ENDPOINT + "?asset_id=" + '1' * 64 + "&operation=CEATE"
assert client.get(url).status_code == 400
# Test asset ID required
url = TX_ENDPOINT + "?operation=CREATE"
assert client.get(url).status_code == 400

View File

@ -1,24 +0,0 @@
import pytest
pytestmark = [pytest.mark.bdb, pytest.mark.usefixtures('inputs')]
UNSPENTS_ENDPOINT = '/api/v1/unspents/'
def test_get_unspents_endpoint(b, client, user_pk):
expected = [u.to_uri('..') for u in b.get_owned_ids(user_pk)]
res = client.get(UNSPENTS_ENDPOINT + '?public_key={}'.format(user_pk))
assert expected == res.json
assert res.status_code == 200
def test_get_unspents_endpoint_without_public_key(client):
res = client.get(UNSPENTS_ENDPOINT)
assert res.status_code == 400
def test_get_unspents_endpoint_with_unused_public_key(client):
expected = []
res = client.get(UNSPENTS_ENDPOINT + '?public_key=abc')
assert expected == res.json
assert res.status_code == 200

Some files were not shown because too many files have changed in this diff.