Merge branch 'master' into tendermint

vrde 2017-11-29 17:18:00 +01:00
commit 044a052644
143 changed files with 2144 additions and 1614 deletions


@@ -13,5 +13,7 @@ elif [[ "${BIGCHAINDB_DATABASE_BACKEND}" == localmongodb && \
# Run a sub-set of tests over SSL; those marked as 'pytest.mark.bdb_ssl'.
pytest -sv --database-backend=localmongodb-ssl --cov=bigchaindb -m bdb_ssl
else
pytest -sv -n auto --cov=bigchaindb
# Run the full suite of tests for RethinkDB (the default backend when testing)
pytest -sv -m "serial"
pytest -sv --cov=bigchaindb -m "not serial"
fi
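For context, the `bdb_ssl` marker selected by `-m bdb_ssl` above is applied to tests with a decorator; a minimal sketch (the test name and body are hypothetical):

```python
import pytest

@pytest.mark.bdb_ssl
def test_query_over_ssl():
    # Collected only by `pytest -m bdb_ssl`, as in the CI branch above.
    assert True
```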

.github/pull_request_template.md (new file)

@@ -0,0 +1,34 @@
## Description
A few sentences describing the overall goals of the pull request's commits.
## Issues This PR Fixes
Fixes #NNNN
Fixes #NNNN
## Related PRs
List related PRs against other branches e.g. for backporting features/bugfixes
to previous release branches:
Repo/Branch | PR
------ | ------
some_other_PR | [link]()
## Todos
- [ ] Tested and working on development environment
- [ ] Unit tests (if appropriate)
- [ ] Added/Updated all related documentation. Add [link]() if different from this PR
- [ ] DevOps Support needed e.g. create Runscope API test if new endpoint added or
update deployment docs. Create a ticket and add [link]()
## Deployment Notes
Notes about how to deploy this work. For example, running a migration against the production DB.
## How to QA
Outline the steps to test or reproduce the PR here.
## Impacted Areas in Application
List general components of the application that this PR will affect:
- Scale
- Performance
- Security etc.

.gitignore

@@ -77,7 +77,6 @@ ntools/one-m/ansible/hosts
ntools/one-m/ansible/ansible.cfg
# Just in time documentation
docs/server/source/schema
docs/server/source/http-samples
# Terraform state files


@@ -32,6 +32,36 @@ For reference, the possible headings are:
* **External Contributors** to list contributors outside of BigchainDB GmbH.
* **Notes**
## [1.3] - 2017-11-21
Tag name: v1.3.0
### Added
* Metadata full-text search. [Pull request #1812](https://github.com/bigchaindb/bigchaindb/pull/1812)
### Notes
* Improved documentation about blocks and votes. [Pull request #1855](https://github.com/bigchaindb/bigchaindb/pull/1855)
## [1.2] - 2017-11-13
Tag name: v1.2.0
### Added
* New and improved installation setup docs and code. Pull requests [#1775](https://github.com/bigchaindb/bigchaindb/pull/1775) and [#1785](https://github.com/bigchaindb/bigchaindb/pull/1785)
* New BigchainDB configuration setting to set the port number of the log server: `log.port`. [Pull request #1796](https://github.com/bigchaindb/bigchaindb/pull/1796)
* New secondary index on `id` in the bigchain table. That will make some queries execute faster. [Pull request #1803](https://github.com/bigchaindb/bigchaindb/pull/1803)
* When using MongoDB, there are some restrictions on allowed names for keys (JSON keys). Those restrictions were always there but now BigchainDB checks key names explicitly, rather than leaving that to MongoDB. Pull requests [#1807](https://github.com/bigchaindb/bigchaindb/pull/1807) and [#1811](https://github.com/bigchaindb/bigchaindb/pull/1811)
* When using MongoDB, there are some restrictions on the allowed values of "language" (if that key is used in the values of `metadata` or `asset.data`). Those restrictions were always there but now BigchainDB checks the values explicitly, rather than leaving that to MongoDB. Pull requests [#1806](https://github.com/bigchaindb/bigchaindb/pull/1806) and [#1811](https://github.com/bigchaindb/bigchaindb/pull/1811)
* There's a new page in the root docs about permissions in BigchainDB. [Pull request #1788](https://github.com/bigchaindb/bigchaindb/pull/1788)
* There's a new option in the `bigchaindb start` command: `bigchaindb start --no-init` will avoid doing `bigchaindb init` if it wasn't done already. [Pull request #1814](https://github.com/bigchaindb/bigchaindb/pull/1814)
### Fixed
* Fixed a bug where setting the log level in a BigchainDB config file didn't have any effect. It does now. [Pull request #1797](https://github.com/bigchaindb/bigchaindb/pull/1797)
* The docs were wrong about there being no Ping/Pong support in the Events API. There is, so the docs were fixed. [Pull request #1799](https://github.com/bigchaindb/bigchaindb/pull/1799)
* Fixed an issue with closing WebSocket connections properly. [Pull request #1819](https://github.com/bigchaindb/bigchaindb/pull/1819)
### Notes
* Many changes were made to the Kubernetes-based production deployment template and code.
## [1.1] - 2017-09-26
Tag name: v1.1.0


@@ -145,6 +145,20 @@ Once you accept and submit the CLA, we'll email you with further instructions. (
Someone will then merge your branch or suggest changes. If we suggest changes, you won't have to open a new pull request, you can just push new code to the same branch (on `origin`) as you did before creating the pull request.
### Pull Request Guidelines
Before you submit a pull request, check that it meets these guidelines:
1. The pull request should include tests.
2. If the pull request adds functionality, the docs should be updated. Put
your new functionality into a function with a docstring, and add the
feature to the list in README.rst.
3. The pull request should work for Python 3.5, and pass the flake8 check.
Check https://travis-ci.org/bigchaindb/bigchaindb-driver/pull_requests
and make sure that the tests pass for all supported Python versions.
4. Follow the pull request template while creating new PRs; the template will
be visible to you when you create a new pull request.
### Tip: Upgrading All BigchainDB Dependencies
Over time, your versions of the Python packages used by BigchainDB will get out of date. You can upgrade them using:


@@ -13,7 +13,7 @@ BigchainDB is a scalable blockchain database. [The whitepaper](https://www.bigch
## Get Started with BigchainDB Server
### [Quickstart](https://docs.bigchaindb.com/projects/server/en/latest/quickstart.html)
### [Set Up & Run a Dev/Test Node](https://docs.bigchaindb.com/projects/server/en/latest/dev-and-test/setup-run-node.html)
### [Set Up & Run a Dev/Test Node](https://docs.bigchaindb.com/projects/server/en/latest/dev-and-test/index.html)
### [Run BigchainDB Server with Docker](https://docs.bigchaindb.com/projects/server/en/latest/appendices/run-with-docker.html)
### [Run BigchainDB Server with Vagrant](https://docs.bigchaindb.com/projects/server/en/latest/appendices/run-with-vagrant.html)
### [Run BigchainDB Server with Ansible](https://docs.bigchaindb.com/projects/server/en/latest/appendices/run-with-ansible.html)


@@ -265,6 +265,16 @@ def write_assets(conn, assets):
return
@register_query(MongoDBConnection)
def write_metadata(conn, metadata):
try:
return conn.run(
conn.collection('metadata')
.insert_many(metadata, ordered=False))
except OperationError:
return
@register_query(MongoDBConnection)
def get_assets(conn, asset_ids):
return conn.run(
@@ -273,6 +283,14 @@ def get_assets(conn, asset_ids):
projection={'_id': False}))
@register_query(MongoDBConnection)
def get_metadata(conn, txn_ids):
return conn.run(
conn.collection('metadata')
.find({'id': {'$in': txn_ids}},
projection={'_id': False}))
@register_query(MongoDBConnection)
def count_blocks(conn):
return conn.run(
@@ -348,9 +366,9 @@ def get_new_blocks_feed(conn, start_block_id):
@register_query(MongoDBConnection)
def text_search(conn, search, *, language='english', case_sensitive=False,
diacritic_sensitive=False, text_score=False, limit=0):
diacritic_sensitive=False, text_score=False, limit=0, table='assets'):
cursor = conn.run(
conn.collection('assets')
conn.collection(table)
.find({'$text': {
'$search': search,
'$language': language,
@@ -363,7 +381,7 @@ def text_search(conn, search, *, language='english', case_sensitive=False,
if text_score:
return cursor
return (_remove_text_score(asset) for asset in cursor)
return (_remove_text_score(obj) for obj in cursor)
def _remove_text_score(asset):


@@ -27,7 +27,7 @@ def create_database(conn, dbname):
@register_schema(MongoDBConnection)
def create_tables(conn, dbname):
for table_name in ['bigchain', 'backlog', 'votes', 'assets']:
for table_name in ['bigchain', 'backlog', 'votes', 'assets', 'metadata']:
logger.info('Create `%s` table.', table_name)
# create the table
# TODO: read and write concerns can be declared here
@@ -40,6 +40,7 @@ def create_indexes(conn, dbname):
create_backlog_secondary_index(conn, dbname)
create_votes_secondary_index(conn, dbname)
create_assets_secondary_index(conn, dbname)
create_metadata_secondary_index(conn, dbname)
@register_schema(MongoDBConnection)
@@ -121,3 +122,17 @@ def create_assets_secondary_index(conn, dbname):
# full text search index
conn.conn[dbname]['assets'].create_index([('$**', TEXT)], name='text')
def create_metadata_secondary_index(conn, dbname):
logger.info('Create `metadata` secondary index.')
# unique index on the id of the metadata.
# the id is the txid of the transaction for which the metadata
# was specified
conn.conn[dbname]['metadata'].create_index('id',
name='transaction_id',
unique=True)
# full text search index
conn.conn[dbname]['metadata'].create_index([('$**', TEXT)], name='text')
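The wildcard (`$**`) text index created above is what enables `$text` queries against the new `metadata` table. A sketch of the kind of query it supports, using pymongo directly (the database name and search term are made up):

```python
from pymongo import MongoClient

client = MongoClient('localhost', 27017)
metadata = client['bigchain']['metadata']

# Served by the 'text' index created in create_metadata_secondary_index().
cursor = metadata.find({'$text': {'$search': 'bicycle'}},
                       projection={'_id': False})
for doc in cursor:
    print(doc['id'], doc['metadata'])
```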


@@ -303,6 +303,19 @@ def write_assets(connection, assets):
raise NotImplementedError
@singledispatch
def write_metadata(connection, metadata):
"""Write a list of metadata to the metadata table.
Args:
metadata (list): a list of metadata to write.
Returns:
The database response.
"""
raise NotImplementedError
@singledispatch
def get_assets(connection, asset_ids):
"""Get a list of assets from the assets table.
@@ -317,6 +330,20 @@ def get_assets(connection, asset_ids):
raise NotImplementedError
@singledispatch
def get_metadata(connection, txn_ids):
"""Get a list of metadata from the metadata table.
Args:
txn_ids (list): a list of ids for the metadata to be retrieved from
the database.
Returns:
metadata (list): the list of returned metadata.
"""
raise NotImplementedError
@singledispatch
def count_blocks(connection):
"""Count the number of blocks in the bigchain table.
@@ -409,7 +436,7 @@ def get_new_blocks_feed(connection, start_block_id):
@singledispatch
def text_search(conn, search, *, language='english', case_sensitive=False,
diacritic_sensitive=False, text_score=False, limit=0):
diacritic_sensitive=False, text_score=False, limit=0, table=None):
"""Return all the assets that match the text search.
The results are sorted by text score.


@@ -173,6 +173,13 @@ def write_assets(connection, assets):
.insert(assets, durability=WRITE_DURABILITY))
@register_query(RethinkDBConnection)
def write_metadata(connection, metadata):
return connection.run(
r.table('metadata')
.insert(metadata, durability=WRITE_DURABILITY))
@register_query(RethinkDBConnection)
def get_assets(connection, asset_ids):
return connection.run(
@@ -180,6 +187,13 @@ def get_assets(connection, asset_ids):
.get_all(*asset_ids))
@register_query(RethinkDBConnection)
def get_metadata(connection, txn_ids):
return connection.run(
r.table('metadata', read_mode=READ_MODE)
.get_all(*txn_ids))
@register_query(RethinkDBConnection)
def count_blocks(connection):
return connection.run(


@@ -23,7 +23,7 @@ def create_database(connection, dbname):
@register_schema(RethinkDBConnection)
def create_tables(connection, dbname):
for table_name in ['bigchain', 'backlog', 'votes', 'assets']:
for table_name in ['bigchain', 'backlog', 'votes', 'assets', 'metadata']:
logger.info('Create `%s` table.', table_name)
connection.run(r.db(dbname).table_create(table_name))


@@ -16,10 +16,17 @@ import logging
import bigchaindb
from bigchaindb.backend.connection import connect
from bigchaindb.common.exceptions import ValidationError
from bigchaindb.common.utils import validate_all_values_for_key
logger = logging.getLogger(__name__)
TABLES = ('bigchain', 'backlog', 'votes', 'assets')
TABLES = ('bigchain', 'backlog', 'votes', 'assets', 'metadata')
VALID_LANGUAGES = ('danish', 'dutch', 'english', 'finnish', 'french', 'german',
'hungarian', 'italian', 'norwegian', 'portuguese', 'romanian',
'russian', 'spanish', 'swedish', 'turkish', 'none',
'da', 'nl', 'en', 'fi', 'fr', 'de', 'hu', 'it', 'nb', 'pt',
'ro', 'ru', 'es', 'sv', 'tr')
@singledispatch
@@ -99,3 +106,44 @@ def init_database(connection=None, dbname=None):
create_database(connection, dbname)
create_tables(connection, dbname)
create_indexes(connection, dbname)
def validate_language_key(obj, key):
"""Validate all nested "language" keys in `obj`.
Args:
obj (dict): dictionary whose "language" key is to be validated.
Returns:
None: validation successful
Raises:
ValidationError: will raise exception in case language is not valid.
"""
backend = bigchaindb.config['database']['backend']
if backend == 'mongodb':
data = obj.get(key, {})
if isinstance(data, dict):
validate_all_values_for_key(data, 'language', validate_language)
def validate_language(value):
"""Check if `value` is a valid language.
https://docs.mongodb.com/manual/reference/text-search-languages/
Args:
value (str): language to be validated
Returns:
None: validation successful
Raises:
ValidationError: will raise exception in case language is not valid.
"""
if value not in VALID_LANGUAGES:
error_str = ('MongoDB does not support text search for the '
'language "{}". If you do not understand this error '
'message then please rename key/field "language" to '
'something else like "lang".').format(value)
raise ValidationError(error_str)
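A quick sketch of how the two validators behave, assuming they are imported from this module and a MongoDB backend is configured (the values are illustrative):

```python
from bigchaindb.backend.schema import (validate_language,
                                       validate_language_key)
from bigchaindb.common.exceptions import ValidationError

validate_language('en')  # in VALID_LANGUAGES: passes silently

try:
    validate_language('klingon')  # not supported by MongoDB text search
except ValidationError as exc:
    print(exc)

# Walks the dict and applies validate_language() to every nested
# "language" key (a no-op unless the configured backend is MongoDB).
validate_language_key({'data': {'language': 'english'}}, 'data')
```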


@@ -196,7 +196,7 @@ def run_start(args):
logger.info('RethinkDB started with PID %s' % proc.pid)
try:
if args.initialize_database:
if not args.skip_initialize_database:
logger.info('Initializing database')
_run_init()
except DatabaseAlreadyExists:
@@ -303,10 +303,11 @@ def create_parser():
action='store_true',
help='Run RethinkDB on start')
start_parser.add_argument('--init',
dest='initialize_database',
start_parser.add_argument('--no-init',
dest='skip_initialize_database',
default=False,
action='store_true',
help='Force initialize database')
help='Skip database initialization')
# parser for configuring the number of shards
sharding_parser = subparsers.add_parser('set-shards',
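The change flips the CLI default: `bigchaindb start` now initializes the database unless told not to. A self-contained argparse sketch of the new flag's behavior:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--no-init',
                    dest='skip_initialize_database',
                    default=False,
                    action='store_true',
                    help='Skip database initialization')

assert parser.parse_args([]).skip_initialize_database is False   # init runs
assert parser.parse_args(['--no-init']).skip_initialize_database is True
```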


@@ -3,11 +3,22 @@
This directory contains the schemas for the different JSON documents BigchainDB uses.
The aim is to provide:
- a strict definition/documentation of the data structures used in BigchainDB
- a language independent tool to validate the structure of incoming/outcoming
data (there are several ready to use
[implementations](http://json-schema.org/implementations.html) written in
different languages)
- a strict definition of the data structures used in BigchainDB
- a language independent tool to validate the structure of incoming/outgoing
data (there are several ready to use
[implementations](http://json-schema.org/implementations.html) written in
different languages)
## Sources
The file defining the JSON Schema for votes (`vote.yaml`) is BigchainDB-specific.
The files defining the JSON Schema for transactions (`transaction_*.yaml`)
are copied from the [IPDB Protocol](https://github.com/ipdb/ipdb-protocol).
If you want to add a new version, you must add it to the IPDB Protocol first.
(You can't change existing versions. Those were used to validate old transactions
and are needed to re-check those transactions.)
## Learn about JSON Schema


@@ -13,31 +13,23 @@ from bigchaindb.common.exceptions import SchemaValidationError
logger = logging.getLogger(__name__)
def drop_schema_descriptions(node):
""" Drop descriptions from schema, since they clutter log output """
if 'description' in node:
del node['description']
for n in node.get('properties', {}).values():
drop_schema_descriptions(n)
for n in node.get('definitions', {}).values():
drop_schema_descriptions(n)
for n in node.get('anyOf', []):
drop_schema_descriptions(n)
def _load_schema(name):
""" Load a schema from disk """
path = os.path.join(os.path.dirname(__file__), name + '.yaml')
with open(path) as handle:
schema = yaml.safe_load(handle)
drop_schema_descriptions(schema)
fast_schema = rapidjson_schema.loads(rapidjson.dumps(schema))
return path, (schema, fast_schema)
TX_SCHEMA_PATH, TX_SCHEMA_COMMON = _load_schema('transaction')
_, TX_SCHEMA_CREATE = _load_schema('transaction_create')
_, TX_SCHEMA_TRANSFER = _load_schema('transaction_transfer')
TX_SCHEMA_VERSION = 'v1.0'
TX_SCHEMA_PATH, TX_SCHEMA_COMMON = _load_schema('transaction_' +
TX_SCHEMA_VERSION)
_, TX_SCHEMA_CREATE = _load_schema('transaction_create_' +
TX_SCHEMA_VERSION)
_, TX_SCHEMA_TRANSFER = _load_schema('transaction_transfer_' +
TX_SCHEMA_VERSION)
VOTE_SCHEMA_PATH, VOTE_SCHEMA = _load_schema('vote')
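With the version string baked into the file names, the loader now looks for files such as `transaction_v1.0.yaml` next to the module. A small sketch of the path construction mirrored from `_load_schema()`:

```python
import os.path

TX_SCHEMA_VERSION = 'v1.0'

def schema_path(name):
    # Same convention as _load_schema(): '<module dir>/<name>.yaml'
    return os.path.join(os.path.dirname(__file__), name + '.yaml')

print(schema_path('transaction_' + TX_SCHEMA_VERSION))
# -> .../common/schema/transaction_v1.0.yaml
```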


@@ -1,247 +0,0 @@
---
"$schema": "http://json-schema.org/draft-04/schema#"
id: "http://www.bigchaindb.com/schema/transaction.json"
type: object
additionalProperties: false
title: Transaction Schema
description: |
A transaction represents the creation or transfer of assets in BigchainDB.
required:
- id
- inputs
- outputs
- operation
- metadata
- asset
- version
properties:
id:
"$ref": "#/definitions/sha3_hexdigest"
description: |
A sha3 digest of the transaction. The ID is calculated by removing all
derived hashes and signatures from the transaction, serializing it to
JSON with keys in sorted order and then hashing the resulting string
with sha3.
operation:
"$ref": "#/definitions/operation"
asset:
"$ref": "#/definitions/asset"
description: |
Description of the asset being transacted.
See: `Asset`_.
inputs:
type: array
title: "Transaction inputs"
description: |
Array of the inputs of a transaction.
See: Input_.
items:
"$ref": "#/definitions/input"
outputs:
type: array
description: |
Array of outputs provided by this transaction.
See: Output_.
items:
"$ref": "#/definitions/output"
metadata:
"$ref": "#/definitions/metadata"
description: |
User provided transaction metadata. This field may be ``null`` or may
contain an id and an object with freeform metadata.
See: `Metadata`_.
version:
type: string
pattern: "^1\\.0$"
description: |
BigchainDB transaction schema version.
definitions:
offset:
type: integer
minimum: 0
base58:
pattern: "[1-9a-zA-Z^OIl]{43,44}"
type: string
public_keys:
anyOf:
- type: array
items:
"$ref": "#/definitions/base58"
- type: 'null'
sha3_hexdigest:
pattern: "[0-9a-f]{64}"
type: string
uuid4:
pattern: "[a-f0-9]{8}-[a-f0-9]{4}-4[a-f0-9]{3}-[89ab][a-f0-9]{3}-[a-f0-9]{12}"
type: string
description: |
A `UUID <https://tools.ietf.org/html/rfc4122.html>`_
of type 4 (random).
operation:
type: string
description: |
Type of the transaction:
A ``CREATE`` transaction creates an asset in BigchainDB. This
transaction has outputs but no inputs, so a dummy input is created.
A ``TRANSFER`` transaction transfers ownership of an asset, by providing
an input that meets the conditions of an earlier transaction's outputs.
A ``GENESIS`` transaction is a special case transaction used as the
sole member of the first block in a BigchainDB ledger.
enum:
- CREATE
- TRANSFER
- GENESIS
asset:
type: object
description: |
Description of the asset being transacted. In the case of a ``TRANSFER``
transaction, this field contains only the ID of asset. In the case
of a ``CREATE`` transaction, this field contains only the user-defined
payload.
additionalProperties: false
properties:
id:
"$ref": "#/definitions/sha3_hexdigest"
description: |
ID of the transaction that created the asset.
data:
description: |
User provided metadata associated with the asset. May also be ``null``.
anyOf:
- type: object
additionalProperties: true
- type: 'null'
output:
type: object
description: |
A transaction output. Describes the quantity of an asset and the
requirements that must be met to spend the output.
See also: Input_.
additionalProperties: false
required:
- amount
- condition
- public_keys
properties:
amount:
type: string
pattern: "^[0-9]{1,20}$"
description: |
Integral amount of the asset represented by this output.
In the case of a non divisible asset, this will always be 1.
condition:
description: |
Describes the condition that needs to be met to spend the output. Has the properties:
- **details**: Details of the condition.
- **uri**: Condition encoded as an ASCII string.
type: object
additionalProperties: false
required:
- details
- uri
properties:
details:
"$ref": "#/definitions/condition_details"
uri:
type: string
pattern: "^ni:///sha-256;([a-zA-Z0-9_-]{0,86})[?]\
(fpt=(ed25519|threshold)-sha-256(&)?|cost=[0-9]+(&)?|\
subtypes=ed25519-sha-256(&)?){2,3}$"
public_keys:
"$ref": "#/definitions/public_keys"
description: |
List of public keys associated with the conditions on an output.
input:
type: "object"
description:
An input spends a previous output, by providing one or more fulfillments
that fulfill the conditions of the previous output.
additionalProperties: false
required:
- owners_before
- fulfillment
properties:
owners_before:
"$ref": "#/definitions/public_keys"
description: |
List of public keys of the previous owners of the asset.
fulfillment:
description: |
Fulfillment of an `Output.condition`_, or, put a different way, a payload
that satisfies the condition of a previous output to prove that the
creator(s) of this transaction have control over the listed asset.
anyOf:
- type: string
pattern: "^[a-zA-Z0-9_-]*$"
- "$ref": "#/definitions/condition_details"
fulfills:
anyOf:
- type: 'object'
description: |
Reference to the output that is being spent.
additionalProperties: false
required:
- output_index
- transaction_id
properties:
output_index:
"$ref": "#/definitions/offset"
description: |
Index of the output containing the condition being fulfilled
transaction_id:
"$ref": "#/definitions/sha3_hexdigest"
description: |
Transaction ID containing the output to spend
- type: 'null'
metadata:
anyOf:
- type: object
description: |
User provided transaction metadata. This field may be ``null`` or may
contain a non-empty object with freeform metadata.
additionalProperties: true
minProperties: 1
- type: 'null'
condition_details:
description: |
Details needed to reconstruct the condition associated with an output.
Currently, BigchainDB only supports ed25519 and threshold condition types.
anyOf:
- type: object
additionalProperties: false
required:
- type
- public_key
properties:
type:
type: string
pattern: "^ed25519-sha-256$"
public_key:
"$ref": "#/definitions/base58"
- type: object
additionalProperties: false
required:
- type
- threshold
- subconditions
properties:
type:
type: "string"
pattern: "^threshold-sha-256$"
threshold:
type: integer
minimum: 1
maximum: 100
subconditions:
type: array
items:
"$ref": "#/definitions/condition_details"


@@ -10,8 +10,6 @@ properties:
properties:
id:
"$ref": "#/definitions/sha3_hexdigest"
description: |
ID of the transaction that created the asset.
required:
- id
inputs:


@@ -0,0 +1,161 @@
---
"$schema": "http://json-schema.org/draft-04/schema#"
type: object
additionalProperties: false
title: Transaction Schema
required:
- id
- inputs
- outputs
- operation
- metadata
- asset
- version
properties:
id:
"$ref": "#/definitions/sha3_hexdigest"
operation:
"$ref": "#/definitions/operation"
asset:
"$ref": "#/definitions/asset"
inputs:
type: array
title: "Transaction inputs"
items:
"$ref": "#/definitions/input"
outputs:
type: array
items:
"$ref": "#/definitions/output"
metadata:
"$ref": "#/definitions/metadata"
version:
type: string
pattern: "^1\\.0$"
definitions:
offset:
type: integer
minimum: 0
base58:
pattern: "[1-9a-zA-Z^OIl]{43,44}"
type: string
public_keys:
anyOf:
- type: array
items:
"$ref": "#/definitions/base58"
- type: 'null'
sha3_hexdigest:
pattern: "[0-9a-f]{64}"
type: string
uuid4:
pattern: "[a-f0-9]{8}-[a-f0-9]{4}-4[a-f0-9]{3}-[89ab][a-f0-9]{3}-[a-f0-9]{12}"
type: string
operation:
type: string
enum:
- CREATE
- TRANSFER
- GENESIS
asset:
type: object
additionalProperties: false
properties:
id:
"$ref": "#/definitions/sha3_hexdigest"
data:
anyOf:
- type: object
additionalProperties: true
- type: 'null'
output:
type: object
additionalProperties: false
required:
- amount
- condition
- public_keys
properties:
amount:
type: string
pattern: "^[0-9]{1,20}$"
condition:
type: object
additionalProperties: false
required:
- details
- uri
properties:
details:
"$ref": "#/definitions/condition_details"
uri:
type: string
pattern: "^ni:///sha-256;([a-zA-Z0-9_-]{0,86})[?]\
(fpt=(ed25519|threshold)-sha-256(&)?|cost=[0-9]+(&)?|\
subtypes=ed25519-sha-256(&)?){2,3}$"
public_keys:
"$ref": "#/definitions/public_keys"
input:
type: "object"
additionalProperties: false
required:
- owners_before
- fulfillment
properties:
owners_before:
"$ref": "#/definitions/public_keys"
fulfillment:
anyOf:
- type: string
pattern: "^[a-zA-Z0-9_-]*$"
- "$ref": "#/definitions/condition_details"
fulfills:
anyOf:
- type: 'object'
additionalProperties: false
required:
- output_index
- transaction_id
properties:
output_index:
"$ref": "#/definitions/offset"
transaction_id:
"$ref": "#/definitions/sha3_hexdigest"
- type: 'null'
metadata:
anyOf:
- type: object
additionalProperties: true
minProperties: 1
- type: 'null'
condition_details:
anyOf:
- type: object
additionalProperties: false
required:
- type
- public_key
properties:
type:
type: string
pattern: "^ed25519-sha-256$"
public_key:
"$ref": "#/definitions/base58"
- type: object
additionalProperties: false
required:
- type
- threshold
- subconditions
properties:
type:
type: "string"
pattern: "^threshold-sha-256$"
threshold:
type: integer
minimum: 1
maximum: 100
subconditions:
type: array
items:
"$ref": "#/definitions/condition_details"

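As a sanity check, a dict can be validated against this schema with the generic `jsonschema` package; a hedged sketch (the sample transaction is deliberately incomplete, so validation fails):

```python
import jsonschema
import yaml

with open('transaction_v1.0.yaml') as handle:
    schema = yaml.safe_load(handle)

tx = {'operation': 'CREATE', 'version': '1.0'}  # missing required fields

try:
    jsonschema.validate(tx, schema)
except jsonschema.ValidationError as exc:
    print(exc.message)  # e.g. "'id' is a required property"
```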

@@ -4,13 +4,6 @@ id: "http://www.bigchaindb.com/schema/vote.json"
type: object
additionalProperties: false
title: Vote Schema
description: |
A Vote is an endorsement of a Block (identified by a hash) by
a node (identified by a public key).
The outer Vote object contains the details of the vote being made
as well as the signature and identifying information of the node
passing the vote.
required:
- node_pubkey
- signature
@@ -19,18 +12,12 @@ properties:
node_pubkey:
type: "string"
pattern: "[1-9a-zA-Z^OIl]{43,44}"
description: |
Ed25519 public key identifying the voting node.
signature:
type: "string"
pattern: "[1-9a-zA-Z^OIl]{86,88}"
description:
Ed25519 signature of the `Vote Details`_ object.
vote:
type: "object"
additionalProperties: false
description: |
`Vote Details`_ to be signed.
required:
- invalid_reason
- is_block_valid
@@ -40,33 +27,17 @@ properties:
properties:
previous_block:
"$ref": "#/definitions/sha3_hexdigest"
description: |
ID (SHA3 hash) of the block that precedes the block being voted on.
The notion of a "previous" block is subject to vote.
voting_for_block:
"$ref": "#/definitions/sha3_hexdigest"
description: |
ID (SHA3 hash) of the block being voted on.
is_block_valid:
type: "boolean"
description: |
This field is ``true`` if the block was deemed valid by the node.
invalid_reason:
anyOf:
- type: "string"
description: |
Reason the block is voted invalid, or ``null``.
.. container:: notice
**Note**: The invalid_reason was not being used and may be dropped in a future version of BigchainDB. See Issue `#217 <https://github.com/bigchaindb/bigchaindb/issues/217>`_ on GitHub.
- type: "null"
timestamp:
type: "string"
pattern: "[0-9]{10}"
description: |
Unix timestamp that the vote was created by the node, according
to the system time of the node.
definitions:
sha3_hexdigest:
pattern: "[0-9a-f]{64}"


@@ -52,53 +52,73 @@ def deserialize(data):
def validate_txn_obj(obj_name, obj, key, validation_fun):
"""Validates value associated to `key` in `obj` by applying
`validation_fun`.
"""Validate value of `key` in `obj` using `validation_fun`.
Args:
obj_name (str): name for `obj` being validated.
obj (dict): dictonary object.
obj (dict): dictionary object.
key (str): key to be validated in `obj`.
validation_fun (function): function used to validate the value
of `key`.
Returns:
None: indicates validation successfull
None: indicates validation successful
Raises:
ValidationError: `validation_fun` will raise this error on failure
ValidationError: `validation_fun` will raise exception on failure
"""
backend = bigchaindb.config['database']['backend']
if backend == 'mongodb':
data = obj.get(key, {}) or {}
validate_all_keys(obj_name, data, validation_fun)
data = obj.get(key, {})
if isinstance(data, dict):
validate_all_keys(obj_name, data, validation_fun)
def validate_all_keys(obj_name, obj, validation_fun):
"""Validates all (nested) keys in `obj` by using `validation_fun`
"""Validate all (nested) keys in `obj` by using `validation_fun`.
Args:
obj_name (str): name for `obj` being validated.
obj (dict): dictonary object.
obj (dict): dictionary object.
validation_fun (function): function used to validate the value
of `key`.
Returns:
None: indicates validation successfull
None: indicates validation successful
Raises:
ValidationError: `validation_fun` will raise this error on failure
"""
for key, value in obj.items():
validation_fun(obj_name, key)
if type(value) is dict:
if isinstance(value, dict):
validate_all_keys(obj_name, value, validation_fun)
return
def validate_all_values_for_key(obj, key, validation_fun):
"""Validate value for all (nested) occurrence of `key` in `obj`
using `validation_fun`.
Args:
obj (dict): dictionary object.
key (str): key whose value is to be validated.
validation_fun (function): function used to validate the value
of `key`.
Raises:
ValidationError: `validation_fun` will raise this error on failure
"""
for vkey, value in obj.items():
if vkey == key:
validation_fun(value)
elif isinstance(value, dict):
validate_all_values_for_key(value, key, validation_fun)
def validate_key(obj_name, key):
"""Check if `key` contains ".", "$" or null characters
"""Check if `key` contains ".", "$" or null characters.
https://docs.mongodb.com/manual/reference/limits/#Restrictions-on-Field-Names
Args:
@@ -106,13 +126,13 @@ def validate_key(obj_name, key):
key (str): key to be validated
Returns:
None: indicates validation successfull
None: validation successful
Raises:
ValidationError: raise execption incase of regex match.
ValidationError: will raise exception in case of regex match.
"""
if re.search(r'^[$]|\.|\x00', key):
error_str = ('Invalid key name "{}" in {} object. The '
'key name cannot contain characters '
'".", "$" or null characters').format(key, obj_name)
raise ValidationError(error_str) from ValueError()
raise ValidationError(error_str)
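Taken together, a short sketch of what these helpers accept and reject, assuming they are imported from `bigchaindb.common.utils`:

```python
from bigchaindb.common.exceptions import ValidationError
from bigchaindb.common.utils import (validate_all_values_for_key,
                                     validate_key)

# MongoDB reserves '.', a leading '$' and null characters in field names.
try:
    validate_key('metadata', '$set')
except ValidationError as exc:
    print(exc)

# Collect every nested value stored under a given key:
seen = []
validate_all_values_for_key({'language': 'en',
                             'nested': {'language': 'fr'}},
                            'language', seen.append)
assert sorted(seen) == ['en', 'fr']
```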


@@ -190,10 +190,15 @@ class Bigchain(object):
# get the asset ids from the block
if block_dict:
asset_ids = Block.get_asset_ids(block_dict)
txn_ids = Block.get_txn_ids(block_dict)
# get the assets from the database
assets = self.get_assets(asset_ids)
# get the metadata from the database
metadata = self.get_metadata(txn_ids)
# add the assets to the block transactions
block_dict = Block.couple_assets(block_dict, assets)
# add the metadata to the block transactions
block_dict = Block.couple_metadata(block_dict, metadata)
status = None
if include_status:
@@ -379,8 +384,8 @@ class Bigchain(object):
for transaction in transactions:
# ignore transactions in invalid blocks
# FIXME: Isn't there a faster solution than doing I/O again?
_, status = self.get_transaction(transaction['id'],
include_status=True)
txn, status = self.get_transaction(transaction['id'],
include_status=True)
if status == self.TX_VALID:
num_valid_transactions += 1
# `txid` can only have been spent in at most one valid block.
@@ -390,6 +395,7 @@
' with the chain'.format(txid))
# if it's not an invalid transaction
if status is not None:
transaction.update({'metadata': txn.metadata})
non_invalid_transactions.append(transaction)
if non_invalid_transactions:
@@ -508,10 +514,15 @@ class Bigchain(object):
# Decouple assets from block
assets, block_dict = block.decouple_assets()
metadatas, block_dict = block.decouple_metadata(block_dict)
# write the assets
if assets:
self.write_assets(assets)
if metadatas:
self.write_metadata(metadatas)
# write the block
return backend.query.write_block(self.connection, block_dict)
@@ -622,6 +633,19 @@ class Bigchain(object):
"""
return backend.query.get_assets(self.connection, asset_ids)
def get_metadata(self, txn_ids):
"""
Return a list of metadata that match the transaction ids (txn_ids)
Args:
txn_ids (:obj:`list` of :obj:`str`): A list of txn_ids to
retrieve from the database.
Returns:
list: The list of metadata returned from the database.
"""
return backend.query.get_metadata(self.connection, txn_ids)
def write_assets(self, assets):
"""
Writes a list of assets into the database.
@@ -632,7 +656,17 @@ class Bigchain(object):
"""
return backend.query.write_assets(self.connection, assets)
def text_search(self, search, *, limit=0):
def write_metadata(self, metadata):
"""
Writes a list of metadata into the database.
Args:
metadata (:obj:`list` of :obj:`dict`): A list of metadata to write to
the database.
"""
return backend.query.write_metadata(self.connection, metadata)
def text_search(self, search, *, limit=0, table='assets'):
"""
Return an iterator of assets that match the text search
@@ -643,12 +677,13 @@ class Bigchain(object):
Returns:
iter: An iterator of assets that match the text search.
"""
assets = backend.query.text_search(self.connection, search, limit=limit)
objects = backend.query.text_search(self.connection, search, limit=limit,
table=table)
# TODO: This is not efficient. There may be a more efficient way to
# query by storing block ids with the assets and using fastquery.
# See https://github.com/bigchaindb/bigchaindb/issues/1496
for asset in assets:
tx, status = self.get_transaction(asset['id'], True)
for obj in objects:
tx, status = self.get_transaction(obj['id'], True)
if status == self.TX_VALID:
yield asset
yield obj
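With `table` threaded through, callers can now search transaction metadata as well as assets. A usage sketch (assumes a configured node; constructing `Bigchain()` needs a reachable backend):

```python
from bigchaindb.core import Bigchain

b = Bigchain()

# Default table: search asset payloads.
for asset in b.text_search('bicycle'):
    print(asset)

# New: search transaction metadata.
for match in b.text_search('bicycle', table='metadata'):
    print(match)
```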


@@ -5,12 +5,12 @@ from bigchaindb.common.exceptions import (InvalidHash, InvalidSignature,
DoubleSpend, InputDoesNotExist,
TransactionNotInValidBlock,
AssetIdMismatch, AmountError,
SybilError,
DuplicateTransaction)
SybilError, DuplicateTransaction)
from bigchaindb.common.transaction import Transaction
from bigchaindb.common.utils import (gen_timestamp, serialize,
validate_txn_obj, validate_key)
from bigchaindb.common.schema import validate_transaction_schema
from bigchaindb.backend.schema import validate_language_key
class Transaction(Transaction):
@@ -91,6 +91,7 @@ class Transaction(Transaction):
validate_transaction_schema(tx_body)
validate_txn_obj('asset', tx_body['asset'], 'data', validate_key)
validate_txn_obj('metadata', tx_body, 'metadata', validate_key)
validate_language_key(tx_body['asset'], 'data')
return super().from_dict(tx_body)
@classmethod
@@ -116,6 +117,15 @@ class Transaction(Transaction):
del asset['id']
tx_dict.update({'asset': asset})
# get metadata of the transaction
metadata = list(bigchain.get_metadata([tx_dict['id']]))
if 'metadata' not in tx_dict:
metadata = metadata[0] if metadata else None
if metadata:
metadata = metadata.get('metadata')
tx_dict.update({'metadata': metadata})
return cls.from_dict(tx_dict)
@@ -354,11 +364,15 @@ class Block(object):
"""
asset_ids = cls.get_asset_ids(block_dict)
assets = bigchain.get_assets(asset_ids)
txn_ids = cls.get_txn_ids(block_dict)
metadata = bigchain.get_metadata(txn_ids)
# reconstruct block
block_dict = cls.couple_assets(block_dict, assets)
block_dict = cls.couple_metadata(block_dict, metadata)
kwargs = from_dict_kwargs or {}
return cls.from_dict(block_dict, **kwargs)
def decouple_assets(self):
def decouple_assets(self, block_dict=None):
"""
Extracts the assets from the ``CREATE`` transactions in the block.
@@ -367,7 +381,9 @@ class Block(object):
the block being the dict of the block with no assets in the CREATE
transactions.
"""
block_dict = deepcopy(self.to_dict())
if block_dict is None:
block_dict = deepcopy(self.to_dict())
assets = []
for transaction in block_dict['block']['transactions']:
if transaction['operation'] in [Transaction.CREATE,
@@ -378,6 +394,27 @@ class Block(object):
return (assets, block_dict)
def decouple_metadata(self, block_dict=None):
"""
Extracts the metadata from transactions in the block.
Returns:
tuple: (metadatas, block) with the metadatas being a list of dict/null and
the block being the dict of the block with no metadata in any transaction.
"""
if block_dict is None:
block_dict = deepcopy(self.to_dict())
metadatas = []
for transaction in block_dict['block']['transactions']:
metadata = transaction.pop('metadata')
if metadata:
metadata_new = {'id': transaction['id'],
'metadata': metadata}
metadatas.append(metadata_new)
return (metadatas, block_dict)
@staticmethod
def couple_assets(block_dict, assets):
"""
@@ -403,6 +440,34 @@ class Block(object):
transaction.update({'asset': assets.get(transaction['id'])})
return block_dict
@staticmethod
def couple_metadata(block_dict, metadatal):
"""
Given a block_dict with no metadata (as returned from a database call)
and a list of metadata, reconstruct the original block by putting the
metadata of each transaction back into its original transaction.
NOTE: Until a transaction is accepted, its `metadata` stays inside the
transaction itself. So if a transaction is found to already carry metadata,
it should not be overridden.
Args:
block_dict (:obj:`dict`): The block dict as returned from a
database call.
metadatal (:obj:`list` of :obj:`dict`): A list of metadata returned from
a database call.
Returns:
dict: The dict of the reconstructed block.
"""
# create a dict with {'<txid>': metadata}
metadatal = {m.pop('id'): m.pop('metadata') for m in metadatal}
# add the metadata to their corresponding transactions
for transaction in block_dict['block']['transactions']:
metadata = metadatal.get(transaction['id'], None)
transaction.update({'metadata': metadata})
return block_dict
@staticmethod
def get_asset_ids(block_dict):
"""
@@ -426,6 +491,25 @@ class Block(object):
return asset_ids
@staticmethod
def get_txn_ids(block_dict):
"""
Given a block_dict return all the transaction ids.
Args:
block_dict (:obj:`dict`): The block dict as returned from a
database call.
Returns:
list: The list of txn_ids in the block.
"""
txn_ids = []
for transaction in block_dict['block']['transactions']:
txn_ids.append(transaction['id'])
return txn_ids
def to_str(self):
return serialize(self.to_dict())
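Metadata now follows the same decouple/couple life cycle as assets: stripped out before a block is written and re-attached when it is read back. A schematic round trip over a hypothetical block dict (assuming `block` is a `Block` instance):

```python
block_dict = {'block': {'transactions': [
    {'id': 'tx1', 'operation': 'CREATE', 'metadata': {'note': 'hello'}},
    {'id': 'tx2', 'operation': 'TRANSFER', 'metadata': None},
]}}

metadatas, stripped = block.decouple_metadata(block_dict)
# metadatas == [{'id': 'tx1', 'metadata': {'note': 'hello'}}]
# 'metadata' has been popped from every transaction in `stripped`.

restored = Block.couple_metadata(stripped, metadatas)
# tx1 gets its metadata back; tx2's metadata is restored as None.
```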


@@ -1,2 +1,2 @@
__version__ = '1.2.0.dev'
__short_version__ = '1.2.dev'
__version__ = '1.4.0.dev'
__short_version__ = '1.4.dev'


@@ -2,6 +2,7 @@
from flask_restful import Api
from bigchaindb.web.views import (
assets,
metadata,
blocks,
info,
statuses,
@@ -27,6 +28,7 @@ def r(*args, **kwargs):
ROUTES_API_V1 = [
r('/', info.ApiV1Index),
r('assets/', assets.AssetListApi),
r('metadata/', metadata.MetadataApi),
r('blocks/<string:block_id>', blocks.BlockApi),
r('blocks/', blocks.BlockListApi),
r('statuses/', statuses.StatusApi),


@@ -50,5 +50,6 @@ def get_api_v1_info(api_prefix):
'statuses': '{}statuses/'.format(api_prefix),
'assets': '{}assets/'.format(api_prefix),
'outputs': '{}outputs/'.format(api_prefix),
'streams': websocket_root
'streams': websocket_root,
'metadata': '{}metadata/'.format(api_prefix),
}


@@ -0,0 +1,50 @@
"""This module provides the blueprint for some basic API endpoints.
For more information please refer to the documentation: http://bigchaindb.com/http-api
"""
import logging
from flask_restful import reqparse, Resource
from flask import current_app
from bigchaindb.backend.exceptions import OperationError
from bigchaindb.web.views.base import make_error
logger = logging.getLogger(__name__)
class MetadataApi(Resource):
def get(self):
"""API endpoint to perform a text search on transaction metadata.
Args:
search (str): Text search string to query the text index
limit (int, optional): Limit the number of returned documents.
Return:
A list of metadata that match the query.
"""
parser = reqparse.RequestParser()
parser.add_argument('search', type=str, required=True)
parser.add_argument('limit', type=int)
args = parser.parse_args()
if not args['search']:
return make_error(400, 'text_search cannot be empty')
if not args['limit']:
del args['limit']
pool = current_app.config['bigchain_pool']
with pool() as bigchain:
args['table'] = 'metadata'
metadata = bigchain.text_search(**args)
try:
# This only works with MongoDB as the backend
return list(metadata)
except OperationError as e:
return make_error(
400,
'({}): {}'.format(type(e).__name__, e)
)
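The new endpoint mirrors the existing `/assets/` text search. A hedged sketch of querying it with `requests` (host, port and search term assume a local node):

```python
import requests

resp = requests.get('http://localhost:9984/api/v1/metadata/',
                    params={'search': 'bicycle', 'limit': 10})
resp.raise_for_status()
for item in resp.json():
    print(item['id'], item['metadata'])
```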


@@ -70,6 +70,15 @@ class Dispatcher:
self.subscribers[uuid] = websocket
def unsubscribe(self, uuid):
"""Remove a websocket from the list of subscribers.
Args:
uuid (str): a unique identifier for the websocket.
"""
del self.subscribers[uuid]
@asyncio.coroutine
def publish(self):
"""Publish new events to the subscribers."""
@@ -115,11 +124,16 @@ def websocket_handler(request):
msg = yield from websocket.receive()
except RuntimeError as e:
logger.debug('Websocket exception: %s', str(e))
return websocket
if msg.type == aiohttp.WSMsgType.ERROR:
break
if msg.type == aiohttp.WSMsgType.CLOSED:
logger.debug('Websocket closed')
break
elif msg.type == aiohttp.WSMsgType.ERROR:
logger.debug('Websocket exception: %s', websocket.exception())
return websocket
break
request.app['dispatcher'].unsubscribe(uuid)
return websocket
def init_app(event_source, *, loop=None):
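The fix makes the handler unsubscribe a client on a normal close instead of returning early and leaking the subscription. A minimal aiohttp client sketch that exercises that path (the stream URL is an assumption for a local node):

```python
import asyncio
import aiohttp

async def listen_once(url='ws://localhost:9985/api/v1/streams/valid_transactions'):
    async with aiohttp.ClientSession() as session:
        async with session.ws_connect(url) as ws:
            msg = await ws.receive()  # one event from the node
            print(msg.type, msg.data)
        # Closing the socket now triggers dispatcher.unsubscribe(uuid).

asyncio.get_event_loop().run_until_complete(listen_once())
```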


@@ -25,7 +25,7 @@ services:
BIGCHAINDB_GRAPHITE_HOST: graphite
ports:
- "9984"
command: bigchaindb start --init
command: bigchaindb start
graphite:
image: hopsoft/graphite-statsd


@@ -45,4 +45,4 @@ services:
BIGCHAINDB_SERVER_BIND: 0.0.0.0:9984
ports:
- "9984"
command: bigchaindb start --init
command: bigchaindb start


@@ -30,4 +30,4 @@ services:
BIGCHAINDB_WSSERVER_HOST: 0.0.0.0
ports:
- "9984"
command: bigchaindb start --init
command: bigchaindb start


@@ -1,241 +0,0 @@
""" Script to render transaction schema into .rst document """
from collections import OrderedDict
import os.path
import yaml
from bigchaindb.common.schema import TX_SCHEMA_PATH, VOTE_SCHEMA_PATH
TPL_PROP = """\
%(title)s
%(underline)s
**type:** %(type)s
%(description)s
"""
TPL_STYLES = """
.. raw:: html
<style>
#%(container)s h2 {
border-top: solid 3px #6ab0de;
background-color: #e7f2fa;
padding: 5px;
}
#%(container)s h3 {
background: #f0f0f0;
border-left: solid 3px #ccc;
font-weight: bold;
padding: 6px;
font-size: 100%%;
font-family: monospace;
}
.document .section p {
margin-bottom: 16px;
}
.notice {
margin: 0px 16px 16px 16px;
background-color: white;
border: 1px solid gold;
padding: 3px 6px;
}
</style>
"""
TPL_TRANSACTION = TPL_STYLES + """\
.. This file was auto generated by %(file)s
==================
Transaction Schema
==================
* `Transaction`_
* Input_
* Output_
* Asset_
* Metadata_
Transaction
-----------
%(transaction)s
Input
-----
%(input)s
Output
------
%(output)s
Asset
-----
%(asset)s
Metadata
--------
%(metadata)s
"""
def generate_transaction_docs():
schema = load_schema(TX_SCHEMA_PATH)
defs = schema['definitions']
doc = TPL_TRANSACTION % {
'transaction': render_section('Transaction', schema),
'output': render_section('Output', defs['output']),
'input': render_section('Input', defs['input']),
'asset': render_section('Asset', defs['asset']),
'metadata': render_section('Metadata', defs['metadata']['anyOf'][0]),
'container': 'transaction-schema',
'file': os.path.basename(__file__),
}
write_schema_doc('transaction', doc)
TPL_VOTE = TPL_STYLES + """\
.. This file was auto generated by %(file)s
===========
Vote Schema
===========
Vote
----
%(vote)s
Vote Details
------------
%(vote_details)s
"""
def generate_vote_docs():
schema = load_schema(VOTE_SCHEMA_PATH)
doc = TPL_VOTE % {
'vote': render_section('Vote', schema),
'vote_details': render_section('Vote', schema['properties']['vote']),
'container': 'vote-schema',
'file': os.path.basename(__file__),
}
write_schema_doc('vote', doc)
def ordered_load_yaml(path):
""" Custom YAML loader to preserve key order """
class OrderedLoader(yaml.SafeLoader):
pass
def construct_mapping(loader, node):
loader.flatten_mapping(node)
return OrderedDict(loader.construct_pairs(node))
OrderedLoader.add_constructor(
yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG,
construct_mapping)
with open(path) as handle:
return yaml.load(handle, OrderedLoader)
def load_schema(path):
global DEFS
schema = ordered_load_yaml(path)
DEFS = schema['definitions']
return schema
def write_schema_doc(name, doc):
# Check base path exists
base_path = os.path.join(os.path.dirname(__file__), 'source/schema')
if not os.path.exists(base_path):
os.makedirs(base_path)
# Write doc
path = os.path.join(base_path, '%s.rst' % name)
with open(path, 'w') as handle:
handle.write(doc)
def render_section(section_name, obj):
""" Render a domain object and its properties """
out = [obj['description']]
for name, prop in obj.get('properties', {}).items():
try:
title = '%s.%s' % (section_name, name)
out += [TPL_PROP % {
'title': title,
'underline': '^' * len(title),
'description': property_description(prop),
'type': property_type(prop),
}]
except Exception as exc:
raise ValueError('Error rendering property: %s' % name, exc)
return '\n\n'.join(out + [''])
def property_description(prop):
""" Get description of property """
if 'description' in prop:
return prop['description']
if '$ref' in prop:
return property_description(resolve_ref(prop['$ref']))
if 'anyOf' in prop:
return property_description(prop['anyOf'][0])
raise KeyError('description')
def property_type(prop):
""" Resolve a string representing the type of a property """
if 'type' in prop:
if prop['type'] == 'array':
return 'array (%s)' % property_type(prop['items'])
return prop['type']
if 'anyOf' in prop:
return ' or '.join(property_type(p) for p in prop['anyOf'])
if '$ref' in prop:
return property_type(resolve_ref(prop['$ref']))
raise ValueError('Could not resolve property type')
DEFINITION_BASE_PATH = '#/definitions/'
def resolve_ref(ref):
""" Resolve definition reference """
assert ref.startswith(DEFINITION_BASE_PATH)
return DEFS[ref[len(DEFINITION_BASE_PATH):]]
def main():
""" Main function """
generate_transaction_docs()
generate_vote_docs()
def setup(*_):
""" Fool sphinx into thinking it's an extension muahaha """
main()
if __name__ == '__main__':
main()


@@ -3,3 +3,5 @@ recommonmark>=0.4.0
sphinx-rtd-theme>=0.1.9
sphinxcontrib-napoleon>=0.4.4
sphinxcontrib-httpdomain>=1.5.0
pyyaml>=3.12
bigchaindb


@@ -33,7 +33,7 @@ API Server bind? (default `localhost:9984`): 0.0.0.0:9984
Finally, run BigchainDB Server by doing:
```text
bigchaindb start --init
bigchaindb start
```
BigchainDB Server should now be running on the Azure virtual machine.


@@ -1,80 +0,0 @@
# Cryptography
The section documents the cryptographic algorithms and Python implementations
that we use.
Before hashing or computing the signature of a JSON document, we serialize it
as described in [the section on JSON serialization](json-serialization.html).
## Hashes
BigchainDB computes transaction and block hashes using an implementation of the
[SHA3-256](https://pypi.python.org/pypi/pysha3)
algorithm provided by the
[**pysha3** package](https://bitbucket.org/tiran/pykeccak),
which is a wrapper around the optimized reference implementation
from [http://keccak.noekeon.org](http://keccak.noekeon.org).
**Important**: Since selecting the Keccak hashing algorithm for SHA-3 in 2012, NIST released a new version of the hash using the same algorithm but slightly different parameters. As of version 0.9, BigchainDB is using the latest version, supported by pysha3 1.0b1. See below for an example output of the hash function.
Here's the relevant code from `bigchaindb/bigchaindb/common/crypto.py`:
```python
import sha3
def hash_data(data):
"""Hash the provided data using SHA3-256"""
return sha3.sha3_256(data.encode()).hexdigest()
```
The incoming `data` is understood to be a Python 3 string,
which may contain Unicode characters such as `'ü'` or `'字'`.
The Python 3 `encode()` method converts `data` to a bytes object.
`sha3.sha3_256(data.encode())` is a _sha3.SHA3 object;
the `hexdigest()` method converts it to a hexadecimal string.
For example:
```python
>>> import sha3
>>> data = '字'
>>> sha3.sha3_256(data.encode()).hexdigest()
'2b38731ba4ef72d4034bef49e87c381d1fbe75435163b391dd33249331f91fe7'
>>> data = 'hello world'
>>> sha3.sha3_256(data.encode()).hexdigest()
'644bcc7e564373040999aac89e7622f3ca71fba1d972fd94a31c3bfbf24e3938'
```
Note: Hashlocks (which are one kind of crypto-condition)
may use a different hash function.
## Signature Algorithm and Keys
BigchainDB uses the [Ed25519](https://ed25519.cr.yp.to/) public-key signature
system for generating its public/private key pairs. Ed25519 is an instance of
the [Edwards-curve Digital Signature Algorithm
(EdDSA)](https://en.wikipedia.org/wiki/EdDSA). As of December 2016, EdDSA was an
["Internet-Draft" with the
IETF](https://tools.ietf.org/html/draft-irtf-cfrg-eddsa-08) but was [already
widely used](https://ianix.com/pub/ed25519-deployment.html).
BigchainDB uses the
[**cryptoconditions** package](https://github.com/bigchaindb/cryptoconditions)
to do signature and keypair-related calculations.
That package, in turn, uses the [**PyNaCl** package](https://pypi.python.org/pypi/PyNaCl),
a Python binding to the Networking and Cryptography (NaCl) library.
All keys are represented with
[a Base58 encoding](https://en.wikipedia.org/wiki/Base58).
The cryptoconditions package uses the
[**base58** package](https://pypi.python.org/pypi/base58)
to calculate a Base58 encoding.
(There's no standard for Base58 encoding.)
Here's an example public/private key pair:
```js
"keypair": {
"public": "9WYFf8T65bv4S8jKU8wongKPD4AmMZAwvk1absFDbYLM",
"private": "3x7MQpPq8AEUGEuzAxSVHjU1FhLWVQJKFNNkvHhJPGCX"
}
```


@@ -0,0 +1,9 @@
Cryptography
============
See `the IPDB Transaction Spec
<https://the-ipdb-transaction-spec.readthedocs.io/en/latest/>`_,
especially the pages about:
- Cryptographic Hashes
- Cryptographic Keys & Signatures


@@ -1,101 +0,0 @@
# Run BigchainDB with Docker On Mac
**NOT for Production Use**
Those developing on Mac can follow this document to run BigchainDB in Docker
containers for a quick dev setup.
Running BigchainDB on Mac (Docker or otherwise) is not officially supported.
Support is very limited, as certain things work differently in Docker for Mac
than in Docker for other platforms.
Also, we do not use Mac for our development and testing. :)
This page may not always be up to date with the various settings and Docker
updates.
These steps work as of this writing (2017.Mar.09) and might break in the
future with updates to Docker for Mac.
Community contributions to make BigchainDB run on Docker for Mac are always
welcome.
## Prerequisite
Install Docker for Mac.
## (Optional) For a clean start
1. Stop all BigchainDB and RethinkDB/MongoDB containers.
2. Delete all BigchainDB docker images.
3. Delete the ~/bigchaindb_docker folder.
## Pull the images
Pull the bigchaindb and other required docker images from docker hub.
```text
docker pull bigchaindb/bigchaindb:master
docker pull [rethinkdb:2.3|mongo:3.4.1]
```
## Create the BigchainDB configuration file on Mac
```text
docker run \
--rm \
--volume $HOME/bigchaindb_docker:/data \
bigchaindb/bigchaindb:master \
-y configure \
[mongodb|rethinkdb]
```
To ensure that BigchainDB connects to the backend database bound to the virtual
interface `172.17.0.1`, you must edit the BigchainDB configuration file
(`~/bigchaindb_docker/.bigchaindb`) and change database.host from `localhost`
to `172.17.0.1`.
## Run the backend database on Mac
From v0.9 onwards, you can run RethinkDB or MongoDB.
We use the virtual interface created by the Docker daemon to allow
communication between the BigchainDB and database containers.
It has an IP address of 172.17.0.1 by default.
You can also use docker host networking or bind to your primary (eth)
interface, if needed.
### For RethinkDB backend
```text
docker run \
--name=rethinkdb \
--publish=28015:28015 \
--publish=8080:8080 \
--restart=always \
--volume $HOME/bigchaindb_docker:/data \
rethinkdb:2.3
```
### For MongoDB backend
```text
docker run \
--name=mongodb \
--publish=27017:27017 \
--restart=always \
--volume=$HOME/bigchaindb_docker/db:/data/db \
--volume=$HOME/bigchaindb_docker/configdb:/data/configdb \
mongo:3.4.1 --replSet=bigchain-rs
```
### Run BigchainDB on Mac
```text
docker run \
--name=bigchaindb \
--publish=9984:9984 \
--restart=always \
--volume=$HOME/bigchaindb_docker:/data \
bigchaindb/bigchaindb \
start
```


@@ -10,7 +10,6 @@ Appendices
install-os-level-deps
install-latest-pip
run-with-docker
docker-on-mac
json-serialization
cryptography
the-Bigchain-class
@@ -28,4 +27,5 @@ Appendices
licenses
install-with-lxd
run-with-vagrant
run-with-ansible
run-with-ansible
vote-yaml


@@ -1,56 +0,0 @@
# JSON Serialization
We needed to clearly define how to serialize a JSON object to calculate the hash.
The serialization should produce the same byte output independently of the architecture running the software. If there are differences in the serialization, hash validations will fail although the transaction is correct.
For example, consider the following two methods of serializing `{'a': 1}`:
```python
# Use a serializer provided by RethinkDB
a = r.expr({'a': 1}).to_json().run(b.connection)
u'{"a":1}'
# Use the serializer in Python's json module
b = json.dumps({'a': 1})
'{"a": 1}'
a == b
False
```
The results are not the same. We want a serialization and deserialization so that the following is always true:
```python
deserialize(serialize(data)) == data
True
```
Since BigchainDB performs a lot of serialization we decided to use [python-rapidjson](https://github.com/python-rapidjson/python-rapidjson),
which is a Python wrapper for [rapidjson](https://github.com/miloyip/rapidjson), a fast and fully RFC-compliant JSON parser.
```python
import rapidjson
rapidjson.dumps(data, skipkeys=False,
ensure_ascii=False,
sort_keys=True)
```
- `skipkeys`: With `skipkeys` set to `False`, serialization fails if any provided key is not a string. This way we enforce all keys to be strings
- `ensure_ascii`: The RFC recommends `utf-8` for maximum interoperability. By setting `ensure_ascii` to `False` we allow unicode characters and python-rapidjson forces the encoding to `utf-8`.
- `sort_keys`: Sorted output by keys.
Every time we need to perform some operation on the data like calculating the hash or signing/verifying the transaction, we need to use the previous criteria to serialize the data and then use the `byte` representation of the serialized data (if we treat the data as bytes we eliminate possible encoding errors e.g. unicode characters). For example:
```python
# calculate the hash of a transaction
# the transaction is a dictionary
tx_serialized = bytes(serialize(tx))
tx_hash = hashlib.sha3_256(tx_serialized).hexdigest()
# signing a transaction
tx_serialized = bytes(serialize(tx))
signature = sk.sign(tx_serialized)
# verify signature
tx_serialized = bytes(serialize(tx))
pk.verify(signature, tx_serialized)
```


@@ -0,0 +1,6 @@
JSON Serialization
==================
See the page about JSON Serialization & Deserialization
in `the IPDB Transaction Spec
<https://the-ipdb-transaction-spec.readthedocs.io/en/latest/>`_.


@@ -2,8 +2,8 @@
**NOT for Production Use**
You can use the following instructions to deploy a BigchainDB node for
dev/test using Ansible. Ansible will setup a BigchainDB node along with
You can use the following instructions to deploy a single- or multi-node
BigchainDB setup for dev/test using Ansible. Ansible will set up BigchainDB node(s) along with
[Docker](https://www.docker.com/), [Docker Compose](https://docs.docker.com/compose/),
[MongoDB](https://www.mongodb.com/), [BigchainDB Python driver](https://docs.bigchaindb.com/projects/py-driver/en/latest/).
@@ -12,6 +12,10 @@ Currently, this workflow is only supported for the following distributions:
- CentOS >= 7
- Fedora >= 24
## Minimum Requirements | Ansible
Minimum resource requirements for a single-node BigchainDB dev setup. **The more the better**:
- Memory >= 512MB
- VCPUs >= 1
## Clone the BigchainDB repository | Ansible
```text
$ git clone https://github.com/bigchaindb/bigchaindb.git
@@ -20,54 +24,93 @@ $ git clone https://github.com/bigchaindb/bigchaindb.git
## Install dependencies | Ansible
- [Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html)
You can also install `ansible` and other dependecies, if any, using the `boostrap.sh` script
You can also install `ansible` and other dependencies, if any, using the `bootstrap.sh` script
inside the BigchainDB repository.
Navigate to `bigchaindb/pkg/scripts` and run the `bootstrap.sh` script to install the dependencies
for your OS. The script also checks if the OS you are running is compatible with the
supported versions.
**Note**: `bootstrap.sh` only supports Ubuntu >= 16.04, CentOS >= 7 and Fedora >=24.
```text
$ cd bigchaindb/pkg/scripts/
$ sudo ./bootstrap.sh
```
### Local Setup | Ansible
You can safely run the `quickstart` playbook now and everything will be taken care of by `ansible` on your host. `quickstart` playbook only supports deployment on your dev/local host. To run the playbook please navigate to the ansible directory inside the BigchainDB repository and run the `quickstart` playbook.
### BigchainDB Setup Configuration(s) | Ansible
#### Local Setup | Ansible
You can run the Ansible playbook `bdb-deploy.yml` on your local dev machine and set up the BigchainDB node where
BigchainDB can be run as a process or inside Docker container(s), depending on your configuration.
Before running the playbook locally, you need to update the `hosts` and `bdb-config.yml` configuration, which will notify Ansible that we need to run the play locally.
##### Update Hosts | Local
Navigate to `bigchaindb/pkg/configuration/hosts` inside the BigchainDB repository.
```text
$ cd bigchaindb/pkg/ansible/
# All the services will be deployed as processes
$ sudo ansible-playbook quickstart.yml -c local
OR
# To deploy all services inside docker containers
$ sudo ansible-playbook quickstart.yml --extra-vars "with_docker=true" -c local
$ cd bigchaindb/pkg/configuration/hosts
```
After successful execution of the playbook, you can verify that the BigchainDB Docker container/process is running.
Edit `all` configuration file:
```text
# Delete any existing configuration in this file and insert
# Hostname of dev machine
<HOSTNAME> ansible_connection=local
```
##### Update Configuration | Local
Navigate to `bigchaindb/pkg/configuration/vars` inside the BigchainDB repository.
```text
$ cd bigchaindb/pkg/configuration/vars/
```
Verify BigchainDB process:
Edit the `bdb-config.yml` configuration file as per your requirements. Sample configuration file(s):
```text
---
deploy_docker: false #[true, false]
docker_cluster_size: 1 # Only needed if `deploy_docker` is true
bdb_hosts:
- name: "<HOSTNAME>" # Hostname of dev machine
```
**Note**: You can also orchestrate a multi-node BigchainDB cluster on a local dev host using Docker containers.
Here is a sample `bdb-config.yml`:
```text
---
deploy_docker: true #[true, false]
docker_cluster_size: 3
bdb_hosts:
- name: "<LOCAL_DEV_HOST_HOSTNAME>"
```
### BigchainDB Setup | Ansible
Now you can safely run the `bdb-deploy.yml` playbook and everything will be taken care of by `Ansible`. To run the playbook, navigate to the `bigchaindb/pkg/configuration` directory inside the BigchainDB repository and run the `bdb-deploy.yml` playbook.
```text
$ cd bigchaindb/pkg/configuration/
$ sudo ansible-playbook bdb-deploy.yml -i hosts/all
```
After successful execution of the playbook, you can verify that the BigchainDB Docker container(s)/process(es) are running.
Verify BigchainDB process(es):
```text
$ ps -ef | grep bigchaindb
```
OR
Verify BigchainDB Docker:
Verify BigchainDB Docker(s):
```text
$ docker ps | grep bigchaindb
```
The playbook also installs the BigchainDB Python Driver,
so you can use it to make transactions
and verify the functionality of your BigchainDB node.
See the [BigchainDB Python Driver documentation](https://docs.bigchaindb.com/projects/py-driver/en/latest/index.html)
for details on how to use it.
Note 1: The `bdb_root_url` can be one of the following:
**Note**: The `bdb_root_url` can be one of the following:
```text
# BigchainDB is running as a process
bdb_root_url = http://<HOST-IP>:9984
@ -78,4 +121,47 @@ OR
bdb_root_url = http://<HOST-IP>:<DOCKER-PUBLISHED-PORT>
```
Note 2: BigchainDB has [other drivers as well](../drivers-clients/index.html).
**Note**: BigchainDB has [other drivers as well](../drivers-clients/index.html).
### Experimental: Running Ansible on a Remote Dev Host
#### Remote Setup | Ansible
You can also run the Ansible playbook `bdb-deploy.yml` on remote machine(s) and set up the BigchainDB node where
BigchainDB can run as a process or inside Docker container(s), depending on your configuration.
Before running the playbook on a remote host, you need to update the `hosts` and `bdb-config.yml` configuration, which will notify Ansible that we need to
run the play on a remote host.
##### Update Hosts | Remote
Navigate to `bigchaindb/pkg/configuration/hosts` inside the BigchainDB repository.
```text
$ cd bigchaindb/pkg/configuration/hosts
```
Edit `all` configuration file:
```text
# Delete any existing configuration in this file and insert
<Remote_Host_IP/Hostname> ansible_ssh_user=<USERNAME> ansible_sudo_pass=<ROOT_PASSWORD>
```
**Note**: You can add multiple hosts to the `all` configuration file. The root password is needed because Ansible
will run some tasks that require root permissions.
**Note**: You can also use other methods to get inside the remote machines instead of password-based SSH. For other methods,
please consult the [Ansible Documentation](http://docs.ansible.com/ansible/latest/intro_getting_started.html).
##### Update Configuration | Remote
Navigate to `bigchaindb/pkg/configuration/vars` inside the BigchainDB repository.
```text
$ cd bigchaindb/pkg/configuration/vars/
```
Edit the `bdb-config.yml` configuration file as per your requirements. Sample configuration file(s):
```text
---
deploy_docker: false #[true, false]
docker_cluster_size: 1 # Only needed if `deploy_docker` is true
bdb_hosts:
- name: "<REMOTE_MACHINE_HOSTNAME>"
```
After configuring the remote hosts, [run the Ansible playbook and verify your deployment](#bigchaindb-setup-ansible).

View File

@ -6,9 +6,12 @@ For those who like using Docker and wish to experiment with BigchainDB in
non-production environments, we currently maintain a Docker image and a
`Dockerfile` that can be used to build an image for `bigchaindb`.
## Prerequisite(s)
- [Docker](https://docs.docker.com/engine/installation/)
## Pull and Run the Image from Docker Hub
Assuming you have Docker installed, you would proceed as follows.
With Docker installed, you can proceed as follows.
In a terminal shell, pull the latest version of the BigchainDB Docker image using:
```text
@ -26,6 +29,7 @@ docker run \
--rm \
--tty \
--volume $HOME/bigchaindb_docker:/data \
--env BIGCHAINDB_DATABASE_HOST=172.17.0.1 \
bigchaindb/bigchaindb \
-y configure \
[mongodb|rethinkdb]
@ -46,24 +50,18 @@ Let's analyze that command:
this allows us to have the data persisted on the host machine,
you can read more in the [official Docker
documentation](https://docs.docker.com/engine/tutorials/dockervolumes)
* `--env BIGCHAINDB_DATABASE_HOST=172.17.0.1`, where `172.17.0.1` is the default `docker0` bridge
IP address for fresh Docker installations. It is used for communication between the BigchainDB and database
containers.
* `bigchaindb/bigchaindb` the image to use. All the options after the container name are passed on to the entrypoint inside the container.
* `-y configure` execute the `configure` sub-command (of the `bigchaindb`
command) inside the container, with the `-y` option to automatically use all the default config values
* `mongodb` or `rethinkdb` specifies the database backend to use with bigchaindb
To ensure that BigchainDB connects to the backend database bound to the virtual
interface `172.17.0.1`, you must edit the BigchainDB configuration file
(`~/bigchaindb_docker/.bigchaindb`) and change database.host from `localhost`
to `172.17.0.1`.
### Run the backend database
From v0.9 onwards, you can run either RethinkDB or MongoDB.
We use the virtual interface created by the Docker daemon to allow
communication between the BigchainDB and database containers.
It has an IP address of 172.17.0.1 by default.
You can also use docker host networking or bind to your primary (eth)
interface, if needed.
@ -73,8 +71,8 @@ You can also use docker host networking or bind to your primary (eth)
docker run \
--detach \
--name=rethinkdb \
--publish=172.17.0.1:28015:28015 \
--publish=172.17.0.1:58080:8080 \
--publish=28015:28015 \
--publish=58080:8080 \
--restart=always \
--volume $HOME/bigchaindb_docker:/data \
rethinkdb:2.3
@ -102,11 +100,11 @@ group.
docker run \
--detach \
--name=mongodb \
--publish=172.17.0.1:27017:27017 \
--publish=27017:27017 \
--restart=always \
--volume=$HOME/mongodb_docker/db:/data/db \
--volume=$HOME/mongodb_docker/configdb:/data/configdb \
mongo:3.4.1 --replSet=bigchain-rs
mongo:3.4.9 --replSet=bigchain-rs
```
### Run BigchainDB

View File

@ -2,10 +2,10 @@
**NOT for Production Use**
You can use the following instructions to deploy a BigchainDB node
for dev/test using Vagrant. Vagrant will setup a BigchainDB node with
all the dependencies along with MongoDB, BigchainDB Python driver. You
can also tweak the following configurations for the BigchainDB node.
You can use the following instructions to deploy a single- or multi-node
BigchainDB setup for dev/test using Vagrant. Vagrant will set up the BigchainDB node(s)
with all the dependencies, along with MongoDB and the BigchainDB Python driver. You
can also tweak the following configurations for the BigchainDB node(s).
- Vagrant Box
- Currently, we support the following boxes:
- `ubuntu/xenial64 # >=16.04`
@ -19,15 +19,21 @@ can also tweak the following configurations for the BigchainDB node.
- Network Type
- Currently, only `private_network` is supported.
- IP Address
- Setup type
- `quickstart`
- Deploy node with Docker
- Deploy all the services in Docker containers or as processes.
- Number of BigchainDB nodes
- If you want to deploy the services inside Docker containers, you
can specify the number of members in the BigchainDB cluster.
- Upstart Script
- Vagrant Provider
- Virtualbox
- VMware
## Minimum Requirements | Vagrant
Minimum resource requirements for a single node BigchainDB dev setup. **The more the better**:
- Memory >= 512MB
- VCPUs >= 1
## Install dependencies | Vagrant
1. [VirtualBox](https://www.virtualbox.org/wiki/Downloads) >= 5.0.0
2. [Vagrant](https://www.vagrantup.com/downloads.html) >= 1.16.0
@ -38,86 +44,108 @@ $ git clone https://github.com/bigchaindb/bigchaindb.git
```
## Configuration | Vagrant
Navigate to `bigchaindb/pkg/config/` inside the repository.
Navigate to `bigchaindb/pkg/configuration/vars/` inside the BigchainDB repository.
```text
$ cd bigchaindb/pkg/config/
$ cd bigchaindb/pkg/configuration/vars/
```
Edit the `bdb-config.yaml` as per your requirements. Sample `bdb-config.yaml`:
Edit `bdb-config.yml` as per your requirements. Sample `bdb-config.yml`:
```text
---
- name: "bdb-node-01"
box:
name: "ubuntu/xenial64"
ram: "2048"
vcpus: "2"
setup_type: "quickstart"
deploy_docker: false
network:
ip: "10.20.30.40"
type: "private_network"
upstart: "/bigchaindb/scripts/bootstrap.sh"
deploy_docker: false #[true, false]
docker_cluster_size: 1
upstart: "/bigchaindb/scripts/bootstrap.sh"
bdb_hosts:
- name: "bdb-node-01"
box:
name: "ubuntu/xenial64"
ram: "2048"
vcpus: "2"
network:
ip: "10.20.30.40"
type: "private_network"
```
**Note**: You can spawn multiple instances as well using `bdb-config.yaml`. Here is a sample `bdb-config.yaml`:
**Note**: You can spawn multiple instances to orchestrate a multi-node BigchainDB cluster.
Here is a sample `bdb-config.yml`:
```text
---
- name: "bdb-node-01"
box:
name: "ubuntu/xenial64"
ram: "2048"
vcpus: "2"
setup_type: "quickstart"
deploy_docker: false
network:
ip: "10.20.30.40"
type: "private_network"
upstart: "/bigchaindb/scripts/bootstrap.sh"
- name: "bdb-node-02"
box:
name: "ubuntu/xenial64"
ram: "4096"
vcpus: "3"
setup_type: "quickstart"
deploy_docker: false
network:
ip: "10.20.30.50"
type: "private_network"
upstart: "/bigchaindb/scripts/bootstrap.sh"
deploy_docker: false #[true, false]
docker_cluster_size: 1
upstart: "/bigchaindb/scripts/bootstrap.sh"
bdb_hosts:
- name: "bdb-node-01"
box:
name: "ubuntu/xenial64"
ram: "2048"
vcpus: "2"
network:
ip: "10.20.30.40"
type: "private_network"
- name: "bdb-node-02"
box:
name: "ubuntu/xenial64"
ram: "2048"
vcpus: "2"
network:
ip: "10.20.30.50"
type: "private_network"
```
**Note**: You can also orchestrate a multi-node BigchainDB cluster on a single dev host using Docker containers.
Here is a sample `bdb-config.yml`:
```text
---
deploy_docker: true #[true, false]
docker_cluster_size: 3
upstart: "/bigchaindb/scripts/bootstrap.sh"
bdb_hosts:
- name: "bdb-node-01"
box:
name: "ubuntu/xenial64"
ram: "8192"
vcpus: "4"
network:
ip: "10.20.30.40"
type: "private_network"
```
The above configuration will deploy a 3-node BigchainDB cluster with Docker containers
on your specified host.
## BigchainDB Setup | Vagrant
**Note**: There are some Vagrant plugins required for the installation;
the user will be prompted to install them if they are not present. To install
the required plugins, run the following command:
```text
$ vagrant plugin install vagrant-cachier vagrant-vbguest vagrant-hosts
```
## Local Setup | Vagrant
To bring up the BigchainDB node, run the following command:
To bring up the BigchainDB node(s), run the following command:
```text
$ vagrant up
```
*Note*: There are some Vagrant plugins required for the installation; the user will be prompted to install them if they are not present. Instructions to install the plugins can be extracted from the message.
```text
$ vagrant plugin install <plugin-name>
```
After successfull execution of Vagrant, you can log in to your fresh BigchainDB node.
After successful execution of Vagrant, you can log in to your fresh BigchainDB node.
```text
$ vagrant ssh <instance-name>
```
## Make your first transaction
Once you are inside the BigchainDB node, you can verify that BigchainDB docker/process is running.
Once you are inside the BigchainDB node, you can verify that the BigchainDB
Docker container(s)/process(es) are running.
Verify BigchainDB process:
Verify BigchainDB process(es):
```text
$ ps -ef | grep bigchaindb
```
OR
Verify BigchainDB Docker:
Verify BigchainDB Docker(s):
```text
$ docker ps | grep bigchaindb
```

View File

@ -0,0 +1,20 @@
The Vote Schema File
====================
BigchainDB checks all :ref:`votes <The Vote Model>`
(JSON documents) against a formal schema
defined in a JSON Schema file named vote.yaml.
The contents of that file are copied below.
To understand those contents
(i.e. JSON Schema), check out
`"Understanding JSON Schema"
<https://spacetelescope.github.io/understanding-json-schema/index.html>`_
by Michael Droettboom or
`json-schema.org <http://json-schema.org/>`_.
vote.yaml
---------
.. literalinclude:: ../../../../bigchaindb/common/schema/vote.yaml
:language: yaml

View File

@ -51,7 +51,6 @@ extensions = [
'sphinx.ext.autosectionlabel',
# Below are actually build steps made to look like sphinx extensions.
# It was the easiest way to get it running with ReadTheDocs.
'generate_schema_documentation',
'generate_http_server_api_documentation',
]

View File

@ -1,20 +0,0 @@
# The Asset Model
To avoid redundant data in transactions, the asset model is different for `CREATE` and `TRANSFER` transactions.
In a `CREATE` transaction, the `"asset"` must contain exactly one key-value pair. The key must be `"data"` and the value can be any valid JSON document, or `null`. For example:
```json
{
"data": {
"desc": "Gold-inlay bookmark owned by Xavier Bellomat Dickens III",
"xbd_collection_id": 1857
}
}
```
In a `TRANSFER` transaction, the `"asset"` must contain exactly one key-value pair. The key must be `"id"` and the value must contain a transaction ID (i.e. a SHA3-256 hash: the ID of the `CREATE` transaction which created the asset, which also serves as the asset ID). For example:
```json
{
"id": "38100137cea87fb9bd751e2372abb2c73e7d5bcf39d940a5516a324d9c7fb88d"
}
```

View File

@ -0,0 +1,5 @@
The Asset Model
===============
See `the IPDB Transaction Spec
<https://the-ipdb-transaction-spec.readthedocs.io/en/latest/>`_.

View File

@ -1,36 +1,90 @@
The Block Model
===============
A block has the following structure:
A block is a JSON object with a particular schema,
as outlined in this page.
A block must contain the following JSON keys
(also called names or fields):
.. code-block:: json
{
"id": "<hash of block>",
"id": "<ID of the block>",
"block": {
"timestamp": "<block-creation timestamp>",
"transactions": ["<list of transactions>"],
"node_pubkey": "<public key of the node creating the block>",
"voters": ["<list of public keys of all nodes in the cluster>"]
"timestamp": "<Block-creation timestamp>",
"transactions": ["<List of transactions>"],
"node_pubkey": "<Public key of the node which created the block>",
"voters": ["<List of public keys of all nodes in the cluster>"]
},
"signature": "<signature of block>"
"signature": "<Signature of inner block object>"
}
- ``id``: The :ref:`hash <Hashes>` of the serialized inner ``block`` (i.e. the ``timestamp``, ``transactions``, ``node_pubkey``, and ``voters``). It's used as a unique index in the database backend (e.g. RethinkDB or MongoDB).
The JSON Keys in a Block
------------------------
- ``block``:
- ``timestamp``: The Unix time when the block was created. It's provided by the node that created the block.
- ``transactions``: A list of the transactions included in the block.
- ``node_pubkey``: The public key of the node that created the block.
- ``voters``: A list of the public keys of all cluster nodes at the time the block was created.
It's the list of nodes which can cast a vote on this block.
This list can change from block to block, as nodes join and leave the cluster.
**id**
- ``signature``: :ref:`Cryptographic signature <Signature Algorithm and Keys>` of the block by the node that created the block (i.e. the node with public key ``node_pubkey``). To generate the signature, the node signs the serialized inner ``block`` (the same thing that was hashed to determine the ``id``) using the private key corresponding to ``node_pubkey``.
The block ID, and also the SHA3-256 hash
of the inner ``block`` object, loosely speaking.
It's a string.
To compute it: 1) construct an :term:`associative array` ``d`` containing
``block.timestamp``, ``block.transactions``, ``block.node_pubkey``,
``block.voters``, and their values; 2) compute ``id = hash_of_aa(d)``.
There's pseudocode for the ``hash_of_aa()`` function
in the `IPDB Protocol documentation page about cryptographic hashes
<https://the-ipdb-protocol.readthedocs.io/en/latest/crypto-hashes.html#computing-the-hash-of-an-associative-array>`_.
The result (``id``) is a string: the block ID.
An example is ``"b60adf655932bf47ef58c0bfb2dd276d4795b94346b36cbb477e10d7eb02cea8"``.
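Here is a minimal sketch of that computation, assuming ``hash_of_aa()``
follows the same recipe BigchainDB uses elsewhere (rapidjson serialization
with sorted keys, then SHA3-256 over the UTF-8 bytes); see the linked IPDB
Protocol page for the authoritative pseudocode:

.. code-block:: python

    import hashlib
    import rapidjson

    def hash_of_aa(d):
        # Serialize deterministically, then hash the UTF-8 bytes.
        serialized = rapidjson.dumps(d, skipkeys=False, ensure_ascii=False,
                                     sort_keys=True)
        return hashlib.sha3_256(serialized.encode('utf-8')).hexdigest()

    inner_block = {
        'timestamp': '1507294217',
        'transactions': [],  # a real block holds transaction objects here
        'node_pubkey': '<public key of the node which created the block>',
        'voters': ['<public keys of all nodes in the cluster>'],
    }
    block_id = hash_of_aa(inner_block)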
Working with Blocks
-------------------
**block.timestamp**
There's a **Block** class for creating and working with Block objects; look in `/bigchaindb/models.py <https://github.com/bigchaindb/bigchaindb/blob/master/bigchaindb/models.py>`_. (The link is to the latest version on the master branch on GitHub.)
The `Unix time <https://en.wikipedia.org/wiki/Unix_time>`_
when the block was created, according to the node which created it.
It's a string representation of an integer.
An example is ``"1507294217"``.
**block.transactions**
A list of the :ref:`transactions <The Transaction Model>` included in the block.
(Each transaction is a JSON object.)
**block.node_pubkey**
The public key of the node that created the block.
It's a string.
See the `IPDB Protocol documentation page about cryptographic keys & signatures
<https://the-ipdb-protocol.readthedocs.io/en/latest/crypto-keys-and-sigs.html>`_.
**block.voters**
A list of the public keys of all cluster nodes at the time the block was created.
It's a list of strings.
This list can change from block to block, as nodes join and leave the cluster.
**signature**
The cryptographic signature of the inner ``block``
by the node that created the block
(i.e. the node with public key ``node_pubkey``).
To compute that:
#. Construct an :term:`associative array` ``d`` containing the contents
of the inner ``block``
(i.e. ``block.timestamp``, ``block.transactions``, ``block.node_pubkey``,
``block.voters``, and their values).
#. Compute ``signature = sig_of_aa(d, private_key)``,
where ``private_key`` is the node's private key
(i.e. ``node_pubkey`` and ``private_key`` are a key pair). There's pseudocode
for the ``sig_of_aa()`` function
on `the IPDB Protocol documentation page about cryptographic keys and signatures
<https://the-ipdb-protocol.readthedocs.io/en/latest/crypto-keys-and-sigs.html#computing-the-signature-of-an-associative-array>`_.
.. note::
The ``d_bytes`` computed when computing the block ID will be the *same* as the ``d_bytes`` computed when computing the block signature. This can be used to avoid redundant calculations.

View File

@ -1,102 +1,5 @@
Conditions
==========
At a high level, a condition is like a lock on an output.
If you can satisfy the condition, you can unlock the output and transfer/spend it.
BigchainDB Server supports a subset of the ILP Crypto-Conditions
(`version 02 of Crypto-Conditions <https://tools.ietf.org/html/draft-thomas-crypto-conditions-02>`_).
A condition object can be quite elaborate,
with many nested levels,
but the simplest case is actually quite simple.
Here's an example signature condition:
.. code-block:: json
{
"details": {
"type": "ed25519-sha-256",
"public_key": "HFp773FH21sPFrn4y8wX3Ddrkzhqy4La4cQLfePT2vz7"
},
"uri": "ni:///sha-256;at0MY6Ye8yvidsgL9FrnKmsVzX0XrNNXFmuAPF4bQeU?fpt=ed25519-sha-256&cost=131072"
}
If someone wants to spend the output where this condition is found, then they must create a TRANSFER transaction with an input that fulfills this condition. Because it's an ed25519-sha-256 signature condition, that means they must sign the TRANSFER transaction with the private key corresponding to the public key HFp773…
Supported Crypto-Conditions
---------------------------
BigchainDB Server v1.0 supports two of the Crypto-Conditions:
1. ED25519-SHA-256 signature conditions
2. THRESHOLD-SHA-256 threshold conditions
We saw an example signature condition above.
For more information about how BigchainDB handles keys and signatures,
see the page titled :ref:`Signature Algorithm and Keys`.
A more complex condition can be composed by using n signature conditions as inputs to an m-of-n threshold condition: a logic gate which outputs TRUE if and only if m or more inputs are TRUE. If there are n inputs to a threshold condition:
* 1-of-n is the same as a logical OR of all the inputs
* n-of-n is the same as a logical AND of all the inputs
For example, you could create a condition requiring m (of n) signatures.
Here's an example 2-of-2 condition:
.. code-block:: json
{
"details": {
"type": "threshold-sha-256",
"threshold": 2,
"subconditions": [
{
"public_key": "5ycPMinRx7D7e6wYXLNLa3TCtQrMQfjkap4ih7JVJy3h",
"type": "ed25519-sha-256"
},
{
"public_key": "9RSas2uCxR5sx1rJoUgcd2PB3tBK7KXuCHbUMbnH3X1M",
"type": "ed25519-sha-256"
}
]
},
"uri": "ni:///sha-256;zr5oThl2kk6613WKGFDg-JGu00Fv88nXcDcp6Cyr0Vw?fpt=threshold-sha-256&cost=264192&subtypes=ed25519-sha-256"
}
The (single) output of a threshold condition can be used as one of the inputs to another threshold condition. That means you can combine threshold conditions to build complex expressions such as ``(x OR y) AND (2 of {a, b, c})``.
.. image:: /_static/Conditions_Circuit_Diagram.png
When you create a condition, you can calculate its
`cost <https://tools.ietf.org/html/draft-thomas-crypto-conditions-02#section-7.2.2>`_,
an estimate of the resources that would be required to validate the fulfillment.
For example, the cost of one signature condition is 131072.
A BigchainDB federation can put an upper limit on the complexity of each
condition, either directly by setting a maximum allowed cost,
or
`indirectly <https://github.com/bigchaindb/bigchaindb/issues/356#issuecomment-288085251>`_
by :ref:`setting a maximum allowed transaction size <Enforcing a Max Transaction Size>`
which would limit
the overall complexity across all inputs and outputs of a transaction.
Note: At the time of writing, there was no configuration setting
to set a maximum allowed cost,
so the only real option was to
:ref:`set a maximum allowed transaction size <Enforcing a Max Transaction Size>`.
Constructing a Condition
------------------------
The above examples should make it clear how to construct
a condition object, but they didn't say how to generate the ``uri``.
If you want to generate a correct condition URI,
then you should consult the Crypto-Conditions spec
or use one of the existing Crypto-Conditions packages/libraries
(which are used by the BigchainDB Drivers).
* `Crypto-Conditions Spec (Version 02) <https://tools.ietf.org/html/draft-thomas-crypto-conditions-02>`_
* BigchainDB :ref:`Drivers & Tools`
The `Handcrafting Transactions <https://docs.bigchaindb.com/projects/py-driver/en/latest/handcraft.html>`_
page may also be of interest.
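For instance, here is a sketch that rebuilds the 2-of-2 threshold condition
shown earlier, using the ``cryptoconditions`` Python package (the low-level
library used by the BigchainDB Python driver); the class and method names are
that package's and may differ between versions:

.. code-block:: python

    import base58
    from cryptoconditions import Ed25519Sha256, ThresholdSha256

    pub1 = '5ycPMinRx7D7e6wYXLNLa3TCtQrMQfjkap4ih7JVJy3h'
    pub2 = '9RSas2uCxR5sx1rJoUgcd2PB3tBK7KXuCHbUMbnH3X1M'

    # A 2-of-2 threshold condition over two signature subconditions.
    threshold = ThresholdSha256(threshold=2)
    threshold.add_subfulfillment(Ed25519Sha256(public_key=base58.b58decode(pub1)))
    threshold.add_subfulfillment(Ed25519Sha256(public_key=base58.b58decode(pub2)))

    # The generated condition URI, e.g.
    # ni:///sha-256;...?fpt=threshold-sha-256&cost=264192&subtypes=ed25519-sha-256
    print(threshold.condition_uri)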
See `the IPDB Transaction Spec
<https://the-ipdb-transaction-spec.readthedocs.io/en/latest/>`_.

View File

@ -1,14 +1,6 @@
Data Models
===========
BigchainDB stores all data in the underlying database as JSON documents (conceptually, at least). There are three main kinds:
1. Transactions, which contain assets, inputs, outputs, and other things
2. Blocks
3. Votes
This section unpacks each one in turn.
.. toctree::
:maxdepth: 1

View File

@ -1,73 +1,5 @@
Inputs and Outputs
==================
There's a high-level overview of inputs and outputs
in `the root docs page about transaction concepts <https://docs.bigchaindb.com/en/latest/transaction-concepts.html>`_.
BigchainDB is modelled around *assets*, and *inputs* and *outputs* are the mechanism by which control of an asset (or shares of an asset) is transferred.
Amounts of an asset are encoded in the outputs of a transaction, and each output may be spent separately. To spend an output, the output's ``condition`` must be met by an ``input`` that provides a corresponding ``fulfillment``. Each output may be spent at most once, by a single input. Note that any asset associated with an output holding an amount greater than one is considered a divisible asset that may be split up in future transactions.
Inputs
------
An input has the following structure:
.. code-block:: json
{
"owners_before": ["<The public_keys list in the output being spent>"],
"fulfillment": "<String that fulfills the condition in the output being spent>",
"fulfills": {
"output_index": "<Index of the output being spent (an integer)>",
"transaction_id": "<ID of the transaction containing the output being spent>"
}
}
You can think of the ``fulfills`` object as a pointer to an output on another transaction: the output that this input is spending/transferring.
A CREATE transaction should have exactly one input. That input can contain one or more ``owners_before``, a ``fulfillment`` (with one signature from each of the owners-before), and the value of ``fulfills`` should be ``null``. A TRANSFER transaction should have at least one input, and the value of ``fulfills`` should not be ``null``.
See the reference on :ref:`inputs <Input>` for more description about the meaning of each field.
The ``fulfillment`` string fulfills the condition in the output that is being spent (transferred).
To calculate it:
1. Determine the fulfillment as per the `Crypto-Conditions spec (version 02) <https://tools.ietf.org/html/draft-thomas-crypto-conditions-02>`_.
2. Encode the fulfillment using the `ASN.1 Distinguished Encoding Rules (DER) <http://www.itu.int/ITU-T/recommendations/rec.aspx?rec=12483&lang=en>`_.
3. Encode the resulting bytes using "base64url" (*not* typical base64) as per `RFC 4648, Section 5 <https://tools.ietf.org/html/rfc4648#section-5>`_.
To do those calculations, you can use one of the
:ref:`BigchainDB drivers or transaction-builders <Drivers & Tools>`,
or use a low-level crypto-conditions library as illustrated
in the page about `Handcrafting Transactions <https://docs.bigchaindb.com/projects/py-driver/en/latest/handcraft.html>`_.
A ``fulfillment`` string should look something like:
.. code::
"pGSAIDgbT-nnN57wgI4Cx17gFHv3UB_pIeAzwZCk10rAjs9bgUDxyNnXMl-5PFgSIOrN7br2Tz59MiWe2XY0zlC7LcN52PKhpmdRtcr7GR1PXuTfQ9dE3vGhv7LHn6QqDD6qYHYM"
Outputs
-------
An output has the following structure:
.. code-block:: json
{
"condition": {"<Condition object>"},
"public_keys": ["<List of all public keys associated with the condition object>"],
"amount": "<Number of shares of the asset (an integer in a string)>"
}
The :ref:`page about conditions <Conditions>` explains the contents of a ``condition``.
The list of ``public_keys`` is always the "owners" of the asset at the time the transaction completed, but before the next transaction started.
See the reference on :ref:`outputs <Output>` for more description about the meaning of each field.
Note that ``amount`` must be a string (e.g. ``"7"``).
In a TRANSFER transaction, the sum of the output amounts must equal the sum of the amounts of the outputs it transfers (i.e. the sum of the input amounts). For example, if a TRANSFER transaction has two outputs, one with ``"amount": "2"`` and one with ``"amount": "3"``, then the sum of the outputs is 5 and so the sum of the outputs-being-transferred must also be 5.
.. note::
The BigchainDB documentation and code talks about control of an asset in terms of "owners" and "ownership." The language is chosen to represent the most common use cases, but in some more complex scenarios, it may not be accurate to say that the output is owned by the controllers of those public keys—it would only be correct to say that those public keys are associated with the ability to fulfill the conditions on the output. Also, depending on the use case, the entity controlling an output via a private key may not be the legal owner of the asset in the corresponding legal domain. However, since we aim to use language that is simple to understand and covers the majority of use cases, we talk in terms of "owners" of an output that have the ability to "spend" that output.
See `the IPDB Transaction Spec
<https://the-ipdb-transaction-spec.readthedocs.io/en/latest/>`_.

View File

@ -1,62 +1,22 @@
The Transaction Model
=====================
A transaction has the following structure:
See `the IPDB Transaction Spec
<https://the-ipdb-transaction-spec.readthedocs.io/en/latest/>`_.
.. code-block:: json
{
"id": "<ID of the transaction>",
"version": "<Transaction schema version number>",
"inputs": ["<List of inputs>"],
"outputs": ["<List of outputs>"],
"operation": "<String>",
"asset": {"<Asset model; see below>"},
"metadata": {"<Arbitrary transaction metadata>"}
}
The Transaction Schema
----------------------
Here's some explanation of the contents:
- **id**: The ID of the transaction and also the hash of the transaction (loosely speaking). See below for an explanation of how it's computed. It's also the database primary key.
- **version**: The version-number of :ref:`the transaction schema <Transaction Schema>`. As of BigchainDB Server 1.0.0, the only allowed value is ``"1.0"``.
- **inputs**: List of inputs.
Each input spends/transfers a previous output by satisfying/fulfilling
the crypto-conditions on that output.
A CREATE transaction should have exactly one input.
A TRANSFER transaction should have at least one input (i.e. ≥1).
For more details, see the subsection about :ref:`inputs <Inputs>`.
- **outputs**: List of outputs.
Each output indicates the crypto-conditions which must be satisfied
by anyone wishing to spend/transfer that output.
It also indicates the number of shares of the asset tied to that output.
For more details, see the subsection about :ref:`outputs <Outputs>`.
- **operation**: A string indicating what kind of transaction this is,
and how it should be validated.
It can only be ``"CREATE"``, ``"TRANSFER"`` or ``"GENESIS"``
(but there should only be one transaction whose operation is ``"GENESIS"``:
the one in the GENESIS block).
- **asset**: A JSON document for the asset associated with the transaction.
(A transaction can only be associated with one asset.)
See :ref:`the page about the asset model <The Asset Model>`.
- **metadata**: User-provided transaction metadata.
It can be any valid JSON document, or ``null``.
**How the transaction ID is computed.**
1) Build a Python dictionary containing ``version``, ``inputs``, ``outputs``, ``operation``, ``asset``, ``metadata`` and their values,
2) In each of the inputs, replace the value of each ``fulfillment`` with ``null``,
3) :ref:`Serialize <JSON Serialization>` that dictionary,
4) The transaction ID is just :ref:`the SHA3-256 hash <Hashes>` of the serialized dictionary.
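Here is a minimal sketch of those four steps, assuming ``tx`` is the
transaction as a Python dict and using the serialization rules from the
:ref:`JSON Serialization` page:

.. code-block:: python

    import copy
    import hashlib
    import rapidjson

    def transaction_id(tx):
        tx = copy.deepcopy(tx)
        tx.pop('id', None)                # the ID is derived, so leave it out
        for input_ in tx['inputs']:
            input_['fulfillment'] = None  # step 2: null out each fulfillment
        serialized = rapidjson.dumps(tx, skipkeys=False, ensure_ascii=False,
                                     sort_keys=True)                 # step 3
        return hashlib.sha3_256(serialized.encode('utf-8')).hexdigest()  # step 4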
**About signing the transaction.**
Later, when we get to the models for the block and the vote, we'll see that both include a signature (from the node which created it). You may wonder why transactions don't have signatures… The answer is that they do! They're just hidden inside the ``fulfillment`` string of each input. What gets signed (as of version 1.0.0) is everything inside the transaction, including the ``id``, but the value of each ``fulfillment`` is replaced with ``null``.
There are example BigchainDB transactions in
:ref:`the HTTP API documentation <The HTTP Client-Server API>`
and
`the Python Driver documentation <https://docs.bigchaindb.com/projects/py-driver/en/latest/usage.html>`_.
BigchainDB checks all transactions (JSON documents)
against a formal schema defined
in some `JSON Schema <http://json-schema.org/>`_ files.
Those files are part of the IPDB Transaction Spec.
Their official source is the ``tx_schema/`` directory
in the `ipdb/ipdb-tx-spec repository on GitHub
<https://github.com/ipdb/ipdb-tx-spec>`_,
but BigchainDB Server uses copies of those files;
those copies can be found
in the ``bigchaindb/common/schema/`` directory
in the `bigchaindb/bigchaindb repository on GitHub
<https://github.com/bigchaindb/bigchaindb>`_.

View File

@ -1,27 +0,0 @@
# The Vote Model
A vote has the following structure:
```json
{
"node_pubkey": "<The public key of the voting node>",
"vote": {
"voting_for_block": "<ID of the block the node is voting on>",
"previous_block": "<ID of the block previous to the block being voted on>",
"is_block_valid": "<true OR false>",
"invalid_reason": null,
"timestamp": "<Unix time when the vote was generated, provided by the voting node>"
},
"signature": "<Cryptographic signature of vote>"
}
```
**Notes**
* Votes have no ID (or `"id"`), as far as users are concerned. (The backend database uses one internally, but it's of no concern to users and it's never reported to them via BigchainDB APIs.)
* At the time of writing, the value of `"invalid_reason"` was always `null`. In other words, it wasn't being used. It may be used or dropped in a future version of BigchainDB. See [Issue #217](https://github.com/bigchaindb/bigchaindb/issues/217) on GitHub.
* For more information about the vote `"timestamp"`, see [the page about timestamps in BigchainDB](https://docs.bigchaindb.com/en/latest/timestamps.html).
* For more information about how the `"signature"` is calculated, see [the page about cryptography in BigchainDB](../appendices/cryptography.html).

View File

@ -0,0 +1,121 @@
The Vote Model
==============
A vote is a JSON object with a particular schema,
as outlined in this page.
A vote must contain the following JSON keys
(also called names or fields):
.. code-block:: json
{
"node_pubkey": "<The public key of the voting node>",
"vote": {
"voting_for_block": "<ID of the block the node is voting on>",
"previous_block": "<ID of the block previous to the block being voted on>",
"is_block_valid": "<true OR false>",
"invalid_reason": null,
"timestamp": "<Vote-creation timestamp>"
},
"signature": "<Signature of inner vote object>"
}
.. note::
Votes have no ID (or ``"id"``), as far as users are concerned.
The backend database may use one internally,
but it's of no concern to users and it's never reported to them via APIs.
The JSON Keys in a Vote
-----------------------
**node_pubkey**
The public key of the node which cast this vote.
It's a string.
For more information about public keys,
see the `IPDB Protocol documentation page about cryptographic keys and signatures
<https://the-ipdb-protocol.readthedocs.io/en/latest/crypto-keys-and-sigs.html>`_.
**vote.voting_for_block**
The block ID that this vote is for.
It's a string.
For more information about block IDs,
see the page about :ref:`blocks <The Block Model>`.
**vote.previous_block**
The block ID of the block "before" the block that this vote is for,
according to the node which cast this vote.
It's a string.
(It's possible for different nodes to see different block orders.)
For more information about block IDs,
see the page about :ref:`blocks <The Block Model>`.
**vote.is_block_valid**
``true`` if the node which cast this vote considered the block in question to be valid,
and ``false`` otherwise.
Note that it's a *boolean* (i.e. ``true`` or ``false``), not a string.
**vote.invalid_reason**
Always ``null``, that is, it's not being used.
It may be used or dropped in a future version.
See `bigchaindb/bigchaindb issue #217
<https://github.com/bigchaindb/bigchaindb/issues/217>`_ on GitHub.
**vote.timestamp**
The `Unix time <https://en.wikipedia.org/wiki/Unix_time>`_
when the vote was created, according to the node which created it.
It's a string representation of an integer.
**signature**
The cryptographic signature of the inner ``vote``
by the node that created the vote
(i.e. the node with public key ``node_pubkey``).
To compute that:
#. Construct an :term:`associative array` ``d`` containing the contents of the inner ``vote``
(i.e. ``vote.voting_for_block``, ``vote.previous_block``, ``vote.is_block_valid``,
``vote.invalid_reason``, ``vote.timestamp``, and their values).
#. Compute ``signature = sig_of_aa(d, private_key)``, where ``private_key``
is the node's private key (i.e. ``node_pubkey`` and ``private_key`` are a key pair).
There's pseudocode for the ``sig_of_aa()`` function
on `the IPDB Protocol documentation page about cryptographic keys and signatures
<https://the-ipdb-protocol.readthedocs.io/en/latest/crypto-keys-and-sigs.html#computing-the-signature-of-an-associative-array>`_.
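Here is a minimal sketch of that computation; ``signing_key`` is a
hypothetical object with a ``sign(bytes)`` method (e.g. an Ed25519 signing
key from whichever crypto library your node uses), and the serialization is
assumed to match the one used for hashing:

.. code-block:: python

    import rapidjson

    def sig_of_aa(d, signing_key):
        # Serialize the inner vote deterministically, then sign the
        # UTF-8 bytes with the node's private key.
        serialized = rapidjson.dumps(d, skipkeys=False, ensure_ascii=False,
                                     sort_keys=True)
        return signing_key.sign(serialized.encode('utf-8'))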
The Vote Schema
---------------
BigchainDB checks all votes (JSON documents) against a formal schema
defined in a :ref:`JSON Schema file named vote.yaml <The Vote Schema File>`.
An Example Vote
---------------
.. code-block:: json
{
"node_pubkey": "3ZCsVWPAhPTqHx9wZVxp9Se54pcNeeM5mQvnozDWyDR9",
"vote": {
"voting_for_block": "11c3a3fcc9efa4fc4332a0849fc39b58e403ff37794a7d1fdfb9e7703a94a274",
"previous_block": "3dd1441018b782a50607dc4c7f83a0f0a23eb257f4b6a8d99330dfff41271e0d",
"is_block_valid": true,
"invalid_reason": null,
"timestamp": "1509977988"
},
"signature": "3tW2EBVgxaZTE6nixVd9QEQf1vUxqPmQaNAMdCHc7zHik5KEosdkwScGYt4VhiHDTB6BCxTUzmqu3P7oP93tRWfj"
}

View File

@ -7,7 +7,7 @@ This section outlines some ways that you could set up a minimal BigchainDB node
:maxdepth: 1
Using a Local Dev Machine <setup-bdb-host>
Using a Local Dev Machine and Docker <setup-bdb-docker>
Using a Local Dev Machine and Docker <../appendices/run-with-docker>
Using Vagrant <../appendices/run-with-vagrant>
Using Ansible <../appendices/run-with-ansible>
running-all-tests
running-all-tests

View File

@ -1,112 +0,0 @@
# Set Up BigchainDB Node Using Docker
You need to have recent versions of [Docker](https://docs.docker.com/engine/installation/)
and (Docker) [Compose](https://docs.docker.com/compose/install/).
Build the images:
```bash
docker-compose build
```
## Docker with MongoDB
Start MongoDB:
```bash
docker-compose up -d mdb
```
MongoDB should now be up and running. You can check the port binding for the
MongoDB driver port using:
```bash
$ docker-compose port mdb 27017
```
Start a BigchainDB node:
```bash
docker-compose up -d bdb
```
You can monitor the logs:
```bash
docker-compose logs -f bdb
```
If you wish to run the tests:
```bash
docker-compose run --rm bdb py.test -v --database-backend=mongodb
```
## Docker with RethinkDB
**Note**: If you're upgrading BigchainDB and have previously already built the images, you may need
to rebuild them after the upgrade to install any new dependencies.
Start RethinkDB:
```bash
docker-compose -f docker-compose.rdb.yml up -d rdb
```
The RethinkDB web interface should be accessible at http://localhost:58080/.
Depending on which platform and/or how you are running Docker, you may need
to replace `localhost` with the IP of the machine that is running Docker. As a
dummy example, if the IP of that machine were `0.0.0.0`, you would access the
web interface at: http://0.0.0.0:58080/.
Start a BigchainDB node:
```bash
docker-compose -f docker-compose.rdb.yml up -d bdb-rdb
```
You can monitor the logs:
```bash
docker-compose -f docker-compose.rdb.yml logs -f bdb-rdb
```
If you wish to run the tests:
```bash
docker-compose -f docker-compose.rdb.yml run --rm bdb-rdb pytest -v -n auto
```
## Accessing the HTTP API
You can do a quick check to make sure that the BigchainDB server API is operational:
```bash
curl $(docker-compose port bdb 9984)
```
The result should be a JSON object (inside braces like { })
containing the name of the software ("BigchainDB"),
the version of BigchainDB, the node's public key, and other information.
How does the above curl command work? Inside the Docker container, BigchainDB
exposes the HTTP API on port `9984`. First we get the public port where that
port is bound:
```bash
docker-compose port bdb 9984
```
The port binding will change whenever you stop/restart the `bdb` service. You
should get an output similar to:
```bash
0.0.0.0:32772
```
but with a port different from `32772`.
Knowing the public port we can now perform a simple `GET` operation against the
root:
```bash
curl 0.0.0.0:32772
```

View File

@ -27,7 +27,7 @@ waiting for connections on port 27017
To run BigchainDB Server, do:
```text
$ bigchaindb start --init
$ bigchaindb start
```
You can [run all the unit tests](running-all-tests.html) to test your installation.
@ -55,7 +55,7 @@ You can verify that RethinkDB is running by opening the RethinkDB web interface
To run BigchainDB Server, do:
```text
$ bigchaindb start --init
$ bigchaindb start
```
You can [run all the unit tests](running-all-tests.html) to test your installation.

View File

@ -0,0 +1,19 @@
Glossary
========
.. glossary::
:sorted:
associative array
A collection of key/value (or name/value) pairs
such that each possible key appears at most once
in the collection.
In JavaScript (and JSON), all objects behave as associative arrays
with string-valued keys.
In Python and .NET, associative arrays are called *dictionaries*.
In Java and Go, they are called *maps*.
In Ruby, they are called *hashes*.
See also: Wikipedia's articles for
`Associative array <https://en.wikipedia.org/wiki/Associative_array>`_
and
`Comparison of programming languages (associative array) <https://en.wikipedia.org/wiki/Comparison_of_programming_languages_(associative_array)>`_
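For example, here is the same two-entry associative array written as a JSON
object and parsed into a Python dictionary (a purely illustrative sketch):

.. code-block:: python

    import json

    # The JSON object {"a": 1, "b": 2} parses to an equivalent Python dict.
    d = json.loads('{"a": 1, "b": 2}')
    assert d == {'a': 1, 'b': 2}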

View File

@ -452,6 +452,118 @@ Assets
text search.
Transaction Metadata
--------------------------------
.. http:get:: /api/v1/metadata
Return all the metadata that match a given text search.
:query string text search: Text search string to query.
:query int limit: (Optional) Limit the number of returned metadata objects. Defaults
to ``0`` meaning return all matching objects.
.. note::
Currently this endpoint is only supported if the server is running
MongoDB as the backend.
.. http:get:: /api/v1/metadata/?search={text_search}
Return all metadata that match a given text search. The ``id`` of the metadata
is the same ``id`` of the transaction where it was defined.
If no metadata match the text search it returns an empty list.
If the text string is empty or the server does not support text search,
a ``400`` is returned.
The results are sorted by text score.
For more information about the behavior of text search, see `MongoDB text
search behavior <https://docs.mongodb.com/manual/reference/operator/query/text/#behavior>`_.
**Example request**:
.. sourcecode:: http
GET /api/v1/metadata/?search=bigchaindb HTTP/1.1
Host: example.com
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-type: application/json
[
{
"metadata": {"metakey1": "Hello BigchainDB 1!"},
"id": "51ce82a14ca274d43e4992bbce41f6fdeb755f846e48e710a3bbb3b0cf8e4204"
},
{
"metadata": {"metakey2": "Hello BigchainDB 2!"},
"id": "b4e9005fa494d20e503d916fa87b74fe61c079afccd6e084260674159795ee31"
},
{
"metadata": {"metakey3": "Hello BigchainDB 3!"},
"id": "fa6bcb6a8fdea3dc2a860fcdc0e0c63c9cf5b25da8b02a4db4fb6a2d36d27791"
}
]
:resheader Content-Type: ``application/json``
:statuscode 200: The query was executed successfully.
:statuscode 400: The query was not executed successfully. Returned if the
text string is empty or the server does not support
text search.
.. http:get:: /api/v1/metadata/?search={text_search}&limit={n_documents}
Return at most ``n`` metadata objects that match a given text search.
If no metadata match the text search it returns an empty list.
If the text string is empty or the server does not support text search,
a ``400`` is returned.
The results are sorted by text score.
For more information about the behavior of text search, see `MongoDB text
search behavior <https://docs.mongodb.com/manual/reference/operator/query/text/#behavior>`_.
**Example request**:
.. sourcecode:: http
GET /api/v1/metadata/?search=bigchaindb&limit=2 HTTP/1.1
Host: example.com
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-type: application/json
[
{
"metadata": {"msg": "Hello BigchainDB 1!"},
"id": "51ce82a14ca274d43e4992bbce41f6fdeb755f846e48e710a3bbb3b0cf8e4204"
},
{
"metadata": {"msg": "Hello BigchainDB 2!"},
"id": "b4e9005fa494d20e503d916fa87b74fe61c079afccd6e084260674159795ee31"
}
]
:resheader Content-Type: ``application/json``
:statuscode 200: The query was executed successfully.
:statuscode 400: The query was not executed successfully. Returned if the
text string is empty or the server does not support
text search.
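For quick manual checks, a ``curl`` one-liner along these lines should work
(a sketch assuming a node listening on ``localhost:9984``; adjust the host
and port for your deployment):

.. code-block:: bash

    # Ask the node for at most 2 metadata objects containing "bigchaindb"
    curl 'http://localhost:9984/api/v1/metadata/?search=bigchaindb&limit=2'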
Advanced Usage
--------------------------------

View File

@ -16,7 +16,6 @@ BigchainDB Server Documentation
events/index
drivers-clients/index
data-models/index
schema/transaction
schema/vote
release-notes
glossary
appendices/index

View File

@ -424,13 +424,14 @@ LRS means locally-redundant storage: three replicas
in the same data center.
Premium storage is higher-cost and higher-performance.
It uses solid state drives (SSD).
At the time of writing,
when we created a storage account with SKU ``Premium_LRS``
and tried to use that,
the PersistentVolumeClaim would get stuck in a "Pending" state.
You can create a `storage account <https://docs.microsoft.com/en-us/azure/storage/common/storage-create-storage-account>`_
for Premium storage and associate it with your Azure resource group.
For future reference, the command to create a storage account is
`az storage account create <https://docs.microsoft.com/en-us/cli/azure/storage/account#create>`_.
.. Note::
Please refer to `Azure documentation <https://docs.microsoft.com/en-us/azure/virtual-machines/windows/premium-storage>`_
for the list of VMs that are supported by Premium Storage.
The Kubernetes template for configuration of Storage Class is located in the
file ``mongodb/mongo-sc.yaml``.
@ -438,6 +439,10 @@ file ``mongodb/mongo-sc.yaml``.
You may have to update the ``parameters.location`` field in the file to
specify the location you are using in Azure.
If you want to use a custom storage account with the Storage Class, you
can also update `parameters.storageAccount` and provide the Azure storage
account name.
Create the required storage classes using:
.. code:: bash
@ -447,15 +452,6 @@ Create the required storage classes using:
You can check if it worked using ``kubectl get storageclasses``.
**Azure.** Note that there is no line of the form
``storageAccount: <azure storage account name>``
under ``parameters:``. When we included one
and then created a PersistentVolumeClaim based on it,
the PersistentVolumeClaim would get stuck
in a "Pending" state.
Kubernetes just looks for a storageAccount
with the specified skuName and location.
Step 11: Create Kubernetes Persistent Volume Claims
---------------------------------------------------

View File

@ -47,7 +47,9 @@ when following the steps above:
``tectonic-cluster-CLUSTER``.
#. Set the ``tectonic_base_domain`` to ``""`` if you want to use Azure managed
DNS. You will be assigned a ``cloudapp.azure.com`` sub-domain by default.
DNS. You will be assigned a ``cloudapp.azure.com`` sub-domain by default and
you can skip the ``Configuring Azure DNS`` section from the Tectonic installation
guide.
#. Set the ``tectonic_cl_channel`` to ``"stable"`` unless you want to
experiment or test with the latest release.
@ -76,6 +78,14 @@ when following the steps above:
#. Set the ``tectonic_azure_ssh_key`` to the path of the public key created in
the previous step.
#. We recommend setting up or using a CA (Certificate Authority) to generate the Tectonic
Console's server certificate(s) and adding it to your trusted authorities on the client
side accessing the Tectonic Console, i.e. the browser. If you already have a CA (self-signed or otherwise),
set the ``tectonic_ca_cert`` and ``tectonic_ca_key`` configurations to the contents
of the PEM-encoded certificate and key files, respectively. For more information about how to set
up a self-signed CA, please refer to
:doc:`How to Set up self-signed CA <ca-installation>`.
#. Note that the ``tectonic_azure_client_secret`` is the same as the
``ARM_CLIENT_SECRET``.
@ -85,6 +95,10 @@ when following the steps above:
``test-cluster`` and specified the datacenter as ``westeurope``, the Tectonic
console will be available at ``test-cluster.westeurope.cloudapp.azure.com``.
#. Note that if you do not specify ``tectonic_ca_cert``, a CA certificate will
be generated automatically and you will encounter an untrusted-certificate
message on your client (browser) when accessing the Tectonic Console.
Step 4: Configure kubectl
-------------------------

View File

@ -105,6 +105,21 @@ Finally, you can deploy an ACS using something like:
--orchestrator-type kubernetes \
--debug --output json
.. Note::
Please refer to `Azure documentation <https://docs.microsoft.com/en-us/cli/azure/acs?view=azure-cli-latest#az_acs_create>`_
for a comprehensive list of options available for `az acs create`.
Please tune the following parameters as per your requirement:
* Master count.
* Agent count.
* Agent VM size.
* **Optional**: Master storage profile.
* **Optional**: Agent storage profile.
There are more options. For help understanding all the options, use the built-in help:

View File

@ -1,6 +1,6 @@
# Quickstart
This page has instructions to set up a single stand-alone BigchainDB node for learning or experimenting. Instructions for other cases are [elsewhere](introduction.html). We will assume you're using Ubuntu 16.04 or similar. If you're not using Linux, then you might try [running BigchainDB with Docker](appendices/run-with-docker.html).
This page has instructions to set up a single stand-alone BigchainDB node for learning or experimenting. Instructions for other cases are [elsewhere](introduction.html). We will assume you're using Ubuntu 16.04 or similar. You can also try [running BigchainDB with Docker](appendices/run-with-docker.html).
A. Install MongoDB as the database backend. (There are other options but you can ignore them for now.)
@ -54,7 +54,7 @@ $ bigchaindb -y configure mongodb
I. Run BigchainDB Server:
```text
$ bigchaindb start --init
$ bigchaindb start
```
J. Verify BigchainDB Server setup by visiting the BigchainDB Root URL in your browser:

View File

@ -61,7 +61,7 @@ If you want to force-drop the database (i.e. skipping the yes/no prompt), then u
## bigchaindb start
Start BigchainDB assuming that the database has already been initialized using `bigchaindb init`. If that is not the case then passing the flag `--init` will initialize the database and start BigchainDB.
Start BigchainDB. It always begins by trying a `bigchaindb init` first. See the note in the documentation for `bigchaindb init`. The database initialization step is optional and can be skipped by passing the `--no-init` flag, i.e. `bigchaindb start --no-init`.
You can also use the `--dev-start-rethinkdb` command line option to automatically start RethinkDB with BigchainDB if RethinkDB is not already running,
e.g. `bigchaindb --dev-start-rethinkdb start`. Note that this will also shut down RethinkDB when the BigchainDB process stops.
The option `--dev-allow-temp-keypair` will generate a keypair on the fly if no keypair is found; this is useful when you want to run a temporary instance of BigchainDB in a Docker container, for example.

View File

@ -12,7 +12,7 @@ spec:
terminationGracePeriodSeconds: 10
containers:
- name: bigchaindb
image: bigchaindb/bigchaindb:1.1.0
image: bigchaindb/bigchaindb:1.3.0
imagePullPolicy: IfNotPresent
args:
- start

View File

@ -34,7 +34,7 @@ spec:
terminationGracePeriodSeconds: 10
containers:
- name: bigchaindb
image: bigchaindb/bigchaindb:1.1.0
image: bigchaindb/bigchaindb:1.3.0
imagePullPolicy: Always
args:
- start

View File

@ -7,8 +7,12 @@ metadata:
name: slow-db
provisioner: kubernetes.io/azure-disk
parameters:
skuName: Standard_LRS
skuName: Premium_LRS #[Premium_LRS, Standard_LRS]
location: westeurope
# If you have created a different storage account e.g. for Premium Storage
#storageAccount: <Storage account name>
# Use Managed Disk(s) with VMs using Managed Disks (only used for Tectonic deployment)
#kind: Managed
---
######################################################################
# This YAML section describes a StorageClass for the mongodb configDB #
@ -19,5 +23,9 @@ metadata:
name: slow-configdb
provisioner: kubernetes.io/azure-disk
parameters:
skuName: Standard_LRS
skuName: Premium_LRS #[Premium_LRS, Standard_LRS]
location: westeurope
# If you have created a different storage account e.g. for Premium Storage
#storageAccount: <Storage account name>
# Use Managed Disk(s) with VMs using Managed Disks (only used for Tectonic deployment)
#kind: Managed

View File

@ -1,5 +1,5 @@
#!/bin/bash
docker build -t bigchaindb/nginx-https-web-proxy:0.10 .
docker build -t bigchaindb/nginx-https-web-proxy:0.12 .
docker push bigchaindb/nginx-https-web-proxy:0.10
docker push bigchaindb/nginx-https-web-proxy:0.12

View File

@ -90,12 +90,6 @@ http {
end
}
# check if the request originated from the required web page
# use referer header.
if ($http_referer !~ "PROXY_EXPECTED_REFERER_HEADER" ) {
return 403 'Unknown referer';
}
# check if the request has the expected origin header
if ($http_origin !~ "PROXY_EXPECTED_ORIGIN_HEADER" ) {
return 403 'Unknown origin';
@ -108,9 +102,16 @@ http {
add_header 'Access-Control-Max-Age' 43200;
add_header 'Content-Type' 'text/plain charset=UTF-8';
add_header 'Content-Length' 0;
add_header 'Referrer-Policy' "PROXY_REFERRER_POLICY";
return 204;
}
# check if the request originated from the required web page
# use referer header.
if ($http_referer !~ "PROXY_EXPECTED_REFERER_HEADER" ) {
return 403 'Unknown referer';
}
# No auth for GETs, forward directly to BDB.
if ($request_method = GET) {
proxy_pass http://$bdb_backend:BIGCHAINDB_API_PORT;

View File

@ -49,6 +49,11 @@ data:
# are available to external clients.
proxy-frontend-port: "4443"
# proxy-referrer-policy defines the expected behaviour from the
# browser when setting the referer header in the HTTP requests to the
# proxy service.
proxy-referrer-policy: "origin-when-cross-origin"
# expected-http-referer is the expected regex expression of the Referer
# header in the HTTP requests to the proxy.
# The default below accepts the referrer value to be *.bigchaindb.com

View File

@ -25,6 +25,11 @@ spec:
configMapKeyRef:
name: proxy-vars
key: proxy-frontend-port
- name: PROXY_REFERRER_POLICY
valueFrom:
configMapKeyRef:
name: proxy-vars
key: proxy-referrer-policy
- name: PROXY_EXPECTED_REFERER_HEADER
valueFrom:
configMapKeyRef:

65
pkg/Vagrantfile vendored
View File

@ -9,10 +9,12 @@ Vagrant.require_version '>= 1.6.0'
VAGRANTFILE_API_VERSION = '2'
# Configuration files
CONFIGURATION_FILE = 'config/bdb-config.yaml'
CONFIGURATION_FILE = 'configuration/vars/bdb-config.yml'
HOSTS_FILE = 'configuration/hosts/all'
HOST_VARS_PATH = 'configuration/host_vars'
# Validate if all the required plugins are present
required_plugins = ["vagrant-cachier"]
required_plugins = ["vagrant-cachier", "vagrant-vbguest", "vagrant-hosts"]
required_plugins.each do |plugin|
if not Vagrant.has_plugin?(plugin)
raise "Required vagrant plugin #{plugin} not found. Please run `vagrant plugin install #{plugin}`"
@ -21,15 +23,28 @@ end
# Read configuration file(s)
instances_config = YAML.load_file(File.join(File.dirname(__FILE__), CONFIGURATION_FILE))
#TODO: (muawiakh) Add support for Docker, AWS, Azure
hosts_config = File.open(HOSTS_FILE, 'w+')
# TODO: (muawiakh) Add support for Docker, AWS, Azure
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
instances_config.each do |instance|
instances_config["bdb_hosts"].each do |instance|
# Workaround until Canonical fixes https://bugs.launchpad.net/cloud-images/+bug/1569237:
# use `ubuntu` as the remote user; vagrant boxes conventionally use the `vagrant` user
if instance["box"]["name"] == "ubuntu/xenial64"
hosts_config.puts("#{instance["name"]} ansible_user=ubuntu")
else
hosts_config.puts("#{instance["name"]} ansible_user=vagrant")
end
config.vm.define instance['name'] do |bdb|
# Workaround until the vagrant-cachier plugin supports dnf
if !(instance["box"]["name"].include? "fedora")
if Vagrant.has_plugin?("vagrant-cachier")
config.cache.scope = :box
bdb.cache.scope = :box
end
elsif instance["box"]["name"] == "ubuntu/xenial64"
if Vagrant.has_plugin?("vagrant-vbguest")
bdb.vbguest.auto_update = false
bdb.vbguest.no_install = true
bdb.vbguest.no_remote = true
end
end
bdb.vm.hostname = instance["name"]
@ -40,14 +55,12 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
else
raise "Invalid network type: Please specify one of the following: [private_network, public_network]"
end
bdb.vm.provision :hosts, :sync_hosts => true
bdb.vm.box = instance["box"]["name"]
bdb.vm.synced_folder ".", "/bigchaindb"
bdb.vm.provision :shell, inline: "cd /bigchaindb/scripts;/bin/bash #{instance["upstart"]}"
if instance["setup_type"] == "quickstart"
bdb.vm.provision :shell, inline: "PYTHONBUFFERED=1 ansible-playbook \
/bigchaindb/ansible/quickstart.yml --extra-vars \"with_docker=#{instance["deploy_docker"]}\" -c local"
end
File.open("#{HOST_VARS_PATH}/#{instance["name"]}", "w+") {|f| \
f.write("ansible_ssh_private_key_file: /bigchaindb/.vagrant/machines/#{instance["name"]}/virtualbox/private_key") }
bdb.vm.provision :shell, inline: "cd /bigchaindb/scripts;/bin/bash #{instances_config["upstart"]}"
bdb.vm.provider 'vmware_fusion' do |vmwf, override|
vmwf.vmx['memsize'] = instance["ram"]
vmwf.vmx['numvcpus'] = instance['vcpus']
@ -59,4 +72,32 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
end
end
end
hosts_config.close
config.vm.define "config-node" do |bdb|
bdb.vm.box = "ubuntu/xenial64"
bdb.vm.hostname = "config-node"
bdb.vm.provision :hosts, :sync_hosts => true
bdb.vm.synced_folder ".", "/bigchaindb"
bdb.vm.network "private_network", ip: "192.168.100.200"
bdb.vm.provision :shell, inline: "cd /bigchaindb/scripts;/bin/bash #{instances_config["upstart"]}"
bdb.vm.provision :shell, inline: "PYTHONUNBUFFERED=1 ansible-playbook /bigchaindb/configuration/bdb-deploy.yml \
-i /bigchaindb/configuration/hosts/all"
bdb.vm.provider "virtualbox" do |vb|
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
vb.memory = 2048
vb.cpus = 2
end
bdb.vm.provider 'vmware_fusion' do |vmwf|
vmwf.vmx['memsize'] = 2048
vmwf.vmx['numvcpus'] = 2
end
if Vagrant.has_plugin?("vagrant-vbguest")
config.vbguest.auto_update = false
config.vbguest.no_install = true
config.vbguest.no_remote = true
end
if Vagrant.has_plugin?("vagrant-cachier")
config.cache.scope = :box
end
end
end
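The new configuration/vars/bdb-config.yml file itself is not shown in this diff. Judging from the keys the Vagrantfile reads (bdb_hosts, upstart) and from the old config/bdb-config.yaml removed below, a minimal sketch could look like this (illustrative; the actual file may differ):

bdb_hosts:
  - name: "bdb-node-01"  # instance name, also used as the inventory hostname
    box:
      name: "ubuntu/xenial64"
    ram: "2048"
    vcpus: "2"
    network:
      ip: "10.20.30.50"
      type: "private_network"  # [private_network, public_network]
upstart: "/bigchaindb/scripts/bootstrap.sh"  # now a top-level key shared by all hosts
deploy_docker: false  # consumed by the pre_req.yml playbook below
docker_cluster_size: 1  # consumed by bdb-deploy.yml below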

View File

@ -1,10 +0,0 @@
- hosts: localhost
remote_user: vagrant
vars:
with_docker: "{{ deploy_docker | default(false) }}"
roles:
- { role: docker, when: with_docker|bool }
- { role: docker-compose, when: with_docker|bool }
- mongodb
- bigchaindb
- bigchaindb-driver

View File

@ -1,16 +0,0 @@
---
- include: with_docker.yml
when: with_docker|bool
tags: [bigchaindb]
- include: debian.yml
when: not with_docker|bool and (distribution_name == "debian" or distribution_name == "ubuntu")
- include: centos.yml
when: not with_docker|bool and (distribution_name == "centos" or distribution_name == "red hat enterprise linux")
- include: fedora.yml
when: not with_docker|bool and (distribution_name == "fedora")
- include: common.yml
when: not with_docker|bool

View File

@ -1,25 +0,0 @@
---
- name: Configuring BigchainDB Docker
docker_container:
name: "{{ bigchaindb_docker_name }}"
image: "{{ bigchaindb_image_name }}"
volumes: "{{ bigchaindb_docker_volumes }}"
pull: false
env:
BIGCHAINDB_SERVER_BIND: "{{ bigchaindb_server_bind }}"
BIGCHAINDB_DATABASE_HOST: "{{ bigchaindb_database_host }}"
entrypoint: "bigchaindb -y configure mongodb"
register: result
tags: [bigchaindb]
- name: Start BigchainDB Docker
docker_container:
name: "{{ bigchaindb_docker_name }}"
image: "{{ bigchaindb_image_name }}"
published_ports: "{{ bigchaindb_docker_published_ports }}"
restart_policy: always
volumes: "{{ bigchaindb_docker_volumes }}"
state: started
pull: false
when: result|succeeded
tags: [bigchaindb]

View File

@ -1,10 +0,0 @@
---
- name: MongoDB Process Check
shell: pgrep mongod | wc -l
register: command_result
tags: [mongodb]
- name: Run MongoDB
shell: "mongod --replSet=bigchain-rs --logpath {{ mongodb_log_path }}/mongod.log &"
when: command_result.stdout| int != 1
tags: [mongodb]

View File

@ -1,31 +0,0 @@
---
- name: Creating directories
file:
path: "{{ item }}"
state: directory
mode: 0700
with_items: "{{ directories }}"
tags: [mongodb]
- include: with_docker.yml
when: with_docker|bool
- name: Verify logfiles exist | Debian
file:
path: "{{ mongodb_log_path }}/mongod.log"
state: touch
mode: 0755
when: not with_docker|bool
tags: [mongodb]
- include: debian.yml
when: not with_docker|bool and (distribution_name == "debian" or distribution_name == "ubuntu")
- include: centos.yml
when: not with_docker|bool and (distribution_name == "centos" or distribution_name == "red hat enterprise linux")
- include: fedora.yml
when: not with_docker|bool and (distribution_name == "fedora")
- include: common.yml
when: not with_docker|bool

View File

@ -1,20 +0,0 @@
---
- name: Check Docker Service
systemd:
name: docker
enabled: yes
state: started
tags: [docker]
- name: Running MongoDB Docker
docker_container:
name: "{{ mongodb_docker_name }}"
image: "{{ mongodb_docker_image }}"
detach: True
published_ports: "{{ mongodb_docker_published_ports }}"
restart_policy: always
volumes: "{{ mongodb_docker_volumes }}"
state: started
pull: false
entrypoint: /entrypoint.sh --replSet=bigchain-rs
tags: [mongodb]

View File

@ -1,14 +0,0 @@
---
- name: "bdb-node-01" # Instance name
box:
name: "ubuntu/xenial64" # Box name
ram: "2048"
vcpus: "2"
setup_type: "quickstart" # Currently, only quickstart is supported.
deploy_docker: true # [true, false]
network:
ip: "10.20.30.50"
type: "private_network"
# Active network interface on host, Only required for public network e.g "en0: Wi-Fi (AirPort)"
bridge: "<network-interface-host>"
upstart: "/bigchaindb/scripts/bootstrap.sh" # Path to upstart script

View File

@ -0,0 +1,12 @@
- import_playbook: pre_req.yml
- hosts: all
vars_files:
- vars/bdb-config.yml
serial: 1
roles:
- bigchaindb
- bigchaindb-driver
- import_playbook: multi_node.yml
when: (bdb_hosts|length > 1) or docker_cluster_size|int > 1
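For manual (non-Vagrant) runs, this playbook can be invoked directly, as the config-node provisioner in the Vagrantfile above does:

ansible-playbook /bigchaindb/configuration/bdb-deploy.yml -i /bigchaindb/configuration/hosts/all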

View File

@ -0,0 +1,5 @@
---
ansible_connection: ssh
ansible_ssh_port: 22
ansible_become: yes
ansible_ssh_common_args: '-o StrictHostKeyChecking=no'

View File

@ -0,0 +1,5 @@
# Placeholder file for users running Ansible playbooks manually; otherwise Vagrant
# populates this file dynamically.
# Only needed for logging into remote hosts and adding host-specific variables, e.g.
#ansible_ssh_private_key_file: "/path/to/private/key"

View File

@ -0,0 +1,8 @@
# Placeholder file for users running Ansible playbooks manually; otherwise Vagrant
# populates this file dynamically.
# For local host
#<HOSTNAME> ansible_connection=local
# For remote host(s)
#<Remote_Host_IP/Hostname> ansible_ssh_user=<USERNAME> ansible_sudo_pass=<ROOT_PASSWORD>

View File

@ -0,0 +1,5 @@
- hosts: all
vars_files:
- vars/bdb-config.yml
roles:
- key-exchange

View File

@ -0,0 +1,8 @@
- hosts: all
vars_files:
- vars/bdb-config.yml
serial: 1
roles:
- { role: docker, when: deploy_docker|bool }
- { role: docker-compose, when: deploy_docker|bool }
- mongodb

View File

@ -23,4 +23,9 @@ dependencies_dnf:
- python3-pip
python_pip_upgrade: true
python_setuptools_upgrade: true
python_setuptools_upgrade: true
# Host configuration
distribution_name: "{{ ansible_distribution|lower }}"
distribution_codename: "{{ ansible_distribution_release|lower }}"
distribution_major: "{{ ansible_distribution_major_version }}"
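On an Ubuntu 16.04 (Xenial) host, for example, these facts would evaluate to roughly the following (illustrative values):

distribution_name: "ubuntu"       # ansible_distribution|lower
distribution_codename: "xenial"   # ansible_distribution_release|lower
distribution_major: "16"          # ansible_distribution_major_version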

View File

@ -1,12 +1,12 @@
---
- include: debian.yml
- import_tasks: debian.yml
when: distribution_name == "debian" or distribution_name == "ubuntu"
- include: centos.yml
- import_tasks: centos.yml
when: distribution_name == "centos" or distribution_name == "red hat enterprise linux"
- include: fedora.yml
- import_tasks: fedora.yml
when: distribution_name == "fedora"
- include: common.yml
- import_tasks: common.yml

View File

@ -27,21 +27,25 @@ dependencies_dnf:
python_pip_upgrade: true
python_setuptools_upgrade: true
# Host configuration
distribution_name: "{{ ansible_distribution|lower }}"
distribution_codename: "{{ ansible_distribution_release|lower }}"
distribution_major: "{{ ansible_distribution_major_version }}"
directories:
- /data
backend_db: mongodb #[rethinkdb, mongodb]
backend_db: mongodb #[mongodb]
bigchaindb_config_path: /data/.bigchaindb
bigchaindb_server_bind: "0.0.0.0:9984"
bigchaindb_database_host: "172.17.0.1"
bigchaindb_log_file: "{{ ansible_env.HOME }}/bigchaindb.log"
# Docker configuration
backend_db_image: "mongo:3.4.1"
backend_db_name: "mongodb"
bigchaindb_image_name: "bigchaindb/bigchaindb"
bigchaindb_docker_name: "bigchaindb"
bigchaindb_docker_published_ports:
- 59984:9984
bigchaindb_docker_volumes:
- "{{ ansible_env.HOME }}/bigchaindb_docker:/data"
bigchaindb_default_port: 9984
bigchaindb_host_port: 59984
bigchaindb_host_mount_dir: "{{ ansible_env.HOME }}/bigchaindb_docker"
# Default IP of docker0 bridge
bigchaindb_default_host: "172.17.0.1"
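With these defaults, the Docker tasks shown later publish host port bigchaindb_host_port + N for container N, mapped to bigchaindb_default_port inside the container; for a two-container cluster that works out to (illustrative):

# container bigchaindb0: host 59984 -> container 9984
# container bigchaindb1: host 59985 -> container 9984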

View File

@ -13,10 +13,18 @@
shell: "pip3 install bigchaindb"
tags: [bigchaindb]
- name: Check if BigchainDB node is already configured
stat:
path: "{{ bigchaindb_config_path }}"
register: stat_result
- name: Configure BigchainDB
shell: "bigchaindb -y configure {{ backend_db }}"
environment:
BIGCHAINDB_SERVER_BIND: "{{ bigchaindb_server_bind }}"
BIGCHAINDB_CONFIG_PATH: "{{ bigchaindb_config_path }}"
BIGCHAINDB_DATABASE_HOST: "{{ ansible_hostname }}"
when: stat_result.stat.exists == False
tags: [bigchaindb]
- name: MongoDB Process Check
@ -30,7 +38,22 @@
tags: [bigchaindb]
- name: Start BigchainDB
become: yes
shell: "bigchaindb start > {{ bigchaindb_log_file }} 2>&1 &"
environment:
BIGCHAINDB_CONFIG_PATH: "{{ bigchaindb_config_path }}"
when: mdb_pchk.stdout| int >= 1 and bdb_pchk.stdout| int == 0
tags: [bigchaindb]
async: 10
poll: 0
tags: [bigchaindb]
- name: Get BigchainDB node public key
shell: "cat {{ bigchaindb_config_path }}"
register: bdb_node_config
tags: [bigchaindb]
- name: Set Facts BigchainDB
set_fact:
pub_key="{{ ( bdb_node_config.stdout|from_json).keypair.public }}"
hostname="{{ ansible_hostname }}"
bdb_config="{{ bigchaindb_config_path }}"
tags: [bigchaindb]
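For reference, the .bigchaindb file read above is JSON; the fragment relevant to the from_json lookup looks roughly like this (abridged, placeholder values):

{
  "keypair": {
    "public": "<base58-encoded public key>",
    "private": "<base58-encoded private key>"
  },
  "server": {
    "bind": "0.0.0.0:9984"
  }
}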

View File

@ -0,0 +1,48 @@
---
- name: Check if BigchainDB Dockers are already configured
stat:
path: "{{ bigchaindb_host_mount_dir }}{{ item|string }}/.bigchaindb"
with_sequence: start=0 end="{{ docker_cluster_size|int - 1 }}" stride=1
register: stat_result
tags: [bigchaindb]
- name: Configuring BigchainDB Docker
docker_container:
name: "{{ bigchaindb_docker_name }}{{ item }}"
hostname: "{{ bigchaindb_docker_name }}{{ item }}"
image: "{{ bigchaindb_image_name }}"
volumes:
- "{{ bigchaindb_host_mount_dir }}{{ item|string }}:/data"
env:
BIGCHAINDB_SERVER_BIND: "{{ bigchaindb_server_bind }}"
BIGCHAINDB_DATABASE_HOST: "{{ hostvars[ansible_hostname]['mongodb' + item|string] }}"
entrypoint: "bigchaindb -y configure mongodb"
when: stat_result.results[item|int].stat.exists == False
with_sequence: start=0 end="{{ docker_cluster_size|int - 1 }}" stride=1
tags: [bigchaindb]
- name: Start BigchainDB Docker
docker_container:
name: "{{ bigchaindb_docker_name }}{{ item }}"
image: "{{ bigchaindb_image_name }}"
detach: true
published_ports:
- "{{ bigchandb_host_port|int + item|int }}:{{ bigchaindb_default_port }}"
restart_policy: always
volumes:
- "{{ bigchaindb_host_mount_dir }}{{ item|string }}:/data"
state: started
with_sequence: start=0 end="{{ docker_cluster_size|int - 1 }}" stride=1
tags: [bigchaindb]
- name: Get BigchainDB node public key
shell: "cat {{ bigchaindb_host_mount_dir + item|string }}/.bigchaindb"
register: bdb_node_config
with_sequence: start=0 end="{{ docker_cluster_size|int - 1 }}" stride=1
tags: [bigchaindb]
- name: Set facts for BigchainDB containers
set_fact:
pub_key_{{ bigchaindb_docker_name }}{{ item }}="{{ (bdb_node_config.results[item|int].stdout|from_json).keypair.public }}"
with_sequence: start=0 end="{{ docker_cluster_size|int - 1 }}" stride=1
tags: [bigchaindb]
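With the default bigchaindb_docker_name of "bigchaindb" and docker_cluster_size: 2, this step yields facts such as (placeholder values):

pub_key_bigchaindb0: "<public key of container bigchaindb0>"
pub_key_bigchaindb1: "<public key of container bigchaindb1>"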

View File

@ -0,0 +1,20 @@
---
- import_tasks: deploy_docker.yml
when: deploy_docker|bool
tags: [bigchaindb]
- import_tasks: debian.yml
when: not deploy_docker|bool and (distribution_name == "debian" or distribution_name == "ubuntu")
tags: [bigchaindb]
- import_tasks: centos.yml
when: not deploy_docker|bool and (distribution_name == "centos" or distribution_name == "red hat enterprise linux")
tags: [bigchaindb]
- import_tasks: fedora.yml
when: not deploy_docker|bool and (distribution_name == "fedora")
tags: [bigchaindb]
- import_tasks: common.yml
when: not deploy_docker|bool
tags: [bigchaindb]

Some files were not shown because too many files have changed in this diff.