---
cover: ../.gitbook/assets/cover/data_science.png
coverY: 0
---
# 📊 Data Science
Ocean Protocol was built to serve the data science space.
*Figure: the stages of the Data Value Creation Loop*
With Ocean, each stage of the Data Value Creation Loop is tokenized with data NFTs and datatokens. Leveraging tokenized standards unlocks several unique benefits for the ecosystem. Together, stakeholders can build sophisticated products by combining assets published on Ocean.
Data engineers can publish pipelines for curated data, allowing data scientists to conduct feature engineering and build models on top. The models can be deployed with Compute-to-Data and leveraged by app developers building the last-mile distribution of model outputs into business practices.
Ocean Protocol unlocks composable data science. Instead of each data scientist needing to run every stage of the pipeline themselves, they can build on each other's components and focus on what they do best.
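The composability idea can be sketched in plain Python: each stakeholder owns one stage, and later stages simply consume the published output of earlier ones. This is a conceptual illustration only, not Ocean code; the stage functions are hypothetical.

```python
# Conceptual sketch of composable data science: each function is a stage
# that a different stakeholder could publish, and others build on top.

def curate(raw):
    # data engineer's stage: drop missing records
    return [x for x in raw if x is not None]

def featurize(rows):
    # data scientist's stage: derive simple features
    return [(x, x * x) for x in rows]

def train(features):
    # returns a trivial "model": classify values above the mean
    mean = sum(x for x, _ in features) / len(features)
    return lambda x: x > mean

def pipeline(raw):
    # composition: each stage consumes the previous stage's output
    return train(featurize(curate(raw)))

model = pipeline([3, None, 5, 10])
print(model(7))  # True: 7 is above the mean (6.0) of the curated data
```

In Ocean, the equivalent hand-offs happen through published assets rather than direct function calls, so each stage can be priced and access-controlled independently.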
This guide links you to the most important tutorials for data scientists working with Ocean Protocol.
## Core Components for Data Scientists
- Ocean data NFTs and datatokens are core building blocks of Ocean Protocol. They allow individuals and businesses to establish ownership of their assets and create flexible access-control tokens.
- Ocean's Compute-to-Data engine resolves the trade-off between the benefits of open data and the risks to data privacy. Using the engine, algorithms can be run on data without exposing the underlying data. Now, data can be widely shared and monetized without compromising privacy.
- Ocean.py is a Python library that interacts with all Ocean contracts and tools. To get started with the library, check out our guides. They cover installation and setup, plus several popular workflows such as publishing an asset and starting a compute job.
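To make the data NFT / datatoken relationship concrete, here is a minimal conceptual model in plain Python. The class and method names are illustrative only and are not the Ocean contracts or the ocean.py API: the point is that the NFT records ownership while datatokens act as spendable access credentials.

```python
# Conceptual model only: a data NFT holds ownership, and datatoken
# balances gate access. Names are illustrative, not Ocean contracts.
from dataclasses import dataclass, field

@dataclass
class DataNFT:
    owner: str                                  # who owns the asset
    balances: dict = field(default_factory=dict)  # datatoken holdings

    def mint_datatokens(self, to: str, amount: int) -> None:
        # the owner issues access tokens to a consumer
        self.balances[to] = self.balances.get(to, 0) + amount

    def access(self, who: str) -> bool:
        # spending one datatoken grants one access to the asset
        if self.balances.get(who, 0) >= 1:
            self.balances[who] -= 1
            return True
        return False

nft = DataNFT(owner="alice")
nft.mint_datatokens("bob", 2)
print(nft.access("bob"))    # True: one datatoken spent
print(nft.access("carol"))  # False: carol holds no datatokens
```

In the real protocol these are ERC-721 and ERC-20 contracts on-chain, which is what makes the access rules transferable and composable across applications.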
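The core Compute-to-Data idea can also be sketched in a few lines: the consumer's algorithm travels to the data, and only the computed result travels back. This is a toy illustration of the pattern, not the Compute-to-Data engine itself; the dataset and function names are hypothetical.

```python
# Conceptual sketch of Compute-to-Data: the raw dataset never leaves
# the owner's environment; only the algorithm's output is returned.

_PRIVATE_DATASET = [12, 7, 42, 3]  # stays with the data owner

def compute_to_data(algorithm):
    # the consumer supplies code; the owner runs it next to the data
    # and returns only the result, never the rows themselves
    return algorithm(_PRIVATE_DATASET)

mean = compute_to_data(lambda rows: sum(rows) / len(rows))
print(mean)  # 16.0: the aggregate is shared, the rows are not
```

In production, the engine additionally sandboxes the algorithm and lets the publisher approve which algorithms may run, so the aggregate outputs cannot be abused to exfiltrate the raw data.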