Data Hub

The FAST Data Hub contains open data which can be used directly with FAST.

The Data Hub consists of items, which can be images, neural network models, text pipelines, or any other kind of data. Each item has its own unique ID, and an item can depend on other items; for example, a text pipeline can have a neural network model and an image as dependencies. Dependencies are downloaded automatically.
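To illustrate how automatic dependency downloading works conceptually, here is a minimal sketch (not the FAST API; the item IDs and the `resolve` helper are hypothetical) of resolving an item's transitive dependencies so that a single request fetches everything an item needs:

```python
# Conceptual sketch: a hub item knows the IDs of the items it depends on,
# and requesting one item pulls in all of its (transitive) dependencies.
ITEMS = {
    'text-pipeline-1': {'deps': ['nn-model-1', 'image-1']},  # hypothetical IDs
    'nn-model-1': {'deps': []},
    'image-1': {'deps': []},
}

def resolve(item_id, seen=None):
    """Return item_id plus all of its dependencies, dependencies first."""
    if seen is None:
        seen = []
    for dep in ITEMS[item_id]['deps']:
        resolve(dep, seen)
    if item_id not in seen:
        seen.append(item_id)
    return seen

print(resolve('text-pipeline-1'))  # → ['nn-model-1', 'image-1', 'text-pipeline-1']
```

Listing dependencies before the item itself means everything an item needs is already in place by the time the item is downloaded.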

Download example

Items can be downloaded with FAST by their ID in the following way:

Python:

    import fast

    # Download an item and all of its dependencies from the Data Hub
    fast.DataHub().download('jugular-carotid-ultrasound-segmentation')

C++:

    #include <FAST/DataHub.hpp>

    using namespace fast;

    int main() {
        // Download an item and all of its dependencies from the Data Hub
        DataHub().download("jugular-carotid-ultrasound-segmentation");
    }
Pipeline example

FAST has a system for defining pipelines using simple text files. This enables you to create processing and visualization pipelines without programming and compiling, making it very easy to test pipelines with different parameters and input data. Read more about text pipelines here.
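For reference, a text pipeline file is a short declarative description of process objects, their attributes, and the connections between them. The sketch below is illustrative only (the object names and attribute values are hypothetical); see the text pipeline documentation for the exact syntax:

```
# Illustrative sketch of a text pipeline file (hypothetical names/values)
PipelineName "Ultrasound image viewer"
PipelineDescription "Load and display an ultrasound image"

ProcessObject importer ImageFileImporter
Attribute filename @@filename@@

Renderer renderer ImageRenderer
Input 0 importer 0
```

Because the pipeline is plain text, parameters such as filenames or model attributes can be changed and re-run without recompiling anything.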

The Data Hub has a collection of these text pipelines along with their corresponding data and neural network models. Here is an example of how to set up and run a pipeline from the Data Hub.

Command line:

    The runPipeline executable is installed with both the Python FAST package and the stand-alone installers. You can use it to run a pipeline from the Data Hub like so:

        runPipeline --datahub jugular-carotid-ultrasound-segmentation

Python:

    import fast

    # Download the pipeline and its dependencies from the Data Hub, then run it
    pipeline = fast.Pipeline.fromDataHub('jugular-carotid-ultrasound-segmentation')
    pipeline.run()

C++:

    #include <FAST/Pipeline.hpp>

    int main() {
        // Download the pipeline and its dependencies from the Data Hub, then run it
        auto pipeline = fast::Pipeline::fromDataHub("jugular-carotid-ultrasound-segmentation");
        pipeline.run();
    }