OpenCGA is an open-source project that aims to provide a _Big Data_ **storage engine and analysis framework** for genomic-scale data analysis of hundreds of terabytes or even petabytes. For users, its main features will include uploading and downloading files to a repository, storing their information in a generic way (independent of the original file format) and retrieving this information efficiently. For developers, it will be a platform **supporting the most used bioinformatics file formats** and accelerating the development of visualization and analysis applications.


Some platforms with the same objectives as OpenCGA already exist. However, they are focused on representing strictly the information from the files they store. OpenCGA, on the other hand, not only includes this information, but its **generic databases** also include extra fields of interest and allow **combining data from different studies** seamlessly.


Plain access to the files stored in the system is simply not fast enough to give a real-time, **interactive user experience**. For this reason, we are exploring and using the most advanced technologies from different fields:

...

The image below shows a global view of the infrastructure used by OpenCGA. When a file is uploaded to the system, it is stored in:

  • A filesystem for archiving purposes. This filesystem could be UNIX-based or Hadoop-based.
  • A database for interactive queries. We plan to support MongoDB and HBase databases.


This dual schema will always allow users to download the original files from the archival filesystem, and to use the databases to retrieve information much faster than by reading the files.
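
To make the dual schema concrete, here is a minimal Java sketch of the idea, assuming two hypothetical interfaces (`ArchiveStore` and `VariantDatabase`) that stand in for the archival filesystem and the query database. These names are invented for illustration and are not the actual OpenCGA API:

```java
// Minimal sketch of the dual-write idea. ArchiveStore and VariantDatabase
// are hypothetical placeholders, not OpenCGA classes.
import java.nio.file.Files;
import java.nio.file.Path;

public class DualStorageSketch {

    interface ArchiveStore {        // archival filesystem (UNIX- or Hadoop-based)
        void archive(Path file);
    }

    interface VariantDatabase {     // query database (MongoDB or HBase)
        void load(Path file);
    }

    private final ArchiveStore archive;
    private final VariantDatabase database;

    DualStorageSketch(ArchiveStore archive, VariantDatabase database) {
        this.archive = archive;
        this.database = database;
    }

    /** Store an uploaded file in both back ends. */
    void store(Path uploadedFile) {
        if (!Files.exists(uploadedFile)) {
            throw new IllegalArgumentException("File not found: " + uploadedFile);
        }
        archive.archive(uploadedFile);  // keep the original, byte for byte
        database.load(uploadedFile);    // parse into the generic schema for fast queries
    }
}
```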

Apache Hadoop could be considered the _de facto_ standard for Big Data analysis. OpenCGA will allow several ways of accessing and analyzing a file using Hadoop. First, by storing a file in an HDFS filesystem, it is possible to read it directly and run Map/Reduce jobs. If its data is also saved to an HBase database, real-time queries can also be executed.
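
For instance, a file stored in HDFS can be processed with the standard Hadoop Map/Reduce API. The sketch below counts records per chromosome in a tab-separated file; the input layout and class names are assumptions made for this example, not part of OpenCGA:

```java
// A hedged example of a Map/Reduce job over a file in HDFS, using the
// standard Hadoop API. Assumes tab-separated lines whose first column
// is a chromosome name; header lines start with '#'.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ChromosomeCount {

    public static class ChromMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text chrom = new Text();

        @Override
        protected void map(LongWritable key, Text line, Context ctx)
                throws java.io.IOException, InterruptedException {
            String[] fields = line.toString().split("\t");
            if (fields.length > 0 && !fields[0].startsWith("#")) { // skip headers
                chrom.set(fields[0]);
                ctx.write(chrom, ONE);                             // emit (chromosome, 1)
            }
        }
    }

    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text chrom, Iterable<IntWritable> counts, Context ctx)
                throws java.io.IOException, InterruptedException {
            int sum = 0;
            for (IntWritable c : counts) {
                sum += c.get();
            }
            ctx.write(chrom, new IntWritable(sum)); // total records per chromosome
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "chromosome-count");
        job.setJarByClass(ChromosomeCount.class);
        job.setMapperClass(ChromMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input file
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output dir
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```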

It is important to note that the storage and analysis subsystems are not completely separated: Hadoop uses the same machines for data storage and computation in order to reduce network traffic and improve efficiency.

Users do not have direct access to the storage and analysis subsystems; instead, they access data through a web services API. This provides both a homogeneous way of presenting the information and a security layer that can be easily implemented and improved.
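
As an illustration of this access pattern, the sketch below issues an HTTP GET against a hypothetical REST endpoint and prints the JSON response. The URL, path, and parameters are invented for the example and do not describe the actual OpenCGA web services:

```java
// Illustrative only: a client querying the web services API over HTTP.
// The endpoint and query parameters below are hypothetical.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestClientSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical endpoint: fetch variants in a genomic region from one study.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.org/opencga/webservices/rest"
                        + "/variants/query?region=1:100000-200000&study=demo"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON result served by the API layer
    }
}
```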

...