Data Pipeline

BANZAI pipeline

LCO's network of telescopes is used for a diverse set of scientific goals, and managing the data raises challenges not present in a single-purpose survey or a traditional common-user facility. The large number of instruments and the volume of data they generate mean that LCO, as the data originator, is in the best position to understand and to reduce the data optimally. On the other hand, the wide variety of scientific programs that use the network, and their diverse data-reduction needs, make it almost impossible for a single generalized pipeline to be optimal for every potential science case.

The aims of LCO's data pipeline are (1) to serve the bulk of potential users as well as possible, and (2) to create pipeline products of the most general use. In addition, the pipeline emphasizes recording the processing steps performed, the parameters used, and the software versions employed; these records are of vital importance for documenting the provenance of the reduced data.
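
As an illustration of this kind of provenance recording, the sketch below writes processing metadata into FITS header keywords with astropy. The keyword names, stage names, and version strings are hypothetical and do not reflect BANZAI's actual header schema.

    from astropy.io import fits

    def record_provenance(hdu, stage, version, **params):
        """Append hypothetical provenance keywords to a FITS header."""
        hdu.header['PIPEVER'] = (version, 'Pipeline software version')
        hdu.header['L1STAGE'] = (stage, 'Last reduction stage applied')
        for i, (name, value) in enumerate(params.items()):
            hdu.header[f'L1PAR{i:02d}'] = (f'{name}={value}', 'Reduction parameter')

    # Example usage with made-up values.
    hdu = fits.PrimaryHDU()
    record_provenance(hdu, stage='flat_correction', version='0.9.4',
                      bias_frame='bias-20160401.fits', flat_frame='flat-20160401.fits')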

The data pipeline, nicknamed BANZAI, evolved from the set of image processing algorithms devised by the Supernova key project team and began processing all raw frames at the start of the 2016A semester. BANZAI is written in Python, maintained in-house by LCOGT scientists, and hosted in a GitHub repository. It runs automatically and requires no user input. It processes raw imager frames as soon as they are received at LCOGT's Santa Barbara headquarters; the results of this immediate ("Quicklook") reduction are transferred to the archive within approximately 15 minutes. The Quicklook reduction uses the bias, dark, and flat-field frames from the current night for calibration. Final image processing, using the best-available calibration frames, is done after all science frames have been transferred to headquarters at the conclusion of each local night.
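
The difference between the Quicklook and final reductions is essentially a calibration-selection policy. The sketch below, which assumes calibration frames are described by simple dictionaries with 'night' and 'path' keys, illustrates that policy; it is not BANZAI's actual frame-selection code, and "best available" is approximated here as "closest in time".

    def select_calibration(frames, obs_night, quicklook=True):
        """Pick a calibration frame for the given observation night.

        Quicklook: only frames taken on the current night are eligible.
        Final: the best-available frame, approximated here as the one
        taken closest in time to the science observation.
        """
        if quicklook:
            candidates = [f for f in frames if f['night'] == obs_night]
        else:
            candidates = sorted(frames,
                                key=lambda f: abs((f['night'] - obs_night).days))
        return candidates[0]['path'] if candidates else None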

For both Quicklook and final processing, the following calibration and analysis steps are performed (a rough sketch in code follows the list):

  • Bad-pixel masking
  • Bias subtraction
  • Dark subtraction
  • Flat field correction
  • Source extraction (using SEP, Source Extraction and Photometry in Python)
  • Astrometric solution (using astrometry.net)
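
The steps above can be sketched roughly as follows, assuming the arrays raw, bias, dark, flat, and bpm have already been read from FITS files into numpy. The detection threshold and scalings are illustrative rather than BANZAI's actual parameters, and the astrometric solution, which is handled by astrometry.net, is not shown.

    import numpy as np
    import sep

    def reduce_frame(raw, bias, dark, flat, bpm, exptime):
        """Illustrative calibration of a single raw frame (2-D numpy arrays)."""
        # Bias subtraction, then dark subtraction scaled to the exposure time.
        frame = raw - bias - dark * exptime
        # Flat-field correction with a flat normalized to its median.
        frame = frame / (flat / np.median(flat))
        # Source extraction with SEP: model and subtract the background,
        # then detect sources above 1.5 times the global background RMS,
        # ignoring pixels flagged in the bad-pixel mask.
        data = np.ascontiguousarray(frame, dtype=np.float64)
        bkg = sep.Background(data, mask=bpm.astype(bool))
        sources = sep.extract(data - bkg, 1.5, err=bkg.globalrms,
                              mask=bpm.astype(bool))
        return frame, sources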

The processed images are multi-extension FITS files with two extensions: the catalog of sources detected by SEP, stored as a FITS binary table (CAT), and the bad pixel mask, stored as an image array (BPM). The catalog lists the pixel positions (X, Y), semi-major and semi-minor axes (A, B), position angles (THETA), and fluxes and errors (FLUX, FLUXERR) of each source.
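
For example, the catalog and mask can be read back with astropy; the filename below is a placeholder, and the reduced image is assumed here to sit in the primary HDU.

    from astropy.io import fits

    # 'example_frame.fits' is a placeholder for a BANZAI-processed file.
    with fits.open('example_frame.fits') as hdul:
        image = hdul[0].data        # reduced image (assumed primary HDU)
        bpm = hdul['BPM'].data      # bad-pixel mask array
        cat = hdul['CAT'].data      # SEP source catalog (FITS binary table)
        # Pixel positions, shape parameters, and fluxes of the first few sources.
        print(cat['X'][:5], cat['Y'][:5], cat['A'][:5], cat['B'][:5])
        print(cat['THETA'][:5], cat['FLUX'][:5], cat['FLUXERR'][:5])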
