Quality Control

OOI Data Quality Control procedures were designed with the goal of meeting IOOS Quality Assurance of Real-Time Oceanographic Data (QARTOD) standards. In addition to daily human-in-the-loop QC checks, data products are run through up to six automated QC algorithms as data streams are collected. QC reports are created on a biweekly or monthly basis. If a user identifies an issue with an OOI data product or has a question about QC procedures, please contact the Data Team through the OOI Helpdesk.

The Ocean Observatories Data Evaluation Team (the Data Team) is part of the Cyberinfrastructure group at Rutgers University and is made up of a data manager and four data evaluators. Each member has a range of oceanographic expertise and is assigned to a specific OOI array. They are tasked with reviewing the oceanographic and engineering data from the over 1,200 instruments deployed throughout the OOI system, ensuring that the data and metadata delivered by the OOI meet community data quality standards. They also work with the user community and marine engineers to identify, diagnose, and resolve data availability and data quality issues. The Data Team is also responsible for user outreach and training regarding data access, availability, processing routines, and quality control.

The Data Team’s primary goals are:

  • To monitor the operational status of data flowing through the OOI Data system end-to-end
  • To ensure the availability of OOI datasets in the system (raw, processed, derived, and cruise)
  • To ensure that data delivered by the system meets quality guidelines
  • To identify data availability and quality issues and ensure they are resolved
  • To communicate known data issues with end users
  • To report operational statistics on data availability and quality, and issue resolution

Manual QC Tests

Manual QC tests are led by the data manager and conducted by the co-located team of four data evaluators. Tests include both Quick Look tests (a first pass by evaluators using automated tools) and Deep Dives (a closer look at data flagged as suspect, drawing in Subject Matter Experts). The data team will clearly annotate any data stream that triggers QC-related alerts, as well as any data that are flagged as suspect during manual inspection.

The draft Standard Operating Procedure (as of March 2017) for the data evaluation team can be found here:
Draft QC SOP

The “as-designed” details of QA/QC methods for OOI data, data products, and physical samples (as of January 2013) can be found within the Protocols and Procedures for OOI Data Products document, which also includes calibration and field verification procedures.
QA/QC Protocols

Automated QC Algorithms

The automated QC algorithms were coded based on specifications created by OOI Project Scientists and derived from other observatory experiences. The six algorithms currently implemented are:

Global Range Test (pdf)
  • OOI Description: Data are flagged unless they fall within valid world ocean ranges or instrument limits (whichever is more restrictive).
  • QARTOD Equivalent: Gross Range
  • QARTOD Recommendation (from manuals): Only considers manufacturer-defined sensor and calibration limits.
  • Notes: Different tests, different names. The OOI test is currently operational.
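
As a minimal illustration of the logic (not the OOI production code; the function and parameter names here are assumptions), a range check reduces to a vectorized comparison against lookup-table limits, returning QARTOD-style flags:

```python
import numpy as np

def global_range_test(data, valid_min, valid_max):
    """Return QARTOD-style flags: 1 = pass, 4 = fail.

    valid_min/valid_max would come from a lookup table holding the more
    restrictive of world-ocean ranges and instrument limits.
    """
    data = np.asarray(data, dtype=float)
    return np.where((data >= valid_min) & (data <= valid_max), 1, 4)

# Example: seawater temperature with illustrative limits of -2.5 to 40 degC
# global_range_test([5.0, 42.3, 12.1], -2.5, 40.0) -> array([1, 4, 1])
```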

Spike Test (pdf)
  • OOI Description: A point is flagged if its deviation from the mean of the 2*N neighboring points (N on either side) exceeds a threshold.
  • QARTOD Equivalent: Spike
  • QARTOD Recommendation (from manuals): N=1, with a default threshold based on the rate-of-change distribution from previous data sets.
  • Notes: Roughly identical, same nomenclature. The OOI test is currently operational.
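
A sketch of the idea (again with assumed names, and a fixed threshold for simplicity; the operational code derives its threshold from lookup tables):

```python
import numpy as np

def spike_test(data, n=1, threshold=0.5):
    """Flag points whose deviation from the mean of their 2*n
    neighbours exceeds `threshold` (1 = pass, 4 = fail)."""
    data = np.asarray(data, dtype=float)
    flags = np.ones(data.size, dtype=int)
    for i in range(n, data.size - n):
        # Mean of the n points on each side, excluding the point itself.
        neighbours = np.r_[data[i - n:i], data[i + 1:i + n + 1]]
        if abs(data[i] - neighbours.mean()) > threshold:
            flags[i] = 4
    return flags
```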

Stuck Value Test (pdf)
  • OOI Description: If two neighboring values differ by less than the resolution of the sensor for more than N repetitions, the data are flagged.
  • QARTOD Equivalent: Stuck Sensor
  • QARTOD Recommendation (from manuals): 3 consecutive points for a stuck-sensor suspect flag and 5 for a fail flag.
  • Notes: The QARTOD manual suggestion may be too low for well-mixed portions of the water column. We are evaluating the results from the OOI lookup values.
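
One way to express this run-length check (a sketch with assumed names, not the OOI implementation):

```python
import numpy as np

def stuck_value_test(data, resolution, num=10):
    """Flag runs of `num` or more consecutive points whose successive
    differences are smaller than the sensor resolution (1 = pass, 4 = fail)."""
    data = np.asarray(data, dtype=float)
    flags = np.ones(data.size, dtype=int)
    run_start = 0
    for i in range(1, data.size + 1):
        # Close the current "stuck" run at the end of the series, or
        # whenever two neighbours differ by at least one resolution step.
        if i == data.size or abs(data[i] - data[i - 1]) >= resolution:
            if i - run_start >= num:
                flags[run_start:i] = 4
            run_start = i
    return flags
```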

Local Range Test (pdf)
  • OOI Description: Data are flagged unless they fall within locally valid, site-specific or depth-dependent ranges; thresholds are interpolated between depth and season intervals.
  • QARTOD Equivalent: Local Range
  • QARTOD Recommendation (from manuals): Starts with constant limits for each depth/season interval.
  • Notes: Roughly identical, same nomenclature. OOI local ranges are still being established and entered based on the first year of operations.
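
The depth-interpolation idea can be sketched like this (assumed names; interpolation over season intervals is omitted for brevity):

```python
import numpy as np

def local_range_test(data, depth, z_nodes, min_nodes, max_nodes):
    """Flag values outside depth-dependent limits (1 = pass, 4 = fail).

    z_nodes/min_nodes/max_nodes define the lookup-table limits at a set
    of depths; limits at intermediate depths are linearly interpolated.
    """
    data = np.asarray(data, dtype=float)
    lo = np.interp(depth, z_nodes, min_nodes)  # lower bound at each sample depth
    hi = np.interp(depth, z_nodes, max_nodes)  # upper bound at each sample depth
    return np.where((data >= lo) & (data <= hi), 1, 4)
```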

Gradient Test (pdf)
  • OOI Description: If d(data)/dt between two points exceeds a set threshold, all following points fail until one falls within an absolute limit (TOLDAT) of the last good value. The first data point is assumed good unless a “good” starting data point (STARTDAT) is defined.
  • QARTOD Equivalent: Rate of Change
  • QARTOD Recommendation (from manuals): QARTOD recommends two neighboring points and does not incorporate TOLDAT or STARTDAT values.
  • Notes: Different tests, different names. The OOI Gradient test is under review and not currently operational.
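
A sketch of the fail-and-recover behavior described above (assumed names; not the OOI production code):

```python
import numpy as np

def gradient_test(data, t, ddatdx, toldat, startdat=None):
    """Sketch of the gradient test (1 = pass, 4 = fail).

    ddatdx:   maximum allowed |d(data)/dt|
    toldat:   absolute tolerance for recovering after a failure
    startdat: optional known-good starting value; if omitted, the first
              point is assumed good, matching the test description
    """
    data = np.asarray(data, dtype=float)
    t = np.asarray(t, dtype=float)
    flags = np.ones(data.size, dtype=int)
    if startdat is None:
        last_good, t_good = data[0], t[0]      # first point assumed good
    else:
        last_good, t_good = startdat, t[0]
        if abs(data[0] - startdat) > toldat:
            flags[0] = 4                       # first point fails against STARTDAT
        else:
            last_good = data[0]
    for i in range(1, data.size):
        if flags[i - 1] == 4 and abs(data[i] - last_good) > toldat:
            flags[i] = 4    # still outside the absolute recovery limit (TOLDAT)
        elif flags[i - 1] == 1 and abs(data[i] - last_good) / (t[i] - t_good) > ddatdx:
            flags[i] = 4    # gradient from the last good point is too steep
        else:
            last_good, t_good = data[i], t[i]  # accept the point as good
    return flags
```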

Trend Test (pdf)
  • OOI Description: Data are flagged as having a trend if the standard deviation of the residuals from a polynomial fit is less than the standard deviation of the original data multiplied by some factor. Designed to test for sensor drift.
  • QARTOD Equivalent: N/A (no QARTOD equivalent; OOI only).
  • Notes: This test is not currently operational and is being reviewed for efficacy.
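
A sketch of the drift check (assumed names; the fit order and factor would come from lookup tables):

```python
import numpy as np

def trend_test(data, t, order=1, factor=0.25):
    """Flag the whole record as trending (1 = pass, 4 = fail) when a
    polynomial fit absorbs most of the variance, i.e. the residual
    standard deviation drops below factor * std(data)."""
    data = np.asarray(data, dtype=float)
    t = np.asarray(t, dtype=float)
    coeffs = np.polyfit(t, data, order)          # least-squares polynomial fit
    residuals = data - np.polyval(coeffs, t)
    trending = residuals.std() < data.std() * factor
    return np.full(data.size, 4 if trending else 1, dtype=int)
```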

A seventh test (Density Inversion) may also be implemented, based on its utility in other observatory projects (e.g., Argo). This test generates a flag if density does not increase with increasing depth (or vice versa).
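
A sketch of how such a check might look (assumed names; the 0.03 kg/m³ tolerance mirrors the Argo density-inversion test and is taken here as an assumption):

```python
import numpy as np

def density_inversion_test(density, pressure, tol=0.03):
    """Flag density inversions (1 = pass, 4 = fail).

    density: potential density (kg/m^3); pressure: dbar (proxy for depth).
    A pair of successive points is flagged when density decreases with
    increasing pressure by more than `tol`.
    """
    density = np.asarray(density, dtype=float)
    order = np.argsort(pressure)               # order the profile by depth
    flags = np.ones(density.size, dtype=int)
    drho = np.diff(density[order])             # density change downward
    bad = np.flatnonzero(drho < -tol)
    flags[order[bad]] = 4                      # flag both points of each inversion
    flags[order[bad + 1]] = 4
    return flags
```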

QC algorithms are only run on science data products, not on auxiliary or engineering parameters. The automated QC algorithms do not screen out or delete any data, or prevent it from being downloaded. They only flag “suspect” data points in the plotting tools and deliver those flags as additional attributes in downloaded data.
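
As a purely illustrative sketch of what "flags as additional attributes" can look like (the variable names and bit assignments here are hypothetical, not the documented OOI convention), per-test results can be packed into a single integer per data point and decoded by the user:

```python
# Hypothetical bit assignments; the real OOI product documentation
# defines which bit corresponds to which test.
QC_TESTS = ["global_range", "spike", "stuck_value", "local_range", "gradient", "trend"]

def decode_qc_flags(qc_value):
    """Return {test_name: passed} for a packed integer QC result."""
    return {name: bool(qc_value >> bit & 1) for bit, name in enumerate(QC_TESTS)}

print(decode_qc_flags(0b001011))
# {'global_range': True, 'spike': True, 'stuck_value': False,
#  'local_range': True, 'gradient': False, 'trend': False}
```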

The algorithm code used to process the QC algorithms in the OOI Cyberinfrastructure system can be found here: OOI Algorithms

QC lookup tables that describe these limits can be found here: QC Lookup Tables

Quality Control Goals

OOI instrument deployment and data quality control procedures were designed with the goal of meeting QARTOD quality control standards:

  • Every real-time observation must be accompanied by a quality descriptor
  • All observations should be subject to automated real-time quality tests
  • Quality flags and test descriptions must be described in the metadata
  • Observers should verify / calibrate sensors before deployment
  • Observers should describe methods / calibration in real-time metadata
  • Observers should quantify level of calibration accuracy and expected error
  • Manual checks on automated procedures, real-time data collected, and status of observing system must be provided on an appropriate timescale