Tuesday 5 June 2012

Sewage Works Data Acquisition and Process Control (1980)

"Real Time" or "Data Processing"?

The sewage works of this large New Zealand city was in need of major redevelopment and modernization.

A civil engineering company was the prime contractor, and an electrical automation engineering company was sub-contracted to build the plant control systems. The sub-contractor had experience in micro-processor controllers for basic automation, but needed someone to take on the back-end computing requirements of control-room data presentation and all levels of reporting.

When I arrived, their engineers were looking at (small) technical issues around the periphery of the requirements. The first thing I did was to review the requirements holistically and perform a chronological functional decomposition, which produced the following Data Flow Diagram.

(apologies for the hand-drawn diagram - that's all I have left from the project)

The engineering management were stunned when I showed them that less than 20% of the system was "real time"; the majority was "data processing", not unlike many commercial systems.

The Central Data Flow

The raw input data from the plant comprised 700 analog measurements (flow rates, tank levels, temperatures, voltage and current), and 150 digital (binary) "alarm" signals. All these inputs were sampled 4 times/second. A simple data communications protocol was designed to interface with the micro-processor controlled electrical connection panels.
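
As an illustration of the scale, here is a minimal C sketch of the kind of scan record such a link might carry each sample; the struct, field names and sizes are my assumptions for illustration, not the original protocol:

```c
/* Illustrative sketch only - field names and sizes are assumptions,
 * not the original 1980 protocol. */
#define N_ANALOG  700   /* flow rates, tank levels, temperatures, V, I */
#define N_DIGITAL 150   /* binary "alarm" signals */

struct scan_record {
    unsigned long timestamp;                    /* scan time            */
    short         analog[N_ANALOG];             /* raw converter counts */
    unsigned char digital[(N_DIGITAL + 7) / 8]; /* packed alarm bits    */
};
/* At 4 scans/second, one of these records arrives from the
 * micro-processor controlled connection panels every 250 ms. */
```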

Current (Real Time) Data

The "current" data status of the plant was required to be available for display on graphic displays in the control room and selected "alarm" status changes needed to be logged on a printer. The very nature of the type of plant being monitored and the data processing responsiveness of the graphics and printing showed that a 2 second response time was quite adequate, so the design was to average input data in blocks of 8 values to give 2 second averages.

But this data rate was still too fast to write to disk, and the "real time" graphic display would not have been responsive enough if it had to continually retrieve data from disk. The solution was a "mapped, shared memory" data pool. The data was also accumulated in buffers and written to disk files at 30 second intervals.
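
In outline, every display and logging process mapped one shared structure into its address space. A sketch using POSIX shared memory as a modern stand-in for the 1980 mechanism (the names, layout and 15-entry buffer arithmetic are illustrative assumptions):

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define N_ANALOG  700
#define N_DIGITAL 150

struct data_pool {
    float         current[N_ANALOG];            /* latest 2 s averages  */
    unsigned char alarms[(N_DIGITAL + 7) / 8];  /* current alarm states */
    float         pending[15][N_ANALOG];        /* 15 x 2 s = 30 s of
                                                   data awaiting disk   */
    int           n_pending;                    /* entries in pending   */
};

/* Map the pool into this process; error handling omitted for brevity. */
struct data_pool *attach_pool(void)
{
    int fd = shm_open("/plant_pool", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, sizeof(struct data_pool));
    return mmap(NULL, sizeof(struct data_pool),
                PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}
```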

Data Accumulation

A series of batch jobs then ran hourly, 8 hourly (per shift), daily and monthly, each summarizing the results from the level below and writing them into the accumulation file at the next level.
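
Each step has the same shape: read all the records at one level, summarize them, and append one record at the next level up. A minimal sketch of one such step (the flat-file layout is an assumption for illustration):

```c
#include <stdio.h>

#define N_ANALOG 700

/* Average every record in a lower-level file (e.g. the 30 second
 * records for one hour) into a single record at the next level. */
int summarize(FILE *lower, FILE *upper)
{
    float rec[N_ANALOG], sum[N_ANALOG] = {0};
    long  n = 0;

    while (fread(rec, sizeof rec, 1, lower) == 1) {
        for (int ch = 0; ch < N_ANALOG; ch++)
            sum[ch] += rec[ch];
        n++;
    }
    if (n == 0)
        return -1;                /* nothing to summarize */
    for (int ch = 0; ch < N_ANALOG; ch++)
        sum[ch] /= n;
    return fwrite(sum, sizeof sum, 1, upper) == 1 ? 0 : -1;
}
```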

Flexible Data Management - self-describing meta-data

A core feature of the system requirements was maximum flexibility in describing data, both the real-time inputs and the as-yet-undetermined manual inputs (typically the results of chemical analyses). Similarly, report layouts and contents had to allow maximum flexibility of design and change. A highly table-driven system was designed.

But all these tables would need to be maintained (and possibly additional tables added later), so a table-driven table-maintenance method was designed, with its own table of meta-data that described the contents of the other tables to be maintained. The obvious (?) approach was to make it "self-describing": the first entries in the meta-data table were manually entered to describe the meta-data table itself. The data maintenance program was then used to populate the remainder of the meta-data table, which could in turn be used to populate the system data tables.

Files were defined as a collection of records of a given type, and records were defined as a set of data items of various types.  Data items were defined with a long and short name, a data type and some very simple range constraints.
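
To make the bootstrap concrete, here is a sketch of what a meta-data row and the hand-entered first rows might have looked like; every field name and code here is my assumption, not the original design:

```c
/* One meta-data row: describes one data item of one record type. */
struct item_def {
    char  file_name[8];   /* record type the item belongs to        */
    char  short_name[8];  /* e.g. "FLOW1"                           */
    char  long_name[32];  /* e.g. "Inlet flow rate"                 */
    char  type;           /* 'I' integer, 'F' float, 'C' character  */
    short length;         /* storage length in bytes                */
    float min, max;       /* simple range constraints               */
};

/* The manually entered bootstrap: rows that describe the meta-data
 * table itself, after which the generic maintenance program can
 * edit this table - and hence every other table. */
static const struct item_def bootstrap[] = {
    { "ITEMDEF", "FILE",  "Owning record type",   'C',  8, 0, 0 },
    { "ITEMDEF", "SNAME", "Item short name",      'C',  8, 0, 0 },
    { "ITEMDEF", "LNAME", "Item long name",       'C', 32, 0, 0 },
    { "ITEMDEF", "TYPE",  "Item data type code",  'C',  1, 0, 0 },
    { "ITEMDEF", "LEN",   "Item length in bytes", 'I',  2, 0, 0 },
    { "ITEMDEF", "MIN",   "Minimum valid value",  'F',  4, 0, 0 },
    { "ITEMDEF", "MAX",   "Maximum valid value",  'F',  4, 0, 0 },
};
```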

Process Control

A generic process control module was developed, with control tables associating each output with its feedback input, along with a set of process control parameters (a 2nd order function).
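
A second-order control function of this kind is essentially what a PID feedback loop computes, so the sketch below assumes that interpretation; the structure and names are illustrative:

```c
/* One control-table entry pairing an output with its feedback input. */
struct control_entry {
    int   input_ch;     /* feedback measurement channel */
    int   output_ch;    /* controlled output channel    */
    float setpoint;
    float kp, ki, kd;   /* the tuning parameters        */
    float integral;     /* accumulated loop state       */
    float prev_error;
};

/* One control step: compute the new output from the latest feedback. */
float control_step(struct control_entry *c, float measured, float dt)
{
    float error = c->setpoint - measured;
    float deriv = (error - c->prev_error) / dt;

    c->integral  += error * dt;
    c->prev_error = error;
    return c->kp * error + c->ki * c->integral + c->kd * deriv;
}
```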

Generic Report Definitions

Shift and Daily reports were quite simple lists. Report definition tables specified which variables were to be reported.

The Monthly reports were all data-analysis reports. A fixed set of analysis algorithms was required (various sorts of means, statistical measures, etc.). Again, report definition tables specified which algorithm was to be applied to which set of variables for each report.
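
A sketch of how such a definition table might be shaped; the algorithm codes and field names are assumptions for illustration:

```c
/* Illustrative algorithm codes, not the original set. */
enum algo { ALGO_MEAN, ALGO_GEOMETRIC_MEAN, ALGO_MIN_MAX, ALGO_STDDEV };

/* One report-definition row: apply one algorithm to a set of
 * variables within one monthly report. */
struct report_line {
    int       report_id;    /* which monthly report this row feeds */
    enum algo algorithm;    /* analysis to apply                   */
    int       channels[8];  /* variable channels; -1 terminates    */
};
/* The report generator simply walks these rows, dispatching on
 * `algorithm`, so adding or changing a report is a table edit,
 * not a code change. */
```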

Summary

In all, a very satisfactory result was achieved. Assisted by two new university graduates on a vacation contract, I completed the system in 6 months.
