Thursday 19 July 2012

Engineering Maintenance Management (1985)

Introduction

Although this was a mini-computer supplier, computerisation of business functions was still slow, especially in this small Australian subsidiary of a US manufacturer. Many business systems were still paper-based data collection keyed into batch systems for reporting.

We were in the process of bringing the spare-parts inventory control system on-line (a project initiated by finance and audit). A new National Field Engineering Manager had been hired and had started asking questions that the existing systems could not support, such as device-type failure rates, average cost to repair, customer call response times, etc.

Existing Systems

The spare-parts inventory management system, described elsewhere, introduced tracking of good parts taken out on service calls and the non-equivalent part swap-out process.

A customer equipment maintenance contract system had records of all equipment under service but was primarily an invoicing system.

Every service call had a work-sheet completed detailing call, response, travel and completion times, equipment repaired, parts used, etc. A local system had been developed for entering this data, and some basic reporting existed. The annual budgeting system used gross counts of staff and service calls to compute utilisation and, combined with equipment population and sales projections, produced projected staff requirements.
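
By way of illustration, the budgeting arithmetic amounted to something like the following sketch (the field names and the target utilisation figure are my own assumptions for illustration, not the actual system):

    # Hypothetical sketch of the budgeting arithmetic; names and the target
    # utilisation figure are illustrative assumptions, not the original system.
    def projected_engineers(projected_calls, avg_hours_per_call,
                            paid_hours_per_engineer, target_utilisation=0.7):
        """Estimate the engineer head-count needed for a projected call volume."""
        workload_hours = projected_calls * avg_hours_per_call
        productive_hours = paid_hours_per_engineer * target_utilisation
        return workload_hours / productive_hours

    # e.g. 12,000 projected calls averaging 2.5 hours each, 1,700 paid hours
    # per engineer per year at 70% utilisation -> roughly 25 engineers
    print(round(projected_engineers(12_000, 2.5, 1_700)))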

Enhancements - Stage 1

The first stage was to enhance the work-sheet data entry application with full data validation of customer contracts, equipment, spare-parts and engineers. The database created for the spare-parts inventory system, with its transaction logging facility, was enhanced to record the work-sheet data with improved data structures and indexing for better reporting. Outputs from this system then fed automatically into the annual budgeting process, supplying actual totals for the year.
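
A minimal sketch of the kind of master-file validation added at this stage (record shapes and field names are assumed purely for illustration):

    # Minimal sketch of Stage 1 work-sheet validation against the master files;
    # record shapes and field names are assumptions for illustration only.
    def validate_worksheet(ws, contracts, equipment, parts, engineers):
        """Return a list of validation errors for one service-call work-sheet."""
        errors = []
        if ws["contract_no"] not in contracts:
            errors.append("unknown contract " + ws["contract_no"])
        if ws["equipment_serial"] not in equipment:
            errors.append("unknown equipment " + ws["equipment_serial"])
        for part_no in ws["parts_used"]:
            if part_no not in parts:
                errors.append("unknown spare part " + part_no)
        if ws["engineer_id"] not in engineers:
            errors.append("unknown engineer " + ws["engineer_id"])
        if ws["completion_time"] < ws["response_time"]:
            errors.append("completion time precedes response time")
        return errors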

Some interesting results were starting to be seen in the reports from this data, which led to the decision to go ahead with a full Call Centre System.

Enhancements - Stage 2 - Call Centre Management

There were two key drivers for the call-centre system. The first was to capture and validate customer call information while the customer was still on the phone, including precise identification of the equipment at fault. We had found that a number of customers were not putting all their equipment under service contract and were logging so-called contract service calls for non-contract equipment (this was especially easy with terminals - a customer might have 20 terminals under contract but in fact have 100, often bought via the "grey market").

The second driver was to maximise engineer productivity. The engineer would call in job completion from the customer site, being led through a predetermined list of responses to capture his full "job sheet", including (for the first time) the actual device id of the equipment being repaired. He could then be directed immediately to his next call without having to return to the depot.
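
As a rough sketch of the contract-coverage check behind the first driver (the data shapes and messages are assumed for illustration only):

    # Sketch of the contract-coverage check behind the first driver: a contract
    # call is accepted only if the specific device is covered by a current
    # contract for that customer. Data shapes are illustrative assumptions.
    def check_contract_cover(customer_id, device_id, contracts):
        """contracts: mapping of customer_id -> set of device ids under contract."""
        covered = contracts.get(customer_id, set())
        if device_id in covered:
            return "log as contract service call"
        return "non-contract equipment: quote per-call rates or add to contract"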

Successes

All in all, the above systems proved very successful. Three successes stand out.

First, by exactly identifying the equipment items being serviced, we were able to bring a lot of "grey" equipment under contract.

The volume of terminals being serviced by "swap-out" brought to light the idea of having a service van full of terminals circulating in the city with a courier who could do the "swap-over", rather than incurring the cost of a full engineer service call.

By the end of the first year, when we started analysing device-type failure rates and cost-to-repair, it became obvious that a particular model of terminal was so fault prone that its average cost-to-repair was not covered by the contract service price. A heavily discounted replacement sales programme was put into place to upgrade all these devices to more modern, more reliable types. This was a result that our American head office had not even picked up on.
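
The end-of-year analysis amounted to something like the following sketch (the field names, costing rule and revenue comparison are my own illustrative assumptions):

    # Illustrative sketch of the end-of-year analysis: aggregate completed calls
    # by device model and flag models whose service cost exceeds their contract
    # revenue. Field names and the costing rule are assumptions only.
    from collections import defaultdict

    def unprofitable_models(calls, annual_contract_price, population):
        """calls: dicts with 'model', 'labour_cost', 'parts_cost';
        annual_contract_price and population: per-model dicts."""
        cost, count = defaultdict(float), defaultdict(int)
        for c in calls:
            cost[c["model"]] += c["labour_cost"] + c["parts_cost"]
            count[c["model"]] += 1
        flagged = []
        for model, total_cost in cost.items():
            units = population.get(model, 1)
            revenue = annual_contract_price.get(model, 0.0) * units
            if total_cost > revenue:
                # (model, failures per unit per year, average cost-to-repair)
                flagged.append((model, count[model] / units, total_cost / count[model]))
        return flagged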



Who's the Client? What's the Deliverable? (1995-7)

Introduction

I hope you will understand my discretion in suppressing certain details and names of companies.  The project was a massive, city-wide, integrated, distributed system with a high daily transaction rate and cash flow, developed for and to be run for the state government.

The contract was won by a Joint Venture Consortium formed between a hardware supplier, a multi-national computer supplier (my employer) supplying the central computers and central management and reporting software, and a cash-services company.

The Warning Signs

We Software Engineers pressed on in typical bespoke software development fashion, collecting and documenting requirements and developing functional design specifications for review and sign-off by the client (end-user SMEs). But obtaining "sign-off" was like trying to extract "hen's teeth" - we put it down to typical public servants' reluctance to put their name "on the dotted line" and take responsibility for their decisions.

Next was the continuous pressure for requirements "creep". Whilst both major suppliers had formal documentation, quality and change-control processes, they were different and weren't coordinated - one came from an engineering discipline, the other from a commercial software development background. What was seen as acceptable to one supplier was seen as a Change Request by the other.

As the pressure of time and budget increased, the above scope-creep issues had a strange effect on the overall programme management style. I call it "pendulum management". In alternate months, the key management message swung between "tighten up, work to budget and time-line" and "keep the customer happy, give him whatever he wants".

The "aha" moment came when we came to specify the functionality for managing the discrepancies between sales transaction data and cash collected. We went to the client users for their "requirements" and, to our surprise", were told, "Its not our problem. Its the consortium's problem.  We simply require that you pay us the higher of the cash collected (assuming sales transactions were lost), or the sale transactions amount (assuming cash has been lost)" - it was a classic "heads they win, tails we lose" situation.

Who's the Client?

This was the point when the "client dilemma" really struck home (to us software engineers at least). The Consortium's contract was in fact a 10-year "Service Contract": first of all to build the system for an up-front payment (with ownership retained by the consortium), and then to operate the system, including provision of enquiry and reporting services to the public servants. The "requirements" of the system being built were to "provide the contracted service".

Our "cash-to-sales reconciliation" problem (above) needed to be resolved by the consortium's operational and accounting staff, who had as yet not been appointed. In fact, at this point, the consortium's operating company comprised a single project manager! The two major partners in the consortium had forged ahead almost independently with no thought or plans for how the ongoing service would be provided and any of its impacts on the requirements of the systems being built.

Our Cash-to-Sales Reconciliation Solution

A major source of issues in cash-to-sales reconciliation was the distributed nature of the POS equipment and, in about half the cases, the manual transfer of transaction data (by a key-drive type device) into the central system. Transaction batch identification, and a certain degree of redundancy, had been built in. What was agreed as being required was an initial reconciliation, when the collected cash was counted, against the identified transaction batch (or group of data batches). Data completeness integrity controls would be needed so that an alert could be flagged when it was known that some data was missing. Then, when and if the missing data did arrive (possibly via error correction), an adjustment could be raised against the matching reconciliation and a CR/DR raised.
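
A minimal sketch of that reconciliation and adjustment logic, with batch identifiers and record shapes assumed purely for illustration:

    # Sketch of the agreed approach: reconcile counted cash against the sales
    # value of the identified transaction batches, flag known gaps, and raise a
    # CR/DR adjustment if missing data arrives later. All identifiers, record
    # shapes and the "higher of the two" rule placement are illustrative.
    def reconcile(cash_counted, expected_batch_ids, received_batches):
        """received_batches: mapping of batch_id -> sales value in that batch."""
        missing = [b for b in expected_batch_ids if b not in received_batches]
        sales_total = sum(received_batches.get(b, 0.0) for b in expected_batch_ids)
        return {
            "cash": cash_counted,
            "sales": sales_total,
            "missing_batches": missing,                        # completeness alert
            "amount_payable": max(cash_counted, sales_total),  # "heads they win" rule
        }

    def late_batch_adjustment(recon, batch_id, batch_value):
        """Raise a CR/DR adjustment when a previously missing batch arrives."""
        if batch_id not in recon["missing_batches"]:
            return 0.0
        recon["missing_batches"].remove(batch_id)
        recon["sales"] += batch_value
        previous_payable = recon["amount_payable"]
        recon["amount_payable"] = max(recon["cash"], recon["sales"])
        return recon["amount_payable"] - previous_payable      # CR/DR amount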

Needless to say, such a level of audit control had not been anticipated nor built by the hardware supplier and some robust negotiation around changes to the core data interfaces was required. To be fair to the hardware supplier, they had an enormous micro-software change control problem in coordinated distribution of updates across hundreds of devices, not to mention the data storage chips attached to every cash container.

(Technical aside: the core issue revolved around a forward singly-linked list being less robust than a bi-directional, doubly-linked list.)
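
To illustrate the aside (a hypothetical reconstruction, not the actual interface): if each batch carries only the id of the next batch, the batch before a lost one is the sole witness to its existence; carrying the previous batch id as well lets the receiver detect the gap from either side:

    # Hypothetical reconstruction of the linked-list point: with forward links
    # only, the batch *before* a lost batch is the sole witness to its existence;
    # with previous-batch ids as well, the batch after it can also name the gap.
    def find_gaps(batches):
        """batches: received list of dicts with 'id', 'prev_id', 'next_id'."""
        received = {b["id"] for b in batches}
        gaps = set()
        for b in batches:
            if b["prev_id"] is not None and b["prev_id"] not in received:
                gaps.add(b["prev_id"])   # named by a successor we did receive
            if b["next_id"] is not None and b["next_id"] not in received:
                gaps.add(b["next_id"])   # named by a predecessor we did receive
        return gaps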

The End-Game

The total system finally went into production, grossly late and over budget (there were political imperatives (read election) that the system must not "fail"). The "service" has run well past the original 10 years, since the replacement system has had numerous problems of its own.

There was "robust negotiation" (litigation?) between the government and the consortium over the cost of the over-run and scope-creep (with eventual confidential settlement). I was involved in extensive "function point analysis" of the original (very vague) contractual (service) requirements and the system "as built" in order to identify and quantify the scope increase. But even this exercise was predicated on the bespoke software development model.