Jay Lakhani, Senior Applications Engineer, HBM UK explains the benefits of calibration to engineers involved in measuring technology and discusses the guidelines surrounding calibration intervals.

Calibration and its benefits

Calibration determines, under specified conditions, the correlation between the input and the measured output of whatever quantity an instrument measures. This means results can be quickly and simply documented, which is particularly important for companies complying with ISO 9001 – the global benchmark for quality management – since it ensures that equipment is working within its specifications. In practice, however, the questions more frequently posed are: how often should a recalibration actually be carried out, and who is responsible?
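To make the idea of correlating input and output concrete, the following is a minimal sketch of deriving a linear calibration (gain and offset) from reference points, much as a lab would for a load cell or amplifier. The function name and all figures are illustrative assumptions, not values from any real certificate.

```python
# Hypothetical sketch: fit a straight line relating readings to a reference
# standard, then use it to correct subsequent readings. All values invented.

def fit_calibration(reference, indicated):
    """Least-squares straight line: indicated = gain * reference + offset."""
    n = len(reference)
    mean_x = sum(reference) / n
    mean_y = sum(indicated) / n
    sxx = sum((x - mean_x) ** 2 for x in reference)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(reference, indicated))
    gain = sxy / sxx
    offset = mean_y - gain * mean_x
    return gain, offset

# Applied loads (kN) from a reference standard vs. instrument readings (kN)
reference = [0.0, 10.0, 20.0, 30.0, 40.0]
indicated = [0.05, 10.08, 20.11, 30.17, 40.21]

gain, offset = fit_calibration(reference, indicated)
# Invert the fit to correct raw readings back to reference units
corrected = [(y - offset) / gain for y in indicated]
```

Documenting the fitted gain and offset (and their drift between calibrations) is exactly the record an ISO 9001 audit trail relies on.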

Determining calibration intervals

In general, the operator or their supervisor is responsible for determining the calibration interval. If internal specifications for recalibration are available, e.g. in the quality management manual, these are the official source for recalibration schedules. For some applications, such specifications may also appear in general standards (e.g. ISO 376 for force measuring instruments, or the certification of vehicle exhaust gas emissions).

For measurements where the highest demands are placed on precisely known measuring properties, it must be noted that a calibration is only valid at the time it is performed. Consequently, an extremely involved process is required: a calibration must be carried out both before and after every important measurement.

If a more pragmatic view is taken in industrial practice, as explicitly recommended in ISO 10012, it is of course sensible to allow a greater number of measurements, or a specific time interval, between two calibrations. If the deviations found during a calibration, compared with the previous calibration, lie within the metrological requirements, then the measurement results obtained with the equipment in the interim are justifiable. If the deviations are greater, however, the question arises whether those measurements are only meaningful to a limited extent and should be repeated. The decision on how long a calibration interval should be must therefore weigh, on the one hand, the cost of more frequent calibration (including time lost) and, on the other, the cost of possibly worthless measurement results, re-measurements, recall actions, and so on.
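This trade-off can be sketched numerically. The model below, with entirely invented figures and a deliberately simple independent-months drift assumption, compares the expected annual cost of different calibration intervals; it is an illustration of the reasoning, not a standard method.

```python
# Illustrative sketch (invented figures) of the trade-off described above:
# the yearly cost of calibrating more often vs. the expected cost of acting
# on measurements that later prove to be out of tolerance.

def annual_cost(interval_months, cal_cost, p_drift_per_month, failure_cost):
    """Expected yearly cost for a given calibration interval."""
    calibrations_per_year = 12 / interval_months
    # Chance the instrument drifts out of tolerance within one interval
    # (simple independent-months model -- an assumption, not a standard).
    p_bad_interval = 1 - (1 - p_drift_per_month) ** interval_months
    return calibrations_per_year * (cal_cost + p_bad_interval * failure_cost)

# Compare three candidate intervals with hypothetical costs (currency units)
costs = {months: annual_cost(months, cal_cost=400.0,
                             p_drift_per_month=0.01, failure_cost=20000.0)
         for months in (3, 6, 12)}
```

With these particular numbers the longer interval comes out cheaper; raise the failure cost or drift probability and the balance tips the other way, which is precisely the judgement the text describes.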

An important aspect here is the probability of changes in the measuring properties that may produce significant deviations between one calibration and the next. Qualitatively, it is easy to see that certain conditions may call for more frequent calibration: high operating hours (shift operation), extreme temperatures, long-term alternating-load operation of transducers, and dirt and moisture. However, producing quantitative statements about the equipment in use from manufacturer data would require comprehensive statistical data for each type of transducer or measurement electronics, which is normally not available. Instead, by continuously tracking the calibration results, the operator can obtain a very good picture of the long-term behaviour of the equipment under the operating conditions valid for the application.

In other words, if a measuring amplifier is used in a test bench where the operating conditions are harsh and the cost of measurement results later proving untrustworthy is high, it may be sensible to recalibrate after six or even three months. If, however, it is clear after the first or second recalibration that the measuring properties remain stable, it is probable that the amplifier will continue to do so, and the calibration interval can be lengthened. The procedure for deciding under which conditions the interval may be lengthened should be part of the QM system; equally, it should cover the shortening of intervals, e.g. due to wear or drift behaviour.
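The interval-adjustment policy just described can be expressed as a small rule: lengthen the interval while successive calibrations stay within the metrological tolerance, shorten it as soon as one falls outside. The function name, thresholds and month values below are illustrative assumptions for such a QM rule, not prescriptions from any standard.

```python
# Hypothetical sketch of the lengthen/shorten policy described above.
# Limits and month values are invented for illustration.

def next_interval_months(current_months, deviation, tolerance,
                         min_months=3, max_months=24):
    """Return the recommended interval until the next calibration."""
    if abs(deviation) > tolerance:
        # Results since the last calibration are questionable: shorten.
        return max(min_months, current_months // 2)
    # Measuring properties proved stable: cautiously lengthen.
    return min(max_months, current_months * 2)

interval = 6  # months; initial choice for a hard-used test-bench amplifier
for deviation in [0.02, 0.01, 0.15]:  # % deviation found at each recalibration
    interval = next_interval_months(interval, deviation, tolerance=0.1)
```

After two stable recalibrations the interval grows from 6 to 24 months; the third, out-of-tolerance result halves it again, mirroring the procedure the QM system should document.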

Comparison measurements among several calibrated test instruments are another decision-making aid for adapting calibration intervals; for example, if a test laboratory uses several force transducers and has the equipment needed for comparison measurements to hand. Such comparisons can show whether a calibration interval that may initially have been rather generously calculated should be shortened in an individual case.
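As a minimal sketch of such a comparison check, the snippet below flags the case where several transducers reading the same applied load disagree by more than an allowed tolerance, suggesting the interval was set too generously. All readings and the tolerance are invented figures.

```python
# Hypothetical sketch: several calibrated force transducers measuring the
# same applied load. A spread larger than the tolerance is a hint to
# shorten the calibration interval. Numbers are illustrative only.

def spread_exceeds_tolerance(readings, tolerance):
    """True if the span of simultaneous readings exceeds the tolerance."""
    return (max(readings) - min(readings)) > tolerance

readings_kn = [100.02, 99.97, 100.11, 99.94]  # four transducers, 100 kN load
flag = spread_exceeds_tolerance(readings_kn, tolerance=0.10)  # kN
```

A spread of 0.17 kN against a 0.10 kN tolerance would trip the flag here; a single out-of-family transducer can then be recalibrated individually rather than shortening the interval for the whole set.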

We should also mention that the significance of the operating conditions naturally means that a recalibration should be carried out whenever a measuring instrument is subjected to stresses outside its intended use. These range from substantial overloads, drops and extreme temperatures to interventions in the equipment for repair purposes.

The real significance of calibration

Calibration defines the accuracy and quality of the measurements recorded with a piece of equipment. In short, without calibration there is no recognised method of ensuring that equipment remains accurate between tests. Over time, results tend to ‘drift’, particularly with certain technologies or when measuring specific parameters such as temperature and humidity. To be confident in the results being measured, therefore, equipment must be serviced and its calibration maintained throughout its lifetime, to ensure reliable, accurate and repeatable measurements.

Founded in Germany in 1950, HBM is a market leader in the field of test and measurement. In 1977, the first accredited German calibration laboratory was established at HBM; since then, it has become one of the best known and most capable calibration labs of its type. Its accreditation to DIN EN ISO/IEC 17025 covers the measured quantities force, pressure, torque and voltage ratio (mV/V). HBM offers calibration to DAkkS standards as well as working-standard calibration traceable to ISO 10012.

If you are interested in having your data acquisition units or transducers calibrated, contact HBM on +44 (0) 20 8515 6000, email info@uk.hbm.co.uk, or visit the HBM website at www.hbm.com
