What Gets Measured Gets Improved (Or Not)

July 1, 2010
A material handling consultant discovers that for too many companies, failure is most definitely an option. Learn how to establish measures that will work for you, not against you.

Conventional wisdom says that measurement is a necessary part of improvement, so you can track your gains. Sometimes just the act of measuring something leads to improvement, because the people doing the work put forth more effort. Sometimes, however, measurement can thwart improvement efforts in quite unexpected ways. Don't let your attempt to measure your results get you the wrong results.

We will look at measurement from two viewpoints:

  1. how to establish a measure that will work, and

  2. how to correct a measure that is producing undesirable results.

It may seem that performance measurement should be fairly simple to do, but a number of factors can lead not only to incorrect measures but also to incorrect actions.

Nearly every manufacturing company is interested in keeping scrap low, so they measure scrap daily. In many cases the source of the scrap data is a person writing a scrap ticket. Since that person knows that low scrap is desired, they will often “underestimate” the amount of scrap, especially if they will get in trouble for reporting a large quantity. This can lead management to believe there is minimal scrap, only to find that the inventory loss is going through the roof. This is a game played in many organizations, whereby everyone publicly conspires to falsify scrap performance, knowing that honesty is far more painful than deception.
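
To see how the deception eventually surfaces, here is a rough sketch in Python. The quantities are entirely made up, but the reconciliation is the kind the accountants eventually perform when the books no longer match the floor.

    # Minimal sketch (hypothetical numbers): why under-reported scrap
    # eventually shows up as an unexplained inventory loss.

    material_issued = 10_000       # pieces sent to the line
    good_parts_produced = 9_200    # pieces accepted
    scrap_reported = 150           # what the scrap tickets say

    # The scrap rate management sees, based on the tickets alone
    reported_scrap_rate = scrap_reported / material_issued

    # Book-to-physical reconciliation: material that is neither good
    # product nor reported scrap has simply vanished from the books.
    unexplained_loss = material_issued - good_parts_produced - scrap_reported

    print(f"Reported scrap rate: {reported_scrap_rate:.1%}")          # 1.5% -- looks great
    print(f"Unexplained inventory loss: {unexplained_loss} pieces")   # 650 pieces of hidden scrap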

I used to run a production department in an automotive metal stamping plant. It was easy to “hide” scrap metal parts by simply chopping them up and sending them out to the baler with the metal trimmings. One day my boss confronted me at the beginning of the shift with, “We have over $2.5 million in factory loss year to date. Do you know what that's from?” “Sure,” I responded, “we're cheating on scrap.” I was instructed by my red-faced boss to do something about it right away, and I met with each of my supervisors to stress the need to report scrap accurately.

My boss had previously held my job, where he had a reputation for running a tight operation with no tolerance for scrap. He would have each of his supervisors bring their scrap containers to the main aisle and would proceed to chew them out for their scrap. It didn't take very long before the scrap containers showed up empty or with only a few parts, often attributed to the setup person. My boss thought he was making a difference, but he was simply teaching his supervisors to lie to him.

As a result of this lying, significant scrap problems were neither recognized nor fixed because the overall scrap appeared so small. I eventually got our scrap numbers reported honestly and took a lot of heat from multiple sources, including an accountant who had to explain to the corporate office why our scrap had increased so much in the last month. He found that my explanation was not suitable to report to the home office.

“What Could Go Wrong?”

When a measurement is imposed, especially when people will be judged (or compensated) based on that measurement, you can expect things to change. If you aren't careful about it, the change will not be the one you wanted.

I was assigned the challenge of increasing on-time shipments at a furniture manufacturer. This company had six plants and a distribution center (DC) in west Michigan. Typically a customer order was split among multiple plants, which would send their product to the DC for consolidation and shipping. The manufacturing plants were measured by percent completions, which were typically 95% or better. The DC was measured by on-time shipment percentage, which often ranged from 65% to 75%.

Since the plants were doing so well, I decided to start my investigation at the DC. I discovered that on Monday mornings the DC had over 98% of the product it needed to ship, yet could not ship most orders because almost every order was missing something. As the week progressed, more orders were shipped as the plants sent in their delinquent items.
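
A quick sketch with invented orders shows how a very high line-item fill rate can coexist with almost nothing being ready to ship; the order sizes below are hypothetical.

    # Minimal sketch (synthetic orders): a 98% line-item fill rate can
    # coincide with almost no orders being complete enough to ship.

    # 100 customer orders, each consolidating 50 line items from the plants,
    # and each missing exactly one item (made-up data, mirroring the
    # Monday-morning situation described above).
    orders = [{"needed": 50, "on_hand": 49} for _ in range(100)]

    items_needed = sum(o["needed"] for o in orders)
    items_on_hand = sum(o["on_hand"] for o in orders)
    complete_orders = sum(o["on_hand"] >= o["needed"] for o in orders)

    print(f"Line-item fill rate: {items_on_hand / items_needed:.0%}")    # 98%
    print(f"Orders ready to ship: {complete_orders / len(orders):.0%}")  # 0%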

Something seemed very suspicious in the plant reporting, so I obtained reports of all of the missing product from the plants and reviewed them with each plant manager. They all agreed that the list of missing product was correct. When I accounted for the product that was actually missing, I calculated 85% completions, yet the plants were reporting 98%. Why? They made “adjustments” for situations they felt were out of their control, which amounted to nearly everything. They viewed the completion number as a “blame number” rather than an honest measure of performance. I had uncovered a “dirty little secret” that resulted in some painful explaining and some stern redirection from their VP.
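
The arithmetic of those “adjustments” is simple enough to sketch. The quantities below are made up, but they show how an 85% actual completion rate can be reported as 98%.

    # Minimal sketch (hypothetical quantities): how "adjustments" turn an
    # 85% actual completion rate into a reported 98%.

    items_scheduled = 1000
    items_completed_on_time = 850              # what actually reached the DC on time
    items_excused_as_out_of_our_control = 130  # "adjusted" out of the denominator

    actual_completion = items_completed_on_time / items_scheduled
    adjusted_completion = items_completed_on_time / (
        items_scheduled - items_excused_as_out_of_our_control
    )

    print(f"Actual completion:   {actual_completion:.0%}")    # 85%
    print(f"Reported completion: {adjusted_completion:.0%}")  # 98%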

Sometimes the way the measurement is calculated can be a source of confusion. At that same furniture company, I was helping with the lean manufacturing transformation. We were encouraging our plants to reduce inventory and measured them by inventory turns. Our intention was to reduce the amount of inventory and to speed up the flow of material into shipped product. “Inventory turns” was simply a convenient measure that seemed to fit the need. For raw materials, inventory turns are calculated as the value of actual material used divided by the value of raw materials inventory. With such a simple calculation, using readily available accounting data, what could possibly go wrong?

One plant made a dramatic improvement in inventory turns without a noticeable difference in its operations. Upon investigation I found that they had worked out deals with their major suppliers to receive raw material on consignment, not owning it until it was used. As a result, the amount of raw material inventory “on the books” dropped dramatically, even though the actual amount of inventory had not changed. It may have been a good idea to receive the material on consignment, but it totally dodged the intent of the measure. We had to change the measure in order to obtain the results we wanted.
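
A short sketch with invented values shows both the calculation and the consignment loophole; the dollar figures and the consignment split are hypothetical.

    # Minimal sketch (made-up values): the inventory-turns formula from above,
    # and how a consignment deal improves the number without moving a single pallet.

    def inventory_turns(material_used_value, raw_inventory_value):
        """Value of actual material used / value of raw materials inventory."""
        return material_used_value / raw_inventory_value

    material_used = 1_200_000       # value of material consumed in the period
    physical_inventory = 400_000    # value of raw material physically in the plant

    # Before the consignment deal: the plant owns everything on the floor.
    print(inventory_turns(material_used, physical_inventory))   # 3.0 turns

    # After the consignment deal: the same material sits in the same racks,
    # but most of it stays on the supplier's books until it is used.
    owned_inventory = physical_inventory * 0.25
    print(inventory_turns(material_used, owned_inventory))      # 12.0 turns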

Measures Drive Behavior

In order to make sure a measurement accomplishes what you desire, follow these steps:

  1. Write down, as simply and clearly as possible, exactly what you want to happen.

  2. Identify why that is important to your organization.

  3. Identify what you would propose to measure as a performance indicator and how that measurement would be accomplished.

  4. Identify who might look bad as a result of that measure.

  5. Identify how those people affected might corrupt the measure in order to look better or avoid looking bad.

    a. Ways that the measure might be achieved without making the desired improvement.

    b. Ways that achieving the measure might create undesirable results.

  6. Improve the measure to close any loopholes and expose or stop “game playing.”

Items 1 and 2 help you get clear about your objective. When you pick a measure you should expect that you will be asked to explain the reason for that particular measure, so this will help clarify your thinking before you are put on the spot.

When considering item 3, be very open to alternatives. There may be many ways to measure the same objective, with some likely to be better than others.

If your measure works and does drive improvement, it may not be good for everyone in the organization. Item 4 will lead you to seek out the likely opponents to your measure. These opponents might be those directly injured by your proposed measure, such as competing departments or just individuals who might be shown to be poor performers by the measure. If you can engage these people in working through the implementation, you might end up with a measure that doesn't have opponents.

For instance, the best way to measure productivity for lift truck drivers might be very different in a production area than on a loading dock. Imposing a single measure might favor one over the other in an undesirable way. Through collaboration with all parties you might devise a better overall approach, different measures for different situations, or a more localized scope for the measurement, applying it only where it will work well.

Measures will drive behavior — both good and bad. I have found that when an organization is having a serious problem and everyone seems to be doing the right things, often there is a measure or incentive that is driving bad results.

I was asked to find and break the bottleneck in a manufacturing plant, which made a single type of product in a variety of styles and sizes. For reasons that were unclear, they were suddenly falling further and further behind on their shipments, while their schedules had more and more short runs, reducing economies of scale and increasing the number of setup changes. Other than complaining about late shipments, the customers seemed to be ordering as they always had been.

Initially it seemed they needed more equipment, but that was quickly disproved. I started to look at how supervisors chose to execute their respective production schedules. Each schedule was simply a list of products and quantities needed for the week. The supervisors had latitude to choose the sequence that would work best, which allowed them to group like items together. They also had visibility into future requirements, which could be pulled into the current week if needed for better efficiency.

It turned out that the culprit was the way the supervisors (and the plant) were measured: the number of units produced each day. That simple measure was causing a lot of trouble. Consider item 5b: how could the measure create undesirable results? There was no requirement that everything on the schedule actually be made on a particular day, so supervisors would cherry-pick the schedule, run the larger-quantity items, and pull work from future dates to boost their totals. As a result, the small-quantity items were deferred to tomorrow, then to next week, and so on. Of course, any plant manager who allowed this to persist was badly mismanaging the plant, and he was released to pursue other opportunities. The new plant manager sought my help with what appeared to be a confusing mess.

We made a simple change to the measure and set a few new rules, which had the plant back on track in several weeks. Specifically, we forced all past-due orders to be top priority: if material was available, past-due orders had to be filled first. We concurrently expedited material for those past-due orders.

At first nearly half of the schedule was past-due orders, and supervisors accused me of sabotaging the plant, but as we worked through the backlog the pain subsided and we got back on track. The key change was that the performance measure (number of units produced) no longer counted items pulled from a future schedule unless all current orders were filled.
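
Roughly speaking, the revised measure could be computed like the sketch below. The order records and field names are hypothetical; the rule it illustrates is the one described above: pulled-ahead future work earns no credit while current or past-due orders remain unfilled.

    # Minimal sketch (hypothetical order records): the revised daily measure
    # only credits pulled-ahead future work once nothing current or past due
    # remains unfilled.

    def credited_units(produced, open_orders):
        """Count units produced today, excluding units pulled from a future
        schedule unless every current and past-due order has been filled."""
        backlog_remains = any(o["due"] in ("past_due", "current") and not o["filled"]
                              for o in open_orders)
        credit = 0
        for p in produced:
            if p["schedule"] == "future" and backlog_remains:
                continue  # cherry-picked future work earns no credit
            credit += p["units"]
        return credit

    produced_today = [
        {"schedule": "current", "units": 300},
        {"schedule": "future", "units": 500},   # pulled ahead for a long, easy run
    ]
    open_orders = [
        {"due": "past_due", "filled": False},   # small, fiddly order still waiting
        {"due": "current", "filled": True},
    ]

    print(credited_units(produced_today, open_orders))  # 300, not 800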

This situation would have been comical were it not real — and in the middle of all the confusion, the cause of the problem was just not clear. Business problems caused by bad measurements can be very insidious, but once exposed can often be easily remedied.

Remember, just by asking, “How are people measured (or rewarded or compensated)?” you can begin the process of checking out the measure.

Stay Alert

Once we have a good measurement system, we still have to be alert for problems. A system that works very well at the beginning of an improvement initiative may not work well as the effort matures. Here is a common situation, known as a virtuous circle, or reinforcing loop. An improvement program is initiated with an appropriate measure. People respond to the challenge and measurable improvements are seen. The cycle continues with great expectations, but it cannot last forever. Everything has limits to growth, especially something that seems to be working well.

Many improvement programs get off to a great start, making rapid gains as the low-hanging fruit or easy opportunities are attacked. Unfortunately, we often find that further improvement becomes more and more difficult.

This is a pivotal point for any improvement program. A number of paths could be taken, to name a few:

  • Urge everyone to try harder.

  • Accept declining performance (very unpopular with management).

  • Commit additional resources to improvement.

  • Re-define the objective.

Re-defining the objective is a slippery way to play games with the measurement system. When the improvement objective is even slightly ambiguous, there is room for this game.

A chemical company was trying to emphasize innovation, so it mandated that 20% of all revenues had to come from products less than five years old. That was easy in the beginning, but soon the R&D department came under a lot of pressure to keep creating innovative new products. Some clever R&D chemists found that it was easier to innovate the definition of “new” than to innovate the product. As a result, they made minor formula changes, producing nearly identical products with new names and product numbers. This is obviously not what company leadership had in mind, but that same leadership was happy about the revenue coming from “new” products.
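
The metric itself is trivial to compute, which is part of the problem. Here is a sketch with invented products showing how a renamed formula resets the five-year clock; the product names, dates, and revenues are all made up.

    # Minimal sketch (invented product data): the 20%-from-new-products metric,
    # and why renaming a product resets the clock that drives it.

    from datetime import date

    def new_product_revenue_share(products, today, window_years=5):
        """Fraction of revenue from products introduced within the window."""
        new_rev = sum(p["revenue"] for p in products
                      if (today - p["introduced"]).days < window_years * 365)
        total_rev = sum(p["revenue"] for p in products)
        return new_rev / total_rev

    products = [
        {"name": "Adhesive 100",  "introduced": date(2001, 3, 1), "revenue": 800_000},
        # Same chemistry, minor formula tweak, new name and product number:
        {"name": "Adhesive 100X", "introduced": date(2009, 6, 1), "revenue": 200_000},
    ]

    print(f"{new_product_revenue_share(products, date(2010, 7, 1)):.0%}")  # 20% "new"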

Because measurement drives behavior, make sure your measurements are crafted to produce the behaviors you want. Beat the measurement “game players” at their own game!

William (Bill) Eureka is president of consulting firm EurekaResults.com, based in Lowell, Mich.