I’ve run into a couple of cases where a corporate continuous improvement team orders an initiative to “implement Toyota kata.”
Aside from trying to prescribe each and every step of the process ("you omitted step 7b"), which runs counter to the entire point of discovering the solution, they also expect reports of metrics related to the "implementation."
These metrics include things like:
• Number of coaches
• Number of coaching sessions
• Number of active improvement boards
• Learners’ scores across a dozen or so categories of specific attributes for their coaching sessions
Another issue is bureaucratic reporting structures that demand this information (and much, much more) be dutifully compiled and reported up to the continuous improvement office so it can monitor how each site is progressing (often from across an ocean).
I’ve seen this before.
I’ve seen companies try to count kaizen events and quantify the improvements for each one to justify the payback of the effort.
Similarly, I’ve seen top-level corporate leadership teams struggle to determine how to measure, at a glance, whether a site was "doing lean" to get its results, or getting those results by some other, less appropriate means.
…