The Dice Game of “Velocity” – Part 1

November 22, 2010

I have just finished reading “Velocity: Combining Lean, Six Sigma and the Theory of Constraints to Achieve Breakthrough Performance – A Business Novel” on my Kindle. The author, Jeff Cox, is the co-author of “The Goal”. This time the story is about Amy, the newly named president of the Hi-T Composites Company, who cannot get any bottom-line improvement after implementing Lean Six Sigma for a year. In the end, she convinces her team to combine TOC with the LSS approach in order to achieve and exceed the bottom-line goal.

A critical piece of the story is a dice game. It is this dice game that finally gets everyone on the same page, including Wayne, the stubborn LSS guy, who changes his approach. A key insight is to abandon the balanced-line approach that Wayne has been working toward. The team finally agrees to change to an unbalanced line with everything synchronized to the bottleneck.

In the book, Amy was betting her career on this dice game, both to convince her staff and to generate the same results in actual production. It worked out that way in the novel. But in practice, would you bet your career on a dice game? I cannot help but ask the following questions:

  • How repeatable are the results of the dice game described in the novel? How sound are the statistics behind it?
  • How closely does the game resemble a real production line? What are the limitations? Under what conditions would the TOC approach (Drum-Buffer-Rope) work better or worse?
  • Under what conditions does a balanced line with takt time work better or worse than an unbalanced line? How do we quantify the variability in order to determine which approach to use?

The book leaves these questions unanswered. That means these theories may or may not work in your reality. In order to better understand these questions, I intend to use simulation and analytic techniques to explore further. Stay tuned.

In Scenario 1, a balanced line is simulated in which everyone starts with a single die (same capacity) and the same 4 pennies (initial buffer size).

[Figure: balanced-line simulation results]
In this simulation, WIP has increased from 20 to 26 by the 20th round and the total output is 62 pennies. This “throughput” number can be compared to 70 pennies, which is the average die roll (3.5) times 20 rounds. The output is in general less than 70 because of throughput lost as a result of variability.
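Although the book gives no code, the game is easy to reproduce. Below is a minimal Python sketch; the simulate() function and its parameter names are my own construction, and the layout of six players (Amy releasing raw pennies into a line of five processing stations, 4 pennies in front of each, matching the starting WIP of 20) is an assumption inferred from the numbers in the book:

```python
import random

def simulate(station_dice, init_buffers, rounds=20, rope=None,
             bottleneck_roll=None, seed=1):
    """Penny-game sketch (my reconstruction, not the book's code).

    station_dice    -- dice per player: [gate (Amy), station 1, ...].
    init_buffers    -- starting pennies in front of each processing station.
    rope            -- index (into station_dice) of the drum station whose
                       roll also sets the gate release (Drum-Buffer-Rope).
    bottleneck_roll -- optional function rng -> int that replaces the drum's
                       normal roll (e.g. an improved-yield mapping).
    Returns (final WIP, total output) after the given number of rounds.
    """
    rng = random.Random(seed)
    buffers = list(init_buffers)
    output = 0

    def roll(n_dice):
        return sum(rng.randint(1, 6) for _ in range(n_dice))

    for _ in range(rounds):
        rolls = [roll(n) for n in station_dice]
        if rope is not None:
            if bottleneck_roll is not None:
                rolls[rope] = bottleneck_roll(rng)
            rolls[0] = rolls[rope]   # the rope: release matches the drum
        # Move pennies downstream, last station first, so a penny cannot
        # pass through two stations in the same round.
        for i in range(len(buffers) - 1, -1, -1):
            moved = min(rolls[i + 1], buffers[i])
            buffers[i] -= moved
            if i + 1 < len(buffers):
                buffers[i + 1] += moved
            else:
                output += moved      # finished goods leave the line
        buffers[0] += rolls[0]       # gate releases new raw pennies
    return sum(buffers), output

# Scenario 1: balanced line -- Amy plus five stations, one die each,
# 4 pennies in front of every station (WIP starts at 20).
wip, out = simulate(station_dice=[1] * 6, init_buffers=[4] * 5)
print(wip, out)
```

Exact numbers vary from seed to seed, but the output typically lands below the 70-penny ceiling, in line with the book.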

In order to improve throughput, it is suggested to unbalance the line and create a constraint. Murphy is given only one die while everyone else is given two dice. The results look like the following:

[Figure: unbalanced-line simulation results]
This time WIP has increased from the initial 20 to 42 by the 20th round and the total output is 81 pennies. This is a significant throughput improvement, but it comes with high WIP, especially piled up at the bottleneck in front of Murphy.
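Using the simulate() sketch above, the unbalanced line is just a change of parameters; placing Murphy third among the five processing stations is my assumption, since the book does not pin down his position:

```python
# Everyone rolls two dice except Murphy (assumed to be the third of the
# five processing stations); initial buffers are unchanged.
wip, out = simulate(station_dice=[2, 2, 2, 1, 2, 2], init_buffers=[4] * 5)
```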

In order to further improve performance, the DBR (Drum-Buffer-Rope) method is introduced. In this case, Amy’s dice are taken away, and she releases pennies to the line only according to the signal Murphy gives on what he rolls. In addition, Murphy is given a higher initial inventory buffer of 12 pennies.

[Figure: DBR simulation results]
This time WIP has actually decreased from 28 to 23 by the 20th round, and the total output is 91.
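In the sketch, the rope is modeled by overriding Amy’s release with Murphy’s roll each round, so her own dice count no longer matters; the 12-penny buffer goes in front of Murphy, which also reproduces the starting WIP of 28 (4 + 4 + 12 + 4 + 4):

```python
# DBR: Amy's dice are taken away (0 dice); her release is set each round
# to whatever Murphy (index 3) rolls, and Murphy's buffer starts at 12.
wip, out = simulate(station_dice=[0, 2, 2, 1, 2, 2],
                    init_buffers=[4, 4, 12, 4, 4], rope=3)
```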

In the final case, the team discusses improving the yield at the bottleneck through Lean and Six Sigma. In order to simulate this, Murphy’s die roll is mapped to a number between 4 and 6.

[Figure: improved-yield simulation results]
The results indicate that WIP stays low at 21 after 20 rounds, and throughput has been further improved to 110.
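The yield improvement can be modeled by remapping Murphy’s die. The book only says the roll maps to 4 through 6, so the pairing below (1–2 becomes 4, 3–4 becomes 5, 5–6 becomes 6) is one plausible reading:

```python
def improved(rng):
    # Improved bottleneck: Murphy's single die now yields 4, 5 or 6,
    # raising his mean from 3.5 to 5 while keeping some variability.
    return (rng.randint(1, 6) + 1) // 2 + 3

# Same DBR setup as before, with the remapped roll at the drum.
wip, out = simulate(station_dice=[0, 2, 2, 1, 2, 2],
                    init_buffers=[4, 4, 12, 4, 4], rope=3,
                    bottleneck_roll=improved)
```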

It is shown that the simulation described in the book is generally repeatable. The logic behind these calculations can be nicely summarized with a G/G/1 queue and solved with Markov chain analysis. We will discuss how practical these results are when applied to a real production line next time.
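For reference, one standard way to make the G/G/1 connection concrete is Kingman’s approximation for the mean time a penny waits in front of a station; this is textbook queueing theory rather than anything derived in the novel:

```latex
% Kingman's approximation for the mean wait in a G/G/1 queue:
%   \rho      utilization of the station
%   c_a, c_s  coefficients of variation of inter-arrival and service times
%   \tau      mean effective process time
W_q \approx \left( \frac{\rho}{1 - \rho} \right)
            \left( \frac{c_a^2 + c_s^2}{2} \right) \tau
```

For a single fair die, the mean is 3.5 and the variance is 35/12, giving a squared coefficient of variation of about 0.24; the formula shows how this variability, combined with high utilization, inflates queues even on a perfectly balanced line.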
