Archive

Archive for the ‘6 Sigma’ Category

Boston Bombing, Earthquake in China and Cost of Quality (COQ)

April 23, 2013

The Powerful Power Law

 

What do last week's devastating events, the Boston bombing and the earthquake that struck Sichuan, China, have in common with the nature of COQ at a manufacturing company? In fact, all of them are observed to follow a simple statistical rule called the power law. Simply put, plotting the logarithm of the magnitude of the events against the logarithm of the probability of occurrence results in a straight line with a negative slope. In the case of a terrorist event, the magnitude can be measured by the number of casualties, and a number of studies have shown that this obeys the power law. In the case of an earthquake, the relationship between magnitude and the probability of occurrence in a given time and region is described by the Gutenberg-Richter law, a type of power law distribution.

How are these related to COQ? The figure below is an analysis of one year of warranty claim data from an automotive tier 1 supplier.

This data set indicates that the larger claims (above $10,000) follow the power law very well. The circled area contains smaller claims, whose lower-than-predicted occurrence most likely indicates that many smaller defects have slipped through the system. Empirical earthquake data typically demonstrates similar behavior, known as "roll-off". Assuming these data are representative, the power constant is approximately -1. This means that claims above $100K occur about 100 times a year, claims above $1M about 10 times a year, and claims above $10M about once a year.
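As a quick illustration, here is a minimal Python sketch that fits the log-log slope from the rounded figures quoted above (not the raw data set) and recovers a power constant near -1:

```python
import numpy as np

# Rounded figures from the post: claims above $100K ~100/yr,
# above $1M ~10/yr, above $10M ~1/yr.
claim_size = np.array([1e5, 1e6, 1e7])      # threshold in dollars
count_per_year = np.array([100, 10, 1])     # occurrences above each threshold

# On a log-log plot a power law is a straight line; fit its slope.
slope, intercept = np.polyfit(np.log10(claim_size),
                              np.log10(count_per_year), deg=1)
print(f"power constant ~ {slope:.1f}")      # -> power constant ~ -1.0
```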

Studies on terror events all over the world have found a very similar relationship between casualties and the probability of occurrence. In fact, the power constant for terrorism is found to be about -2.5. In other words, an event with 200 or more casualties, such as the Boston bombing, is approximately 10^2.5 = 316 times more likely than an event with 2,000 or more casualties, such as September 11.

Why do important quality events exhibit Power Law behavior?

 

There are two main reasons, both resulting from the network nature of the manufacturing supply chain.

  1. Interdependency – Supply chain elements are highly interdependent. For example, early in my career as a storage media quality engineer, there was an incident in which a small crack was discovered one day in the glass furnace at a remote factory in Japan. This turned out to be a devastating event because that furnace was the only one making glass substrate for the storage media used in multiple brands of magnetic disk drives, and those drives were supplied to make servers and PCs. That small crack stalled the entire server and PC supply chain for days, costing millions of dollars.
  2. Positive feedback – An example of how positive feedback works is Toyota's "unintended acceleration" case, which ended up costing Toyota over a billion dollars. At first those were considered isolated cases, but as more cases were suspected to be connected, Toyota identified the floor mats from certain suppliers as a potential root cause. The number of reports increased as the publicity of the case increased, which in turn increased the suspicion that Toyota was hiding something. Toyota was called before Congress for hearings and later fined about $1.1B, even though there had been no proof relating the unintended acceleration cases to any electronic or software defects. Each cycle of litigation and probes reinforced the public's suspicion that something was wrong with Toyota, to the point of an avalanche, even though no major defects were identified by those investigations.

Six Sigma and the Power Law

 

This power law behavior of COQ offers important insights on how quality executives should deal with important quality events. It is particularly counter-intuitive to many quality professionals who have gone through six sigma training or are themselves six sigma professionals. The foundation of six sigma is built on the normal distribution, or the bell curve. COQ, however, follows the power distribution, not the normal distribution. Here are some major differences.

  • There is no average – In other words, it is meaningless to talk about the average size of a warranty claim. Unlike the normal distribution, a sufficiently heavy-tailed power distribution has no meaningful average value.
  • The most important data points are the outliers – In our data set, the top 10 claims out of 412 contributed more than 50% of the total warranty cost. These large claims are the outliers that are typically ignored by six-sigma methodology (see the sketch after this list).
  • Black swan events occur – The term was popularized by Nassim Nicholas Taleb to describe highly unlikely events that determine the course of human history. According to the above data set and the underlying power law, a warranty claim that costs over a billion dollars occurs about once every century. Such an event, though rare, can easily lead to the termination of the responsible executives or even bankruptcy of the business.
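To see how heavily the tail dominates, here is a minimal sketch, assuming claims follow a Pareto distribution whose log-log slope is the -1 estimated above (the $1,000 minimum claim size is an illustrative assumption, not from the data set):

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 1.0       # tail exponent, matching the power constant of -1 above
x_min = 1_000.0   # assumed smallest claim size in dollars
# Standard recipe for Pareto samples with a minimum value of x_min:
claims = x_min * (1.0 + rng.pareto(alpha, size=412))

top10 = np.sort(claims)[::-1][:10]
print(f"top 10 of 412 claims carry {top10.sum() / claims.sum():.0%} "
      f"of total warranty cost")
```

Run this with a few different seeds and the top 10 claims routinely carry half or more of the total cost, echoing the real data set.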

The Power Law Strategy

 

Security gates alone cannot eliminate terrorist events, so government bodies also run drills and set up early warning systems to reduce the risk. Similar methods can be applied to catching quality defects.

In order to tackle the power law phenomenon, a strategy is needed that addresses its fundamental elements. This involves three major steps:

  1. Enable track and trace of the interdependency of the supply chain.
  2. Once interdependency tracking is established, conduct further analysis that enables early warning (for example, using Big Data technology) based on the interdependency. Warning signals detected need to be tied to a series of actions that involve PDCA cycles.
  3. Establish a containment strategy to respond quickly to quality events before their effects are amplified by positive feedback.

These measures significantly lower the probability of isolated events escalating into catastrophic events through self-reinforcing cycles of positive feedback. It is worth noting that traditional ROI analysis based on average annual return can rarely justify investment in such strategies and solutions. When dealing with the potential catastrophic effects of the power law, an executive decision is required to set organizational direction. Seeking an average annual return on such an investment simply does not make sense in the world of black swan events.
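As a toy illustration of step 1, the sketch below (all names are hypothetical) represents supply-chain interdependency as a component-to-suppliers map and flags single-source components, the kind of early-warning signal that would have caught the lone glass furnace above:

```python
# Hypothetical component -> set of qualified sources
supplies = {
    "glass_substrate": {"furnace_jp"},            # single source worldwide
    "disk_drive":      {"vendor_a", "vendor_b"},
    "server":          {"assembly_line_1"},
}

def single_points_of_failure(deps):
    """Components with exactly one source: a failure there has no fallback."""
    return [part for part, sources in deps.items() if len(sources) == 1]

print(single_points_of_failure(supplies))   # -> ['glass_substrate', 'server']
```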

 

 


The Dice Game of “Velocity” – Part 1

November 22, 2010

I have just finished reading "Velocity: Combining Lean, Six Sigma and the Theory of Constraints to Achieve Breakthrough Performance – A Business Novel" on my Kindle. The author, Jeff Cox, is the co-author of "The Goal". This time the story is about Amy, the newly named president of Hi-T Composites Company, who could not get any bottom-line improvement after implementing Lean Six Sigma for a year. In the end, she convinced her team to combine TOC with the LSS approach in order to achieve and exceed the bottom-line goal.

A critical piece of the story is a dice game. It is this dice game that finally got everyone on the same page, including Wayne, the stubborn LSS guy, and convinced him to change his approach. A key insight is to abandon the balanced-line approach Wayne had been working on. The team finally agreed to change to an unbalanced line with everything synchronized to the bottleneck.

In the book, Amy was betting her career on this dice game, both to convince her staff and to generate the same results in actual production. It worked out that way in the novel. But in practice, would you bet your career on a dice game? I cannot help but ask the following questions:

  • How repeatable are the results of the dice game described in the novel? How sound are the statistics behind it?
  • How closely does the game resemble a real production line? What are the limitations? Under what conditions would the TOC approach (Drum-Buffer-Rope) work better or worse?
  • Under what conditions does a balanced line with takt time work better or worse than an unbalanced line? How can the variability be quantified in order to determine which approach to use?

The book leaves these questions unanswered. That means these theories may or may not work in your reality. In order to better understand these questions, I intend to use simulation and analytic techniques to explore further. Stay tuned.

In Scenario 1, a balanced line is simulated, with everyone starting with a single die (same capacity) and the same 4 pennies (initial buffer size).


In this simulation, WIP has increased from 20 to 26 by the 20th round, and the total output is 62 pennies. This "throughput" number can be compared to 70 pennies, which is the average dice roll (3.5) times 20 rounds. The output of 62 is generally less than 70 because of throughput lost as a result of variability.
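For readers who want to try this at home, here is a minimal Python sketch of the game under my assumed rules (six stations in series; station 0 releases its full roll from an unlimited raw-material supply; every later station passes at most min(roll, buffer) pennies downstream; the exact table rules in the novel may differ):

```python
import random

def simulate(rounds=20, dice=(1, 1, 1, 1, 1, 1), init_buffer=4, seed=7):
    """Dice game sketch: returns (ending WIP, total output)."""
    rng = random.Random(seed)
    n = len(dice)
    buffers = [0] + [init_buffer] * (n - 1)   # WIP waiting before stations 1..n-1
    output = 0
    for _ in range(rounds):
        start = buffers[:]                    # everyone rolls simultaneously
        for i in range(n):
            roll = sum(rng.randint(1, 6) for _ in range(dice[i]))
            moved = roll if i == 0 else min(roll, start[i])
            if i > 0:
                buffers[i] -= moved
            if i + 1 < n:
                buffers[i + 1] += moved
            else:
                output += moved               # pennies leaving the line
    return sum(buffers), output

wip, total = simulate()
print(f"ending WIP: {wip}, output after 20 rounds: {total}")
```

Averaged over many seeds, this should reproduce the qualitative pattern above: output below the 70-penny average and WIP drifting upward.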

In order to improve the throughput, it was suggested to unbalance the line and create a constraint. Murphy is given only 1 die while everyone else is given 2 dice. The results look like the following:


This time WIP has increased from the initial 20 to 42 by the 20th round, and the total output is 81 pennies. This is a significant throughput improvement, but it comes with high WIP, especially around the bottleneck in front of Murphy.
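Under the assumed rules of the sketch above, this unbalanced scenario is just a re-parameterization, e.g. `simulate(dice=(2, 2, 2, 1, 2, 2))`; note that Murphy's position in the line and how Amy's doubled dice interact with the release rule are my assumptions, since the book's exact table setup is not reproduced here.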

In order to further improve the performance, the DBR (Drum-Buffer-Rope) method is introduced. In this case, Amy's dice are taken away, and she releases pennies to the line only according to the signal Murphy gives on what he rolls. In addition, Murphy is given a higher initial inventory buffer of 12 pennies.
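In the sketch's terms, the "rope" simply ties Amy's release to the constraint's roll each round. Here is a hedged variant (Murphy's position is assumed; the starting buffers of 4+4+12+4+4 = 28 pennies are chosen to match the WIP figure quoted below):

```python
import random

def simulate_dbr(rounds=20, dice=(0, 2, 2, 1, 2, 2),
                 buffers=(0, 4, 4, 12, 4, 4), seed=7):
    """DBR variant: Amy (station 0) has no dice of her own and releases
    exactly what Murphy (the constraint) rolls each round."""
    rng = random.Random(seed)
    n = len(dice)
    buf = list(buffers)
    constraint = dice.index(1)                # the station with a single die
    output = 0
    for _ in range(rounds):
        rolls = [sum(rng.randint(1, 6) for _ in range(d)) for d in dice]
        rolls[0] = rolls[constraint]          # the rope: release = drum's roll
        start = buf[:]
        for i in range(n):
            moved = rolls[i] if i == 0 else min(rolls[i], start[i])
            if i > 0:
                buf[i] -= moved
            if i + 1 < n:
                buf[i + 1] += moved
            else:
                output += moved
    return sum(buf), output

print(simulate_dbr())
```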


This time WIP has actually decreased from 28 to 23 by the 20th round, and the total output is 91.

In the final case, the team discussed improving the yield at the bottleneck through Lean and Six Sigma. To simulate this, Murphy's dice rolls are mapped to numbers between 4 and 6.
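One way to realize that mapping in the simulation sketch (my assumption; the book may remap differently) is to fold a standard roll onto 4 to 6, which raises Murphy's average from 3.5 to 5 per round:

```python
def improved_roll(r):
    """Map a standard die roll 1..6 onto 4..6: 1,2 -> 4; 3,4 -> 5; 5,6 -> 6."""
    return (r - 1) // 2 + 4
```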


The results indicate that WIP stayed low at 21 after 20 rounds, and the throughput has been further improved to 110.

It appears that the simulation described in the book is generally repeatable. The logic behind these calculations can be nicely summarized as a G/G/1 queue and solved with Markov chain analysis. Next time we will discuss how practical these results are when applied to a real production line.
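For reference, a quick analytic handle on such a line is Kingman's approximation for the mean waiting time in a G/G/1 queue:

W_q ≈ (ρ / (1 − ρ)) × ((c_a² + c_s²) / 2) × τ

where ρ is the station's utilization, c_a and c_s are the coefficients of variation of the inter-arrival and service times (the dice supply both), and τ is the mean service time. It makes the dice game's lesson explicit: waiting time, and hence WIP, blows up as a perfectly balanced line pushes ρ toward 1 in the presence of variability.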

Golf Lessons from Lean, Six-Sigma and TOC

November 14, 2010

After taking lessons from several coaches, I noticed some fundamental differences between their approaches. My current coach is very good at giving one-point advice based on my swing. Although one day I would like to swing like Ernie Els, for now I have settled for my ugly swing and am happy to see notable score improvement after every lesson. That is quite different from the lessons my friend took. His coach basically asked him to forget everything he had learned and tried to revolutionize his swing in order to take him to the next level. He is scared to go to the course now because he is stuck in a setback before he can get any better. He believes, however, that he is taking the necessary steps toward his goal of turning professional someday.

What are your long and short term goals and which approach is more suitable for you?

Lean:

You should focus on eliminating muda in your swing. Do not try to "push" the club head toward the ball; rather, let a synchronized body turn naturally "pull" the club head, achieving a smooth flow in your swing. The game of golf is a process of relentless continuous improvement. We generally do not recommend investing too much energy in your tools, because dependence on them frequently undermines the development of the correct mindset. If you focus on improving every little piece, your efforts will eventually show up in your score and hence your handicap, which should be not your end but a means to the way of golf.

Six-sigma:

Golf is a game of consistency. You should therefore focus on reducing the variability of your swing. We have a set of statistical tools to measure the defects of your swing, as well as scientific instruments to monitor and track your progress. You need to certify your skills from green belt to black belt. By leveraging the right tools with scientific measurement and objective feedback, you will ultimately bring your swing variability down to the six-sigma level.

TOC (Theory of Constraints):

You can maximize the return on your practice time by focusing on identifying and improving the bottleneck. At every stage of your skill development, there is a constraint that determines the throughput of your entire game. At one point it may be the grip, the address, the swing plane, the approach shot or the putt... but the point is that the bottleneck moves. By identifying the bottleneck and concentrating on it, you will be able to achieve a notable handicap reduction within the shortest time. While lean and six-sigma can get you closer to the "perfect" swing, TOC focuses on optimizing what you have already got in order to quickly improve your score.

Whatever approach you pick to improve your golf game or to help transform your manufacturing operations, you can benefit from applying technology that automatically records your current swing (or process) and then gives you instant feedback on what to improve. In my opinion, there is no better example than golf to illustrate how deceptive your actual execution can be relative to the best-intended plan.

The Neglected Law of 6-Sigma

September 18, 2010

I have just come back from a consulting engagement at a manufacturing plant. This plant is the most successful plant in a global enterprise. For the past 6 years they have been exceeding improvement targets in productivity and order fulfillment. Even in this economy, they have been turning out record profits. The marketing department loves to promote their products because of the high profit margin. Success has brought them more pressure, because any improvement they make directly impacts company performance. I couldn't help but ask: "Why would such a successful operation need any sort of consulting, and not just continue to do what has made them successful?"

This turns out to have something to do with the natural law of business processes.

Take the example of order fulfillment, which is their most important metric. They were at 70% six years ago. A yearly target of 5% improvement has taken them to around 96%. But now they are hitting a wall. Why should the last few percent be more difficult than the rest? The basics of 6-sigma cast light on this problem.

Going from 70% to 96% is the journey from 1 sigma to 2 sigma (for a centered process, about 68.3% of output falls within ±1σ and 95.4% within ±2σ). The natural law of business processes says that it requires the same or more effort to climb each additional sigma level. Frequently it is a totally different ball game, requiring significant resource investment, skill acquisition, technology advancement and cultural transformation to ramp up each sigma level.
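A minimal sketch makes this "natural law" concrete: each sigma step cuts the defect rate by roughly an order of magnitude or more, so equal-looking yield gains get progressively harder. (This uses the unshifted, centered-process coverage that matches the 70%-to-96% figures above; conventional six-sigma tables add a 1.5-sigma shift.)

```python
import math

def yield_within(k):
    """Fraction of a centered normal process within +/- k sigma."""
    return math.erf(k / math.sqrt(2))

for k in range(1, 7):
    y = yield_within(k)
    print(f"{k} sigma: yield {y:.4%}, defect rate {1 - y:.1e}")
```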

They have been relying on automation and Lean methodology for the past 6 years and have succeeded in the journey from 1 to 2 sigma. In order to get to 3 sigma, I have suggested that they start applying scientific and statistical tools to their business processes. Without taking more detailed measurements and applying appropriate quantitative methods along with Lean, there is a limit to how far they can further improve. It is indeed a different ball game, and they are more than ever in need of enabling information technology to get them the data and visibility required for scientific management. Years of neglect of IT investment in manufacturing may have come to the point of limiting further growth of the operation.

Does your operation set its key improvement targets, and the associated resource and infrastructure investment, based on a target 6-sigma level? How far can you go down the 6-sigma journey without implementing enabling information technologies?