Forget Internet of Things – The Real Business Transformation Explained

January 12, 2016

I took my 5-year-old to the Jet Propulsion Laboratory one day, and he was quite amazed by a shining exhibit meant to demonstrate how data are transmitted from space. Once upon a time, we too must have been amazed by the sight of those lights moving along, indicating live communication with a remote object. Now we all take the data transmission for granted; instead, it is the apps and cartoons about how technology has changed our lives that capture our attention today. One day we will look at the Internet of Things (IoT) in the same way.


In the past year, manufacturers around the world have raised many inquiries about buzzwords such as Industrie 4.0, Smart Manufacturing and IoT. While there has been plenty of information on the new technologies, less focus has been placed on the business transformation underway. What lies beyond the automation of existing tasks? What are the fundamental changes from a business perspective? How do you prioritize the transformation of different parts of the business?

To answer these questions, we need to better understand the mechanism underlying these changes. One way, I propose, is to look at these issues in light of “Smart Pull,” a concept that expands the traditional definition of Pull in Lean manufacturing into a new world enabled by digital technologies.

The mechanism of Pull processes – those triggered by an actual event instead of a forecast (Push) – is nothing new. It is at the heart of many successful manufacturing strategies, such as MTO (Make-to-Order) and JIT (Just-in-Time) models. In the new paradigm, digitization makes three major types of Pull possible.

Collaborative Pull – This refers to the ability to draw out people and resources inside and outside an organization to collaborate on addressing a need as it appears. Some of the key technology enablers are virtual prototyping, simulation, additive manufacturing, social networking and enterprise search. These technologies enable people and resources across the globe to be identified quickly to work on a design or to solve a particular challenge, and to do so with better efficiency.

Services-oriented Pull – This refers to the ability to deliver a capability that addresses a need as it appears. Famous examples are Uber’s personal transportation services and Airbnb’s lodging services. In each case, transportation or accommodation is delivered as a service to address a customer’s need through a social platform powered by Apps, and in many cases the pricing model is based on actual consumption of the service. MaaS, or Manufacturing-as-a-Service, is the consumption of manufacturing capacity as a service. Enabling technologies include IoT, intelligent sensors, cloud computing and social network platforms that carry out the transactions required to match sellers and consumers on demand.
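To make the idea concrete, here is a minimal sketch of the matching engine at the heart of such an on-demand platform. The greedy cheapest-provider rule and all the names below are purely illustrative assumptions, not how Uber or any real MaaS platform works:

```python
def match_requests(requests, providers):
    """Match each capacity request to the cheapest provider that still
    has enough spare capacity; capacity is consumed as it is pulled."""
    matches = {}
    for req_id, units in requests:
        candidates = [p for p in providers if p["capacity"] >= units]
        if not candidates:
            continue  # no provider can serve this request right now
        best = min(candidates, key=lambda p: p["price"])
        best["capacity"] -= units
        matches[req_id] = best["name"]
    return matches

# Hypothetical providers and requests for illustration only.
providers = [{"name": "A", "capacity": 100, "price": 5},
             {"name": "B", "capacity": 50, "price": 3}]
requests = [("r1", 40), ("r2", 40), ("r3", 80)]
matches = match_requests(requests, providers)  # r3 finds no spare capacity
```

A real platform would add dynamic pricing, logistics and trust mechanisms, but the Pull principle is the same: capacity is allocated only when an actual request appears, not against a forecast.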

Adaptive Pull – This is the most common type of Pull used in manufacturing and logistics operations. Within traditional Lean manufacturing, Pull-based processes are now powered by digital technologies to go beyond the elimination of over-production and inventory. Quality, maintenance, costing, procurement, production and inventory control can all leverage these types of Pull processes. For example, large volumes of real-time quality data can be gathered, analyzed and benchmarked across global production sites, and frontline workers and managers can be automatically notified based on risks identified from such actual data. This is again a form of digital Pull that triggers an action based on actual data instead of a forecast. Enabling technologies include intelligent sensors, M2M, IoT, Big Data analytics, BPM (Business Process Management), simulation and digital modeling.
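As a toy illustration of this kind of data-triggered Pull, the sketch below flags quality readings that drift outside a rolling baseline. The window size and the three-sigma rule are illustrative assumptions, not a prescription:

```python
from statistics import mean, stdev

def adaptive_pull_alerts(measurements, window=20, sigma=3.0):
    """Flag readings outside +/- sigma of a rolling baseline; each flag
    would pull an action (e.g. notifying a frontline worker) based on
    actual data rather than a forecast."""
    alerts = []
    for i in range(window, len(measurements)):
        baseline = measurements[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd > 0 and abs(measurements[i] - mu) > sigma * sd:
            alerts.append((i, measurements[i]))
    return alerts

# A stable process with one gross deviation at index 20.
readings = [9.9, 10.1] * 10 + [25.0] + [9.9, 10.1] * 3
alerts = adaptive_pull_alerts(readings)  # -> [(20, 25.0)]
```

In a real deployment the rule would be tuned per process and fed from intelligent sensors, but the principle is the one described above: the actual data, not a forecast, triggers the action.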

These different types of Pull processes can be mixed and matched to create new customer experiences or to pursue new levels of efficiency. An example of mixed Pull is the agricultural equipment manufacturer Kubota. They put sensors on their agricultural equipment to gather usage data in the field. Based on the data collected and analyzed, they now offer customers a value-added service on how to better optimize their farming operations. At the same time, the information is used to understand demand patterns, letting Kubota know when to sell what types of equipment, and to which type of user. In this scenario, an equipment manufacturer and farmers work collaboratively to optimize farming operations (Collaborative Pull). The data analysis can be sold as a service (Services-oriented Pull). The selling of additional equipment based on actual demand is also a type of demand Pull (Adaptive Pull).

The advantage of thinking in terms of this “Smart Pull” concept is that it takes the focus away from technology and puts it on business transformation. Challenges and inefficiencies rooted in Push then present themselves as opportunities. You can start by asking the following questions to identify areas of opportunity.

  • What kind of improvement in capital lockup or wait time has the most impact on the business?
  • Can these areas be improved by Pull?
  • Which parts of your customer engagement or operational inefficiency can be addressed by changing from Push to Pull?

Perhaps the day will come when design, engineering, manufacturing, logistics and after-sales resources are all available as services to be called upon on demand, just as Uber has done for transportation resources. A consumer could then use his or her phone to customize an order that meets his or her special needs, and all the necessary industrial resources could be orchestrated – on demand – based on Pull to fulfill each specific order. The concept of “Smart Pull” is truly revolutionary, given its role in helping bring these business transformations to market.


The Post Industry 4.0 World

August 14, 2015

Veerle from France was on a business trip in California, where she met her colleague Linda, who was wearing a silver bracelet with a spiral design that caught her attention. She quickly took a picture, posted it on her public Facebook page as a surprise find, and commented that while it was a really nice piece of artwork, her skin might be allergic to silver, so she would not be able to wear it. She then attended a business meeting on the topic of Industry 4.0 that, although she did not realize it at the time, could change her whole experience of her fondness for accessories.

Hype or Hope?

There have been many interpretations and messages flooding the media about the 4th industrial revolution. Depending on the agenda of each technology provider, automation vendor, system integrator or consulting partner, the emphasis has differed. Some of the common themes include:

  • Integration from topfloor to shopfloor
  • Integration across supply chain
  • Integration between manufacturing and engineering
  • Shifting to a services-oriented business model
  • Installation of new robots and automated equipment
  • Driven by the Internet of Things (IoT)

And hey, weren’t we already doing all these things before someone labelled them Industry 4.0? So is there anything truly revolutionary here, or is this just more marketing hype? Adding to the confusion, the terminology came from a German government initiative, and many countries have followed suit with similar initiatives of their own: China 2025, La nouvelle France industrielle, the Smart Manufacturing Leadership Coalition (US), the Robot Revolution Initiative Council (Japan)… to name but a few. All these initiatives are backed by government resources. What is the common core concept driving all these initiatives with their different names?

This is not a revolution that has already happened. It is about groups of organizations committing resources to start a revolution. Many who do not understand the core concept and its endgame easily jump to the conclusion that this is nothing more than abstract marketing hype without any substance behind it.

It is 2008 All Over Again

To judge whether this could be truly revolutionary, I believe one should fast-forward and take a look at the new industrial world that is in the making. The industrial world today is somewhat like the consumer world 7 years ago, before cell phones became “smart” and when mobile tablets were almost unknown. Very few could have seen why we would need our mobile devices to connect to the internet for anything other than reading email, and most were content to live without Apps, social networks and all the other gadgets, such as watches, connected to their phones. Back then, some of the most advanced phones were made in Japan, and names like Nokia and Blackberry were the dominant forces in the cell phone market. Few foresaw the coming of iOS and Android as the dominant software platforms that would eventually push out the phone giants who did not adapt. It has become a world that is all about the Apps: phones that cannot run the killer apps won’t sell despite superior HW capability. Billions of dollars in transactions now run on these platforms, touching almost every part of our daily lives.

The Brave New World

In the Post Industry 4.0 world, industrial Apps will run on a few dominant software platforms that orchestrate smart products, people, devices, sensors, production cells, robots, lines and factories, all of which will have not only their own IP addresses and smart built-in logic but also the capability to collaborate with each other through a set of standards and protocols. Manufacturing a product could mean running an App on an operating system platform that coordinates all the manufacturing resources globally, on demand. Production lines will be so flexible and adaptable that they are no longer lines at all but individual cells that reconfigure themselves according to each product, which carries its own specifications and bill of materials. Every product coming off the line is hence custom built according to demand. Any unplanned interruption, such as a quality issue, a machine problem or a skilled worker calling in sick, would be handled on the spot through dynamic negotiation between intelligent agents to arrange an alternative path, somewhat as travel agents handle flight delays and weather conditions. This new world operates in drastic contrast to the paradigm of factory automation and CIM (Computer-Integrated Manufacturing), initiatives of a decade ago based on centralized control. This world of smart devices is more adaptable and agile because it operates through a dynamic network of decentralized intelligence, capable of identifying itself, discovering others, collaborating and optimizing on-the-fly.

The endgame does not stop there. In the Post Industry 4.0 world, these industrial Apps will be able to talk to all the other Apps in the consumer world and act according to demand. The Facebook, Amazon and Twitter world will have access to the vast resources of the industrial world and orchestrate them to meet individual consumer demand. The line between B2C and B2B will blur, and consumers will experience a whole new world.

The world of design and applied research will also join the game. The sciences of physics, biology, chemistry and materials science will become part of the building blocks for designing new products from the molecular level, as they are exposed as Apps and services pulled by consumer demand as needed. This mechanism is sometimes called “Smart Pull.”

There are apparent obstacles ahead in agreeing on global standards and converging SW, HW and ICT technologies, and some of today’s players will go extinct or evolve. This new world may dawn slowly and gradually throughout the next decade, as the industrial world is highly complex and interwoven. Many leaders are currently caught up in the complexity and forget to view Industry 4.0 in the light of a new era of experiences.

The Unique Experience

On Veerle’s wedding anniversary 2 months later, she was extremely surprised that her husband got her a new watch made with the same design that had delighted her in California, in a new silver material engineered for her DNA so that it does not cause a skin allergy. In the Post Industry 4.0 world, intelligent Apps and Agents across both consumer and industrial platforms will work actively behind the scenes to dynamically synthesize science, design, manufacturing and logistics, creating nothing short of a revolution when viewed from the perspective of consumer experiences.


Boston Bombing, Earthquake in China and Cost of Quality (COQ)

April 23, 2013

The Powerful Power Law


What do last week’s devastating events, the Boston bombing and the earthquake that hit Sichuan, China, have in common with the nature of COQ at a manufacturing company? In fact, they are all observed to follow a simple statistical rule called the power law. Simply put, plotting the logarithm of the magnitude of the events against the logarithm of their probability of occurrence yields a straight line with a negative slope. In the case of a terrorist event, magnitude can be measured by the number of casualties; a number of studies have shown that this obeys the power law. In the case of an earthquake, the relationship between magnitude and the probability of occurrence at a given time and region is described by the Gutenberg-Richter law, a type of power law distribution.

How are these related to COQ? The figure is an analysis of one year of warranty claim data from an automotive tier-1 supplier.

This data set indicates that the larger claims (above $10,000) follow the power law very well. The circled area contains smaller claims, whose lower-than-predicted occurrence most likely indicates that many smaller defects have slipped through the system. Empirical earthquake data typically demonstrates similar behavior, known as “roll-off.” Assuming these data are representative, they show that the power exponent is approximately -1. This means that claims above $100K occur about 100 times a year, claims above $1M about 10 times a year, and claims above $10M about once a year.
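The arithmetic behind those figures is straightforward. Assuming the tail follows N(>x) = c · x^(-1), calibrated so that claims above $10M occur about once a year (so c = 10^7), a quick sketch reproduces the numbers:

```python
def claims_above(threshold, c=1e7, alpha=1.0):
    """Expected annual number of claims exceeding `threshold` dollars,
    assuming a power-law tail N(>x) = c * x**(-alpha)."""
    return c * threshold ** (-alpha)

n_100k = claims_above(1e5)  # about 100 claims/year above $100K
n_1m = claims_above(1e6)    # about 10 claims/year above $1M
n_10m = claims_above(1e7)   # about 1 claim/year above $10M
```

The calibration constant here is an assumption chosen to match the rates quoted above, not a fitted parameter from the supplier’s data.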

Studies of terror events all over the world have found that a very similar relationship exists between casualties and probability of occurrence. In fact, the power exponent for terrorism is found to be about -2.5. In other words, a 200-casualty event such as the Boston bombing is approximately 10^2.5 ≈ 316 times more likely than an event with 2,000 or more casualties such as Sept 11.
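The 316x figure falls straight out of the power-law form: with exponent -2.5, the relative likelihood of an event of size 200 or more versus one of size 2,000 or more is (2000/200)^2.5:

```python
def relative_likelihood(x_small, x_large, alpha=2.5):
    """Under a power law N(>x) proportional to x**(-alpha), how many
    times more likely an event of size >= x_small is than an event
    of size >= x_large."""
    return (x_large / x_small) ** alpha

ratio = relative_likelihood(200, 2000)  # 10**2.5, about 316
```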

Why do important quality events exhibit Power Law behavior?


There are 2 main reasons, both resulting from the network nature of the manufacturing supply chain.

  1. Interdependency – Supply chain elements are highly interdependent. For example, during my early career as a storage-media quality engineer, a small crack was discovered one day in a glass furnace at a remote factory in Japan. This turned out to be a devastating event, because that furnace was the only one making glass substrate for the storage media used in multiple brands of magnetic disk drives, and those drives were supplied to make servers and PCs. That small crack stalled the entire server and PC supply chain for days, costing millions of dollars.
  2. Positive feedback – An example of how positive feedback works is Toyota’s “unintended acceleration” case, which ended up costing Toyota over a billion dollars. At first these were considered isolated cases, but as more cases were suspected to be connected, Toyota identified the potential root cause as floor mats from certain suppliers. The number of reports increased as publicity increased, which in turn heightened suspicion that Toyota was hiding something. Toyota was called before Congress for hearings and later fined about $1.1B, even though there was no proof relating the unintended acceleration cases to any electronic or software defect. Each cycle of litigation and probes reinforced the public’s suspicion that something was wrong with Toyota, to the point of an avalanche, even when no major defects were identified by those investigations.

Six Sigma and the Power law


This power law behavior of COQ offers important insights into how quality executives should deal with major quality events. It is particularly counter-intuitive to quality professionals who have gone through Six Sigma training or are themselves Six Sigma professionals. The foundation of Six Sigma is built on the normal distribution, or Bell curve. COQ, however, follows the power distribution, not the normal distribution. Here are some major differences.

  • There is no average – In other words, it is meaningless to talk about the average size of a warranty claim. Unlike the normal distribution, the power distribution has no meaningful average value.
  • The most important data points are the outliers – In our data set, the top 10 claims out of 412 contributed more than 50% of the total warranty cost. These large claims are the outliers that are typically ignored by Six Sigma methodology.
  • Black swan events occur – The theory was developed by Nassim Nicholas Taleb to describe highly unlikely events that determine the course of human history. According to the above data set and the underlying power law, a warranty claim costing over a billion dollars occurs about once a century. Such an event, though rare, can easily lead to the termination of the responsible executives or even bankruptcy of the business.
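The dominance of the outliers is easy to reproduce in a quick simulation. Drawing 412 synthetic claims from a heavy-tailed Pareto distribution (the exponent and scale below are illustrative assumptions, not fitted to the real warranty data), the largest handful of claims carries a hugely disproportionate share of the total cost:

```python
import random

def top_k_share(claims, k=10):
    """Fraction of the total cost contributed by the k largest claims."""
    biggest = sorted(claims, reverse=True)
    return sum(biggest[:k]) / sum(biggest)

random.seed(42)  # deterministic, for reproducibility
# 412 synthetic claims from a Pareto tail with exponent ~1.
claims = [1_000 * random.paretovariate(1.05) for _ in range(412)]
share = top_k_share(claims)  # far above the 10/412 (2.4%) a 'fair' split would give
```

Averaging such a sample tells you very little; the sum is driven by whichever extreme values happened to occur, which is exactly why Bell-curve thinking misleads here.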

The Power law Strategy


Just as security gates alone cannot eliminate terrorism, government bodies run drills and set up early-warning systems to reduce the risk of terrorist events. Similar methods can be applied to catch quality defects.

To counter the Power Law phenomenon, a strategy is needed that addresses its fundamental elements, in 3 major steps. The first step is to enable track and trace of supply chain interdependencies. Once interdependency tracking is established, the second step is further analysis that enables early warning (for example, using Big Data technology) based on those interdependencies; warning signals must tie to a series of actions involving PDCA cycles. The third step is a containment strategy to respond quickly to quality events before their effects are amplified by positive feedback. These measures significantly lower the probability of isolated events escalating into catastrophic events through self-reinforcing cycles of positive feedback. It is worth noting that traditional ROI analysis based on average annual return can rarely justify investment in implementing such strategies and solutions. When dealing with the potential catastrophic effects of the Power Law, an executive decision is required to set organizational direction; seeking the average annual return on such an investment simply does not make sense in a world of Black Swan events.



Applying Nobel-winning Physics Techniques to Management

October 10, 2012

The 2012 Nobel Prize in Physics went to Serge Haroche of France and David Wineland of the United States. They showed in the 1990s how to observe individual particles while preserving their bizarre quantum properties, something scientists had struggled to do before. While this contribution may at first seem far-fetched and remote from the daily management challenges of a business executive, I am going to argue otherwise.

The Principle of Uncertainty

Let me first touch on the significance of this discovery. At the beginning of the last century, when quantum physics was born, physicists discovered that the classical laws of physics break down at the sub-atomic level. The everyday objects we are used to have deterministic states. For example, given the starting location and velocity of a car, we can easily determine its location at any time. Tiny particles, on the other hand, behave differently. The foundation of quantum mechanics was first built on Heisenberg’s uncertainty principle, which describes the possibility of physical objects having multiple states. Hence, given the initial location and velocity of a particle, multiple locations described by probability functions are possible. This is what makes quantum mechanics such a bizarre subject for most people. Making things worse, it was long thought impossible to observe this type of behavior: observing a photon, for example, requires light to be absorbed by our eyes or an image sensor, thereby altering the state of the photon itself. This observer effect and the uncertainty relation have been captured in many ways in philosophical studies, such as those of Karl Popper and the concept of reflexivity, the latter cited by George Soros as the principle behind his investment strategy. Working around these monumental theoretical and philosophical hurdles is what the 2 Nobel laureates achieved.

What Can Managers Learn From Quantum Physicists?

While the bizarre world of quantum mechanics may seem distant, the principle of uncertainty for tiny objects prevails in business management. For example, many companies have installed some type of ERP system to get a real-time view of the state of their business. There is a strong belief in the existence of a single version of the truth for financial data at the company or division level, and day-to-day decisions are made based on this information. This is almost analogous to management by classical physics. However, when it comes to highly granular information, events on critical machines, individual operator performance, inventory by SKU and bin location, or even OEE for machines, business executives tend to treat it as a world of tiny objects, as bizarre as quantum mechanics. It is not uncommon to have multiple truths in such manufacturing operations. The reported OEEs from different plants for the same type of machine can be based on very different measurement methods and subject to different degrees of human error. Different departments on the manufacturing shop floor have different views of the true state of their operations. The variable cost by product line and shift can be far from the aggregate cost captured in ERP. Inventory accuracy by SKU quantity can be far below the ERP inventory accuracy that is based on total aggregated financial numbers. Far too often, business executives have accepted this principle of uncertainty and allowed their manufacturing operations to run on multiple uncertain states.
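A tiny worked example of these “multiple truths”: the same machine can report very different OEE numbers depending on the measurement convention. The standard formula is OEE = Availability x Performance x Quality; the figures and conventions below are invented for illustration:

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness = Availability x Performance x Quality."""
    return availability * performance * quality

# Same machine, two hypothetical conventions: Plant A counts planned
# maintenance as downtime; Plant B excludes it from availability.
plant_a = oee(availability=0.80, performance=0.95, quality=0.99)  # ~0.75
plant_b = oee(availability=0.90, performance=0.95, quality=0.99)  # ~0.85
```

Nothing about the machine changed; only the measurement convention did, yet the two plants would report materially different numbers to headquarters.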

Mastering the Quantum Bits of Your Business

It does not have to be that way. Just as the Nobel Prize winners discovered, the technology to observe and measure the quantum bits of manufacturing information exists. Some companies have already tapped into this technology and achieved significant improvements in profit margins and working capital. In an increasingly complex and turbulent world, a tiny quantum bit of information can explode into a perfect storm in a very short time. The capability of a business to leverage these quantum bits is already distinguishing the winners from the losers in the marketplace.

While the technology to observe the quantum bits of manufacturing information may be a far cry from getting its own Nobel Prize, the application of such technology should not be left as a subject of uncertainty anymore.

The Future of Lean Manufacturing through the World of Warcraft

October 5, 2011

Any seasoned Lean manufacturing expert will tell you that implementing lean is not about JIT, Heijunka or any other tool. It is about implementing a lean culture of continuous improvement. In fact, Toyota considers its ultimate competitive advantage to be the “intoxication of improvement” felt by every employee, from shopfloor to top floor. Thousands of improvement ideas are created every day, even for the smallest mundane tasks. This is in stark contrast to the “don’t fix what is not broken” mindset that prevails in most other organizations. Well, what they believe is one thing. Has any of this been scientifically proven? Can we simulate this kind of organizational behavior and measure its output? And if we can, what can we learn about managing thousands of ideas and distilling them into actions every day?

In this video, Dr. John Seely Brown, one of my favorite business writers, talks about the innovation dynamics within the World of Warcraft (WoW), which also happens to be my favorite on-line video game. At the end, Brown says, “This may be for the first time that we are able to prove exponential learning … and figure out how you can radically accelerate on what you’re learning”. Indeed, I have found this game casts interesting light on the social dynamics of lean culture and how it will evolve in the future.

Guild structure and QC circles

“There is too much information changing too fast… The only way to get anything done seriously is to join a guild,” said Brown. These guilds in WoW are groups of 20-200 people helping each other process ideas. This greatly resembles the Quality Circle movement, in which employees are hired not just to perform a task but to form part of small groups that constantly seek ways to self-improve. The difference between QC circles and these guilds could be the technology they use, as described below.

Everything is measured; everyone is critiqued by everyone else

In WoW, it is easy to record every action and measure performance. There are after-action reviews of every high-end raid, and everyone is critiqued by everyone else. This resembles the typical PDCA (Plan-Do-Check-Act) process used by QC circles. The challenge in the manufacturing world, however, is that too much information is still recorded on paper or, if recorded electronically, scattered across multiple segregated systems. This inhibits the sharing, retrieval and analysis of information that enables the rapid group self-improvement dynamics of WoW.

Personal dashboards are not pre-made; they are mashups

Another key learning from WoW is that you need to craft your own dashboard to measure your own performance. Brown even said that the Obama administration is borrowing the idea from WoW and trying to do the same. So much for the software companies trying to sell pre-packaged KPIs to measure corporate performance. Imagine a new manufacturing world in which every operator and supervisor has real-time feedback on his or her own performance, seeing how minute-by-minute idle time or over-production affects the bottom line and return on capital. The future of performance measurement technology is detailed, real-time and personalized.

Exponential learning

The last slide in the video shows that learning speed increases exponentially as one levels up in WoW. High-performance guilds need to distill what they have learned within their own guild and share it with other guilds throughout the network; those who do this effectively tend to level up faster. In the manufacturing world, many companies try to share best practices across and within organizations. However, manufacturing executives may not realize that effective continuous improvement and best-practice sharing can lead to a state of exponential learning that constitutes an ultimate competitive advantage.

In a sense, the computer world of WoW is able to simulate the social dynamics of how individuals form groups to process and create ideas, how groups measure and improve themselves, and how groups interact with each other to accelerate learning into high performance. These social dynamics also resemble those of the lean culture long promoted within companies like Toyota. Looking forward, the promise of manufacturing 2.0 lies in technologies that enable almost everything to be measured, allow information from individuals to interact freely in groups, and empower groups to effectively share best practices. Such multi-tier collaboration from shopfloor to topfloor will bring about a new form of highly competitive organization that harnesses the power of exponential learning. On that note, the future evolution of lean culture may not be that much different from the present World of Warcraft.

How technologies have changed the way I dealt with the Great East Japan Earthquake

March 21, 2011

I remember what the president of Mitsui told me about why he started his pet IT project with me back in 2005: “My vision is that if Mitsui can function even during the great Tokyo earthquake, then we will be the number one company in the world. It all depends on how we handle unexpected events, not routines.” He believed that through business process management, and hence process automation, Mitsui could function even during an unexpected disruption of unprecedented scale. While dealing with the aftermath of the recent events in Japan is certainly a bigger problem than trying to be the world’s number one company, it would be interesting to check back with him on how much of his vision has been achieved.

I was in Kobe when the last earthquake hit at M7.9 in 1995. That quake destroyed houses and freeways and brought down all the lamp posts around me in a mere 15 seconds. How the recent M8.9 event (100 times stronger), which shook for 6 minutes, must have felt is beyond my imagination. Nevertheless, information technology has leapfrogged in the past 16 years, and I have noticed a lot of changes in how people around the world deal with such an event. Back then, I could only turn on my car engine and listen to the radio; I had no means of contacting anyone. Had I been outside Japan, I might not have known about the event until much later. Even if I had known, I could have done little more than sit and pray.

Here are a few major changes that I noticed:

1. Respond faster through a Distributed rather than Centralized network

I was instant-messaging with a friend in Tokyo who told me an earthquake had just hit, at 3/10 Thursday evening California time. I quickly did a Google search on “Japanese Earthquake” and could not believe the number I saw: M8.9. I thought there might be an error in the system. I then turned on the TV and did other searches, but there was very limited information to indicate that a major disaster had just happened. Because of my experience in the Kobe earthquake, I immediately knew that M8.9 could be 100 times worse than what I had experienced back then. How could I confirm it before any images appeared on the TV news? The next thing I did was check live webcams in Japan. Most were down, but after several tries I got images of cars stopped at odd angles in downtown Tokyo. I knew then that this was actually happening. I quickly posted on Facebook and emailed some friends to check on people I know. I spent the next few hours emailing, texting, Skyping and tweeting until I got tired and went to sleep. I did not see the horrible images of the tsunami on TV until the next morning. Direct contact between loosely connected individual devices definitely spread information faster than a centralized architecture such as TV or radio broadcast.

2. Discover solutions on-the-fly through collaboration

I have a close relative living in Sendai who had not checked in. I posted that on Facebook and quickly got several suggestions from friends around the world on how to locate him. I then contacted his company through its emergency line, and we registered at Google People Finder. I kept monitoring Twitter and Facebook minute by minute as people around Japan posted live updates. We finally located him more than 24 hours after the earthquake, when one of his colleagues identified him and sent me a text message. That was such a relief. In reflection, it was not easy for his colleague to send us that message, because phone battery power was a precious asset under the circumstances. My relative could not contact us himself because his phone was out of power.

3. Leverage Real-time monitoring across the globe

I thought I could catch my breath after confirming the safety of all my friends and relatives, but then came the news of the nuclear plant explosion. I kept an eye on the real-time radiation levels at multiple locations around the Fukushima nuclear plant through an official website.

4. Employ agent-based alert to catch and respond to events

I also set up email alerts on aftershocks and on how transportation systems were being affected. With these, I did not have to hunt for information; I was notified whenever events I was interested in occurred, and I adjusted my travel schedule accordingly.

5. Derive strategy through social media

It is interesting to point out that the rolling power outages after the Fukushima nuclear plant went down were first socialized through social media before being put into action. Social media was also used to build support for the call to stop panic buying. Irrational buying behavior was generally not observed, in stark contrast to the run on salt and baby formula in some neighboring countries. (OK, I admit that part of this was owing to the very beautiful side of Japanese culture.)

The Internet, the web, mobile devices, social media, Wi-Fi, physical sensors and webcams, event-driven alerts and alarms, real-time monitoring from anywhere: all of these indicate that the democratization of information has replaced, or at least complemented, the central broadcasting of news through TV and radio.

How about the manufacturing world?

It is somewhat ironic that many of the global manufacturing companies I work with have not really leveraged the technologies mentioned above. Executives and managers still depend on occasionally bumping into colleagues in the hallway to discover that the most critical machine in their supply chain is down. Even where the enabling technologies are available, there is still limited sharing of best-practice manufacturing processes across geographic locations. KPI reports, on which million-dollar decisions depend, still arrive weeks or sometimes months after the fact. The majority of mobile devices, sensors and individual control units are not interconnected. Centralized systems like ERPs, which aggregate data and then broadcast a plan, still drive the majority of manufacturing processes. In the wake of an event that went beyond anyone’s imagination, I suppose it is high time to ask: how well prepared is your organization for the next tsunami?

The Dice Game of “Velocity” – Part 1

November 22, 2010 54 comments

I have just finished reading “Velocity: Combining Lean, Six Sigma and the Theory of Constraints to Achieve Breakthrough Performance – A Business Novel” on my Kindle. The author, Jeff Cox, is the co-author of “The Goal“. This time the story is about Amy, the newly named president of the Hi-T Composites Company, who could not get any bottom-line improvement after implementing Lean Six Sigma for a year. In the end, she convinced her team to combine TOC with the LSS approach in order to meet and exceed the bottom-line goal.

A critical piece of the story is a dice game. It is this dice game that finally got everyone on the same page, including Wayne, the stubborn LSS guy, and persuaded him to change his approach. A key insight is to abandon the balanced-line approach Wayne had been working toward. The team finally agreed to switch to an unbalanced line with everything synchronized to the bottleneck.

In the book, Amy bet her career on this dice game, both to convince her staff and to generate the same results in actual production. It worked out that way in the novel. But in practice, would you bet your career on a dice game? I cannot help but ask the following questions:

  • How repeatable are the results of the dice game described in the novel? How sound are the statistics behind it?
  • How closely does the game resemble the reality of a production line? What are the limitations? Under what conditions would the TOC approach (Drum-Buffer-Rope) work better or worse?
  • Under what conditions does a balanced line with takt time work better or worse than an unbalanced line? How do we quantify the variability in order to determine which approach to use?

The book leaves these questions unanswered, which means these theories may or may not work in your reality. In order to better understand them, I intend to use simulation and analytic techniques to explore further. Stay tuned.

In Scenario 1, a balanced line is simulated in which everyone starts with a single die (the same capacity) and the same 4 pennies (the initial buffer size).
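As a sanity check, here is a minimal sketch of the balanced line in Python. The book does not fully specify the mechanics, so the station count (five), the downstream-first roll order (so a penny moves at most one station per round), and the release rule (a fresh die roll of raw material each round) are my assumptions:

```python
import random

def simulate(rounds=20, stations=5, init_buffer=4, seed=1):
    """Balanced dice line: each round every station rolls one die and
    passes min(roll, its buffer) pennies downstream; raw material is
    released onto the line by a fresh die roll each round."""
    random.seed(seed)
    buffers = [init_buffer] * stations   # 5 x 4 = 20 pennies of initial WIP
    output = 0
    for _ in range(rounds):
        # Downstream-first, so a penny moves at most one station per round.
        for i in range(stations - 1, -1, -1):
            moved = min(random.randint(1, 6), buffers[i])
            buffers[i] -= moved
            if i == stations - 1:
                output += moved          # finished goods
            else:
                buffers[i + 1] += moved  # hand off to the next station
        buffers[0] += random.randint(1, 6)  # release new raw material
    return output, sum(buffers)

out, wip = simulate()
```

Averaged over many runs, the output falls short of the 70-penny ceiling for exactly the reason the book gives: a station can never roll ahead of its buffer, so variability only subtracts.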

In this simulation, WIP has increased from 20 to 26 by the 20th round, and the total output is 62 pennies. This “throughput” number can be compared to 70 pennies, which is the average die roll (3.5) times 20 rounds. The 62 is generally less than 70 because of throughput lost to variability.

In order to improve throughput, it was suggested to unbalance the line and create a constraint. Murphy is given only 1 die while everyone else is given 2 dice. The results look like the following:

This time WIP has increased from the initial 20 to 42 by the 20th round, and total output is 81 pennies. This is a significant throughput improvement, but it comes with high WIP, especially around the bottleneck in front of Murphy.
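The unbalanced scenario can be sketched the same way. Again the mechanics are my assumptions: Murphy sits in the middle with one die, everyone else (including the release) rolls two. A sketch like this will not reproduce the book’s exact numbers, but it does show the WIP piling up in front of the bottleneck:

```python
import random

def simulate_unbalanced(rounds=20, dice=(2, 2, 1, 2, 2), seed=1):
    """Unbalanced dice line: station 2 (Murphy) rolls one die, the rest
    roll two; each station passes min(total roll, its buffer) downstream."""
    random.seed(seed)
    buffers = [4, 4, 4, 4, 4]        # 20 pennies of initial WIP
    output = 0
    for _ in range(rounds):
        for i in range(len(buffers) - 1, -1, -1):   # downstream-first
            roll = sum(random.randint(1, 6) for _ in range(dice[i]))
            moved = min(roll, buffers[i])
            buffers[i] -= moved
            if i == len(buffers) - 1:
                output += moved                      # finished goods
            else:
                buffers[i + 1] += moved
        # raw material is released with two dice as well
        buffers[0] += random.randint(1, 6) + random.randint(1, 6)
    return output, buffers

out, buffers = simulate_unbalanced()
# buffers[2] is the queue in front of Murphy; it dwarfs the others
```

With releases averaging 7 pennies per round against Murphy’s 3.5, the queue in front of Murphy grows without bound, which is exactly the “high WIP around the bottleneck” the game demonstrates.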

To improve performance further, the DBR (Drum-Buffer-Rope) method is introduced. In this case, Amy’s dice are taken away, and she releases pennies to the line only according to the signal Murphy gives on what he rolls. In addition, Murphy is given a higher initial inventory buffer of 12 pennies.

This time WIP has actually decreased from 28 to 23 by the 20th round, and the total output is 91.
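The Drum-Buffer-Rope rule is a small change to the same sketch (again under my assumed mechanics, so it illustrates the effect rather than reproducing the book’s exact figures): Murphy’s roll becomes the release signal, and his buffer starts at 12.

```python
import random

def simulate_dbr(rounds=20, seed=1):
    """Drum-Buffer-Rope sketch: Murphy (station 2, one die) is the drum,
    his buffer starts at 12 pennies, and his roll is the rope signal
    that tells Amy how many pennies to release."""
    random.seed(seed)
    dice = (2, 2, 1, 2, 2)
    buffers = [4, 4, 12, 4, 4]       # 28 pennies of initial WIP
    output = 0
    for _ in range(rounds):
        rope = 0
        for i in range(len(buffers) - 1, -1, -1):   # downstream-first
            roll = sum(random.randint(1, 6) for _ in range(dice[i]))
            if i == 2:
                rope = roll          # the drum's roll gates the release
            moved = min(roll, buffers[i])
            buffers[i] -= moved
            if i == len(buffers) - 1:
                output += moved
            else:
                buffers[i + 1] += moved
        buffers[0] += rope           # Amy releases exactly the drum's roll
    return output, sum(buffers)
```

Because releases now match the bottleneck’s pace, total WIP hovers near its starting level instead of growing, while the 12-penny buffer keeps Murphy from starving.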

In the final case, the team discussed improving the yield at the bottleneck through Lean and Six Sigma. To simulate this, Murphy’s die roll is mapped to numbers between 4 and 6.

The results indicate that WIP stayed low at 21 after 20 rounds, while throughput further improved to 110.
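A quick back-of-the-envelope check on that 110 (assuming the mapping makes Murphy’s roll uniform on 4–6, which the book does not spell out): his mean rate rises from 3.5 to 5 pennies per round, i.e. about 100 pennies of bottleneck capacity over 20 rounds, with the remainder coming from drawing down the initial buffers.

```python
import random

random.seed(1)
# Murphy's roll after the Lean/Six Sigma improvement: uniform on 4..6
improved = [random.randint(4, 6) for _ in range(20)]
baseline_capacity = 3.5 * 20       # 70 pennies over 20 rounds before
improved_capacity = sum(improved)  # averages 5 * 20 = 100 pennies now
```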

It is shown that the simulation described in the book is generally repeatable. The logic behind these calculations can be nicely summarized as a G/G/1 queue and solved with Markov chain analysis. Next time we will discuss how practical these results are when applied to a real production line.
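For the analytic side, one standard closed-form starting point for a G/G/1 station (my addition, not the book’s derivation) is Kingman’s approximation for the mean queueing time, which makes the roles of utilization and variability explicit:

```latex
% Kingman's approximation for the mean wait in a G/G/1 queue:
%   rho = utilization, tau = mean service time,
%   c_a, c_s = coefficients of variation of interarrival and service times
W_q \approx \left(\frac{\rho}{1-\rho}\right)
            \left(\frac{c_a^2 + c_s^2}{2}\right)\,\tau
```

It says the same thing the dice game does: waiting (and hence WIP) blows up as utilization approaches 1 and grows linearly with the variability of arrivals and service.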