RPA and Lean, A Must

I have read a number of papers and articles on the reasons RPA projects struggle or fail, and I have personally witnessed several struggling RPA initiatives.  Anyone who has seen a mature RPA tool in action will be surprised to hear that failure rates as high as 50% are being reported.  RPA tools are very easy to use, lightweight on the network, and easy to adjust as needed.  I have seen very capable RPA bots built in less than a day and complex bots built in a couple of weeks.  So why are so many RPA initiatives struggling?

One of the key reasons for RPA projects struggling and even failing is a lack of true process expertise.  Process expertise is needed in the assessment, design, and implementation of bots.  It is also needed for the bot building process itself.  To solve this problem, enlightened organizations are adopting the proven methods and principles of Lean.

In this article, I provide a quick overview of the Lean principles that apply to RPA, along with a glimpse into the powerful body of knowledge MSI has developed in our Lean Automation practice.

Lean Facilitation Skills: A true Lean Master has conducted dozens, sometimes more than one hundred, Lean process events of various types, from Lean Strategy (a.k.a. Hoshin Kanri) to Lean Design to basic Lean Improvement.  These Lean facilitation skills are vitally important in the RPA bot building process to move swiftly to the ideal process and to get stakeholder agreement on what a bot is supposed to do and how.  In our experience, stakeholder agreement on the steps a bot will take is often the most time-consuming task in the bot life-cycle.

Value Stream Analysis: The ability to define and assess value is a vital first step in the creation of a bot project portfolio.  Understanding value, defining value streams, and the subsequent analysis allow us to create an orchestrated bot portfolio that actually reduces the time from input to outcome.  This is a serious problem with most RPA implementations: they speed up micro-level subtasks within value streams, which merely creates backlogs and does nothing to reduce the time to value or increase throughput.  Further, the definition of value streams provides a meaningful basis for measuring the ROI of an RPA initiative.
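The backlog point can be made concrete with a toy throughput model (all step names and rates here are invented for illustration): a value stream delivers at the rate of its slowest step, so putting a bot on a non-bottleneck step changes nothing end to end.

```python
# Illustrative sketch: a three-step value stream whose throughput is
# capped by the slowest step (the bottleneck). Automating a
# non-bottleneck step does not change end-to-end throughput.

def stream_throughput(steps_per_hour):
    """Items/hour the whole stream can deliver = the bottleneck rate."""
    return min(steps_per_hour)

baseline      = [12, 4, 10]   # intake, review (bottleneck), dispatch
bot_on_intake = [120, 4, 10]  # 10x faster intake bot
bot_on_review = [12, 8, 10]   # bot assists the bottleneck instead

print(stream_throughput(baseline))       # 4 items/hour
print(stream_throughput(bot_on_intake))  # still 4 -- backlog just piles up
print(stream_throughput(bot_on_review))  # 8 -- a real time-to-value gain
```

This is exactly why value stream analysis, not task-level convenience, should drive bot portfolio selection.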

Lean Process Design and Improvement: Matrix Based Design or Axiomatic Design, combined with Lean thinking, enables you to create profoundly complete and capable requirements for one or more bots in a single pass when your processes need a serious overhaul.  For tweaking well-designed processes, Lean process improvement will help identify common mistakes such as batch processing and other forms of waste, ensuring that automation does not just speed up bad processes.

Lean Work Cells: Lean Work Cells (a.k.a. Scrum Teams) should be deployed for the bot building life-cycle to ensure work is conducted at its finest practical increment and focus on production is maintained with no hand-offs. The Lean RPA Work Cell may be the most important Lean method you can apply to your RPA program. Clear accountability for bot production and elimination of bot life-cycle hand-offs is of paramount importance.

Kanban: Kanban should be used by RPA work cells to control the rate of work in a “pull system,” to limit Work In Process (WIP), and to ensure quality specifications are met for each project.
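The pull-system idea can be sketched in a few lines (the class, names, and WIP limit below are illustrative, not any real tool's API): a work cell pulls a new bot project only when its WIP is below the Kanban limit.

```python
# Toy Kanban pull system: work is pulled only when WIP is under the limit.
from collections import deque

class KanbanCell:
    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.backlog = deque()     # queued bot projects
        self.in_process = []       # projects currently being built

    def add_to_backlog(self, item):
        self.backlog.append(item)

    def pull(self):
        """Pull the next project only if WIP capacity exists."""
        if self.backlog and len(self.in_process) < self.wip_limit:
            item = self.backlog.popleft()
            self.in_process.append(item)
            return item
        return None  # at the WIP limit: finish something before starting more

    def finish(self, item):
        self.in_process.remove(item)

cell = KanbanCell(wip_limit=2)
for bot in ["invoice-bot", "hr-bot", "foia-bot"]:
    cell.add_to_backlog(bot)
cell.pull()
cell.pull()
print(cell.pull())  # None -- WIP limit reached, no new work is started
```

The design point is that the limit, not management pressure, governs when new work starts; finishing a bot is the only way to free capacity.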

One Piece Flow: One Piece Flow should be used to ensure in-process inventories/backlogs are not created, that projects are right sized, and that the workload is balanced across work cells.

Poka-Yoke: Mistake proofing should be employed in both the bot life-cycle and the process automated by each bot.  A Lean expert will be familiar with mistake-proofing techniques in the digital world, making for a significantly more robust system.

Be warned, simply doing the same old stuff and calling it Lean will not deliver results.  Hanging post-it notes on walls to map processes is not Lean.  You must incorporate actual Lean expertise, the kind forged through Master's-level education, industry experience, and industrial certification, to properly adopt and train Lean methods in your organization.

At MSI, we have 18 years of corporate Lean consulting experience and numerous highly respected Lean experts.  We have a similar number of years of experience with process automation using various technologies, and with the introduction of RPA, MSI has become a pioneer in adapting our Lean Automation and Process Oriented Design techniques to the deployment and management of this breakthrough technology.

Our Lean RPA framework covers all aspects of Lean execution of a Lean Automation program, from the Center of Excellence and strategic integration with the business down to hands-on Lean RPA bot development.  RPA is coming to your organization, like it or not.  Many organizations will stumble and flail about for years attempting to control RPA and turn it into ROI, while those adopting a Lean Thinking approach will benefit early and often.

A subject for another article is Hoshin Kanri, the Lean approach to operationalizing strategy.  RPA programs should promote the use of Hoshin Kanri within their organizations and integrate a bot candidate review process for strategic initiatives within the Hoshin Plan.  By integrating RPA into strategic initiatives, RPA can propagate throughout an organization strategically, prove its value, increase the probability of success for initiatives, and increase the measurability of initiatives.

RPA, The Lean Six Sigma Game Changer

If you are a Lean Six Sigma Black Belt, Master Black Belt, or another type of process engineer / business process reengineering expert, you need to be aware of and get smart on Robotic Process Automation (RPA) technologies.  As a Master Black Belt who has participated in hundreds of process improvement projects in a career spanning more than twenty years, I have seen process improvements attempted in practically every manner possible, with varying degrees of executive support, stakeholder commitment, and so on.  This is not a post on the importance of leadership commitment or approach.  Plenty of articles and papers have been published on those topics.  This is about a technology that will become part of every business computer user’s world within the next five years, and it is targeted directly at creating efficiency.

I can tell you that without question, the most successful Lean Six Sigma projects I have been part of all embraced technology as a means for implementing improvements, measuring performance, and driving continuous improvement.  They often leveraged Business Process Management (BPM) or workflow technologies as a program-wide platform.  While very successful, these projects often take longer and require serious dedication from the process improvement team.  They also require an LSS Black Belt capable of Systems Thinking (not all are) and comfortable with technology.


With RPA, this is all about to change.  Even in its infancy, RPA is a user-friendly technology that allows users to build digital assistants that automate pretty much any task performed on a computer.  It is like having a multi-application macro builder that watches what you do and then does it for you, over and over again.  For example, one can very easily build an RPA robot (a.k.a. a bot) that reads all the emails sent to your inbox on a given day, moves the ones from your retailers into a separate folder, identifies those with the word “return” anywhere in the email, copies all the return forms into a folder, copies the relevant data from each return into an Excel spreadsheet that totals the amounts, and then emails or texts you that the daily returns list is ready for action, with the total number of items and the total dollar amount.  If you want, you could even have it post the returns to your accounting system.  This would all happen within seconds of launching the bot each day, rather than the hours it would take by hand.  Other common applications include collecting invoices from vendors and creating consolidated bills for building tenants, or collecting personnel data on a single form and having a bot post it to numerous internal IT systems rather than entering it into each system manually.  Bots can take scanned applications, merge them with electronic applications, and then disseminate the information to other systems or create additional documentation and analysis based on the inputs.  Industry is already recognizing tremendous savings in call centers and large-scale financial service centers.
Corporations at the Intelligent Automation Conference in Austin last week, including Kraft, Coca-Cola, and John Hancock, reported saving tens of thousands of man-hours, and we are aware of Federal Agencies targeting man-hour reductions on repetitive tasks in the tens of thousands for FY19.  This is just the tip of the iceberg.
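RPA platforms such as UiPath build this kind of bot visually, but the logic of the returns example above can be sketched in plain Python. Everything below is invented for illustration: the mailbox is a list of dicts standing in for a real mail client, and the sender names and amounts are made up.

```python
# Hedged sketch of the "daily returns" bot described above, using
# invented data in place of a real mail client and spreadsheet.
import re

def process_returns(inbox, retailers):
    # Keep only mail from known retailers, then those mentioning "return"
    retailer_mail = [m for m in inbox if m["sender"] in retailers]
    returns = [m for m in retailer_mail
               if re.search(r"\breturn\b", m["body"], re.IGNORECASE)]
    total = sum(m["amount"] for m in returns)
    summary = (f"Daily returns ready: {len(returns)} item(s), "
               f"${total:.2f} total")
    return returns, summary

inbox = [
    {"sender": "shop-a", "body": "Your return was received", "amount": 19.99},
    {"sender": "shop-b", "body": "Order shipped", "amount": 0.0},
    {"sender": "shop-a", "body": "Return label enclosed", "amount": 35.00},
]
returns, summary = process_returns(inbox, retailers={"shop-a", "shop-b"})
print(summary)  # Daily returns ready: 2 item(s), $54.99 total
```

A real bot would replace the list with calls to the mail client, write the totals to Excel, and send the summary as an email or text, but the filter-extract-summarize shape is the same.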

I am personally interested in investigating the application of RPA to the massive wave of FOIA requests submitted to the Government each month.  Imagine if a bot read each request, scanned each relevant agency system and file server for related content, screened that content for sensitive material, and placed the results in a staging folder for someone to review prior to release.  The Government annually spends millions responding to FOIA requests and rarely meets statutory timelines.  RPA could be a game changer.

If you are like me, you are probably saying to yourself: “This sounds like task automation, not process automation.” If so, good job.  You are right.  RPA is really more about automation of repetitive tasks, not processes with all of their handoffs, business rules, waiting, rework loops, and the like.  Other technologies such as BPM and Workflow platforms are very good for that, and RPA bots can operate well underneath a workflow technology.  This does not mean that RPA is not a process tool.  What it means is that people will use RPA to automate tasks in silos or, worse, create the appearance of process improvement and reduce the perceived need for process improvement projects, because they will be reporting tremendous reductions in man-hours and cost.  If you have read The Goal, or you are a process improvement expert who understands Lean, yield, and throughput, you understand how dangerous this could be if real process experts are not involved.

RPA can and should represent a breakthrough for process improvement professionals around the world.  We can leverage RPA to drive significant savings and efficiencies as mentioned above.  We can clearly articulate and execute implementation of process improvements.  We can reduce the LSS project timeline and drastically reduce the need for training.  We can implement a well-constructed architecture of bots that provides real-time process telemetry and drives continuous improvement.  Heck, we can finally start doing SPC and SQC on soft processes!  We can stop process users from blaming the lack of integration among IT systems as the reason they cannot implement efficiencies.  RPA can represent the tangible execution of the Improve and Control phases of the DMAIC methodology while getting LSS projects back to the agile, focused, outcome-oriented endeavors they were meant to be.
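The SPC point is concrete: once bots emit cycle-time telemetry, classic Shewhart control limits apply to soft processes the same way they do to machined parts. A minimal individuals-chart sketch follows; the cycle times are invented, and 2.66 is the standard SPC constant (3/d2 for moving ranges of size 2).

```python
# Minimal individuals (X) control chart on bot-reported cycle times.

def control_limits(samples):
    """3-sigma limits estimated from the average moving range."""
    mean = sum(samples) / len(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

# Invented cycle times (minutes) reported by a bot for each case it handled
cycle_times = [8.1, 7.9, 8.4, 8.0, 7.7, 8.2, 8.3, 7.8]
lcl, center, ucl = control_limits(cycle_times)
out_of_control = [t for t in cycle_times if not lcl <= t <= ucl]
print(f"LCL={lcl:.2f}, CL={center:.2f}, UCL={ucl:.2f}, "
      f"signals={out_of_control}")
```

Points outside the limits are special-cause signals worth investigating; the real win is that the bot produces this data as a free by-product of doing the work.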

Alternatively, RPA can be a major blow to our profession by creating the illusion of process improvement when in fact only isolated tasks are being improved.  I find that unacceptable.  Process improvement professionals need to take a leadership role in the adoption of RPA within every organization, they must play a key role if not the lead role on all bot implementations, and they must ensure RPA deployment creates a more value-centric and measurable enterprise.  There are already numerous articles posted by experts on the extreme importance of process improvement when implementing RPA; see the links below.  We as process professionals must heed the call.

If you are a process improvement professional, here is what you need to do:

  • First, start reading on the topic.  You know how to do that, Google it!
  • Second, go to one of the free online training sites provided by vendors such as UiPath and take their courses. With the free training alone, you can get to the point of building simple bots on your own.
  • Third, start asking around about RPA in your organization and do what you can to make sure your CIO and Chief of Process Improvement are collaborating on the topic.  Your CIO is surely already aware of RPA. They must ensure a process improvement expert is on every RPA project.
  • Fourth, ensure your process improvement shop is establishing the standards and practices for process automation, from assessment through control of each bot.
  • Fifth, begin a campaign to make RPA a standard part of your continuous improvement toolkit such that all Black Belts are trained and able to use it in projects and events.
  • Lastly, via your gate review or some other project review process, ensure RPA is being considered as a tool for improvement on all existing process improvement projects.

As people see the power of RPA when combined with our knowledge of process and our facilitation skills, the entire practice of process improvement will take a giant step forward.

Here are a few links on RPA implementation. You will see they all refer to process selection, design, and change management.

  • 7 key reasons why Robotic Process Automation can fail
  • Why RPA implementations fail
  • 8 keys to a successful RPA implementation
  • Three factors for RPA implementation success
  • 10 step RPA Implementation Guide: Pitfalls & Best Practices
  • What RPA Governance Model is Right for You: Take the Quiz and See

Twelve Years of Change and Nothing Has Changed

I remember when I first started working with the Federal Government in 2005, my first big project was with an organization that had been around for more than 100 years.  I remember being nervous about my lack of organizational knowledge and thinking they must have very complex and mature business processes, since they had been doing essentially the same thing for more than a century.  My role was to provide process engineering expertise in the improvement of a truly mission-critical process.  This being all I knew prior to the first meeting, I envisioned complex analysis of detailed process and performance data, modeling and simulation of various alternatives, followed by integration with enterprise systems along with related instrumentation and reporting.  I envisioned seasoned experts running the processes.  On the first day, I was slapped with the reality that this mission-critical process was total anarchy.  The information tools were antiquated and based on a platform that even in its prime was a bench warmer.  The people manning the process didn’t even see it as a process, just work, and they had no subject-matter expertise.  Many items were not processed using the information system.  Performance of the process depended entirely on the person championing the item.  The customers of the process had no transparency.  There were numerous unnecessary handoffs and reviews.  You get the point.  The process, despite being around for one hundred years, was totally immature, inefficient, and did not deliver results.  Does this sound familiar?  Have you also been part of dozens or even hundreds of “improvement projects” in the last decade, only to look around and see that things are still very manual, lack transparency, perform poorly, and are nearly impossible to measure?

Okay, truth is, some processes have been improved over the last decade, and in rare cases have continued to improve.  I will cite what DoD has achieved in air fields, maintenance depots, and arsenals as the best examples of transformation that has stood the test of time.

However, we still see the preponderance of processes and functions operating at a very low level of performance and maturity.  Process performance is rarely measured, and the measurements that do exist are very manual and not reliable.  We have to ask ourselves why.  Why, with all this Lean, Six Sigma, Process Re-engineering, and Process Automation, combined with massive IT investments and enterprise BI and reporting tools, are governmental processes still immature, hard to measure, and riddled with mistakes?  Why is it that industry is able to create efficient and internationally competitive processes through the same investments?  Here are a few differences between industry and government that explain the problem.



Industry | Government
-------- | ----------
Focus on customers and profit | Focus on work and tasks
Clear and immediate impact | Abstract impact
Consequences for failure | Consequences lagging and rare, or none at all
Career progression based on performance | Career progression based on tenure, training, and politics
Shared accountability, shared rewards | Individual performance plans
High demands, intense pressure for outputs | Pressure to develop reports, letters, policies, and plans
Deep specialized expertise | Generalists, often with mismatched skillsets
Process-centric IT systems | Function-centric IT systems
Constantly seeking newer and better ways to beat the competition | Hoping no more improvement programs bother them
Poor leadership swiftly punished, good leadership significantly rewarded | Poor leadership has to be waited out, good leadership constantly moving to the next thing
Management involved with operations; embraces Lean, visual management, collaboration, and data-based decisions | Management working to promote personal or political agendas
Our money | Other people’s money

Creating a governmental organization that breaks this cycle is no simple task, and there is no single answer, but a few key practices, employed in combination, have proven successful in breaking the cycle and beginning the path to maturity and performance.

  • Leadership must adopt an operational performance discipline proven effective by industry (e.g., Lean) and boldly profess its importance.  They must also learn from the way industry employs the discipline and not create a bureaucracy-heavy governmental version.
  • Adopt Hoshin Planning
  • Enforce process oriented design and require matrix based design and/or design for Lean Six Sigma for all software systems
  • Eliminate individual performance plans and replace them with team based performance plans.  Hold the teams accountable.
  • Eliminate decision-making via slide deck in favor of real-time data dashboards

Seriously, something has to change.  It is unacceptable for our government organizations to continue the never-ending cycle of immature operations.  If we are able to create high-performance government operations, government employees will enjoy a more rewarding work life and the citizens of the nation will be much better served.  It just makes sense.

Federal Shared Services

I am providing this post on a matter of importance to anyone under the shadow of the U.S. Federal Government.  The topic is something government insiders call “Shared Services”.  For the non-insider, Shared Services is simply back office consolidation or centralization, something anyone with a career in industry has most likely experienced one or more times.  According to Wikipedia, Shared Services is:

The provision of a service by one part of an organization or group, where that service had previously been found in more than one part of the organization or group. Thus the funding and resourcing of the service is shared and the providing department effectively becomes an internal service provider.

The case for Shared Services (back office consolidation) is academic and has been around since the industrial revolution.  Simply put, economies of scale from consolidation of functions like HR, payroll, finance, IT, purchasing, fleet management, and travel management can drive significant savings through reductions in manpower, unified technology, reduced office space, and high-volume buying power.  In addition to these savings opportunities, operational effectiveness and efficiency can be improved through better training, sharing of best practices, alignment of culture, improved chain of command, and so forth.  The classic downside of too much consolidation is twofold: (1) too much power is wielded by back office organizations, such that mission-centric operations (service, delivery, sales) spend excessive time with back office bureaucracy, taking them away from value-adding activities.  This is what many call “the tail wagging the dog.”  (2) The balance between processes and systems tailored to local and organizational needs versus standardization into a single approach focused on savings swings too far toward rigid standardization.  This takes away the organization’s ability to serve the customer and to shift with changing market and environmental demands.

In industry, this type of consolidation typically takes place as part of a merger or acquisition.  It is one of the first places value is sought in post-M&A activities.  The process involves numerous planning and design activities followed by years of consolidation work, and it rarely occurs according to plan.  Nonetheless, the processes and techniques for effective back office consolidation are well known to industry experts and the various consulting firms supporting these efforts.  I have personally been involved with a number of M&A situations, from large-scale acquisitions at Verizon to small mergers among IT service providers.  I have never seen one go exactly as planned.  What I have witnessed is that the ones where the outcomes were sensible, clearly communicated, measured, and rewarded were ultimately successful, and the ones where outcomes and synergies were mysterious and excessive energy was placed on processes, systems, governance, and so forth ultimately ended in failure.  This is not to say that processes and systems are not important, because they are very important, but they are not the goal.

Since the George W. Bush Administration there has been some form of push in government toward back office consolidation (Shared Services). They called it creating lines of business. The Obama Administration saw a significant move toward Shared Services for financial, payroll, and HR back office functions.  More financial and payroll than HR, but at least there was movement.  Payroll, in particular, is a fairly well evolved Government Shared Service.  At the time, several financial shared services providers were established: the Department of Agriculture’s National Finance Center; the Department of the Interior’s Interior Business Center; the Department of Transportation’s Enterprise Services Center; and Treasury’s Administrative Resource Center.   One can see how financial management shared services is a logical thing for Treasury, but why the agencies in charge of agriculture, forests, and transportation would somehow be the right place for this seems odd.  Being closer to the matter than most, I know the rationale was that these agencies were good at financial management and they were also willing to take on the role as Shared Services provider.  There is some logic to this, but will an approach like this lead to a sensible business architecture for our government if all Shared Services are migrated in this way?

To date, the financial and payroll lines of business are the largest intentional initiatives by the civilian side of Government and the Department of Defense has seen significant and long term benefits from organizations such as the Defense Finance and Accounting Service (DFAS).  A recent analysis of DFAS cost per transaction shows costs comparable to industry providers of similar services.  The jury is still out on the financial line of business and we know firsthand that significant workforces still exist in the agencies that were supposed to divest these financial capabilities. These people continue on for the purpose of interfacing and translating the operations with the Shared Services providers.  Does that make sense?

Most recently, a number of agencies and offices are jumping on the shared services bandwagon in response to President Trump’s executive orders on reorganization and the establishment of a White House Office of American Innovation.  Shared Services is a buzz around DC, driven by the tone from both the Administration and the Hill being one of reduced bureaucracy and reduced Government cost to America.

So besides the buzz, what is the Establishment actually doing to consolidate and reduce cost to the Taxpayer?  Our research shows that the only official, funded, and operating entity in place is a small office buried under the Office of Government-wide Policy (OGP), which is under the General Services Administration (GSA), staffed with a small number of Government employees and called the Unified Shared Services Management (USSM) Office.  The USSM is in place to define and oversee shared services.  In October 2015, the USSM helped establish a Shared Services Governance Board (SSGB), a board of executives from what looks like 13 Federal Agencies.  We can find no evidence that the SSGB has published any decisions, plans, or guidance.

Questions abound.  Is the Administration serious about reducing the cost of Government through back-office consolidation (a.k.a. Shared Services)?  Are the people that work in these agencies capable of migrating to shared services?  Will our political cycles tolerate the time it takes to execute and assess the migration to a shared service?

To this point, I for one am not encouraged.  An analysis of the single product of the USSM, their shared services migration framework called the M3, shows that it lacks critical elements, emphasizes the development of even more bureaucracy early in the process, is technology-centric, and lacks a focus on results.  The mere fact that USSM decided its first task was to spend time and money on a migration framework is a sign that a business mindset is lacking in this organization.  Further, most articles and commentary from Government leaders openly discussing shared services are IT-centric, extolling Software as a Service (SaaS) as a magic bullet that will solve our back office woes.  The most robust document published on the subject is the Federal Shared Services Implementation Guide published by the Federal CIO Council in April 2013.  While thorough, it lacks a clear approach and completely ignores the reality that savings must be realized through a reduced workforce.  In fact, we can find nothing published by the Government that discusses the most common form of savings in Shared Services: reduced headcount.  Rather, Agency representatives attribute inefficiencies to a lack of investment in their IT infrastructure and call for increased buying power.  I am not sure how that reconciles with the billions spent on the technology firms with high-rent offices around the beltway, but somehow we are supposed to believe that the Government does not spend enough on technology and does not already have significant buying power.  It is as if the people formulating the current approach to shared services want to ignore industry best practices and lessons learned, as well as decades of Government IT failures, add bureaucracy as a way to create efficiency, and let the agencies decide the service architecture of the Federal Government in a haphazard manner.

So what will it take for our Government to be successful with shared services?  I have discussed this topic with experts in my circle.  These are people with solid industrial experience, relevant degrees, and strong familiarity with Government.  Here are a few things we all agree on.

  1. The Government needs to accept and clearly communicate the fundamental premise of consolidation, which is reduction of manpower. More efficient and effective technology investments are also possible, but they are secondary. This business of claiming impossible-to-measure efficiency gains has to stop.
  2. Focus on results, not oversight, governance, methodologies, boards, playbooks, etc. Everything you need to get this done has already been invented by industry.
  3. Congress needs to develop a sensible architecture for the executive/administrative branch. Going back to the fact that the people responsible for trees and chickens are now also providing financial shared services, does this really make sense in the long run?
  4. Shared services need to be studied and migrated in a holistic manner avoiding rather than encouraging more IT spending. It is the people that will ultimately make this a success, not technology.  Once a shared services line of business is established and at an acceptable level of performance and maturity, then we will know enough to start considering technology investments.
  5. A massive infusion of industry-based people is required. The people leading the charge cannot be career Federal employees.  Rather, the steering committee must be a mix of industry and Government experts, and the industry experts must have the support of the President so they are not dismissed by career Federal leaders.  An industry-style culture, approach, and metrics are vital.  Further, anyone who comes from industry to become a Federal employee leading this charge should be hired on a temporary basis, and safeguards such as cooling-off periods and conflict-of-interest restrictions must be put in place to mitigate fraud and abuse.  The government-wide shared services initiative cannot establish its own bureaucracy.
  6. Lastly, a sense of urgency must be established along with incentives and punishments. It is clear from the timeline of Shared Services dating back to the Bush Administration that things are not moving fast.  The current administration will be gone before anything significant happens unless an acute sense of urgency is created via the appropriations process.

As a citizen who loves our country, living near the capital city, and working with civilian and defense organizations, I truly hope our Government can successfully move to shared services.  There are numerous opportunities for tremendous savings and improved quality of service in the back office.  This is money that can be diverted to important missions or used to pay down our escalating debt.  In this regard, I am hopeful.  All indications are that the Administration wants to do this for us.  Let’s hope the right people are put in charge and the process gets rolling soon.

Agile Process Improvement

One of my favorite sayings is “if you want to learn something new, read an old book.” For years now, we have been hearing about this thing called Agile Development. It seems there are numerous definitions of what that is and even more variations of Agile in practical execution. I have seen everything from rigid compliance with modern Agile Development standards, such as Agile Unified Process, to stumbling through a series of high-paced mistakes and calling it an Agile approach. The relevance of an Agile approach to software development in the modern age is clear. Numerous forces have converged to make the case for Agile very compelling: cloud-based systems; rapid technological change; useful standards; platform-based development; and a growing knowledge gap between IT experts and business leaders. We have all witnessed, or at least heard of, high-profile software failures using drawn-out design engineering and waterfall approaches to software development. Examples include the massive Government HR systems, Global Combat Support Systems, and countless industry ERP systems.

Many Agile Development texts, sites, etc. make the connection between Lean and Agile, recognizing that, philosophically, Agile is a modern adaptation of the agile process improvement methods developed by the likes of W. Edwards Deming, Walter Shewhart, and Joseph Juran as far back as 60 years ago. Yep, that’s right, Agile is not new. Oh sure, there’s a bunch of new terms and techniques and an emerging mess of bureaucracy of debatable merit, but the core value of Agile is the nested set of agile experiments executed in concert as a learning system to drive superior outcomes in short order. My first exposure to effective implementation of Agile software development was from a speaker at a Lean conference in 2011. The CEO of Menlo Innovations, Richard Sheridan, gave a fantastic speech on the Lean agility his company was using to reduce mistakes, meet schedules, improve culture, and drive learning. What they were doing was clearly rooted in Lean principles and concepts such as cellular work, one piece flow, in-line quality, and visual management.

Now for the irony: the plethora of so-called process improvement experts haunting the halls of our corporations and Government agencies are still stuck in the dogma of highly formal, sluggish methods such as DMAIC. Our consultants regularly see process improvement efforts that span 12 or even 24 months to implement improvements that were generally conceived at the start of the project. In my role as a Master Black Belt for a client, I recently sat in on a final tollgate review so that I could bless a project and allow a trainee to receive his Lean Six Sigma belt certification. This particular project involved a simple change of process and policy in only one region, yet took two years to complete. The team rigorously used analytic tools and went through methodology tollgates. After the presentation was over and I did my obligatory questioning on tools, measurements, and ongoing improvement, it was decided the project was compliant and the trainee would be anointed. Afterward I asked the trainee if what they implemented was different than what he expected on day one. His answer: “no, my team knew this was how to fix it.” So they took two years to do something that could have been done in one month using a more agile approach. The marginal utility of the extra 23 months was a bit more consensus and more certainty that the changes would work, but that's about it.

I believe one reason this happens is because we have a massive community of Lean Six Sigma professionals that skipped over the foundational principles of Quality and jumped right into DMAIC-centric Lean Six Sigma. To make matters worse, many of these LSS professionals have little to no practical experience in production or management. They have made a career of being an outsider that merely facilitates other people's improvement efforts. One does not have to think too hard to see the potential conflict of interest. As a leader in the process improvement community, I tell you there must be a change. Process improvement experts must adopt, adapt, and use more agile methods where appropriate, and in my experience that is at least 50 percent of all process improvement efforts.


At MSI, we have developed and proven an Agile process improvement approach driving rapid results for numerous clients. Of course, we have seasoned experts with many years of operational process improvement experience, so we can see the right path well ahead of LSS belts that simply follow a methodology such as DMAIC without knowing where it is taking them or how long it will take to get there. MSI's approach can be applied in part to isolated projects or in whole to serious transformation efforts incorporating a portfolio of improvement efforts. Applied in whole, the approach incorporates a rapid goal setting and high-level systems thinking design endeavor to paint a clear and quantifiable picture of the enterprise's future state. We then use simplified matrix management, chartering, and reporting techniques to prioritize, launch, and track the work. The process improvement work itself is predominantly a portfolio of nested PDCA projects taking 30 days or less each. Each project cycle generates common outputs feeding control/measurement, process architecture, training, and policy. Further, each cycle improves upon best practices and informs decision making on future cycles. Hence, the approach is a rapid execution, high functioning system that learns, informs, and improves as it rolls through the enterprise. When appropriate, methods such as DMAIC and DMADV are employed for larger scale or more design oriented efforts, but they are also conducted with an eye toward agility, understanding that time is our enemy and the perfect solution may also be the irrelevant solution if it is a day late.

Extreme Customer Loyalty

Here are a few tips on creating an organization with extreme customer loyalty.

With the holiday shopping season kicking into gear, it seems like a good time to discuss the importance of customer satisfaction and loyalty.  Even if you are not in a retail business depending on the holiday season to make your annual numbers, it is important to make sure you understand your customers so they return year after year.  Okay, so if you work for a Government agency or some other organization that does not have traditional customers, you are not excused.  Keep reading.  The fact is, Government organizations, even the military, have customers.  They may not be clearly defined, but they exist.  Government organizations also have to fight for budget and human resources.  Often your customer is the very organization that grants your budget and resources.  Understanding your customers and their ever-changing needs is vital to defining Value, the guiding light for Lean Agile delivery of products and services.  If you are an organization espousing Lean as part of your strategy for organizational excellence, you must understand your customers or you do not understand Value and hence cannot be Lean.

Customer Loyalty Cycle

To begin, let’s review a few key facts about customer satisfaction.

Customer satisfaction is:

  • A mix of perception and reality
  • A moving target
  • Highly correlated to employee satisfaction
  • Driven by process and product excellence
  • Quantifiable
  • Critical to the success of almost every organization
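On the “quantifiable” point, one widely used satisfaction measure is the Net Promoter Score. The post does not name a specific metric, so treat this as an illustrative sketch with made-up survey data:

```python
# NPS sketch: respondents answer "How likely are you to recommend us?"
# on a 0-10 scale; the score is the percentage of promoters (9-10)
# minus the percentage of detractors (0-6).

def nps(scores: list) -> float:
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

survey = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]  # hypothetical responses
print(nps(survey))  # 5 promoters, 2 detractors out of 10 -> 30.0
```

Tracked per customer segment over time, a simple number like this makes the “moving target” visible.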

In my years of study and consulting with various organizations, I have come to believe that organizations with fiercely loyal customers subscribe to two key principles of extreme loyalty.

  1. Directly addressing customer satisfaction does not work. Successful organizations excel at the root-cause drivers of customer satisfaction.
  2. Customer satisfaction, employee satisfaction, process excellence, and corporate performance are bound in a continuous cycle of performance.

Consider the organizations known for extreme customer loyalty: Apple, Toyota, BMW, Walmart, McDonald’s, Twitter, Amazon, etc.  Each of these organizations has a clear market strategy based on a detailed understanding of customer segments and they effectively manage the drivers of customer satisfaction rather than react to the symptoms. Business Insider has an interesting, albeit dated, list of brands with fierce loyalty, http://www.businessinsider.com/brand-loyalty-customers-2011-9?op=1.

Consider Toyota’s focus on quality and Lean process with products that are very sensible for their target customers.  I have always found it interesting that lower quality auto manufacturers advertise the quality of their vehicles while consistently being beaten out by Toyota, which sells more and owns the global customer perception of Quality. Consider, alternately, Apple and its brash strategy of forging new markets.  It seems to work for them time and again, because they know their customer base eagerly awaits the next iThing. At the same time, Hewlett Packard and others try the same strategy, but fail time and again.  Different customer base means different strategy.  McDonald’s provides tremendous training and career opportunity for its employees.  All of these organizations excel in the root drivers of customer loyalty and almost all of these organizations understand that success is a cycle of learning, improving, and delivering.  Without learning cycles at all levels, these incredible brands would not enjoy such success.

Here are a few quick tips for understanding and managing your organization for extreme customer loyalty.

  1. Begin with the end in mind. Understand that you must work toward a model that aligns customer segments, lines of business, internal operations, products and services into a measurement and management program that drives customer loyalty.
  2. Analyze and Segment Customers
    • Clearly identify customer segments
    • Use key attributes and key performance metrics
    • Understand that perception and reality are not always the same:
      • Perception must be measured and managed
      • Reality must be measured and managed
    • Collect data with timing and method in mind:
      • Timing: at time of service and six months after service
      • Direct methods: web, paper, phone, in person
      • Indirect methods: process data, returns, price analysis, social media
    • Create customer segments and further analyze each segment by developing Kano models and a House of Quality
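For readers who want to try the Kano analysis mentioned above, here is a minimal sketch of how paired Kano survey answers map to categories. The lookup table is the standard Kano evaluation matrix; the feature and example answers are hypothetical:

```python
# Kano classification sketch: each respondent answers a functional
# ("How do you feel if the product HAS feature X?") and a dysfunctional
# ("...does NOT have feature X?") question on a five-point scale.

SCALE = ["like", "expect", "neutral", "tolerate", "dislike"]

# Rows = functional answer, columns = dysfunctional answer.
# M=Must-be, P=Performance, A=Attractive, I=Indifferent,
# R=Reverse, Q=Questionable
KANO_TABLE = [
    # like   expect  neutral tolerate dislike   <- dysfunctional
    ["Q",    "A",    "A",    "A",     "P"],     # functional: like
    ["R",    "I",    "I",    "I",     "M"],     # expect
    ["R",    "I",    "I",    "I",     "M"],     # neutral
    ["R",    "I",    "I",    "I",     "M"],     # tolerate
    ["R",    "R",    "R",    "R",     "Q"],     # dislike
]

def kano_category(functional: str, dysfunctional: str) -> str:
    """Classify one respondent's answers for one feature."""
    return KANO_TABLE[SCALE.index(functional)][SCALE.index(dysfunctional)]

# A customer who would like the feature present and would dislike its
# absence sees it as a Performance attribute.
print(kano_category("like", "dislike"))  # P
```

Tallying the categories across respondents in a segment tells you which features are must-haves, which differentiate, and which nobody cares about.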

The Kano Model of Customer Satisfaction


The House of Quality associates key customer requirements with product and service attributes.
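As a rough sketch of the arithmetic behind a House of Quality: each customer requirement's importance is multiplied by the strength of its relationship to each product or service attribute (commonly scored 9 = strong, 3 = moderate, 1 = weak), and the column totals rank the attributes. All requirement and attribute names below are hypothetical:

```python
# House of Quality priority sketch: importance (1-5) x relationship
# strength (9/3/1) summed per attribute.

importance = {"fast checkout": 5, "accurate orders": 4, "friendly staff": 3}

# relationships[requirement][attribute] = strength of the link
relationships = {
    "fast checkout":   {"staffing level": 9, "POS system": 3},
    "accurate orders": {"POS system": 9, "training hours": 3},
    "friendly staff":  {"training hours": 9, "staffing level": 3},
}

scores = {}
for req, attrs in relationships.items():
    for attr, strength in attrs.items():
        scores[attr] = scores.get(attr, 0) + importance[req] * strength

# The highest-scoring attribute has the most leverage on satisfaction.
for attr, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(attr, score)
```

The ranked totals tell the organization which internal attributes to invest in first.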


  3. Develop detailed measurement and management plans for each customer segment.  Make sure accountability for continued improvement of the relationship with each segment is clearly defined and make sure each Line of Business knows their role in overall customer management.


  4. Develop a detailed plan of action for near term customer loyalty and satisfaction improvement along with implementation of the mechanisms that will ensure ongoing success with each customer segment.  I recommend the use of the Hoshin Planning Matrix as a way to establish a five year plan with clear goals and accountability.  See Mitigating the Effects of Baseline Budgeting for more information on the Hoshin Planning Matrix.  When conducting your customer loyalty and satisfaction improvement initiative, remember the following.
  • Customer satisfaction is A priority, not the priority
  • Satisfied employees drive satisfied customers
  • Teach employees about customer satisfaction
  • Define customer segments, models, & metrics
  • Employ overt and covert (ubiquitous) satisfaction management methods

The point here is that extreme customer loyalty comes from an understanding of what is of value to clearly defined customer segments and then focusing on the core competencies that drive the effective and efficient delivery of value to the customer.  Customer loyalty must be addressed from the inside out.  It is like personal health.  One can eat healthy, exercise, and sleep well to stay out of the hospital, or one can ignore the fundamental drivers of health and medicate problems as they arise.  Eventually, the problems become too many to medicate, the medications begin to interact, and a death spiral begins.  There is a long history of organizations that ignore the drivers of customer loyalty and instead waste time and money on customer satisfaction mitigation (symptom) strategies such as warranties, clubs, and price manipulation.  Some of these companies include Blockbuster, Borders Books, Circuit City, and the long list of home improvement chains run out of business by Home Depot.  This is not a list on which you want to be.

So while the holiday season is here and everyone is out being a customer, think about what makes you loyal to a brand, a company, or an organization then consider starting the new year with a plan to get better at managing the underlying root drivers of customer loyalty for your organization.

Mitigating the Effects of Baseline Budgeting


This posting is on a topic of particular concern to me.  As someone who has worked for and provided consulting services to major corporations and our Federal Government for more than 20 years, I have found baseline budgeting to be at the root of tremendous waste, bloated budgets, and overgrown organizations.  It is my sincere hope to see our Government take serious steps to reduce the effects of baseline budgeting, for the sake of us all.  Here is some content from a concept paper I recently authored on the subject.  Click here to download the entire paper Mitigating Effects of Baseline Budgeting.  Also, please post your comments and ideas on other ways to mitigate the effects of baseline budgeting.

Baseline budgeting is the financial planning practice in which an organization has an annual budget developed and approved based on a baseline of spending plus requests for additional funding in each financial planning cycle.  The baseline is based on the previous year’s approved spending.  Additional funding is based on many factors including inflation, cost of materials, new programs, new technology, and other forms of growth.  This is the approach to budgeting predominantly used by Government agencies and some large businesses.  The most apparent problem with baseline budgeting is the assumption that current spending levels are the appropriate baseline or “bottom line” of spending.  This assumption is problematic for many reasons.  Given that financial planning cycles range from 18 months (industry) to four years (DoD), numerous things can reduce the required baseline of spending for an organization.  Baseline budgeting is a root cause of inefficient use of Government resources.  The financial costs are measurable and easy to comprehend.  The human and performance costs are nearly impossible to measure on a large scale but, as we have witnessed, can outpace financial costs by several orders of magnitude, especially when baseline budgeting operations manage expensive end items (planes, tanks, buildings, or human capital).

Ways to reduce the effects of baseline budgeting, in order of importance, include:

  1. Hoshin Kanri (a.k.a. Hoshin Planning, Goal Deployment)
  2. Standardized Business Case Analysis[1]
  3. Cost/budget reduction based performance incentives

Hoshin Kanri, a Proven Method for Strategic Management

Hoshin Kanri is the most powerful technique for mitigating the problems with baseline budgeting. It is used by many of the world’s highest performing corporations including Toyota, General Electric, and Hewlett Packard.  It is just now starting to get traction in Government. Hoshin Kanri connects tasks to strategy through simple step-by-step planning, commitment to the plan, and rigorous management to the plan through a set of tools that continuously align everyday activity to strategic goals.



In addition to the disciplined process for performance and financial planning established by Hoshin Kanri, the tool that specifically helps mitigate the effects of baseline budgeting is the Goal Deployment Matrix, shown below.  The Goal Deployment Matrix is particularly useful as a tool against baseline budgeting in that it captures all goals and objectives for the organization, plots them against the organization’s core functions and initiatives, and establishes ownership and performance metrics both horizontally and vertically.  When fully implemented, the Hoshin Goal Deployment Matrix is a matrix based catalog of every function (operational and developmental) within an organization and identifies the value each of these functions is supposed to drive.  This is used to mitigate baseline budgeting through enforcement of a budgeting process that requires all budget line items to be associated with the Goal Deployment Matrix.  If a budget line item does not have a clear place on the Goal Deployment Matrix, then it is not aligned to value and it is waste.  All new developmental initiatives are vetted against the Goal Deployment Matrix to again identify their value and place within the plan.
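The enforcement idea described above can be sketched in a few lines: treat the Goal Deployment Matrix as a set of (goal, function) cells and flag any budget line item that does not map to one. The goals, functions, and line items below are hypothetical:

```python
# Sketch of vetting budget line items against a Goal Deployment Matrix:
# anything that has no place on the matrix is not aligned to value and
# is flagged for review.

goals = {"reduce cycle time", "improve readiness"}
functions = {"maintenance ops", "training", "IT services"}

# line item -> (goal, function) cell it claims to support; None if unmapped
budget = {
    "depot tooling upgrade": ("reduce cycle time", "maintenance ops"),
    "simulator contract":    ("improve readiness", "training"),
    "legacy portal hosting": None,
}

def vet(budget, goals, functions):
    """Return the line items with no valid cell on the matrix."""
    flagged = []
    for item, cell in budget.items():
        if cell is None or cell[0] not in goals or cell[1] not in functions:
            flagged.append(item)
    return flagged

print(vet(budget, goals, functions))  # ['legacy portal hosting']
```

In practice the matrix would also carry owners and performance metrics per cell, but the alignment check itself is this simple.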

Standardized Business Case Analysis Drives Spending Discipline

Standardized business case analysis is another method that can help mitigate the negatives of baseline budgeting.  Though not as powerful, business case analysis is a great tool in conjunction with Hoshin Kanri.  Business case analysis primarily addresses the expansion of the baseline by forcing all new starts and developmental efforts through a standard business case process.  In each case, new starts must be vetted against a balanced set of criteria, must add value, and must align with strategic goals.  Further, the standard business case process forces a set of process steps and approvals that cannot be short circuited to enable end of year spending. The diligence of the process forces management to think more strategically about spending activities.

Individual Performance Metrics, a Useful Tool for Specific Problems

Perhaps the most difficult technique for reducing the effects of baseline budgeting is the application of individual performance metrics targeted at cost savings.  This is also the most risky, because personal performance metrics drive personally motivated behaviors.  These behaviors may not be best for the organization.  In other words, if people are incented to reduce cost for personal gain, they may do so at the expense of increased market share or improved customer satisfaction for the organization.  However, cost reduction individual performance metrics can be used sparingly when targeted at specific functions or offices within an organization.

Improving Financial Planning Processes, The Time is Now

It has been our observation during nearly two decades of management consulting that baseline budgeting is a root cause of significant Government inefficiency.  It is the source of compounding financial excess and irrational management behavior.  It is not feasible for Government to perpetually increase its percentage of financial and human resources consumed.  Sequestration and recent cuts in Government spending have created an opportunity for new ways of managing the taxpayers’ dollar.  It is incumbent upon Government leaders to seek out and employ new strategic planning, financial planning, and human capital management techniques that ensure Government agencies build upon and institutionalize recent change.


[1] GAO, Information Technology: OMB Needs to Improve Its Investment Guidance (2011). Retrieved January 29, 2015, from http://www.gao.gov/assets/590/585915.pdf

Cloud Computing is Great, but …

While cloud computing applications have many benefits for business users of all sizes, Google is helping us Cloud app users remember there is a downside.  Cloud computing applications for business are seemingly limitless.  One can run accounting, HR, sales, order processing, service, project management, collaboration, and just about any other business function via Cloud applications for reduced cost, on modern technology, with ubiquitous access.

However, what if you come to rely on an application for a critical part of your business and the provider of that application decides to make a global change that negatively impacts your processes, or what if they shut down the application completely?  That is what Google is doing with the popular iGoogle portal, which allows users to view a single page that aggregates all of their favorite apps, which Google calls gadgets.  While iGoogle is not likely a mission critical app for many businesses, it is a very popular tool, great for collaboration and an easy view of data from various sources.

Google warned users of the impending change with an announcement on July 3, 2012.  Many thousands of users have asked Google to keep iGoogle running, but to no avail; one petition alone reached 10,000 signatures: http://www.change.org/petitions/google-don-t-kill-igoogle.  iGoogle will be gone on November 1st.  Tens of thousands of users will have to find new and probably less efficient ways to view their data.

The point here is that while Cloud computing is great for business of all sizes, you need to make sure you fully understand the future plans a provider has for an app before you sign that agreement. Your organization will quickly become dependent upon each app and change is costly and difficult.  Have a plan B in place.  Know who the competitors are and what they offer.  Also make sure you can download all needed data in a usable format in case you need to bring the application back in house or switch to another provider.

Building a Great Mission Statement

It is the time of year when many organizations are doing their strategic planning for the year ahead.  In the spirit of the season, here is a strategic planning tool for revising or building anew a mission statement for any size or scope of organization.  The approach is simple and effective.  It originates from the management discipline known as Hoshin Kanri.  I have used it very successfully for many years in numerous environments including telecommunications, product development, defense, and information technology.

The method is very simple.  It begins with a Mad Libs (remember those?) style fill-in-the-blanks exercise.  The template is shown below.

Mission Statement Template

The mission statement building process will take from one to four hours depending on the size and complexity of your organization.  Bring your team together, include a representative cross-section from leadership, management, and staff, and make sure you have a good facilitator.  Building the mission statement is fun and easy from this point.  Using a large white board, the facilitator should solicit answers to each blank line, beginning with purpose.  Get everyone to participate and throw out some stimulating ideas.  For example, you can propose to the team that the purpose of the organization is “to put all competitors out of business” or “to dominate the world.”  These are not likely realistic or productive as a true purpose, but they stimulate discussion and get people to loosen up and participate.

I like to begin the process with an example mission statement built using the process.  I often use a mission statement leveraging my past as a landscaping business owner, back when I was in college.

Lawn Mission Statement

In this simple and effective mission statement, we identify that our ultimate goal is to be the regional leader that protects its customer base and that we recognize our most important capabilities are with our crew chiefs and our equipment.  This creates both operational and strategic focus for the organization.

I regularly see mission statements from multi-billion dollar corporations that are less concise and less meaningful.  Yet, those mission statements typically take months to develop.  This approach can develop a great mission statement in hours.  Even if your mission statement is not open for revision, going through this exercise is a great early phase tactic for your strategic planning team.  It helps to get everyone focused and often exposes strategic gaps or lack of alignment.  If you choose to give this method a try, please send a comment to let us know how it worked out.


Simple and Effective Process Tool

This month’s post is about a simple, effective, and often overlooked process mapping technique called the trans-interaction diagram, also called a sequence diagram.  I have been using these diagrams for more than ten years with great success, especially when there is a software development element to the process improvement project.  Trans-interaction diagrams are great facilitation tools and they can pack a ton of useful information into a simple to understand document.  Trans-interaction diagrams are rarely taught in Lean Six Sigma classes, which is a shame.  I teach all Black Belts that I mentor how to use the trans-interaction diagram and, without fail, they find it to be a useful tool in defining, analyzing, designing, and validating improved processes.  Trans-interaction diagrams are particularly useful in documenting structured, document-centric processes and structured projects.  I believe anytime software is being developed to take an organization paperless or to automate a document/project flow with a finite number of statuses the item can enter and exit, the trans-interaction diagram should be used.

Trans-interaction diagrams are made of a series of vertical bars that represent the “states” in which a document or project can reside.  For example: draft, technical review, legal review, financial review, pricing, clarifications, cancelled, approved, archived.  These are all answers to the question “What is the status of the document? (e.g., application, proposal, order)”.  The vertical bars are connected by transition arrows that indicate the paths the documents can follow from one state to another.  Below each state, you can pack in loads of information relevant to software developers, policy writers, and managers.  You can include information such as who has read and write access at each state; the owner; time metrics; system actions; decision criteria; and rules.  A simple example of the trans-interaction diagram for a generic paperless document system is shown below.


You can easily see how the trans-interaction diagram communicates a wealth of information about the document or project.  Personnel in your organization should be able to answer the basic questions answered by the trans-interaction diagram, such as: What is the status of my request?  What happens next?  How long does it take?  Who can help me?
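As a sketch of how a trans-interaction diagram translates into software, the states and their allowed transitions can be encoded directly as a small state machine. The states below loosely follow the generic paperless document example, and the specific transition paths are assumptions for illustration:

```python
# State-machine sketch of a trans-interaction diagram for a generic
# document workflow. In practice each state would also carry owners,
# access rules, and time metrics, as described above.

TRANSITIONS = {
    "draft":            {"technical review", "cancelled"},
    "technical review": {"legal review", "draft", "cancelled"},
    "legal review":     {"approved", "draft", "cancelled"},
    "approved":         {"archived"},
    "cancelled":        {"archived"},
    "archived":         set(),
}

def advance(state: str, new_state: str) -> str:
    """Move a document to new_state, enforcing the diagram's paths."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

# Walk one document through a happy path.
state = "draft"
for step in ["technical review", "legal review", "approved", "archived"]:
    state = advance(state, step)
print(state)  # archived
```

Encoding the diagram this way gives developers an executable specification and gives you a QC check: any transition the software attempts that is not on the diagram is rejected.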

The trans-interaction diagram is also a great way to facilitate people in your organization toward consensus on how things should be processed and what rules should exist in the processing.  By asking people what can happen next, who can do it, why, and so forth, staff can quickly see the paths documents can take as well as the impact of not doing things right the first time.

If it is decided to automate your process using a COTS system such as SharePoint, a BPM tool, or custom software, developers can get a great start on understanding what you want as your final product through review of the trans-interaction diagram, and you can use the trans-interaction diagram as a QC tool to ensure the developers created the software as specified.

I hope you find this tool helpful for improving your business processes.  For more information or support in using this tool, send me an email at gsieber@msi6.com.
