Twelve Years of Change and Nothing Has Changed

I remember when I first started working with the Federal Government in 2005, my first big project was with an organization that had been around for more than 100 years.  I remember being nervous about my lack of organizational knowledge and thinking they must have very complex and mature business processes, since they had been doing essentially the same thing for more than a century.  My role was to provide process engineering expertise in the improvement of a truly mission critical process.  Knowing only this before the first meeting, I envisioned complex analysis of detailed process and performance data, modeling and simulation of various alternatives, followed by integration with enterprise systems along with related instrumentation and reporting.  I envisioned seasoned experts running the processes.  On the first day, I was slapped with the reality that this mission critical process was total anarchy.  The information tools were antiquated and based on a platform that even in its prime was a bench warmer.  The people manning the process did not even see it as a process, just work, and they had no subject matter expertise.  Many items were not processed using the information system at all.  Performance of the process depended entirely on the person championing the item.  The customers of the process had no transparency.  There were numerous unnecessary handoffs and reviews.  You get the point.  The process, despite being around for one hundred years, was totally immature, inefficient, and did not deliver results.

Does this sound familiar? Have you also been part of dozens or even hundreds of "improvement projects" in the last decade, only to look around and see that things are still very manual, lack transparency, perform poorly, and are nearly impossible to measure?

Okay, the truth is that some processes have been improved over the last decade, and in rare cases have continued to improve.  I will cite what DoD has achieved in airfields, maintenance depots, and arsenals as the best examples of transformation that have stood the test of time.

However, we still see the preponderance of processes and functions operating at a very low level of performance and maturity.  Process performance is rarely measured.  The measurements that do exist are very manual and not reliable.  We have to ask ourselves why.  Why, with all this Lean, Six Sigma, Process Re-engineering, and Process Automation, combined with massive IT investments and enterprise BI and reporting tools, are governmental processes still immature, hard to measure, and riddled with mistakes?  Why is it that industry is able to create efficient and internationally competitive processes through the same investments?  Here are a few differences between industry and government that explain the problem.

INDUSTRY | GOVERNMENT
Focus on customers and profit | Focus on work and tasks
Clear and immediate impact | Abstract impact
Consequences for failure | Consequences lagging and rare, or none at all
Career progression based on performance | Career progression based on tenure, training, & politics
Shared accountability, shared rewards | Individual performance plans
High demands, intense pressure for outputs | Pressure to develop reports, letters, policies, and plans
Deep specialized expertise | Generalists often with mismatched skillsets
Process centric IT systems | Function centric IT systems
Constantly seeking newer and better ways to beat the competition | Hoping no more improvement programs bother them
Poor leadership swiftly punished, good leadership significantly rewarded | Poor leadership has to be waited out, good leadership constantly moving to the next thing
Management involved with operations; embraces Lean, visual management, collaboration, and data based decisions | Management working to promote personal or political agenda
Our money | Other people's money

Creating a governmental organization that breaks this cycle is no simple task and there is no single answer, but a few key practices have proven successful; when employed in combination, they break the cycle and begin the path to maturity and performance.

  • Leadership adopts an operational performance discipline proven effective by industry (e.g., Lean) and boldly professes its importance.  Leaders should also learn from the way industry employs the discipline rather than creating a bureaucracy-heavy governmental version of it.
  • Adopt Hoshin Planning.
  • Enforce process-oriented design and require matrix-based design and/or Design for Lean Six Sigma for all software systems.
  • Eliminate individual performance plans and replace them with team-based performance plans.  Hold the teams accountable.
  • Eliminate decision making via slide deck in favor of real-time data dashboards.

Seriously, something has to change. It is unacceptable for our government organizations to continue the never-ending cycle of immature operations.  If we are able to create high-performance government operations, government employees will enjoy a more rewarding work life and the citizens of the nation will be much better served.  It just makes sense.

Federal Shared Services

I am providing this post on a matter of importance to anyone under the shadow of the U.S. Federal Government.  The topic is something government insiders call “Shared Services”.  For the non-insider, Shared Services is simply back office consolidation or centralization, something anyone with a career in industry has most likely experienced one or more times.  According to Wikipedia, Shared Services is:

The provision of a service by one part of an organization or group, where that service had previously been found in more than one part of the organization or group. Thus the funding and resourcing of the service is shared and the providing department effectively becomes an internal service provider.

The case for Shared Services (back office consolidation) is academic and has been around since the industrial revolution.  Simply put, economies of scale from consolidating functions like HR, payroll, finance, IT, purchasing, fleet management, and travel management can drive significant savings through reduced manpower, unified technology, reduced office space, and high volume buying power.  In addition to these savings opportunities, operational effectiveness and efficiency can be improved through better training, sharing of best practices, alignment of culture, improved chain of command, and so forth.  The classic downside of too much consolidation is twofold: (1) back office organizations wield so much power that mission centric operations (service, delivery, sales) spend excessive time on back office bureaucracy, taking them away from value adding activities; this is what many call "the tail wagging the dog."  (2) The balance between processes and systems tailored to meet local and organizational needs versus standardization into a single approach focused on savings swings too far toward rigid standardization.  This takes away the organization's ability to serve the customer and shift with changing market and environmental demands.

In industry, this type of consolidation typically takes place as part of a merger or acquisition.  It is one of the first places value is sought in post M&A activities.  The process involves numerous planning and design activities followed by years of consolidation work, and it rarely occurs according to plan.  Nonetheless, the processes and techniques for effective back office consolidation are well known by industry experts and the various consulting firms supporting these efforts.  I have personally been involved with a number of M&A situations, from large scale acquisitions at Verizon to small mergers among IT service providers.  I have never seen one go exactly as planned.  What I have witnessed is that the ones where the outcomes were sensible, clearly communicated, measured, and rewarded were ultimately successful, while the ones where outcomes and synergies were mysterious and excessive energy was placed on processes, systems, governance, and so forth ultimately ended in failure.  This is not to say that processes and systems are not important, because they are very important, but they are not the goal.

Since the George W. Bush Administration there has been some form of push in government toward back office consolidation (Shared Services); that administration called it creating lines of business. The Obama Administration saw a significant move toward Shared Services for financial, payroll, and HR back office functions.  More financial and payroll than HR, but at least there was movement.  Payroll, in particular, is a fairly well evolved Government Shared Service.  At the time, several financial shared services providers were established: the Department of Agriculture's National Finance Center; the Department of the Interior's Interior Business Center; the Department of Transportation's Enterprise Services Center; and Treasury's Administrative Resource Center.   One can see how financial management shared services is a logical fit for Treasury, but why the agencies in charge of agriculture, forests, and transportation would somehow be the right place for this seems odd.  Being closer to the matter than most, I know the rationale was that these agencies were good at financial management and were also willing to take on the role of Shared Services provider.  There is some logic to this, but will an approach like this lead to a sensible business architecture for our government if all Shared Services are migrated in this way?

To date, the financial and payroll lines of business are the largest intentional initiatives by the civilian side of Government, and the Department of Defense has seen significant and long term benefits from organizations such as the Defense Finance and Accounting Service (DFAS).  A recent analysis of DFAS cost per transaction shows costs comparable to industry providers of similar services.  The jury is still out on the financial line of business, and we know firsthand that significant workforces still exist in the agencies that were supposed to divest these financial capabilities. These people continue on for the purpose of interfacing with and translating operations for the Shared Services providers.  Does that make sense?

Most recently, a number of agencies and offices are jumping on the shared services bandwagon in response to President Trump's executive orders on reorganization and the establishment of a White House Office of American Innovation.  Shared Services is the buzz around DC, driven by the tone from both the Administration and the Hill, which is one of reduced bureaucracy and reduced cost of Government to America.

So besides the buzz, what is the Establishment actually doing to consolidate and reduce cost to the Taxpayer?  Our research shows that the only official, funded, and operating entity in place is a small office buried under the Office of Government-wide Policy (OGP), which is under the General Services Administration (GSA), staffed with a small number of Government employees and called the Unified Shared Services Management (USSM) Office.  The USSM is in place to define and oversee shared services. In October 2015, the USSM helped establish a Shared Services Governance Board (SSGB), a board of executives from what looks like 13 Federal Agencies.  We can find no evidence that the SSGB has published any decisions, plans, or guidance.

Questions abound.  Is the Administration serious about reducing the cost of Government through back-office consolidation (a.k.a. Shared Services)?  Are the people that work in these agencies capable of migrating to shared services?  Will our political cycles tolerate the time it takes to execute and assess the migration to a shared service?

To this point, I for one am not encouraged.  An analysis of the single product of the USSM, their shared services migration framework called the M3, shows that it lacks critical elements, emphasizes the development of even more bureaucracy early in the process, is technology centric, and lacks a focus on results.  The mere fact that USSM decided its first task was to spend time and money on a migration framework is a sign that a business mindset is lacking in this organization.  Further, most articles and commentary from Government leaders openly discussing shared services are IT centric, extolling Software as a Service (SaaS) as a magic bullet that will solve our back office woes.  The most robust document published on the subject is the Federal Shared Services Implementation Guide published by the Federal CIO Council in April 2013.  While a thorough document on the subject, it lacks a clear approach and completely ignores the reality that savings must be recognized through a reduced workforce.  In fact, we can find nothing published by the Government that discusses the most common form of savings in Shared Services: reduced headcount.  Rather, Agency representatives point to a lack of investment in their IT infrastructure as the cause of inefficiencies and to increased buying power as the savings opportunity.  I am not sure how that reconciles with the billions spent on the technology firms with high rent offices around the Beltway, but somehow we are supposed to believe that the Government does not spend enough on technology and does not already have significant buying power.  It is as if the people formulating the current approach to shared services want to ignore industry best practices and lessons learned, as well as decades of Government IT failures, add bureaucracy as a way to create efficiency, and let the agencies decide the service architecture of the Federal Government in a haphazard way.

So what will it take for our Government to be successful with shared services?  I have discussed this topic with experts in my circle.  These are people with solid industrial experience, relevant degrees, and strong familiarity with Government.  Here are a few things we all agree on.

  1. The Government needs to accept and clearly communicate the fundamental premise of consolidation, which is reduction of manpower. More efficient and effective technology investments are also possible, but they are secondary. This business of claiming impossible-to-measure efficiency gains has to stop.
  2. Focus on results, not oversight, governance, methodologies, boards, playbooks, etc. Everything you need to get this done has already been invented by industry.
  3. Congress needs to develop a sensible architecture for the executive/administrative branch. Going back to the fact that the people responsible for trees and chickens are now also providing financial shared services, does this really make sense in the long run?
  4. Shared services need to be studied and migrated in a holistic manner avoiding rather than encouraging more IT spending. It is the people that will ultimately make this a success, not technology.  Once a shared services line of business is established and at an acceptable level of performance and maturity, then we will know enough to start considering technology investments.
  5. A massive infusion of industry based people is required. The people leading the charge cannot be career Federal employees.  Rather, the steering committee must be a mix of industry and Government experts, and the industry experts must have the support of the President so they are not dismissed by career Federal leaders.  An industry style culture, approach, metrics, and so forth are vital.  Further, anyone who comes from industry to become a Federal employee leading this charge needs to be hired on a temporary basis, and safeguards such as cooling off periods and conflict of interest restrictions must be put in place to mitigate fraud and abuse.  The government-wide shared services initiative cannot be allowed to establish its own bureaucracy.
  6. Lastly, a sense of urgency must be established along with incentives and punishments. It is clear from the timeline of Shared Services dating back to the Bush Administration that things are not moving fast.  The current administration will be gone before anything significant happens unless an acute sense of urgency is created via the appropriations process.

As a citizen who loves our country, who is in the unique situation of living near the capital city, and who works with both civilian and defense organizations, I truly hope our Government can successfully move to shared services.  There are numerous opportunities for tremendous savings and improved quality of service in the back office.  This is money that can be diverted to important missions or used to pay down our escalating debt.  In this regard, I am hopeful.  All indications are that the Administration wants to do this for us.  Let's hope the right people are put in charge and the process gets rolling soon.

Mitigating the Effects of Baseline Budgeting

Followers,

This posting is on a topic of particular concern to me.  As someone who has worked for and provided consulting services to major corporations and our Federal Government for more than 20 years, I have found baseline budgeting to be at the root of tremendous waste, bloated budgets, and overgrown organizations.  It is my sincere hope to see our Government take serious steps to reduce the effects of baseline budgeting, for the sake of us all.  Here is some content from a concept paper I recently authored on the subject. Click here to download the entire paper, Mitigating Effects of Baseline Budgeting.  Also, please post your comments and ideas on other ways to mitigate the effects of baseline budgeting.

Baseline budgeting is the financial planning practice in which an organization's annual budget is developed and approved based on a baseline of spending plus requests for additional funding in each financial planning cycle.  The baseline is the previous year's approved spending.  Additional funding is based on many factors, including inflation, cost of materials, new programs, new technology, and other forms of growth. This is the approach to budgeting predominantly used by Government agencies and some large businesses.  The most apparent problem with baseline budgeting is the assumption that current spending levels are the appropriate baseline, or "bottom line," of spending.  This assumption is problematic for many reasons. Given that financial planning cycles range from 18 months (industry) to four years (DoD), numerous things can reduce the baseline of spending an organization actually needs. Baseline budgeting is a root cause of inefficient use of Government resources.  The financial costs are measurable and easy to comprehend.  The human and performance costs are nearly impossible to measure on a large scale but, as we have witnessed, can outpace financial costs by several orders of magnitude, especially when baseline budgeted operations manage expensive end items (planes, tanks, buildings, or human capital).
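To make the compounding effect concrete, here is a minimal sketch with hypothetical numbers (not drawn from any real agency budget) showing how a baseline that is never re-justified grows when each cycle simply adds a modest increase to the prior year's approved spending.

```python
# Hypothetical illustration of baseline creep: each year's budget is last
# year's approved spending plus a modest "additional funding" request.
baseline = 100.0          # starting budget, arbitrary units (assumed)
annual_increase = 0.05    # 5% requested growth each cycle (assumed)

budget = baseline
for year in range(1, 15):
    budget *= 1 + annual_increase

print(f"After 14 cycles the baseline has grown to {budget:.1f}")
# Roughly 198 -- nearly double, even though the underlying need was never re-examined.
```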

Ways to reduce the effects of baseline budgeting, in order of importance, include:

  1. Hoshin Kanri (a.k.a. Hoshin Planning, Goal Deployment)
  2. Standardized Business Case Analysis[1]
  3. Cost/budget reduction based performance incentives

Hoshin Kanri, a Proven Method for Strategic Management

Hoshin Kanri is the most powerful technique for mitigating the problems with baseline budgeting. It is used by many of the world's highest performing corporations, including Toyota, General Electric, and Hewlett-Packard.  It is just now starting to get traction in Government. Hoshin Kanri connects tasks to strategy through simple step-by-step planning, commitment to the plan, and rigorous management to the plan through a set of tools that continuously align everyday activity to strategic goals.

[Figure: Hoshin Goal Deployment Matrix]

In addition to the disciplined process for performance and financial planning established by Hoshin Kanri, the tool that specifically helps mitigate the effects of baseline budgeting is the Goal Deployment Matrix, shown below.  The Goal Deployment Matrix is particularly useful as a tool against baseline budgeting in that it captures all goals and objectives for the organization, plots them against the organization’s core functions and initiatives, and establishes ownership and performance metrics both horizontally and vertically.  When fully implemented, the Hoshin Goal Deployment Matrix is a matrix based catalog of every function (operational and developmental) within an organization and identifies the value each of these functions is supposed to drive.  This is used to mitigate baseline budgeting through enforcement of a budgeting process that requires all budget line items to be associated with the Goal Deployment Matrix.  If a budget line item does not have a clear place on the Goal Deployment Matrix, then it is not aligned to value and it is waste.  All new developmental initiatives are vetted against the Goal Deployment Matrix to again identify their value and place within the plan.
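As a minimal sketch of how that enforcement might work (the goals, functions, and line items below are hypothetical, not taken from any real matrix), each budget line item is checked for a place on the Goal Deployment Matrix, and anything that cannot be mapped is flagged as potential waste.

```python
# Hypothetical Goal Deployment Matrix: strategic goals mapped to the
# functions and initiatives that are supposed to drive them.
goal_deployment_matrix = {
    "Reduce cycle time 20%": ["Depot maintenance", "Process automation initiative"],
    "Improve audit readiness": ["Financial operations", "Records management"],
}

# Every function that appears anywhere on the matrix is aligned to value.
aligned_functions = {f for funcs in goal_deployment_matrix.values() for f in funcs}

budget_line_items = [
    ("Depot maintenance", 12.5),          # $M, hypothetical
    ("Financial operations", 4.2),
    ("Legacy reporting contract", 1.8),   # has no place on the matrix
]

for function, cost in budget_line_items:
    if function not in aligned_functions:
        print(f"Flag for review: '{function}' (${cost}M) has no place on the matrix")
```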

Standardized Business Case Analysis Drives Spending Discipline

Standardized business case analysis is another method that can help mitigate the negatives of baseline budgeting.  Though not as powerful, business case analysis is a great tool in conjunction with Hoshin Kanri.  Business case analysis primarily addresses the expansion of the baseline by forcing all new starts and developmental efforts through a standard business case process.  In each case, new starts must be vetted against a balanced set of criteria, must add value, and must align with strategic goals.  Further, the standard business case process forces a set of process steps and approvals that cannot be short circuited to enable end of year spending. The diligence of the process forces management to think more strategically about spending activities.
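A minimal sketch of what a standardized vetting step could look like, assuming a hypothetical balanced set of weighted criteria; the weights, criteria, and approval threshold below are illustrative only, not a prescribed standard.

```python
# Hypothetical standardized business case scoring: every new start is rated
# against the same balanced criteria and must clear the same bar.
criteria_weights = {
    "strategic_alignment": 0.4,
    "financial_return": 0.3,
    "risk": 0.2,            # higher score = lower risk
    "mission_impact": 0.1,
}
APPROVAL_THRESHOLD = 3.5    # assumed cutoff on a 1-5 scale

def business_case_score(ratings):
    """Weighted score for a new start rated 1-5 on each criterion."""
    return sum(criteria_weights[c] * ratings[c] for c in criteria_weights)

new_start = {"strategic_alignment": 4, "financial_return": 3, "risk": 4, "mission_impact": 5}
score = business_case_score(new_start)
print(f"Score {score:.2f}:", "proceed" if score >= APPROVAL_THRESHOLD else "rework or reject")
```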

Individual Performance Metrics, a Useful Tool for Specific Problems

Perhaps the most difficult technique for reducing the effects of baseline budgeting is the application of individual performance metrics targeted at cost savings.  This is also the most risky, because personal performance metrics drive personally motivated behaviors.  These behaviors may not be best for the organization.  In other words, if people are incented to reduce cost for personal gain, they may do so at the expense of increased market share or improved customer satisfaction for the organization.  However, cost reduction individual performance metrics can be used sparingly when targeted at specific functions or offices within an organization.

Improving Financial Planning Processes, The Time is Now

It has been our observation during nearly two decades of management consulting that baseline budgeting is a root cause of significant Government inefficiency.  It is the source of compounding financial excess and irrational management behavior.  It is not feasible for Government to perpetually increase the percentage of financial and human resources it consumes.  Sequestration and recent cuts in Government spending have created an opportunity for new ways of managing the taxpayers' dollars.  It is incumbent upon Government leaders to seek out and employ new strategic planning, financial planning, and human capital management techniques that ensure Government agencies build upon and institutionalize recent change.

 

[1] GAO. (2011). Information Technology: OMB Needs to Improve Its Investment Guidance. Retrieved January 29, 2015, from http://www.gao.gov/assets/590/585915.pdf

Using Lean Practices in IT, Big Savings and Performance Improvement

It has been a busy 2012 for everyone.  It seems everyone I know is like me: working harder than ever, hoping the economy picks up and that our Government gets budgeting and contracting plans set and in motion.  While weathering the current economic storm, I had the opportunity to work with a client in the first quarter of this year in one of my favorite environments, IT.  I have found myself in various IT positions and consulting roles throughout my career, picking up technical skills as well as experience with ITIL, CMMI, and various development methodologies.  I have in many cases applied Lean and Six Sigma in conjunction with IT best practices to help organizations improve performance and save money.  Recently, I helped an organization shave millions in annual operating costs while significantly improving performance by applying agile work cells and leveraging a few simple Lean techniques such as one-piece-flow.  This article summarizes three methods nearly any IT support organization can use to drastically cut costs and improve performance.  Given the current economic and budget environment, this may be a timely read for some of our followers.

******

A graduate school professor assigns a software development final class project to teams of five students each.  All students on each five-person team will receive the same grade, and it accounts for 50% of the final grade for the class.  Team One (Sally, Jack, Dan, Tom, and Ken) and Team Two (Bill, Dave, Will, Cindy, and Beth) take two different approaches to tackling the project. Team One assigns requirements to Sally, design to Jack, development to Dan, testing to Tom, and presentation of the product to Ken.  Each person will perform their respective part of the project and then hand their work to the next person until all work is complete; the final product is handed off to Ken, who will present it to the professor and his assistant, and the grade for the project will be determined.

Team Two decides that they will tackle each task from requirements through final presentation together, but take turns as the task leader based on the strengths of each team member.  The members of team two do not completely trust each other and want to make sure the project progresses on schedule and with high quality since it can cause them to fail the class if done poorly.  They also want to make sure they are all at the presentation to the professor just in case something goes wrong.

Which team would you want to be on? On Team One, you know your part of the work, you can focus on your piece, and the roles and responsibilities are clear.  On Team Two, the roles and responsibilities are blurred and everyone on the team will be pounding away the entire time to create the best product possible.  Anyone who has been to graduate school knows that, almost without exception, students inherently adopt the collaborative approach of Team Two.  Why do these young minds, yet to be trained by corporate masterminds and molded by policy, politics, bureaucracy, and standardization, take this collaborative approach almost every time?  It is because they still have "common sense" and they feel a true sense of urgency and commitment to excellence.  They know their future is on the line with each major project.  They also know that the deadline cannot be changed, resources are constrained, and quality cannot be sacrificed.  Common sense brings them together into what Lean experts call a work cell and software professionals call a Scrum team using an agile approach.

Is this really the best approach?  Many managers will say, "This approach is a waste of labor.  By taking Team One's serial approach, they can have multiple projects underway, or Work In Process (WIP), with the same labor pool, while the Team Two approach limits me to one project at a time with the same resources."  This is a logical pitfall.  Managers should focus on the level of effort in value adding activities and on throughput while minimizing WIP.  The Team Two approach, on the other hand, ensures a "connection" among tasks and essentially eliminates rework within the process.  It also eliminates work waiting in queues and mitigates risks associated with single points of failure in your labor pool.
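Little's Law (average WIP = throughput × average lead time) makes the pitfall concrete: if the team's throughput is fixed by its capacity, opening more projects only stretches lead time.  A rough sketch with made-up numbers:

```python
# Little's Law: WIP = throughput * lead_time, so lead_time = WIP / throughput.
# Throughput is capped by the team's capacity, not by how many projects are open.
throughput = 2.0   # projects completed per month (assumed team capacity)

for wip in (2, 6, 12):   # number of projects in process at once (hypothetical)
    lead_time = wip / throughput
    print(f"WIP={wip:>2} projects -> average lead time {lead_time:.1f} months")
# Starting more projects does not finish more projects; it just makes every
# project take longer and hides rework in the queues between handoffs.
```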

Lean/Operations Research people call this one-piece-flow in a cellular work cell system.  This article is not a lesson in Lean.  It is a message that assembly line type of thinking is simply wrong in diverse IT environments as proven by the many successful Lean/Agile Scrum implementations.  Yet, this thinking is still prevalent and still the dominant approach to tackling IT projects in most organizations.

I have worked with numerous IT organizations, from the largest telecommunications companies to start-up software companies and internal IT shops, and can tell you with certainty that a serial assembly line approach is almost always the wrong way to complete IT projects.  I remember clearly how one of the telecommunications firms I worked with, rooted in old school Bell Corps thinking, insisted on breaking the work of implementing new equipment at their Points of Presence (POPs) into a serial approach of power installation, rack installation, equipment installation, wiring, configuration, test, and turn-up.  They would go round and round trying to get vital services running, with each step of the process pointing fingers at the others. Projects usually made their way to some ridiculous phone conference where the bosses of the various technical teams would accuse each other of various sins until someone higher up finally told them all to get their technicians to the site at the same time and not leave until the service was working.  In other words, they pulled together a collaborative work cell to get the job done.  The technical leads would declare victory after working nights and weekends to get the service running. This scenario would play out over and over, but the process never changed. That was well over a decade ago and I would like to believe things have changed, but from what I hear, they are only a little better.

To avoid these same inefficiencies, to improve quality, and to drastically cut costs, there are three simple methods we recommend.  We call this the "Lean Agile Work Cell Approach".

  1. Align your IT work force to customer value propositions (a.k.a. value streams)
  2. Implement a Lean centric, pull based cellular work cell approach.
  3. Adopt a unified process automation environment

Aligning your work force to customer value propositions is actually very simple.  Think in terms of the services your IT organization provides to the end customer: for example, enterprise software projects and support, desktop applications and support, and printer support.  You can also think in terms of Service Oriented Architecture (SOA).  You need to understand the Voice of the Customer (VOC) for each line of value.  You need to quantify the key metrics for satisfaction, performance, cost, etc.  You then need to align your teams and processes to these value propositions and measure everyone in them against the key metrics.

[Figure: Agile work cells aligned to the customer value proposition]

Creating pull based cellular work cells is partially explained above.  As an example, consider the process of deploying new desktop operating systems in your enterprise.  If this is done as a serial process, disconnected from the customer, you get a situation in which technical details and requirements are gathered for months, then handed off to an integrator that has to figure out the requirements and then develop integration scripts, which are then handed off to a testing team that gets into a series of rework loops with the integrator; after many months of testing, the operating system push is sent to some type of distribution team that then blindly launches the distribution process.  Alternatively, a series of small work cells containing integrators, testers, distributors, and customer technical liaisons can work together on a focused, integrated project team to integrate, test, and deploy operating systems in rapid order.  By testing at an incremental level and developing deployment scripts with an eye for testing and distribution at the same time, cycle time is significantly reduced and defects are essentially eliminated.  This approach also allows the operating system deployment team to learn from each iteration and get better with each operating system project from customer to customer.  In my firm, we have implemented similar work cells in various organizations with profound improvements in both effectiveness and efficiency.

Lastly, adopting a unified process automation platform is a powerful way to significantly reduce your IT license costs, maintenance costs, and software support costs while improving process performance.  For the last ten years, these tools have been called Business Process Management (BPM) suites, and the organizations that have adopted and implemented them well have realized great savings, process improvement, and transparency.  The process automation environment becomes a process centric integrated development environment, eliminating the need for custom software development and institutionalizing processes, but raising the need for proper IT Governance and enterprise architecture.  Today, process automation tools are available on cloud computing platforms and costs are falling precipitously.

In summary, the Lean Agile Work Cell approach to information system management creates agile work cells that work in alignment with customer needs, develop new software, and manage their own work within a common process automation and development environment.  This is bad news for custom software development shops and great news for the CIO with a shrinking budget and pressure to show results.

Email, Email, Email – How to Survive the Information Flood

If you are like me, there is no way you can possibly keep up with the never ending flood of emails hitting your inbox.  Career professionals today have multiple personal and business email accounts and many of the people we work with from our customers to our children’s teachers depend on email to communicate important messages.

Unfortunately, not everyone observes the same email protocols and, worse, there are those who abuse email or even use it with malicious intent.  In this article, I share the research I have done and the practices my firm teaches to manage the flood of emails from those trying to send us messages they think are important.  I will share some best practices for effective and efficient email management and ask that readers please share their knowledge on this subject, as it is still a poorly defined body of knowledge.

I have researched email management several times in recent years in support of clients, spending time on the Internet reviewing many articles to find the latest trends.  The first thing I can tell you is that I could find no authoritative source for email management.  There are some good articles and blogs, but nothing one would consider a real standard for email centric personal planning and management.  If someone knows of such a standard setting organization, please post a comment letting us know.

My second major finding is that most of the articles and blogs say essentially the same things.  They say things like “setup email screening rules,” “keep replies simple and brief,” etc.  Rather than repeating the guidelines, here are three of the best articles I found on the subject.

http://hbswk.hbs.edu/archive/4438.html

http://www.dailyblogtips.com/10-tips-for-managing-email-effectively/

http://blogs.wsj.com/juggle/2009/04/09/your-inbox-is-full-managing-email-overload/

From my research and experimentation, I have found six email management techniques that stand out.  These are the must have email management techniques.

  1. Ignore email.  Important messages will get to you.  Yes, this sounds risky, but I do it and it works.  Of course, your eyes are going to quickly identify emails from important people like your CEO, your customers, and your spouse.  Everyone knows how busy you are and if they do not hear from you on an important message, they will call, or instant message, or text message you.  Of course, you need to make sure you are technologically accessible to the people that matter in your life.
  2. Set an email schedule – Read emails on a schedule throughout the day, or limit yourself to a certain number of email minutes per time period.  To get thoughtful work completed, you must take time to focus without distractions, and email is distraction number one for most people.  Some IT firms are not allowing email for their developers, and it significantly improves cycle times, quality, and even collaboration, because developers are forced to actually talk about development ideas and issues.
  3. Keep it brief – Keep all messages to one subject per email and the same for all replies.  This is very important.  Emails are hard enough to interpret.  When people combine multiple subjects, the meaning of the message is lost.
  4. Enforce proper behavior – this is one not mentioned in the email articles I read, but I use it successfully.  Essentially, it goes like this.  If someone sends you an email with more than one topic, respond to them stating that you will review and respond to their email as soon as you can, but in the future they should only send emails with one topic per email.  That way you can quickly comprehend and succinctly respond to their email.  If the person persists in sending you voluminous emails, call the person and explain that you simply cannot take the time to read and comprehend his or her emails and that if they need to discuss complex topics to please pick up the phone and call.  Remember, the inverse of behavior shaping is also true.  If you behave badly with email, then your correspondents will do the same in return.  If you send large emails, they will probably give it right back.  If you respond quickly to every email, then people will continue to flood your inbox.  Email is not instant messaging and should not be used as such.
  5. Use lists and bullets – This is a great technique for communicating the steps you want a person to follow in a task, or the items you want them to deliver on a project.  Rather than weaving items into the text of email paragraphs, simply provide a list.  Many people consider lists to come across as harsh and impersonal and perception can be reality, but they are also more concise and accurate.  It is common practice these days to ask them to pardon your brevity at the end of an email, so if you are worried about hurting feelings by listing tasks, simply post that little disclaimer and they should get over any unhappy feelings.
  6. Use follow-up flags and categories – This is a great way to ensure the important emails that require your action get segregated from the masses of the marginally valuable.  Most email clients have some type of follow-up flag or star you can click to tag the email.  When you are scanning your inbox, you should go through a process of deleting, flagging, and categorizing.  Personally, I prefer deleting emails, but that is not always an option.  Flag emails that need action and sort your inbox by flag, then by received date.  You can also categorize in most email clients and use hot keys to quickly assign categories.  A minimal scripted sketch of this flag-and-sort idea follows this list.
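For readers who want to automate the flagging step, here is a minimal sketch using Python's standard imaplib module; the server, credentials, and sender address are placeholders, and your mail client's built-in rules can usually accomplish the same thing.

```python
# Minimal sketch: flag messages from a key sender so they sort to the top.
# Server, credentials, and sender below are placeholders, not real values.
import imaplib

HOST, USER, PASSWORD = "imap.example.com", "me@example.com", "app-password"
IMPORTANT_SENDER = "boss@example.com"

with imaplib.IMAP4_SSL(HOST) as mail:
    mail.login(USER, PASSWORD)
    mail.select("INBOX")
    # Find unflagged mail from the important sender and add the follow-up flag.
    status, data = mail.search(None, "FROM", IMPORTANT_SENDER, "UNFLAGGED")
    for num in data[0].split():
        mail.store(num, "+FLAGS", "\\Flagged")
```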

There is also one recommendation from my research that I have not tried, but it is certainly interesting: charge a fee for emails.  The example given was an executive who takes a few dollars from the department of each person that sends him an email.  Supposedly, his inbox only contained important company emails because of this practice, and collaboration with his leadership team improved.  This sounds risky, but it may have a positive effect if people talk rather than email.  If someone tries it, please let us know how it goes.

Email is and will continue to be a major part of personal and professional life for years to come.  Like all technologies, it will eventually be replaced by something more efficient and effective.  Until then, we can hope that the ways in which email is used and managed will continuously improve.

Again, please post your comments and suggestions on how readers can improve email management.

This will be my last post of 2011 – Happy Holidays!!!

Level Zero Value Stream Maps

This week’s post is about a simple tool any group of managers can use to help clarify relationships and streamline operations among major organizations in a value stream.  Level Zero Value Stream Maps, or Phase Maps as they are sometimes called, are an excellent tool for gaining consensus on the way things are or should be accomplished at a strategic level across organizations.  The level zero map is a high level overview documenting who owns and who supports each phase of an enterprise value stream.  It communicates things like the major inputs and outputs of each phase, the objectives of each phase, information systems used, and the major tasks associated with each phase.  Something I like to add to my level zero maps is the overall set of objectives for the value stream.  In fact, I like to do this first.  It is a “begin with the end in mind” approach to documenting the value stream and it gets everyone on the team aligned to a common set of goals.  From this start, the first pass is to move backwards, defining the inputs and outputs of each phase such that you can pull the string on a single objective and see how it draws on inputs and outputs all the way back to the beginning.  The second, forward pass is to flesh out the details.

The minimum set of information a level zero value stream map should include is listed below; a minimal data structure sketch follows the list.

  • A brief description of each phase
  • Who leads, who executes, and who supports each phase
  • Inputs and outputs of each phase (documents, etc. for back office processes)
  • Entry and exit criteria for each phase
  • The objective of each phase
  • A high level list of tasks for each phase
  • Information systems used
  • Policy, manuals, or other references
  • A number for each phase
  • A meaningful title for each phase
  • High level metrics
  • The variants of the value stream, meaning the different ways things enter and flow through (e.g., [micro, normal, large] or [trucks, trailers, spare parts]).  Create the level zero map at a level where these variants can be generically described, but make it clear that each is actually processed differently.
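As a minimal sketch, the same minimum set of information can be captured in a simple structure so every phase is documented consistently; the phase shown below is hypothetical.

```python
# Minimal sketch of the information captured for one phase of a level zero map.
from dataclasses import dataclass, field

@dataclass
class Phase:
    number: int
    title: str
    description: str
    objective: str
    leads: str
    executes: str
    supports: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    entry_criteria: list = field(default_factory=list)
    exit_criteria: list = field(default_factory=list)
    major_tasks: list = field(default_factory=list)
    information_systems: list = field(default_factory=list)
    references: list = field(default_factory=list)
    metrics: list = field(default_factory=list)

# Hypothetical phase of a back office value stream.
intake = Phase(
    number=1,
    title="Request Intake",
    description="Receive and validate incoming requests",
    objective="Complete, validated request package within 2 days",
    leads="Customer Service", executes="Intake Team", supports="IT",
    inputs=["Customer request form"], outputs=["Validated request package"],
    metrics=["Intake cycle time", "Percent returned for rework"],
)
print(intake.title, "->", intake.objective)
```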

Creating a level zero value stream map requires some facilitation skill, and it is easy to document something a bit myopic if the wrong person is leading the team, but it is not rocket science.  The key is to be open minded, focus on the objectives, and be willing to ask dumb questions like “Why do we create that document every time when it does not seem to be part of any objective?”

During the process of creating the level zero map, keep the notes on easel pads or a large whiteboard.  Capture lists of the following: Problems Identified; Risks to the Desired Outcomes; Action Items; and Ideas.  When documenting the problems and risks, make a note of where they reside in the process; this will be your first indication of where you may want to start working on performance improvement of the value stream.  It usually makes sense to identify the serious problems within the final outputs of the value stream and then perform root cause analysis to find the places in the value stream where those serious problems originate.  A cluster of problems in a specific area is not enough to decide where to start working.  Make sure the problems being fixed are the ones most important to the final outcomes.

The style of flow chart used at this level varies greatly.  The best advice when it comes to style is to choose a style for your level zero map that will be consistent with lower level detailed process maps and will enable upward and downward integration of the process maps.  An example of a simple level zero is shown below.

If the value stream you work in does not have a value stream map accurately representing how business is done, you need one.  Make the development of one an agenda item at the next executive off-site, or pull together a workshop with your value stream stakeholders to build one together. It is a great exercise that brings management together, develops a unified vision, and can be the starting point for serious process improvement.

Note: There are formal standards such as BPMN and the Learning to See approach for documenting your value streams.  I have often found these standards are a great starting point, but not the total solution.  Do some research on these standards and come up with an approach that works for you and your stakeholders.

For more information, visit us at http://www.msi6.com

A Simple Strategic Analysis Tool

Table of Strategic Constraints – A simple tool that exposes significant constraints in enterprise processes and value chains

Large organizations often find that the internal and external functions of supply chains and value chains are at odds with each other. They battle over lead times, quality of documentation, requirements, specifications, delivery schedules, pricing, engineering plans, etc.  A common example is the eternal battle between sales and delivery in numerous industries such as telecommunications, medical devices, and construction.  I will pick on telecommunications since I know that industry well.  Sales personnel sell circuits and value added services in various configurations across the globe.  Inevitably, what was sold is reviewed by a sales engineer under tremendous pressure to get reviews done.  Working with limited information and disconnected from the reality of field engineering, he does his best to approve the sale.  Once the sale is done, it ends up in the hands of some provisioning center and is assigned to field installation and configuration personnel who immediately reach out to the customer to find out what they actually want, and often to tell them they cannot get everything they were promised when it was promised, or maybe not at all.  The same scenario plays out over and over in Government, DoD, and numerous industries.  While we all know this exists, it is often difficult to document and communicate.  This week, I am posting about a tool any manager or leader can use to document the organizational misalignment and conflicts that cause these inefficiencies.

The tool is what I call the table of strategic constraints.  It is essentially a system analysis and optimization tool anyone can use to quickly document certain important attributes of each major function in a supply chain or value stream and easily identify where functions, departments, or entire organizations are out of alignment and possibly even working against each other.  In a process, the optimization of a step or sub-process places constraints upon related steps.  Sub-system optimization creates whole-system sub-optimization. Yet, in many value streams, each phase struggles to optimize itself at the expense of the others.  The result is a never-ending series of myopic initiatives to reduce costs and improve performance.  To solve this, the entire process must be optimized as a whole at a strategic or enterprise level.  This will ultimately lead to sub-optimal performance of the steps within.  This model analyzes the root causes or drivers of sub-process optimization and myopia by qualitatively assessing the objectives and incentives of each phase of the process.  An example of a completed table is shown below.

The concept and the process are simple.  Call a meeting of managers from each of the organizations in your supply chain or value stream.  You can make this as broad or narrow as you wish.  Use some common sense.  Explain to everyone that the exercise is to help everyone in the chain, not to point fingers at any one organization.  If they are honest, they will all learn things that can help everyone to better serve the end customer and streamline their relationships.  Starting at the top, list the phases as shown.  You can also list the organizations if desired.  Now continue to work your way down one row at a time with the team.  Identify the Primary Objectives, then the cost, cycle time, and performance objectives.

The motives and incentives row is where cold, hard honesty is required.  Ask, “What are the people in this phase really incentivized to do?”  You should see things like “avoid getting called into the boss’s office”, “earn commissions”, “execute the budget”, and “sell the inventory”.  This is an area where you can truly expose a lack of alignment with the needs of the customer.  You can also expose root causes of lingering problems.  There are no hard and fast rules for completing the table; just enter honest and meaningful information that can be compared across the columns.   Use consistent terminology across the columns of each row.  In other words, for cycle time, do not enter “yes”, “100%”, “per metrics”.  These entries are almost impossible to compare.  Rather, enter useful and comparable information, such as “top priority”, “no concern”, “based on artificial metrics”.  In this example, one can deduce that the first phase makes cycle time a priority with the customer, while the rest of the value chain either does not care or has established internal metrics they probably fudge to make themselves look good.

Completing the table will take several iterations.  Once it is complete, simply scan each row across the columns and identify the areas in which the phases or organizations are out of alignment.  Document these problems, discuss them with the team, and brief them to leadership.  The findings can also become valuable inputs for your strategic planning process.
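As a minimal sketch of that row-by-row scan (the phases and entries below are hypothetical, loosely following the cycle time example above), the table can be held as a simple structure and each row checked for entries that do not line up across the phases.

```python
# Hypothetical table of strategic constraints: rows are attributes, columns are phases.
table = {
    "cycle time": {
        "Sales": "top priority",
        "Sales Engineering": "no concern",
        "Provisioning": "based on internal metrics",
        "Field Install": "based on internal metrics",
    },
    "incentives": {
        "Sales": "earn commissions",
        "Sales Engineering": "avoid getting called into the boss's office",
        "Provisioning": "execute the budget",
        "Field Install": "close tickets",
    },
}

# Scan each row across the columns; rows where the phases do not share a view
# are candidates for misalignment worth documenting and briefing to leadership.
for attribute, entries in table.items():
    if len(set(entries.values())) > 1:
        print(f"Potential misalignment on '{attribute}': {entries}")
```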

Times are Tough, Stick to Your Strategy

In times like these, when new business is hard to find, top employees are weighing their options, and suppliers are begging for help, you must have a clear and concise strategy and you must align all resources and activities to that strategy. This week’s post shares a tool that will help your organization operate daily in a manner that aligns with strategy, maximizing the effect of your precious resources. If you are like me, you have more great ideas than you have resources to execute. Further, it is natural for managers to see any opportunity as a good opportunity in lean times. The worst thing you can do in these times is chase every opportunity that comes along in the hope that something good happens. One of my favorite axioms, “Focus and Achieve”, applies now more than ever. This doesn’t mean you need to be myopic. Not at all. It just means you need to have a good strategy, SMART objectives, and effective initiatives in place. Then you need to track your market and competitors closely, adjust smartly, and not allow shiny baubles to distract you from your strategy.

The tool best suited for keeping your organization’s activities in alignment with strategy is the goal deployment matrix. This tool is from the Japanese Hoshin Kanri body of knowledge and it is used successfully in numerous high performing organizations such as Toyota, Xerox, and Caterpillar. An example of the matrix is shown below.

[Figure: Typical Goal Deployment Matrix]

Using the tool is simple. Once you have identified your goals and objectives, enter them into the leftmost column. Then work with your internal teams and departments to define the means for accomplishing these objectives and enter them into the top row. This is often accomplished through a back-and-forth vetting process. Assign ownership and metrics for the objectives and the means. If you are a Lean Six Sigma organization, you will charter the means, assign Black Belts, etc. Then use this tool to review status at least monthly. If you have a large organization, you can cascade these goal deployment matrices to avoid creating one overwhelming document. Now, every great idea and every request for funding must be analyzed against the goal deployment matrix for fit and alignment. Further, if progress on strategic objectives and means is not apparent in monthly reviews, you may have a hidden factory that needs to be exposed.

To learn about the entire body of knowledge for goal deployment or Hoshin Kanri, comment here and I will help you as time permits, or you can find a lot of good information on the Internet.

Effective and efficient operations don’t happen by chance.
