Orchestrating Your Product Development Process with Milestones

Effective milestones are an important part of a company’s development process, especially in today’s era of team-based sprints and stand-ups. Yet many companies struggle to create and employ milestones successfully, and some don’t even understand their relevance beyond updating senior leadership. In fact, the topic comes up so often in my travels that I thought it was worth a slightly longer discussion than usual.

Well-designed, thoughtful milestones do a great deal more than just mollify senior leaders. Milestones can and should be like the sheet music that, along with a skilled conductor, aligns and guides your development orchestra. To that end, I’ll share some thoughts on the purpose of milestones, how to create useful ones, and a few tips on holding effective milestone reviews.

Milestone Purpose

Milestones, as the name implies, provide important information to the development team to guide them on their development journey.

1)  They provide a reference to distinguish normal from abnormal conditions: Milestones tell the team whether they are on track so that they can decide how best to proceed, like the lines on the floor of an assembly-line workstation. In these stations, a set of yellow lines can indicate the percentage of work that should be complete at each point in time. If the worker is at the 50 percent line and only 25 percent of the work is complete, he or she can pull the andon to signal for help. The team leader can then come over and help fix the issue in station without disrupting the rest of the line. Of course, this system is worse than useless if the team identifies abnormal conditions but has no signaling mechanism, or if leadership does not provide real help to the team. (One example of leadership help in this regard is the cadenced design reviews discussed in my previous e-letter. The goal is ultimately to identify and resolve issues early and effectively, shortening management cycle time and keeping the project on course.)

2)   They act as key integration points: Milestones are an important part of synchronizing work across functional groups. They should be designed to recognize key interdependencies between disciplines, like software and hardware or design and manufacturing, and to provide common reconciliation points. To do this effectively, you must understand both the tasks and the sequence of tasks within each functional discipline. This detailed knowledge allows you to sync up work across those functions, which in turn lets you maximize the utility of incomplete but stable data and optimize concurrent work. The better companies get at this, the faster they can go. In fact, this synchronization is far more effective at shortening lead time than attempting to reduce individual task times.

3)   Milestones are a critical component of a company’s development operating system: Senior development leaders typically manage many different programs simultaneously. They must be able to recognize issues, respond quickly and effectively to the needs of struggling projects, and make adjustments as required in the rest of the development factory. A project-health dashboard built from milestone feedback can be a powerful tool for this work, provided the milestones are properly designed.
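To make that last point a bit more concrete, here is a minimal sketch of how milestone feedback might roll up into a program-level health view. This is only an illustration under assumptions of my own: the program names, milestone names, statuses, and the “worst status wins” roll-up rule are all invented, not a prescription from any particular company’s system.

```python
from collections import defaultdict

# Rank statuses so we can compare them; higher means worse.
STATUS_RANK = {"green": 0, "yellow": 1, "red": 2}

# (program, milestone, status) feedback records -- hypothetical data.
feedback = [
    ("Program A", "Prototype Build", "green"),
    ("Program A", "Design Freeze", "yellow"),
    ("Program B", "Prototype Build", "red"),
    ("Program B", "Design Freeze", "green"),
]

def rollup(records):
    """Roll milestone feedback up to program level: worst status wins."""
    health = defaultdict(lambda: "green")
    for program, _milestone, status in records:
        if STATUS_RANK[status] > STATUS_RANK[health[program]]:
            health[program] = status
    return dict(health)

dashboard = rollup(feedback)
for program, status in sorted(dashboard.items()):
    print(f"{program}: {status}")
# Program A: yellow
# Program B: red
```

Even a toy like this shows why milestone design matters: the dashboard is only as trustworthy as the status signals feeding it.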

Creating Useful Milestones

My experience here is that milestones, like most things in life, are just about as effective as you make them – both in terms of design and adherence. I’ve found that useful milestones share these qualities:

1) A real purpose: Start by asking yourself, “Why do we have this milestone?” You need to be able to create a clear, concise, product-oriented purpose statement. If you can’t, you should question the need for the milestone. Another way to think about this is, “What problem are you trying to solve with this milestone?” Milestone purpose statements should ideally be linked to the Chief Engineer Concept Paper and reviewed at the program kickoff event. It is also crucial that you align cross-functionally on the milestone purpose statement.

2) Clear Quality of Event Criteria (QECs): Many companies create milestones based on activities or events. While this may be necessary, it is not usually sufficient. Just completing an activity does not tell you very much about the program’s status or health. For example, you may complete an early prototype build event, but with component parts that are not of the correct pedigree for the design or manufacturing-process level, rendering subsequent testing and learning spurious. You have not closed the required knowledge gap or reduced risk to a sufficient degree.

However, because the team completed the prescribed activity, they and their leadership might be lulled into a false sense of security. By establishing QECs for the milestone, the team gets a more realistic picture of where they really are in the development journey.

Four questions I like to ask in evaluating QECs: A) Are they the critical few predictors of project success, not a wish list of every possible failure mode you can brainstorm? B) Is each requirement binary? C) If it can’t be binary, is there a quantitative range that can be established and measured? And D) if it can’t be binary or quantitative, is there clarity about who decides?

3) Clear roles and responsibilities: It is important that participants are aligned on who is responsible for what at each milestone. The time to align on this is at the start, not when you reach the milestone.

4) Scalability: Not all programs are alike. Levels of content, complexity and risk can vary significantly across projects. Well-designed milestones can be reconfigured to best fit the program without losing their basic intent or effectiveness.
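The binary / quantitative-range / named-decider ladder described under QECs above can be sketched in code. This is a hypothetical illustration, not anyone’s actual checklist: the criterion names, thresholds, and the “Chief Engineer decides” fallback are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Criterion:
    """One Quality of Event Criterion (QEC) for a milestone."""
    name: str
    met: Optional[bool] = None                 # binary criteria
    value: Optional[float] = None              # measured value, if quantitative
    limits: Optional[Tuple[float, float]] = None  # acceptable range
    decider: Optional[str] = None              # fallback: who makes the call

    def passes(self) -> Optional[bool]:
        if self.met is not None:               # B) binary?
            return self.met
        if self.value is not None and self.limits:  # C) quantitative range?
            lo, hi = self.limits
            return lo <= self.value <= hi
        return None                            # D) escalate to the decider

# Illustrative criteria for a prototype-build milestone.
criteria = [
    Criterion("All parts at design-intent pedigree", met=True),
    Criterion("Critical-interface test coverage (%)", value=82.0,
              limits=(80.0, 100.0)),
    Criterion("Styling direction acceptable", decider="Chief Engineer"),
]

for c in criteria:
    verdict = c.passes()
    label = "PASS" if verdict else ("FAIL" if verdict is False
                                    else f"decide: {c.decider}")
    print(f"{c.name}: {label}")
```

Writing criteria down this way forces the question the text raises: for each one, is it binary, measurable, or a judgment call with a named owner?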

Milestone Reviews

1) The first principle in milestone reviews is to support the team. Updating leadership is important, but the primary intent should be to provide help and guidance as required.

2) It’s okay to be red, but it’s not okay to stay red. “What’s your plan to green?” was a question I first heard from Alan Mulally while I was at Ford. While you want to drive fear out of these reviews, you don’t want to eliminate accountability. The team must deliver on commitments.

3) Define who should attend each milestone review. Some reviews require senior leaders, functional representation, or particular specialists; others do not. Let the milestone purpose guide you here.

4) Milestones are an opportunity for the team to regroup, align, and sync up on the way forward. They should energize the team, not demoralize it. Leaders should look at them as a chance to “turbocharge” the team, like the old Hot Wheels spin stations: the cars come out with much more energy than they went in with.

5) Hold the reviews at the gemba whenever possible.

I hope you found at least a few of these ideas useful. At the risk of overextending the orchestra metaphor: even the best musicians can sound like screeching cats if they are not playing from the same score. Can better milestones help your team play sweeter music?

Best Regards,

Jim


PS:

  • Our friends at the Lean Product and Process Development Exchange (LPPDE) will hold their North America 2016 conference on September 26-29 in Philadelphia. The agenda has been designed to create EXCHANGE and learning around key questions in the evolution of Lean Product and Process Development. Learn more at lppde.org.
  • Our next LPPD Partner Learning Event is scheduled for November in Davis, California at FMC’s Schilling Robotics Center. We are looking forward to another incredible learning experience. Stay tuned for details.
  • The 2016 Lean Process Innovation Summit was held at Mackinac Island on August 16–18. A great event at a fabulous location that shows the rapidly growing interest in LPPD! Can’t wait until next year.

What Do I Tell My Leaders When Experiments Fail?

I’m dealing with the fallout of a failed experiment with set-based concurrent engineering (SBCE). As the product development operations specialist, I understand that LPPD experiments don’t always result in success – but my leadership team doesn’t. How do I help them understand that a failure in LPPD isn’t a total loss?

The first question we always ask is, “What was the purpose of the experiment, and were we very clear with the management team about what we were trying to prove or disprove?” Often at the start of a project we communicate, “We’re going to do an experiment on set-based design to see its benefits,” and leave it very open-ended. Like anything left open-ended, that leads to different expectations from different people in the organization: those more familiar with LPPD will have a more realistic expectation of what will come out of set-based design, while senior managers will certainly be more interested in the end result, which may come in the short term or, given product development timescales, years later.

So from their perspective, if they’re not seeing instant results, they may see the experiment as a failure when the jury is still out. Or it may be a genuine failure: an experiment was conducted with a specific target outcome, and we didn’t achieve it.

In the latter case, it’s important to remember that there is ALWAYS a silver lining. Very rarely does an experiment produce a complete and utter lack of learning; if it does, we probably didn’t try anything new and just stuck with the status quo.

The first thing to do is usually a thorough reflection with the experiment team to analyze what went well and what did not.

Then you can hold a report-out with the management team to share your findings. I think this is where we start getting some clarity about whether the glass is half-empty or half-full.

I remember an experiment that was widely viewed as a failure. But when the group went through the reflection, it wasn’t nearly as much of a failure as it was perceived to be. One team did their first set-based experiment on a given project, and partway through, the regulatory requirements changed the project. When the requirements changed, the parameters changed as well, and the team was forced to change their design direction at that point.

Now the perception in the organization (the “talk around the water cooler,” if you will) was that the team was really struggling with set-based design, since they’d obviously just had to change direction. People thought that was a failure of set-based design.

Interestingly, when we sat down with the team to discuss this failure, the team did not call it a failure; they called it a hiccup! They said, “Sure, it’s true we changed design direction, but only with two of the five sets we were working with.” The real reason the organization thought the team had failed was that the project’s cost metrics went up; people interpreted the missed result as a reflection of how poorly the team had executed set-based design. But once the team set the record straight with the leadership team, the response was, “Oh, that makes sense. No problem, keep working on the project.” One of the directors even repeated the team’s explanation to the rest of the organization.

In my experience, you need at least three or four experiments before the organization starts to deeply understand the nuances of SBCE, and learns enough from its successes as well as its failures to define what SBCE looks like in its own product development community.

Now, let’s talk about what you can do to manage your leadership’s expectations while the experiment is going on. You don’t want to wait until the end of the experiment to tell management about your progress. If you do, you may be disappointed; frankly, the end may be several months or years away, and as we all know, management likes to see results quickly.

You want to have a series of touch points as you go through the process. When we’re doing an experiment in product development, or in lean in general, we should be just as interested in the process as in the metrics. We should be checking in with management periodically to let them know whether the process is in control or deviating. That’s where it can be helpful to have a steering committee or some sort of leadership champion you can go to, because if the team has a problem, there needs to be a specific group they can escalate it to when needed. It is a very effective way to avoid telling management about your “failure” in retrospect.