Where's the feedback?
In the 1950s, Peter Drucker advised managers to measure results, and today's managers proclaim, "You can't manage what you don't measure." Yet project portfolio management (PPM) lacks feedback. Consider these common practices:
- The primary methods of evaluating PPM are benchmarking and maturity models. Neither method measures results. Instead, they compare practices to other practices.
- Portfolio optimization evaluates itself by comparing its predictions to each other. For example, it compares the estimated value of funding projects in rank order with the estimated value of the optimized portfolio. This practice is tautological. It is like evaluating weather forecasts by comparing them to other forecasts, but never comparing any forecast to results. (See my discussion, "The optimization tautology.")
- Many decision analysis experts proclaim that good decisions can produce bad results. Their reason: one can suffer from bad luck. Blaming luck for unproductive pipelines is an abdication of a manager's responsibilities. (See my discussion, "How to count cards in blackjack.")
Drug development managers need techniques and metrics for evaluating their compound evaluations, compound selections and pipeline management.
Pipeline Physics is developing a new statistical analysis to estimate, for each phase of drug development:
- The fraction of unmarketable compounds, evaluated by the phase, that are mistakenly advanced (false-positive rate)
- The fraction of marketable compounds, evaluated by the phase, that are mistakenly canceled (false-negative rate)
- The fraction of compounds the phase evaluates that are marketable (base rate)
- The phase's ability to distinguish marketable compounds from unmarketable ones (resolution)
Additionally, given sufficient data, the statistical analysis estimates the resolution produced by NPVs, expected values and raw clinical data. The analysis models one's current selection criteria as well, and it compares a company's performance to the aforementioned metrics.
Likely, the analysis will require data pooled from multiple pharmaceutical companies, although large companies may have sufficient data on their own. To learn about these pipeline diagnostic tools, see my pipeline physics research proposal. (Contact me for the password needed to view the proposal.)
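To make the four phase metrics above concrete, here is a minimal Python sketch. It assumes retrospective data in which each compound's true marketability is already known, which is exactly what the statistical analysis must infer in practice, and it uses a simple difference of error rates as a stand-in for resolution; the records and that definition are illustrative assumptions, not the Pipeline Physics method.

```python
# Illustrative only: each record is (advanced_by_phase, truly_marketable) for
# one compound evaluated by a given development phase.
decisions = [
    (True, True), (True, False), (False, True), (False, False),
    (True, True), (False, False), (True, False), (False, False),
]

advanced_unmarketable = sum(1 for adv, mkt in decisions if adv and not mkt)
canceled_marketable = sum(1 for adv, mkt in decisions if not adv and mkt)
unmarketable = sum(1 for _, mkt in decisions if not mkt)
marketable = sum(1 for _, mkt in decisions if mkt)

false_positive_rate = advanced_unmarketable / unmarketable  # unmarketable, yet advanced
false_negative_rate = canceled_marketable / marketable      # marketable, yet canceled
base_rate = marketable / len(decisions)                     # fraction truly marketable
# One simple stand-in for resolution: 1 means perfect discrimination,
# 0 means the phase advances compounds no better than chance.
resolution = 1.0 - false_positive_rate - false_negative_rate

print(false_positive_rate, false_negative_rate, base_rate, resolution)
```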
Since the advanced statistical analysis is still being tested, let's see a metric one can immediately apply.
Evaluating project evaluations: All data contains estimation errors. Project evaluation metrics propagate these errors through their calculations, which produces project evaluation errors. Consider estimating a project's expected net present value. Let eNPV be the estimate, eNPVT be the project's true value and e be the error in the estimate. Because of estimation error in the data, a project's evaluation is the true value plus the error:
eNPV = eNPVT + e
Scoring models, decision trees and NPVs evaluate projects via weighted sums, and this has two significant effects. First, the project evaluation error is normally distributed, with a mean and a variance, N(μe, σe²). The mean is the bias in the estimate, and the variance is the imprecision of the estimate. Second, the weighted average averages away some estimation error, which limits σe².
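A small sketch of both effects, assuming independent, normally distributed input errors and made-up weights and standard deviations: the propagated error is approximately normal, and its variance falls well below the largest of the input variances.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights and estimation-error standard deviations for the
# inputs of a weighted-sum evaluation (e.g., a scoring model).
weights = np.array([0.2, 0.3, 0.5])
input_sd = np.array([10.0, 8.0, 12.0])

# With independent inputs, Var(sum of w_i * x_i) = sum of w_i^2 * sd_i^2.
analytic_var = np.sum(weights**2 * input_sd**2)

# Check by simulation: draw input errors and propagate them through the sum.
errors = rng.normal(0.0, input_sd, size=(100_000, 3))
propagated = errors @ weights
print(analytic_var, propagated.var())  # ~45.8, far below the largest input variance (144)
```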
The size of the evaluation error, σe², greatly affects project selection. Larger values produce more project selection errors, funding more projects of lesser value while canceling more projects of greater value. For this reason, one should estimate the size of the project evaluation errors, which one can do via Monte Carlo analysis.
(When performing the analysis one must include positive correlations among the estimation errors in the data. For example, suppose one estimates three possible revenue forecasts: optimistic, average and pessimistic. If the estimates use the same marketing model or if some information contributes to each estimate, the estimation errors will be positively correlated.)
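Here is a minimal Monte Carlo sketch of the estimate. The one-line eNPV model, the scenario figures and the 0.6 correlations are all invented for illustration; the point is that the positive correlations enter the error propagation explicitly through the covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical, highly simplified eNPV model (figures invented).
def enpv(revenue, p_success=0.3, cost=40.0):
    return p_success * revenue - cost

# Three revenue scenarios (pessimistic, average, optimistic) and their weights.
scenario_means = np.array([150.0, 250.0, 400.0])
scenario_weights = np.array([0.25, 0.5, 0.25])

# Estimation errors for the scenarios, positively correlated because they
# share the same marketing model and underlying information.
sd = np.array([30.0, 40.0, 60.0])
corr = np.array([[1.0, 0.6, 0.6],
                 [0.6, 1.0, 0.6],
                 [0.6, 0.6, 1.0]])
cov = np.outer(sd, sd) * corr

# Monte Carlo: perturb the scenario estimates, recompute the evaluation, and
# take the variance of the resulting eNPVs as the evaluation error variance.
draws = rng.multivariate_normal(scenario_means, cov, size=50_000)
enpv_draws = enpv(draws @ scenario_weights)
print(enpv_draws.var())  # estimate of σe²
```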
To see the impact of the project evaluation errors on project selection, one must calculate another statistic: the variance of the project evaluations. Using the expected net present value as an example, consider all the projects one evaluated. The variance of their values is σ²eNPV.
With these two statistics, the reliability of project evaluation is:

reliability = 1 − σe² / σ²eNPV

In words, it is:

reliability = 1 − (variance of the project evaluation errors) / (variance of the project evaluations)
Suppose the entire estimate of project value is error. Then σe² = σ²eNPV and the reliability is zero. Project evaluations provide no ability to distinguish valuable from less valuable projects. Now suppose project evaluations are perfect. Then σe² = 0, the reliability is one, and project evaluations perfectly distinguish valuable from less valuable projects.
An additional possibility exists. The method described above performs the project evaluations and the estimate of errors separately, so one can have negative reliability. Potentially, σe² > σ²eNPV. In this case, one's method for evaluating projects ignores uncertainty in a most significant way and must be redesigned.
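A tiny sketch of the calculation, with made-up variances covering all three cases:

```python
def reliability(sigma_e_sq, sigma_enpv_sq):
    """Reliability of project evaluations: 1 - (error variance / evaluation variance)."""
    return 1.0 - sigma_e_sq / sigma_enpv_sq

print(reliability(sigma_e_sq=400.0, sigma_enpv_sq=400.0))  # 0.0: evaluations are all error
print(reliability(sigma_e_sq=0.0, sigma_enpv_sq=400.0))    # 1.0: perfect evaluations
print(reliability(sigma_e_sq=600.0, sigma_enpv_sq=400.0))  # negative: redesign the method
```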
Reliability and project selection: Project selection techniques have varying sensitivity to project evaluation errors, and this sensitivity affects the frequency and number of selection errors.
- Generally, sophisticated techniques are more sensitive than simpler techniques, so using a sophisticated technique when project evaluations are unreliable increases the number of selection errors.
- Simple selection techniques err when project evaluations are reliable. With highly reliable evaluations one can exploit the fine details of a situation, but simpler selection techniques are too coarse to do it.
Table 1 summarizes the relationship between project evaluation errors, project selection techniques and performance.
| Project Evaluation Errors | Simpler Selection Technique | Sophisticated Selection Technique |
| --- | --- | --- |
| Small | Poor result (value left on the table) | Best result (achieves action flexibility) |
| Large | Good result (achieves state flexibility) | Poor result (too many avoidable errors) |
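To illustrate the error-size half of the table, here is a simple simulation with invented numbers: projects are funded by ranking on noisy evaluations, and larger evaluation errors cause more truly top projects to be missed. It uses only a plain top-k ranking, so it does not reproduce the simple-versus-sophisticated comparison; it is a sketch of the sensitivity claim, not of my selection-technique research.

```python
import numpy as np

rng = np.random.default_rng(2)

def avg_selection_errors(sigma_e, n_projects=50, k=10, trials=2_000):
    """Average number of truly top-k projects missed when funding the
    top k by a noisy evaluation (a simplified stand-in for selection error)."""
    missed = 0
    for _ in range(trials):
        true_value = rng.normal(100.0, 20.0, n_projects)
        estimate = true_value + rng.normal(0.0, sigma_e, n_projects)
        funded = set(np.argsort(estimate)[-k:])
        best = set(np.argsort(true_value)[-k:])
        missed += k - len(funded & best)
    return missed / trials

for sigma_e in (5.0, 20.0, 40.0):
    print(sigma_e, avg_selection_errors(sigma_e))  # misses grow with sigma_e
```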
Can one create rules that relate the reliability of project evaluations to selection techniques, so one can always pick the best selection technique for a situation? My current research strives for this goal. I am studying the sensitivity of selection techniques to the reliability of project evaluations, testing sophisticated optimization models, simpler optimization models, project rankings, cutoff values and methods scholars call Fast & Frugal heuristics.
Beyond the above metric, drug development and PPM executives need a full toolbox of these metrics. Producing these tools is another goal of mine and part of the development of my pipeline physics framework.
As a final note, when estimating evaluation errors via Monte Carlo analysis, the following discussions are useful: "Revenue forecasting errors dominate decision trees," "Overconfidence and underestimating risk," and "Estimating probabilities of success: it's not so successful." For additional information about estimation errors and project selection, see my discussion, "How erroneous data causes project selection errors." The following forthcoming discussions will be helpful as well: "You can't reduce uncertainty by planning" and "Four common errors in PPM's Monte Carlo analysis."
After reading my discussions, many managers wish to share their experiences, thoughts and critiques of my ideas. I always welcome and reply to their comments.
Please share your thoughts with me by using the form below. I will reply to you via email. If you prefer to be contacted by phone, fax or postal mail, please send your comments via my contact page.
© 2014 Pipeline Physics. All rights reserved.