Pipeline Physics
Pipeline Physics produces profit

Gary Summers, PhD
President, Pipeline Physics LLC
1700 University Blvd, #936, Round Rock, TX 78665-8016
gary.summers@PipelinePhysics.com
503-332-4095


Where's the feedback?

In the 1950s, Peter Drucker advised managers to measure results, and today's managers proclaim, "You can't manage what you don't measure." Yet project portfolio management (PPM) lacks feedback. Consider these common practices:

Drug development managers need techniques and metrics for evaluating their compound evaluations, compound selections and pipeline management.

Pipeline Physics is developing a new statistical analysis to estimate, for each phase of drug development:

  1. The fraction of unmarketable compounds, evaluated by the phase, that are mistakenly advanced (false-positive rate)
  2. The fraction of marketable compounds, evaluated by the phase, that are mistakenly canceled (false-negative rate)
  3. The fraction of compounds the phase evaluates that are marketable (base rate)
  4. The phase's ability to distinguish marketable compounds from unmarketable ones (resolution).
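In practice, a compound's true marketability is unobservable at decision time, which is why these rates must be estimated statistically. The definitions themselves are simple, though; the sketch below computes the first three metrics from hypothetical after-the-fact counts (all numbers are invented for illustration):

```python
# Hypothetical phase-level outcomes, tallied after true marketability
# became known (illustrative numbers only).
advanced_marketable = 40      # marketable compounds correctly advanced
advanced_unmarketable = 25    # unmarketable compounds mistakenly advanced
canceled_marketable = 10      # marketable compounds mistakenly canceled
canceled_unmarketable = 125   # unmarketable compounds correctly canceled

marketable = advanced_marketable + canceled_marketable
unmarketable = advanced_unmarketable + canceled_unmarketable
total = marketable + unmarketable

false_positive_rate = advanced_unmarketable / unmarketable  # metric 1
false_negative_rate = canceled_marketable / marketable      # metric 2
base_rate = marketable / total                              # metric 3

print(f"false-positive rate: {false_positive_rate:.3f}")
print(f"false-negative rate: {false_negative_rate:.3f}")
print(f"base rate:           {base_rate:.3f}")
```

Resolution, the fourth metric, requires comparing the score distributions of marketable versus unmarketable compounds, which is the harder estimation problem the statistical analysis addresses.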

Additionally, given sufficient data, the statistical analysis estimates the resolution produced by NPVs, expected values and raw clinical data. The analysis models one's current selection criteria as well, and it compares a company's performance to the aforementioned metrics.

Likely, the analysis will require data pooled from multiple pharmaceutical companies, although large companies may have sufficient data on their own. To learn about these pipeline diagnostic tools, see my pipeline physics research proposal. (Contact me for the password needed to view the proposal.)

Since the advanced statistical analysis is still being tested, let's examine a metric one can apply immediately.

Evaluating project evaluations: All data contain estimation errors. Project evaluation metrics propagate these errors through their calculations, which produces project evaluation errors. Consider estimating a project's expected net present value. Let eNPV be the estimate, eNPVT the project's true value and e the error in the estimate. Because of estimation errors in the data, a project's evaluation is its true value plus the error:

eNPV = eNPVT + e

Scoring models, decision trees and NPVs evaluate projects via weighted sums, which has two significant effects. First, the project evaluation error is normally distributed, N(μe, σ²e). The mean is the bias in the estimate, and the variance is the imprecision of the estimate. Second, the weighted average averages away some estimation error, which limits σ²e.

The size of the evaluation error, σ²e, greatly affects project selection. Larger values produce more project selection errors, funding more projects of lesser value while canceling more projects of greater value. For this reason, one should estimate the size of the project evaluation errors, which one can do via Monte Carlo analysis.

(When performing the analysis, one must include positive correlations among the estimation errors in the data. For example, suppose one estimates three possible revenue forecasts: optimistic, average and pessimistic. If the estimates use the same marketing model, or if some information contributes to each estimate, the estimation errors will be positively correlated.)
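As a sketch of such a Monte Carlo analysis, the snippet below propagates correlated revenue-forecast errors through a toy weighted-sum evaluation (a simple probability-times-revenue-minus-cost model; the dollar figures, weights and the 0.7 correlation are all invented for illustration) to estimate the evaluation-error variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy evaluation: eNPV = p_success * (weighted revenue) - cost.
# Weights cover three revenue scenarios: optimistic, average, pessimistic.
scenario_weights = np.array([0.25, 0.50, 0.25])
revenue_means = np.array([900.0, 600.0, 350.0])   # $M, illustrative
p_success, cost = 0.6, 250.0

# Estimation errors in the three forecasts are positively correlated
# because the forecasts share a marketing model.
sd = np.array([120.0, 90.0, 60.0])
corr = np.array([[1.0, 0.7, 0.7],
                 [0.7, 1.0, 0.7],
                 [0.7, 0.7, 1.0]])
cov = np.outer(sd, sd) * corr   # covariance of forecast errors

n = 100_000
revenues = rng.multivariate_normal(revenue_means, cov, size=n)
enpv = p_success * (revenues @ scenario_weights) - cost

true_enpv = p_success * (revenue_means @ scenario_weights) - cost
errors = enpv - true_enpv
print(f"evaluation-error variance: {errors.var():.0f}")
```

With the correlations set to zero, the same simulation yields a noticeably smaller variance, which is exactly why omitting them understates the evaluation error.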

To see the impact of project evaluation errors on project selection, one must calculate another statistic: the variance of the project evaluations. Using the expected net present value as an example, consider all the projects one has evaluated. The variance of their values is σ²eNPV.

With these two statistics, the reliability of project evaluations is:

reliability = 1 − σ²e / σ²eNPV

In words, it is:

reliability = 1 − (variance of project evaluation errors) / (variance of project evaluations)

Suppose the entire estimate of project value is error. Then σ²e = σ²eNPV and the reliability is zero. Project evaluations provide no ability to distinguish valuable from less valuable projects. Now suppose project evaluations are perfect. Then σ²e = 0, the reliability is one, and project evaluations perfectly distinguish valuable from less valuable projects.

An additional possibility exists. The method described above performs the project evaluations and the estimation of errors separately, so one can have negative reliability: potentially, σ²e > σ²eNPV. In this case, one's method for evaluating projects ignores uncertainty in a most significant way and must be redesigned.
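A minimal sketch of the reliability calculation, using simulated data in place of a real portfolio (the true-value spread, error size and project count are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical portfolio: true project values plus normally distributed
# evaluation errors (units $M; every number is invented for illustration).
n_projects = 500
true_values = rng.normal(300.0, 100.0, n_projects)   # spread of true values
errors = rng.normal(0.0, 60.0, n_projects)           # evaluation errors
evaluations = true_values + errors                   # eNPV = eNPVT + e

var_evaluations = evaluations.var()   # variance of evaluations, from the portfolio
var_errors = 60.0 ** 2                # error variance, from Monte Carlo analysis

reliability = 1.0 - var_errors / var_evaluations
print(f"reliability: {reliability:.2f}")
if reliability < 0:
    print("error variance exceeds evaluation variance: redesign the method")
```

Because the two variances come from separate sources (the portfolio and the Monte Carlo analysis), the final check for negative reliability is worth keeping in any real implementation.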

Reliability and project selection: Project selection techniques have varying sensitivity to project evaluation errors, and this sensitivity affects the frequency and number of selection errors.

Table 1 summarizes the relationship between project evaluation errors, project selection techniques and performance.

Table 1: How to match your selection technique to your project evaluation errors.

  Project Evaluation Errors | Simpler Selection Technique  | Sophisticated Selection Technique
  Small                     | Poor result                  | Best result
                            | (value left on the table)    | (achieves action flexibility)
  Large                     | Good result                  | Poor result
                            | (achieves state flexibility) | (too many avoidable errors)

Can one create rules that relate the reliability of project evaluations to selection techniques, so one can always pick the best selection technique for a situation? My current research strives for this goal. I am studying the sensitivity of selection techniques to the reliability of project evaluations, testing sophisticated optimization models, simpler optimization models, project rankings, cutoff values and methods scholars call Fast & Frugal heuristics.
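To illustrate the kind of sensitivity being studied, the sketch below simulates one simple technique, ranking projects by their noisy evaluations and funding the top ones, and measures how the true value lost to selection errors grows as reliability falls. The portfolio sizes and distributions are invented for illustration; the author's research covers a much wider range of techniques.

```python
import numpy as np

rng = np.random.default_rng(2)

def selection_loss(reliability, n_projects=200, budget=50, trials=500):
    """Average true value lost, versus perfect selection, when funding
    the `budget` projects with the highest noisy evaluations."""
    var_true = 1.0
    # From reliability = var_true / (var_true + var_err):
    var_err = var_true * (1.0 - reliability) / reliability
    losses = []
    for _ in range(trials):
        true_vals = rng.normal(0.0, np.sqrt(var_true), n_projects)
        evals = true_vals + rng.normal(0.0, np.sqrt(var_err), n_projects)
        best = np.sort(true_vals)[-budget:].sum()              # perfect selection
        chosen = true_vals[np.argsort(evals)[-budget:]].sum()  # rank on noisy eNPV
        losses.append(best - chosen)
    return float(np.mean(losses))

results = {r: selection_loss(r) for r in (0.9, 0.5, 0.2)}
for r, loss in results.items():
    print(f"reliability {r:.1f}: avg value lost {loss:.2f}")
```

The simulation shows the expected pattern: as reliability falls, the ranking rule funds more lesser-valued projects, and the gap to perfect selection widens.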

In addition to the above metric, drug development and PPM executives need a full toolbox of these metrics. Producing these tools is another goal of mine and part of the development of my pipeline physics framework.

As a final note, when estimating evaluation errors via Monte Carlo analysis, the following discussions are useful: "Revenue forecasting errors dominate decision trees," "Overconfidence and underestimating risk," and "Estimating probabilities of success: it's not so successful." For additional information about estimation errors and project selection, see my discussion, "How erroneous data causes project selection errors." The following forthcoming discussions will be helpful as well: "You can't reduce uncertainty by planning" and "Four common errors in PPM's Monte Carlo analysis."


After reading my discussions, many managers wish to share their experiences, thoughts and critiques of my ideas. I always welcome and reply to their comments.

Please share your thoughts with me via my contact page, and I will reply to you by email. If you prefer to be contacted by phone, fax or postal mail, please note that in your message.