Archive

R&D

Normally collegial discussions took a nasty turn after I suggested that most universities lose money on sponsored research.

Incredulous: “I don’t believe it. My department tacks a 50% surcharge onto all my contracts; how can they lose money?”

Defensive: “Here are all the reasons that doing research is a good thing, so what’s your point?”

Defensive with an edge: “Why are you attacking research?”

Let’s be clear about it: if it’s your institution’s mission to conduct research, then spending money on research makes perfect sense. In fact, it would be irresponsible to deliberately starve a critical institutional objective like research.

On the other hand, there are not all that many universities with an explicit research mission. Yet there is an accelerating trend among primarily bachelor’s and master’s universities to become — as I recently saw proclaimed in a paid ad — the next great research university. The university that paid for the ad has absolutely no chance of becoming the next great research university. Taxpayers are not asking for it. Faculty are not interested. Students and parents don’t get it either.

The administration and trustees think it’s a great idea. Research universities are wealthy. Scientific research requires new facilities and more faculty members. Research attracts better students. Best of all, federal dollars are used to underwrite new and ambitious goals that would otherwise be out of reach as state funding shrinks. As often as not, the desire to mount a major research program is driven by a mistaken belief that sponsored research income can make up for shrinking budgets. It’s a deliberate and unfair confounding of scholarship and sponsored research.

If your university is pushing you to write grant proposals to generate operating funds, then alarm bells should be going off. Scholarship does not require sponsored research. Chasing research grants is a money-losing proposition that can rob funds from academic programs. Sponsored research is an important part of the mission of a research university, but for almost everyone else, it’s a bad idea. It’s a little like shopping on Rodeo Drive: there’s nothing there that you need, and if you have to ask how much it costs, you can’t afford it.

How is it possible to lose money on sponsored research?  After all, professor salaries are already paid for.  The university recovers indirect costs. Graduate and undergraduate students work cheap.

A better question is how anyone at all can possibly make money on sponsored research. Many companies try, but few succeed. A company that makes its living chasing government contracts might charge its sponsors at a rate that is 2–3 times actual salaries. Even at those rates, it is a rare contractor that manages to make any money at all.

On the other hand, a typical university strains to charge twice direct labor costs. Many fail at that, but the underlying cost structure — the real costs — of commercial and academic research organizations is basically identical. There is a widespread but absolutely false assumption that underlying academic research costs are lower because universities have all those smart professors just waiting to charge their time to government contracts. The gap between what universities charge and what sponsors are willing to pay commercial outfits is the difference between making a profit and losing a lot of money. Just like intercollegiate athletics, sponsored research programs tend to lose money by the fistful.

Let me say up front that the data to support this conclusion are not easy to come by. Accounting is opaque. Sponsors know a lot about what they spend, but relatively little about what their contractors spend. It is in nobody’s interest to make the whole system transparent. But my conversations with senior research officers at well-respected research universities paint a remarkably consistent picture. With very few exceptions, it takes $2.50 to bring in every dollar of research funding.

Fortunately, the arithmetic is easy to do. If you know the right questions to ask, you can find out how much sponsored research is costing your institution. Here are ten sure-fire ways to lose money on sponsored research, followed by a back-of-the-envelope tally. You do not need all of them to get to a negative 2.5:1 margin. If you are clever, just a couple will get you there.

  1. Reduce senior personnel productivity by 50%: University budgets are by and large determined by teaching loads, a measure of productivity. It is common to adjust the teaching loads of research-active faculty. Sometimes normal teaching loads are reduced by 50% or more. It is, some argue, table stakes, but a reduced teaching load is time donated to sponsored research because funding agencies rarely compensate universities for academic-year support.
  2. Hire extra help to make up for lost productivity: Courses still have to be offered, so departments hire adjuncts and part-time faculty.
  3. Do not build Cost of Sales into the contract price: The sales cycle for even routine proposals can be months or years. Time spent in proposal development converts to revenue at an extraordinarily small rate. In nontechnical fields and the humanities, where research support is rare, the likelihood of a winning proposal is essentially zero.
  4. Engage in profligate spending to hire promising stars: Hiring packages for highly sought-after faculty members can easily reach many millions of dollars. They amount to a sort of hiring bonus, and there is little evidence that this kind of up-front investment is ever justified on financial grounds.
  5. Make unsolicited offers to share costs: Explicit cost-sharing requirements were eliminated years ago at most federal agencies.  Nevertheless, grant and contract proposals still offer to pay part of the cost of carrying out a project.
  6. Allow sponsors to opt out of paying the indirect cost of research: An increasingly common practice is to sponsor a research project with a “gift” to the university. Gifts are not generally subject to overhead cost recovery, so a university that agrees to such an arrangement has implicitly decided to subsidize legal, management, utility, communication, and other expenses.
  7. Accept the argument that indirect costs are too high: The meme among federal and industrial sponsors is that indirect costs are gold-plating that must be limited. Rather than believe their own accounting of the actual costs of conducting research, they argue that universities should limit how much they charge back to the sponsor.
  8. Build a new laboratory to house a future project: Sponsors argue that it is the university’s responsibility to have competitive facilities.  But that new building is paid for with endowment funds or scarce state building allocations that might have gone toward new classrooms or upgraded teaching labs.
  9. Offer to charge what you think the sponsor will pay, not what the research will cost: Money is so tight at some funding agencies that program managers are told to set a (small) limit on the size of grants and proposals independent of the work that will actually be required.
  10. Defray some of the management costs of the sponsoring agency: The practice has become so common that it is hardly noticed. University researchers troop into badly-lit conference rooms to help program officers “make the case” to their management.
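To see how quickly a few of these practices compound, here is the back-of-the-envelope tally, sketched in Python. Every number in it is invented for illustration (real figures vary widely, and as noted above the accounting is opaque), but the arithmetic shows how a handful of the items above can turn a dollar of research revenue into $2.50 of spending.

```python
# A back-of-the-envelope model of the 2.5:1 ratio. All figures are
# hypothetical, normalized to $1.00 of sponsored-research revenue.

grant_revenue = 1.00

costs = {
    # The sponsor reimburses direct costs, but the university still spends them.
    "direct costs of the funded work":      1.00,
    # Item 1: unreimbursed academic-year salary behind a 50% teaching release.
    "donated faculty time":                 0.40,
    # Item 2: adjuncts hired to cover the released courses.
    "backfill teaching":                    0.15,
    # Item 3: proposal writing at, say, a one-in-five win rate.
    "cost of sales":                        0.30,
    # Items 5-7: cost sharing plus under-recovered indirect costs.
    "subsidized overhead and cost sharing": 0.35,
    # Item 8: a share of new lab space amortized against the project.
    "facilities":                           0.30,
}

total = sum(costs.values())
print(f"${total:.2f} spent per ${grant_revenue:.2f} of research revenue")
# -> $2.50 spent per $1.00 of research revenue
```

None of the individual line items looks alarming on its own, which is exactly why the total so rarely gets added up.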
The list goes on. It is so easy to turn a sponsored research contract into a long-term commitment to spend money for which there is no conceivable offsetting income stream that institutions routinely chop up the costs and distribute them to dozens of interlocking administrative units.  The explosion in the number of research institutions has all the elements of an economic bubble.
  • It is motivated by a gauzy notion that all colleges and universities are entitled to federal research funds.
  • It is fed in the early stages by accounting practices that make it easy to subsidize large expenditures.
  • It has the cooperation of funding agencies who know that the rate of growth is not sustainable.

Virtually everyone involved in university research knows that the bubble will burst. A colleague just showed me an email from his program director at a large federal research agency. It said that — regardless of what he proposed — the agency was going to impose a fixed dollar limit on the size of its grants. But in order to win a grant, he had to promise to do more. His solution: promise to do the impossible in two years instead of three. Just as in the famous Sidney Harris cartoon, a miracle is required after two years. At least there would be enough money to pay the bills while a new grant proposal was being written.

There’s a kerfuffle on the eve of the United Nations Climate Change Conference in Copenhagen. Some 1,700 email messages that were supposed to be stored on a secure server somehow found their way to open servers and were rapidly picked up by bloggers and others, who jumped on the opportunity to use the sometimes embarrassing messages to discredit the overwhelming consensus of climate scientists that the earth is warming at an alarming rate and that human activity is the most likely cause. Aside from the shocking coincidence of events — what are the chances that a massive, worldwide fraud would be exposed at the same time the conspirators are getting together to impose their new world order? — and the uproar among climate scientists — who are launching ad-hominem attacks at every skeptic who pokes his head above ground — are there other lessons to be drawn from this shameless bit of theater? My Georgia Tech colleague, climate scientist Judith Curry, hit the nail on the head when she pointed out that (1) there is really nothing in the released messages that discredits published scientific results, and (2) scientists are being incredibly counterproductive by retreating into their Ivory Towers and passing up the opportunity to educate and engage both skeptics and the public. Her Open Letter to Graduate Students and Young Scientists should be required reading for everyone interested in how to keep worlds from colliding:

…even if the hacked emails from HADCRU end up to be much ado about nothing in the context of any actual misfeasance that impacts the climate data records, the damage to the public credibility of climate research is likely to be significant. In my opinion, there are two broader issues raised by these emails that are impeding the public credibility of climate research: lack of transparency in climate data, and “tribalism” in some segments of the climate research community that is impeding peer review and the assessment process.

For “climate science” you can substitute “innovation” and the message is the same. If you’ve circled the wagons and are shooting at anything that moves, the easy casualty is public understanding, not only of science but of innovation in general. The American public is not interested in the long-term thinking required to make sense out of squabbles like this. There are simply not enough people like San Diego florist Steve Boigon, who — according to the New York Times — downloads MIT physics lectures because he finds that:

I walk with a new spring in my step and I look at life through physics-colored eyes.

Curry did not go after the easy targets. Instead, she talked honestly to students about the importance of climbing down from the Ivory Tower. The interactive relationship between basic science, technological innovation, and public policy — what Donald Stokes calls Pasteur’s Quadrant — is a hot topic these days, because so many important societal issues can only be resolved at their intersection.

There’s a veil that conceals the inner workings of creative science and engineering from the lay public, and attempts to lift it sometimes produce bizarre reactions. I was once struck speechless at an all-hands meeting when one of my engineers stood to scold the CEO for making product decisions because he knew “nothing about electronics.” A prominent member of my Board of Advisers at the National Science Foundation once countered criticism of his particularly cumbersome approach to software development by angrily proclaiming, “Programming is like playing a piano. Only virtuosos should do it!” A world-renowned engineer once responded to an essay critical of his methods by widely distributing a letter entitled “On a Political Pamphlet from the Middle Ages.” I was one of the young authors on the receiving end of that one. When outsiders try to lift the veil, the best course is to repair to the upper reaches of the Ivory Tower, hope that the hubbub goes away, and shoot down at the critics if it doesn’t.

It is a world view that is somehow wired into university training. The Medieval regalia, semi-religious icons, and murmured incantations that confer special status on the conferees reinforce the impression at every college commencement that something mystical has taken place. Science textbooks are uniformly silent on how science is done, presenting the subject instead as a linear, completed work — orderly in progression and tidy in its use of knowledge. Nearly every engineering textbook guides readers through well-rehearsed exercises to successful completion of design tasks. Why would anyone want to learn how to build a bridge that falls down?

Insiders, of course, know differently. What takes place behind the curtain is as important as the finished product. Some of the best technical books ever written lift the veil. Proofs and Refutations by Imre Lakatos describes the centuries-long frustration of mathematicians trying — and repeatedly failing — to precisely define polyhedra. The process led to some of the greatest mathematical results of all time. Why Buildings Fall Down by Mario Salvadori and To Engineer is Human by Henry Petroski are both compelling arguments that progress in engineering is inextricably tied to understanding engineering failure. Insiders know that failure is part of the package. That’s exactly what makes the most outrageous of the climate change attacks so improbable.

There is a sub-genre of humor devoted to obvious, boundlessly incompetent scientific failure, real or imagined.  The Journal of Irreproducible Results is perhaps the defining publication that holds technical vanity up to ridicule. An article entitled Peaceful Use of Nuclear Explosives helpfully noted that

Development of hydro power in the desert of North Africa awaits only the introduction of water

My personal favorite medical discovery was an announcement entitled The Incidence and Treatment of Hyperacrosomia in the United States:

Some very famous Americans have indeed been afflicted with Acute Hyperacrosomia, among them Abraham Lincoln, George Washington and Lyndon Johnson. Their condition is readily apparent upon comparison with normal individuals such as Napoleon Bonaparte, Truman Capote and Dick Cavett… Since the male population does express the condition to a higher degree, it falls primarily to the female population to objectively consider the risks of involving themselves with hyperacrosomic males…

The jokes are so well-known that Henry R. Lewis apparently had no second thoughts when he wrote The Data Enrichment Method:

The following remarks are intended as a non-technical exposition of a method which has been promoted (not by the present author) to improve the quality of inference drawn from a set of experimentally obtained data.  The power of the method lies in its breadth of applicability and in the promise it holds in obtaining more reliable results without recourse to the expense and trouble of increasing the size of the sample of data.

I have a hazy understanding of the data manipulation charges that climate skeptics are leveling at researchers, but I am pretty sure that The Data Enrichment Method was not involved. There is also the issue of transparency, which is specific to climatologists, but Curry handles that well. And then there are the charges that editors of journals were unduly influenced by political considerations. Like the Inspector in Casablanca, I would be shocked — truly shocked — to hear that hundreds, perhaps thousands, of smart, educated, and highly ambitious people make decisions based on self-interest. The secret that Curry reveals is that it may be regrettable, but it doesn’t matter in the long run. Science is not an orderly, axiomatic progression of knowledge. It is a social process.

Even a brief peek under the veil would be enough to convince many fair-minded skeptics that if there were another compelling, contradictory analysis of the same data, it would by now have appeared in a reputable scientific journal. Why? Because it would be a career-making result. The article would write itself. What editorial board could long resist publishing an epochal article? History teaches that political manipulation is much more likely to focus on who gets priority as multiple groups rush to publish simultaneously. It’s hard to maintain a conspiracy when everyone is looking out for himself. None of this means that everything that has been published is correct. It just means that it’s very unlikely that the shrill cries of systematic fraud have any validity.



So strong is the urge to seek out systematic scientific fraud that there is a magazine devoted to the subject. The Skeptical Inquirer (SI) is a kind of companion to The Journal of Irreproducible Results. It specializes in debunking academic myths and scientific hoaxes. It has over the years exposed magicians, perpetual motion charlatans, creationists, and hundreds of scientific frauds. Who are these crusaders? They are the very power brokers who would have to be co-opted if the climate change conspiracy theorists were right. Scan even a partial list of SI Fellows.

If there is a less easily manipulated group under one banner, I have not seen it.

Judy Curry’s Open Letter does not apply only to climate scientists. It applies to every boardroom that squashes discussion of how innovation takes place and every executive suite where technologists are too busy innovating to engage seriously with corporate management. Of course, it also applies to the easy targets — facile business leaders who confuse near-term planning with technical progress and are too quick to jump to the “bottom line” — but that discussion will have to wait for another post.

Beware: Sharp Edges

I am sometimes chastised for saying it out loud, but engineers have a hard time with context. Every physics homework problem that advises, “ignore the effects of gravity and friction,” adds another brick to the wall that separates solutions to technical problems from solutions that are meaningful to customers. I am not making a value judgment. In fact, we would never make technical progress at all if every possible real-world variable had to be taken into account at the outset of a project. An engineer who once worked for me insisted on starting every engagement with “What do we mean by reliability?” before listing all of the possible ways that a system – any system, not necessarily just the one we were supposed to be talking about – could be unreliable. None of those discussions ever came to a satisfactory conclusion.

However, as we saw in “Well, what kind of fraud is it?”, worlds collide when there is confusion about context. The collisions are damaging to business, and sometimes it is impossible to recover from them. It may be a technical feat to hone the edges of a warning sign to lethal sharpness, but sharpness is not the purpose of the sign.

Corporate culture can make it hard to blend contexts, and it is especially hard for companies with strong engineering roots to draw the line between valued technical advice and technical value that can be delivered to customers. There was an internal joke at HP:

How can you tell where the sales meeting is? Look for a dozen white Ford Tauruses in the visitor parking lot.

The typical HP company car was a white Taurus, and it was common to hold customer meetings in which HP engineers outnumbered customers by five to one or more.

There is one sure-fire way to tell that engineering culture is driving business operations toward a destructive collision. I call it the catalog rule. Imagine a sales meeting with N salesmen and M customer representatives. One of the salesmen should be able to arrive with all of the sales material and, regardless of how large N is, there should be only M sales packets on the table — one for each customer. It happens so often that there are M times N catalogs on the table that you sometimes scarcely notice it. A customer wants to buy a solution to a complex problem. At the first customer engagement, glossy specifications for all of the carefully engineered component parts are dumped on the table. This is the point in the meeting where the customer is supposed to have a flash of insight, leap to his feet, and start congratulating the engineers. In the real world, however, the reaction is a little different. Very few customers want to be their own system integrators. My former Telcordia Applied Research colleague Dennis Egan puts it this way: “Our engineers just want to see their stuff used.” It seems like a simple thing to ask for, but sometimes this urge for appreciation trumps all other concerns. In particular, it can obscure the true business context, although you might have to look hard to find it.

It wasn’t that long ago that choosing a data communications service was a confusing and expensive task. Many telecom customers chose the safe path and called their traditional voice telephony service providers, although it was frequently a big mistake to do that. Data services in 1995 were a jumble of software and hardware standards, confusing pricing models, and regulatory inconsistencies. A phone call to Bell Atlantic in 1995 inquiring about ISDN service inevitably led to questions that few commercial customers and almost no residential customers could answer. The question “How far are you from the Central Office?” would usually be met with: “What’s a Central Office?” Because maps and engineering diagrams were frequently inconsistent, an ISDN customer would sit patiently through explanations of loads and coils and why the service probably would not perform as advertised anyway. A thick reference book titled Engineering and Operations in the Bell System, published by Bell Labs, was given to every engineer in the company. Later, after the 1984 divestiture put the physical plant in the hands of seven independent regional operators, Bellcore maintained Engineering and Operations as the network engineering manual for all telephone infrastructure in the country. By the time DSL service became widely available in 1997, Engineering and Operations specified a work flow for providing DSL service to a single customer with steps that could be completed only after a hundred other independent steps were completed.

These were the early days of e-commerce, and a clever group of entrepreneurs formed a company with the wonderful name Simplexity to simplify the life of telecom customers in the new age of data. They had been buoyed by Michael Dell’s brilliantly simple business plan for the company that was to become Dell Computer™: four pages that said in plain language that it was a hassle to buy computers and that virtually every potential buyer would choose to make a single phone call directly to a manufacturer if it would cut the hassle. Buying data service was a hassle, too. Simplexity’s founders reasoned that the 1997 equivalent of Dell’s single phone call for telecom services was this simple website:

[Image: the Simplexity login screen]

By negotiating with service providers for a percentage of all subscription fees – a process that was well understood in the industry because resellers of voice and data services were common – Simplexity was able to project a steady growth in revenue as data customers chose the Dell direct-sales shopping model.  Their first few customers apparently verified the market hypothesis, and Simplexity was one of the start-up successes of 1997, raising substantial venture funding and positioning itself for a successful IPO.

The engineering was flawless. Simplexity’s Virginia-based development lab looked a lot like a Silicon Valley startup: an open floor plan with ping pong tables, bean bag chairs, and board games scattered everywhere. Java programmers seemingly fresh out of high school chattered excitedly about the next generation of services that would be marketed through Simplexity.com.

Then Simplexity’s revenue growth stalled.  The large number of smaller contracts that investors had anticipated did not follow the small number of large, early contracts.  In fact, new revenue began to decline even as data services began to explode.  Surprisingly, reseller revenue continued to rise as new customers shopped around and additional data service contracts were added to existing customer accounts in record numbers.  Simplexity began cutting its technical staff and adding traditional sales staff to compete head-to-head with the resellers.  This undercut the cost savings as Simplexity found itself paying more in commissions to order-book-carrying salesmen.  By early 2000, Simplexity had run out of cash, and, shortly after that, the company ceased operations.

In my discussions with company executives it was clear that they understood only too late that Michael Dell’s model did not work in telecom.  Customers had been purchasing voice and data services from human salesmen for years and the inherent inefficiency in doing that was more than offset by the personal relationships that drove sales.  A website – no matter how efficient – could not replace the long-standing social ties between buyers and sellers.  Simplexity was a great technology in a marketplace that did not need it.   The Dell model was a red herring.  Dell worked in the PC marketplace because there was no longstanding and trusted way of buying computers that had to be displaced.

Why didn’t Simplexity’s market research expose such a basic flaw in their business model?  I attended Simplexity’s early customer briefings – meetings for engineers aimed at selling their technical advantages.  They went out of their way to avoid positioning themselves as just another vendor.  Meanwhile their bricks-and-mortar competitors were fighting it out over who would get the next order.  It was “just another vendor” who got the order.

This is the message that I give to new startups: if it’s a choice between an exciting technology meeting and a boring sales meeting at which you are just another vendor, choose boring. Your customer may not understand your technology, but if your product is really that good it will outshine the competition anyway. And, if you are in a vendor meeting, chances are someone is interested in buying. It may be more exciting to warn everyone about your sign’s incredibly sharp edges, but that’s not the real reason it’s there.


One of the reasons that the world of R&D collides with product worlds is that their agendas don’t quite line up the way you might think they should. There are of course questions of culture, incentives, and time. I will return to these in later posts, but today I want to point out something more fundamental that I think helps explain why Alice and Edward in “Well, what kind of fraud is it?” lived in worlds that were on a collision course from the beginning: many R&D managers are not even in the same business as their counterparts in product management and sales.

The Industrial Research Institute is an association of 200 R&D-intensive companies and is one of the most important forums for sharing data and best practices. Among its members are recognizable brand names in consumer products, manufacturing, electronics, and pharmaceuticals. Alcoa, Xerox, and General Motors are members. It is fair to say that the IRI represents traditional, orthodox R&D management thought. Microsoft, Google, and Intel are not members. It is telling that the Internet, software, nanotechnology, and other industries in which startups often lead the way and product development cycles are compressed are notably absent from IRI.

The IRI Medal is awarded for impact on R&D in some of the largest corporations in the world, and in 1996 it was awarded to Robert A. Frosch, who for ten years led the General Motors Research and Development Center. A true visionary, Frosch anticipated by a generation the importance of industrial ecology. His Medalist’s Address to IRI was entitled “The Customer for R&D is Always Wrong!” It was a fascinating and very influential piece, but, because IRI membership is not open to individuals, it is hard to find.

My first thought on hearing the address was that Frosch was talking about something like the “future value of research” (see “Loose Cannons”) until I read the published version of the speech[1]:

I have seldom, if ever, met a customer for an application who correctly stated the problem that was to be solved.

Frosch went on to describe many approaches to establishing and maintaining an effective R&D organization, and that’s what I remembered from the address until GM started its public foundering last year.

I started to wonder, “Did the GM R&D Center fail General Motors?” I don’t think that’s a fair assessment. After all, GM had for many years made vast research investments in efficient engine technology, telematics, and safety – many of the component technologies that we now know are important to the automobile industry. I think the fault lies elsewhere: traditional R&D management often does not know who the customer is. R&D managers talk mainly to each other, and senior management enables this behavior. They worry – necessarily so, I’m afraid – about sources of funding from the product divisions. According to Frosch:

The R&D people must swim in an ocean of corporate problems, present and future.

To Frosch and many organizations charged with innovation, the customer is the one paying the bills for R&D, not the one buying the products. This is a bigger deal than you might imagine, because it shifts your perspective. It helps explain why R&D organizations have been historically ineffective in resolving Clayton Christensen’s Innovator’s Dilemma[2], and it helps explain why Alice and Edward had such a hard time aligning their goals.

Frosch says that R&D performance should be measured by:

  • Past performance, not promises or predictions
  • Summing the value of the successes and comparing it with the total cost of the research lab, not of individual projects
  • Projecting the value of successes over their product or process life – the internal rate of return can be surprisingly high (a toy calculation follows the list)
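Frosch’s third measure is easy to make concrete. Below is a toy calculation in Python, with invented cash flows: treat the whole lab as a single investment, charge every year of lab cost against it, and credit the summed value of the successes over their product lives. The numbers are hypothetical, but they illustrate why the lab-level internal rate of return can be surprisingly high even when most individual projects return nothing.

```python
# Toy lab-level IRR, per Frosch: total lab cost vs. the summed value of
# the successes over their product lives. All cash flows are invented.

def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection (one sign change assumed)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid   # rate too low: NPV still positive
        else:
            hi = mid   # rate too high: NPV negative
    return (lo + hi) / 2

# Years 0-4: the lab costs $10M a year (every project, not just the winners).
# Years 5-12: two successes together return $25M a year over product life.
cashflows = [-10.0] * 5 + [25.0] * 8
print(f"lab-level IRR: {irr(cashflows):.1%}")  # about 25% with these numbers
```

Scored project by project, the same lab would show mostly failures; summing over the whole portfolio, as Frosch insists, is what makes the return visible.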

These are internal measures, and there are many examples of R&D organizations that continued to be successful even as their parent companies spiraled into the ground. The IRI membership list is impressive, but it also includes a veritable Who’s Who of companies that were stunningly wrong in their assessments of their markets; had their R&D laboratories been focused on the real customers, they might have avoided disaster.


[1] Robert A. Frosch, “The Customer for R&D is Always Wrong!”, Research-Technology Management, November–December 1996: 22–27.

[2] Clayton Christensen, The Innovator’s Dilemma, Harvard Business School Press, 1997.

Business and engineering goals sometimes seem to be in perfect alignment when just the opposite is true.  When everyone seems to be making progress but the goal is not getting any closer, it might be time to ask whether worlds are colliding.  This happens more often in large organizations, but in fact everyone is vulnerable: if you misread your partner’s agenda there are very few ways to avoid a disastrous collision.

Here’s an example[1].  The project was high profile and complex, but not so complex that it could not be managed by a single project manager – let’s call her Alice — who reported in a line to senior executives.  The ultimate goal was to produce a working prototype based on new computing technology. A successful demonstration of the prototype would almost certainly lead to full-scale development of a new product, a spectacular win and probable promotion for Alice. There were a few nuts-and-bolts engineering goals but the overriding goal was a dramatic safety improvement, and this was how the project was sold internally: a public demonstration that the prototype would function safely under the most adverse conditions. There were many ways to achieve this goal, but Alice had been sold on a radical new technology that would not only leapfrog existing approaches but would be a platform for many future projects.

Alice invested well. She funded a group of skilled and capable engineers and scientists. In fact, she funded the team that invented the technology, so her investment was leveraged by several years of prior research, and — see my last post, “Loose Cannons, Volume 1” — this is the way managers are supposed to select promising technologies. The scientists were led by Edward, a senior technologist who had guided his R&D team to a string of patents, technical reports, and publications that slowly and carefully put in place the building blocks for the prototype.

Under Edward’s supervision, the building blocks for careers were also being put in place. A PhD dissertation here, a toolkit from a master engineer there, and senior R&D managers whose reputations were to some extent staked on the applicability of the technology to an important product just like this one. At Alice’s direction, the engineering team focused on near-term milestones. One was a technology demonstration for a critical component. Another was the integration of key components. A third was a real-time simulation of the prototype. At each step, in careful technical prose, the engineering team reported constant and impressive progress.

But there were internal and external critics who thought that the technology was overly complex and that the claims needed to be more carefully examined. Some critics, like Bob, were promoting competing technologies. Others, like Charlie, thought that the underlying approach was flawed and should be discarded. Still others were seasoned but neutral scientists like Doris, who was skeptical of all sweeping claims but had no particular ax to grind. Even the critics agreed, however, that the engineering team was first-rate and that if the approach could be made to work at all, this was the team that could pull it off. Alice was aware of the critics, and to help her balance the technical risks she invited Bob, Charlie, and Doris to serve on her Advisory Board – to become her skeptical insiders for the project.

Quarter after quarter Alice reported both the steady progress and the risks to her management who asked the appropriate questions but gave her the green light to continue, largely on the growing reputation of Edward’s team.  As the project drew to a close, Alice was asked to prepare a balanced summary and recommendation.  Alice scheduled a final project review.  Bob, Charlie and Doris helped select a dozen additional reviewers while Edward began assembling the massive project documentation and preparing his team to brief the reviewers.  Alice’s direction to Edward was this: “We all understand your technology, so you don’t have to educate us about it.  We need to know exactly what was accomplished.”

It took several months to prepare for the review. About two months before the review date, Alice and Edward scheduled a series of demonstrations at headquarters. Charlie was there along with a group of a dozen executives, including some of the review panelists, but the marketing nature of the meeting was unmistakable. Sprinkled in the group were senior representatives from customer organizations, government agencies, most of Alice’s managers, and Edward’s boss. Alice had staked her personal credibility on a successful outcome. She was confident enough to preview the results, and she wanted to use that preview to build excitement as the product phase was launched. To the rest of the group – and especially to Edward – she was not Edward’s customer. Alice was a partner in a new and exciting era that was being launched that day.

The day did not go as planned. The demonstration was a computer simulation of the prototype.  The group crowded around the color monitor (a big deal in those days) as the prototype was put through its paces.  Alice told the group she knew that a live demo was gutsy.  Then the image on the display began spinning and then froze.  Edward rebooted the simulation.  Still nothing.  Alice pushed on as if nothing had happened, inviting the group to a demo at the upcoming project review.  It is not clear that Alice and Edward understood the significance of this episode.

Couriers delivered large review packages to the reviewers’ offices as preparations for the meeting accelerated.  Charlie started receiving phone calls from Bob: “Charlie, I’ve been looking over the reports, and I have some problems with what Edward is claiming.”  “These are based on papers published in top journals,” Charlie said.  “It’s not the scientific claims,” Bob said, “It’s their application to the product.  I think they messed with the experiments to get the result they wanted.”

The review began on Tuesday morning in a large conference room.  Bob’s comments had spread quickly through the Advisory Board and there were perhaps a dozen back-channel conversations taking place about what it meant.  Edward’s team should have been on edge, but, although the atmosphere in the room was tense, the younger team members — buoyed by Alice’s collegial demeanor and Edward’s favorable report to the team of the outcome of the live demo — seemed unusually relaxed.

Over the next two days, every scientific claim was dissected. “Yes, we see what was claimed in this published report, but it looks like a purely mathematical result. What does it have to do with the prototype?” asked one reviewer. Several panelists wanted Edward to square published claims with the apparent inconsistency of the disastrous live demo. Still another rushed to the blackboard and proceeded to find a counter-example to a published claim. Bob wanted to know how Edward’s team could have pulled off what Bob’s competing team could not. This was hardball, but it was nothing that Alice had not expected.

Late on day two, William – the youngest member of Edward’s team – moved to the podium and began a scientific summary that included his original research and the less technical summaries of it that had been prepared for popular consumption. It was clear that William’s PhD dissertation had an enormous impact on the course of the project.

Finally, from the back of the room, Doris spoke up. “I want you to explain this claim right here,” she said, pointing to a critical and widely reported result that apparently cleared the way to broad applicability of the technology. Doris had been nearly silent to that point. The dramatic effect of her question brought everything to a stop. Edward gave a nontechnical answer. William jumped in with technical details. Other members of the engineering team tried to help. Doris wasn’t buying any of it and brushed aside all of the responses with well-reasoned arguments taken from their own published reports.

Doris said, “I certainly believe William’s claim, here. It’s a groundbreaking result. But what I don’t believe is the following report that it was used successfully in the prototype you are showing us today.” The response was not planned, but William blurted: “It wasn’t. We used a simplified version of the prototype.” The room went silent. “There’s no way we could have used the final version. It would have been too complex.” Alice stood up and stared at Edward: “That’s not what you reported to me.” At that moment, in Edward’s eyes, Alice snapped back into focus as a customer, and Edward understood that Alice’s goals were not aligned with his. As the effect of Alice’s words sank in, the more inexperienced William tried to lighten the mood with a little humor: “Look. Everything we said was true. It’s not out and out fraud.”

Doris rose.  “Well, what kind of fraud is it?”

It took a long time for the panel’s report to appear. The project was buried and the product was never built. Alice recovered successfully, but Edward and his team were wounded, though William and some of the other engineers went on to careers in pure research, continuing their work on the underlying technology.

Edward’s team had been making progress on technology, and their primary loyalty was to the community of peers who would celebrate their continued success.  The prototype was an interesting but not essential piece of their research program – useful only to the extent it helped advance their research goals.  William’s work was the least tightly coupled to the prototype and in fact his primary interest in the project stemmed not from the prototype but from ideas born years before while he was still a graduate student.  They all interpreted Alice’s support over the years as not only endorsement of the underlying technology but also a kind of professional endorsement of career choices that were tied to scientific acceptance of the research.   Alice interpreted the acceptance of Edward’s team as a validation of her own credentials as a technology leader.

This was Edward’s R&D world that went crashing into Alice’s product world, a world where the prototype had value independent of whatever underlying technology it used. Alice understood only too late that success in the R&D world had its own set of goals and rules for achieving them, and that her support did not necessarily advance her own product goals. The engineering team saw her as an ally in achieving their goals. Alice saw Edward as a fellow traveler. He was not. Edward was imagining the many future projects that would regard his achievement as an enduring technological innovation.


[1] For reasons that will become obvious, I’ve disguised the names of the organizations and people involved, but I’ve been faithful to the conversations and the underlying message.