


The Prescription Drug User Fee Act: A Solution to Drug Lag?

Alusheyi J. Wheeler

Class of 2003

Submitted in fulfillment of the 3rd Year Written Work Requirement

April 2003

For decades, scholars criticized the United States Food and Drug Administration for delaying consumers’ access to vital new drug therapies. They argued that the FDA-regulated system of drug development and approval was born out of disaster and therefore extremely overcautious. Critics alleged that pharmaceuticals available in other industrialized nations began saving lives years before the FDA would allow them onto the American market. “Drug Lag,” as it came to be known, was blamed for needlessly killing thousands of Americans. In response, Congress passed the Prescription Drug User Fee Act of 1992. The Act allowed the FDA to charge user fees to drug companies in order to generate a new influx of cash that would allow for faster approval of new pharmaceuticals. The legislation enumerated very specific new goals for the agency to meet in its review of various submissions. The legislation also required the FDA to submit an annual report to Congress outlining the agency’s progress in meeting the goals. The reports reveal that the FDA has achieved staggering success. In almost every category, the FDA met or exceeded the aggressive goals outlined by Congress. Unfortunately for Congress, the FDA, and the American consumer, this success did not end drug lag. The time and money required to develop a new drug have ballooned dramatically. This occurred because Congress and the FDA crafted a statute that deals with only a small part of the problem. At its core, drug lag involves the amount of time and money required to take a drug from laboratory creation to store shelves. Since the drafters of the PDUFA did not fully address this problem, the legislation cannot solve it.

Prior to the 20th Century, American society generally left the regulation of the quality of foods and drugs to market forces. However, beginning in 1906, a series of disasters and near-disasters led Congress to continually expand the power of the agency that would become the Food and Drug Administration (“FDA”). The world-famous thalidomide disaster convinced Congress to give the FDA vast authority to ensure drug safety and efficacy. Within fifteen years, critics of the new system emerged, arguing that the requirements created unnecessary delay and expense for companies developing critical new drug therapies. They also maintained that this “drug lag” was quietly killing thousands of Americans who never realized that the FDA’s processes had hindered potentially life-saving pharmaceuticals.

For years, drug lag was largely an academic debate among those intimately involved with and interested in pharmaceutical regulation. The AIDS crisis changed that. AIDS victims mobilized into a formidable political force that heavily focused on the FDA’s slow and expensive drug approval process. They and victims of other life-threatening diseases finally brought drug lag into the public’s consciousness. Eventually Congress responded with the Prescription Drug User Fee Act of 1992 (“PDUFA”) (P.L. 102-571). The principal feature of the Act is a provision that allows the FDA to charge a “user fee” to private companies submitting a new drug application. The FDA uses the fees to hire additional staff members to review new drug applications (“NDAs”) and thereby expedite the review process. Letters accompanying the legislation created specific numerical goals for the agency to achieve. They usually require the FDA to review submissions within a specified time frame. Each renewal of the legislation, in 1997 and 2002, led to additional and more aggressive goals. The Act requires the FDA to submit an annual report detailing whether it has achieved the enumerated goals.

A thorough review of these reports reveals that the FDA has succeeded. The FDA has met or exceeded a surprising number of aggressive goals. This success highlights a fundamental failure of the PDUFA, however. From 1992 to the present, Congress and the FDA have failed to address the multiple aspects of the drug lag issue. The academic literature reveals that drug lag involves at least four interconnected issues that are not always explicitly separated. First, drug lag involves the allegation that foreign consumers receive faster access to new drugs. Second, many maintain that the excessive amount of clinical study required for NDA approval hinders drug development. Drug lag also involves the speed (or lack thereof) with which the FDA reviews new drug applications. Finally, the term drug lag represents the concern that, everything else aside, the entire process takes too long and people therefore die needlessly. These ideas are not mutually exclusive, but for years the FDA has treated them as if they were. The agency has focused on speed of NDA review and foreign versus U.S. drug access. Only recently has the FDA made any efforts to improve its own performance during clinical trials. The result has been only limited success in shortening the length of clinical development despite overwhelming success in achieving PDUFA goals. Given that drug development currently costs $800 million spent over 10-15 years,[1] it is also clear that Congress’ and the FDA’s myopic vision resulted in a statute that utterly failed to efficiently deliver inexpensive pharmaceuticals to the public.

Part I of this paper discusses the history of FDA regulation and the series of incidents that led to the current regulatory scheme. Part II describes the process of drug approval in the United States from discovery of a new compound to final marketing approval. Part III introduces the drug lag issue and some of the steps that the FDA has taken to address these concerns. Part IV takes an in-depth look at the required PDUFA reports and compares the FDA’s performance with the goals laid out by Congress. Part V looks at some of the research on drug development since the PDUFA was passed. Part VI suggests modifications to the American drug approval system that might address some of the aspects of drug lag that the FDA has been reluctant to recognize. Finally, Part VII briefly concludes the paper.

Part I: The History

Prior to the 20th Century, almost none of the modern regulatory system existed in the United States. In 1820, the first U.S. pharmacopoeia, a compilation of drugs and medicinal preparations, was established.[2] Much of the other 19th Century regulation outlawed misbranding and adulteration.[3] The government could only remove misbranded or adulterated goods from the market; no pre-market review mechanism existed.[4] Concerns about defective foods and drugs led to the creation of the Bureau of Chemistry in the Department of Agriculture in 1862.[5] Through a series of name changes and moves between departments, this bureau eventually became the current FDA, located in the Department of Health and Human Services.[6]

Numerous disasters provided the impetus to create the current regulatory regime. The first tragedy occurred in Camden, New Jersey, in 1901. Several children died when a diphtheria vaccination became contaminated with tetanus.[7] In response, Congress passed the Biologics Act of 1902, giving an FDA precursor the power of pre-market approval for all biological drugs (these include “blood and blood products, vaccines, derivatives of natural substances for treating allergies, and extracts of living cells”).[8]

Public concern over the safety of foods and drugs only heightened during the first decade of the 20th Century. In 1906, Upton Sinclair published his famous book, The Jungle, which chronicled the horribly unsanitary conditions prevalent in the meat packing industry. In the same year, Congress passed the Meat Inspection Act and the Pure Food and Drug Act.[9] While primarily concerned with food, the Pure Food and Drug Act also dealt with drug safety and fraud. Charlatans frequently sold consumers of the day “impure drug substances and fake medical preparations.”[10] To counter this, Congress granted the FDA the power to seize adulterated and/or misbranded products.[11]

The state of drug regulation remained relatively static for three decades until the elixir sulfanilamide tragedy prompted another increase in governmental authority. Sulfanilamide was one of the first highly effective drugs to emerge from a laboratory.[12] It served as a critical anti-infection agent in an era before the widespread use of antibiotics.[13] Consumers traditionally ingested the drug in tablet form, but in the late 1930s, the Massengill Company decided to create and market a liquid form of the drug. To create the liquid, pharmacists used a substance called diethylene glycol.[14] This chemical compound, which today is commonly found in automobile antifreeze, had often been used as a solvent for sulfa.[15] Diethylene glycol has “a slightly sweet taste and a pleasant pink color,” which belies the fact that it is quite poisonous.[16] Over 100 people, many of them children, died after ingesting the “elixir sulfanilamide.”[17] Even the most basic toxicity tests would have exposed the problem, but such tests were not required at the time.[18]

In response to this tragedy, Congress passed the Food, Drug, and Cosmetic Act of 1938. This legislation still forms the foundation of much of the FDA’s current duties. The Act radically changed the law in a number of ways: 1) cosmetics and certain types of medical devices were brought under FDA jurisdiction; 2) the act prohibited false advertising of food, drugs, and cosmetics; 3) ‘added poisons’ in foods were either prohibited or subjected to tolerance levels; and 4) the act required factories to operate under federal permits where public health could not be preserved by other means.[19] Most significantly, the Act required those who wished to sell a new drug to submit an application to the FDA.[20] This was the first time that Congress gave the FDA sweeping pre-marketing power over drug manufacturers. The FD&C Act required drug companies to submit an application establishing the proposed use of the drug and its safety at the proposed dose.[21] The Act gave the FDA 60 days to decide if the drug was safe or unsafe. If the agency failed to act during this period, the drug proceeded to the market.[22]

From 1938 to 1962, Congress amended the FD&C Act and passed other relatively minor food and drug legislation. Major change would not occur until disaster struck again in the form of thalidomide. This final disaster spurred Congress to create the regulatory framework that is basically still in place today. In 1958, doctors in Europe began prescribing thalidomide, a drug originally developed in Germany, as a mild sedative agent.[23] Its primary use was to ease the “morning sickness” symptoms that many pregnant women experience. Unfortunately, the drug caused terrible birth defects in as many as 10,000 children. Many were born with severely deformed limbs, flipper-type appendages where arms or legs should have been, or with limbs missing completely.[24] Dr. Frances Kelsey, an FDA employee, delayed thalidomide’s approval in the United States, and the birth defects became apparent before the drug reached the U.S. market.[25]

Although the United States avoided tragedy, Congress was sufficiently alarmed to pass the Kefauver-Harris Amendments to the FD&C Act, which radically altered the drug approval process. The amendments abolished the prior regime of default approval. Drug companies seeking to market a new drug must still apply to the FDA, but an affirmative action by the agency is now required prior to marketing.[26] Additionally, the FDA now requires that a drug sponsor prove both safety and effectiveness.[27] The amendments also created good manufacturing practice (“GMP”) standards that a drug must meet to avoid being deemed adulterated, required that clinical subjects give informed consent and that adverse reactions be reported, and gave the FDA jurisdiction over prescription drug advertising.[28]

Part II: The United States Drug Approval Regime

A. Pre-Clinical Phase

With this brief history complete, one can now examine the American drug approval system. To understand how drug lag became the topic of newspaper stories, one needs to see how the post-1962 system functions. The first step a drug company must take is to select a compound that may have a therapeutic effect for one or more illnesses. The company can create a completely new substance in a laboratory, a new chemical entity (NCE), or use a pre-existing substance.[29] Often company scientists will screen a variety of new compounds in animals (usually rats or mice) in hopes that some sort of effect will manifest itself.[30] Another approach is to “develop a disease model in animals that resembles the disease process in man, and then screen compounds in this model.”[31] Unfortunately, this method can be very expensive, and laboratory results in animals often do not transfer to humans. Additionally, many human diseases lack animal counterparts that could provide the basis for this kind of research.[32]

Once a substance is chosen, the FDA requires rather rigorous testing on laboratory animals before human testing (i.e., clinical testing) can begin. Drug companies often conduct two kinds of preclinical studies: pharmacology studies and toxicity studies. Pharmacology studies isolate a compound and determine how it interacts with the physiology of the test animal. This includes information about absorption, distribution, metabolism and excretion.[33] Drug sponsors also conduct toxicity studies of several types. Acute toxicity testing determines the short-term effects of high exposure to a drug. Often this sort of testing will include a determination of the lethal dose. The FDA requires that the scientists observe motor function, excretion, respiration, and behavioral problems in addition to lethality. Subacute toxicity testing is similar but takes a longer-term look at the effects of the drug on the laboratory animals. These tests usually range in length from one to three months. Finally, chronic toxicity testing looks for similar problems over an even longer period of time. These tests can last from one to two years.[34]

B. Clinical Phase

The next stage in drug development begins the process of testing the compound in human subjects. The drug sponsor must compile and organize all the data from the tests discussed above. The sponsor then presents the information to the FDA in a filing called a Notice of Claimed Investigational Exemption for a New Drug (“IND”).[35] This is an application asking the FDA for permission to ship the drug in interstate commerce.[36] The FDA has 30 days to grant or deny the IND application. If the FDA takes no action within 30 days, the sponsor can proceed to clinical testing.[37] In making its decision, the FDA generally considers “the protection of the human research subject,” “the adequacy of animal studies already completed and analyzed,” “the scientific merits of the research plan,” and “the qualifications of the investigator.”[38]

After approval of an IND, most drug companies do not attempt to perform clinical testing on their own. The drug companies will submit information from pre-clinical testing to institutional review boards (“IRBs”) of medical schools and independent testing entities.[39] The company must convince an IRB that its product is safe and worthy of testing in humans.[40] Prior to 1981, IRB clearance only applied to trials conducted at hospitals or universities, but the requirement now applies to all trials, including those performed by private practitioners.[41] The IRB principally determines whether or not the proposed clinical trials take adequate measures to assure patient safety. A second, related goal is assuring that the benefit-to-risk ratio suggests that exposing patients to the potential side effects of the drug is wise.[42] While any IRB is free to reject a proposed set of clinical trials, a sponsor may approach different IRBs until one agrees to sanction the study. The sponsor is also free to alter the study according to suggestions by an IRB.

Once the drug maker receives FDA approval (or lack of protest) and satisfies an Institutional Review Board, clinical trials may begin. FDA regulations prescribe that clinical investigation must occur in three phases.[43] Phase I begins the process with the drug’s first introduction into a human being. These trials usually start with a single healthy volunteer receiving a single dose of the drug.[44] Often this initial dose is very low, even well below that required to cause the desired effects. Over time, as safety is shown, researchers gradually increase the dosage. The researchers closely monitor the patient for side effects and record how the patient metabolizes and excretes the drug.[45] Assuming no problems occur, the researchers carefully expand the number of people taking the drug to 20-80.[46] Generally, early Phase I subjects are healthy volunteers with flexible schedules that allow for easy monitoring. Sometimes, however, researchers allow hospitalized patients with a mild form of an illness to participate.[47] During Phase I, as in all the later stages, the drug can be abandoned entirely if significant problems are discovered.

Phase II testing shifts the focus away from safety and towards the efficacy of the new drug.[48] For the first time, many of the participants will be victims of the condition that the drug seeks to treat.[49] Approximately 100-200 people will participate in a Phase II study.[50] The researchers closely monitor the patients for side effects since a relatively small number of humans have used the drug at this stage. Researchers will also devise a protocol to determine whether or not the drug is effective. The protocol will vary for every drug and every condition that a drug seeks to treat. For example, “the efficacy of a new cancer agent...will probably be determined by measuring the size of malignant tumors before, during, and after drug treatments.”[51]

At the end of Phase II, the drug sponsor is entitled to a conference with the FDA.[52] A drug’s classification usually dictates how soon this meeting occurs. Conferences are generally easier to arrange for drugs that ‘constitute an important therapeutic gain’ or have important toxicity problems.[53] This is not a trivial matter since the FDA generally dislikes pauses between the clinical phases.[54] Once the sponsor obtains a meeting time, it must gather a considerable amount of information. The FDA will generally want to see: 1) tables documenting the number of patients and how many actually completed the trial, 2) explanations for why certain patients were lost or withdrew from the study, 3) a summary of measurements taken to determine both safety and effectiveness, and 4) a statement describing whether the data has been subjected to statistical analysis, and if so, a “justification of the adequacy of such analysis.”[55] During the conference an FDA employee will examine the data and discuss any inadequacies or problems that the sponsor needs to address. The meeting is also an opportunity to reach an agreement on an acceptable Phase III protocol.[56]

The final clinical studies occur during Phase III, where hundreds or even thousands of patients receive the experimental new drug.[57] The setting should approximate that in which patients would use the drug if approved by the FDA.[58] These larger studies allow researchers to identify side effects that the smaller trials did not detect and help establish a final correct dosage.[59] Phase III studies are often monitored less closely than Phase I or II studies, and many of the participants will be outpatients.[60] Researchers use a control (placebo) and experimental (new drug) group and investigate potential drug interactions.[61] The FDA almost always requires at least two Phase III studies before it will consider approving a new drug. Predictably, this complicated and rigorous process tends to eliminate many potential new drugs. Only about 10% of drugs for which INDs are filed survive the entire clinical process and warrant the filing of an NDA.[62]

C. New Drug Application

If a substance successfully navigates the stages outlined above and shows potential to treat a condition, the maker submits a new drug application to the Food and Drug Administration. This is the formal request of the drug company for the FDA to allow the marketing of the drug. NDAs can be staggering documents. The sponsor must submit copies of practically all the data it has collected about the drug over the past years of preclinical and clinical development.[63] This includes data from every human study, every animal study, every toxicity test, every reproduction study, and every efficacy study.[64] The sponsor must submit both positive and negative data about the drug. The NDA must also detail the drug’s manufacturing process, including how quality will be maintained.[65] For some drugs this raw data plus required summary material totals over 100,000 pages.[66] The regulation of NDAs even deals in minutiae such as the type of paper that must be used and which sections of the application must be placed in certain color-coded folders.[67]

Part III: Drug Lag

While AIDS advocates did increase public focus on drug lag, they did not invent the drug lag idea. By the early 1990s, scholars had focused on the issue for over 15 years. One of the first scholars to discuss drug lag was University of Chicago economist Dr. Sam Peltzman. His 1974 book, Regulation of Pharmaceutical Innovation, is essentially a quantitative treatise against the FDA’s expanding role in drug approval. One of Peltzman’s primary arguments was that the 1962 Kefauver-Harris amendments had significantly slowed the rate of new drug development in the United States.[68] He noticed that “in the decade before the amendments an average of forty-three NCEs (new chemical entities) were introduced annually compared with an annual average of sixteen in the (subsequent) decade.”[69] Peltzman employed market growth statistics and concluded that this was not merely a coincidental correlation. “The 1962 amendments have been responsible for substantially all the post-1962 decline in drug innovation.”[70] Peltzman also determined that, as of 1974, the new efficacy and clinical trial requirements were adding at least two years to the time required to produce and market a new drug.[71] In 1962, the FDA approved the average NDA in seven months, but by 1967, with new requirements to review voluminous efficacy data for every drug, average review time had skyrocketed to 30 months.[72]

Dr. Peltzman continued to find other problems with the post-1962 approval mechanism. One of the goals of the Kefauver-Harris amendments was to reduce the amount of money consumers spend on ineffective drugs.[73] This was the primary impetus behind instituting the new efficacy requirement for all new pharmaceuticals. Dr. Peltzman determined that the amendments did successfully reduce consumers’ expenditures on ineffective drugs, but only by reducing consumer access to all new drugs.[74] He concluded that “(t)he incidence of ineffective new drugs does not appear to have been materially reduced” and “(e)ven if it had been, the pre-1962 waste on ineffective new drugs that might now be prevented appears to have been too small to compensate for the benefits consumers have had to forgo because of reduced drug innovation.”[75] In total, Dr. Peltzman’s figures showed that the Kefauver-Harris amendments had cost American consumers approximately $250 million.[76] The reduced flow of new drugs caused by regulatory delay cost consumers $300-400 million, higher prices for drugs due to lack of competition from delayed new drugs cost consumers $50 million, and the benefit of reduced purchases on ineffective new drugs only saved consumers $100 million.[77]

Other scholars have seriously questioned the economic and practical benefits of the FDA’s system. Many drugs that were approved in Europe and began saving lives were delayed considerably in the United States. For instance, “in 1978, W.M. Wardell estimated that Practolol, a drug in the beta-blocking family, could save 10,000 lives per year if allowed in the United States...The agency’s withholding of beta blockers alone was responsible for probably tens of thousands of deaths.”[78] Other researchers have compared the incidence of drug disasters in the U.S. and foreign nations. One researcher estimates that “the benefits of FDA regulation relative to that in foreign countries could reasonably be put at some 5,000 casualties per decade or 10,000 per decade for worst case scenarios. In comparison...the cost of FDA delay can be estimated at anywhere from 21,000 to 120,000 lives per decade.”[79]

The AIDS crisis finally brought the drug lag issue out of academia and into the consciousness of the general populace. Acquired Immune Deficiency Syndrome was first discovered as an independent disease in 1979.[80] Two years later, doctors diagnosed the first case in the United States. Once the deadly disease gained a foothold, it spread extremely quickly. “The first 50,000 cases of AIDS were reported to CDC from 1981 to 1987; the second 50,000 were reported between December 1987 and July 1989.”[81] Drug companies developed therapies for the disease, but many in the AIDS-victim community argued that the FDA approval regime hindered this process. They argued that it is illogical to delay a drug for years to carefully test for safety and efficacy when the target population of that drug already faces certain death.

In the late 1980s, the FDA began to relent and made a variety of changes and special exceptions to the drug approval system. In 1987, the FDA changed its rules to allow “treatment INDs.”[82] This program allows doctors to prescribe drugs still in the IND phase of development to terminally ill patients for whom comparable alternative treatments do not exist.[83] A treatment IND generally allows physicians to prescribe a medicine between Phases II and III, when research suggests that the drug may have some efficacy and does not pose an unreasonable health risk.[84] The agency has even allowed emergency INDs, under which a doctor can obtain permission over the phone to use an unapproved drug to treat a patient in dire need.[85] Many activists remained unsatisfied, however. They argued that the FDA was reluctant to allow new drugs into the treatment IND program and too strict in deciding which situations are life-threatening enough to warrant the use of an experimental drug.[86]

As a direct result of the AIDS crisis, the FDA started a fast track approval program in 1987.[87] This program rapidly accelerates the stages of drug development discussed in the previous section. Prior to ‘fast track,’ many AIDS victims participated in “underground” trials of drugs that were available only in foreign countries. Not only were the side effects potentially life-threatening, but the surreptitious trials also “seriously threaten(ed) the FDA’s legitimate clinical trials by undercutting the validity of FDA research data.”[88] Many patients were unwilling to risk receiving a placebo in the FDA-approved studies and would use the ‘underground’ drug concurrently.[89] The fast track procedure applies to drugs that will treat “life-threatening” or “severely debilitating” diseases. Life-threatening “is defined to include diseases where the likelihood of death is high unless the course of the disease is interrupted (e.g. AIDS and cancer) as well as diseases and conditions with potentially fatal outcomes where the end point of clinical trial analysis is survival (e.g. increased survival in persons who have had a stroke or heart attack).”[90]

Probably the most important aspect of the fast track mechanism is the cooperation between the FDA and the manufacturer. In the development of the AIDS drug AZT, for example, the FDA worked closely with sponsor Burroughs-Wellcome to develop the preclinical and human trials.[91] In this case the government even took the drastic step of combining Phase II and III trials.[92] With a record-setting NDA review time of 3.5 months, the total process, which at that time averaged 8 years, took only 2 years for AZT.[93] In addition to expedited clinical and review phases, the fast track procedure also allows a sponsor to forgo a control group, thereby eliminating the chance that a very sick patient will receive a placebo.[94]

The FDA also attempted to facilitate pharmaceutical development through the Orphan Drug Act. An orphan drug “is a potentially useful drug that has not been adopted by a sponsor to carry out the testing necessary to gain Food and Drug Administration approval...because of (its) limited marketability, commercial value, and/or potential liability.”[95] The most common problem is limited commercial value due to a small patient population. In fact, many orphan drugs are not discovered through purposeful research but by accident.[96]

Congress passed the Orphan Drug Act in 1983.[97] The most important provision of the Act grants the sponsor of the orphan drug seven-year market exclusivity.[98] This helps the sponsor defray the costs of developing a drug that will only appeal to a limited number of consumers. Patent protection is often not sufficient since it can expire soon after or even before the FDA approves a drug.[99] The Act also gives certain tax credits to sponsors of these drugs and allows the Secretary of Health and Human Services to make grants to private and public organizations willing to develop these drugs.[100]

Part IV: The PDUFA Goals and FDA Performance

A. Background on the Act

The Prescription Drug User Fee Act of 1992 brought about the most significant changes at the FDA in a generation. The legislation permits the FDA to charge a fee to a drug sponsor who submits a new drug application. The fees allowed the FDA to increase its NDA review staff by 50%.[101] Congress hoped that with more staff the agency could approve NDAs faster and therefore reduce the drug lag problem. Before passage, average NDA approval times had ballooned to over two years, even though the Kefauver-Harris amendments had envisioned that NDA review would require only 180 days.[102]

The user fee concept was not new when Congress first authorized the PDUFA in 1992. The Independent Offices Appropriation Act of 1952 allowed the FDA to charge user fees to sponsors seeking color and insulin certification.[103] By 1971, the General Accounting Office recommended that Congress pass legislation to allow the FDA to impose user fees on all companies under the agency’s authority. The proposal was largely ignored and the issue did not receive more attention for fifteen years.[104]

The issue of user fees resurfaced during the Reagan administration. The FDA’s workload continued to increase in volume and complexity, and agency performance was beginning to suffer due to lack of resources. The FDA suggested a system of user fees for all new drugs and antibiotics. The administration showed only lukewarm interest in this proposal. Five million dollars’ worth of user fees were appropriated in the 1985 and 1986 budgets, but this came directly out of the agency’s normal appropriations, i.e., the FDA operating budget did not increase.[105] During the early 1990s, considerable support finally aligned behind expanded implementation of user fees. In addition to pressure to expedite the NDA approval process, several other conditions encouraged this development: “a) the long-standing resource constraints on the agency had become critical, b) between 1980 and 1991 Congress had enacted 34 laws that placed additional resource demands on the agency, c) during the same period the annual number of (IND) filings increased from 66 to 504, and d) FDA projections put the number at over 3000 by 1998, and the federal government’s deficit problems made the prospect of increased FDA appropriations highly unlikely.”[106]

The Pharmaceutical Manufacturers Association (“PMA”) insisted upon four primary features that Congress incorporated into the statute. The money from the user fees must be added to existing FDA appropriations, not subtracted from them. The fees must be reasonable, predictable, and used directly to expedite the drug approval process. Finally, the PMA insisted that the system improvements be long term in nature. The resulting legislation created a system that required sponsors of prescription and over-the-counter drugs to pay a certain amount of money to obtain NDA review. Blood products and generic drugs are not subject to user fee legislation.[107] The amount of the user fee is set by regulation and is currently $313,320.[108] Half of the fee is due at submission of the NDA and the remainder is due within 30 days of an approval letter, an approvable letter (i.e., a letter detailing an application’s deficiencies and remedial measures required), or a nonapprovable letter.[109]

Before Congress passed the PDUFA, many critics argued that user fees would stifle pharmaceutical innovation. To address these concerns, the legislation uses two tactics. First, the FDA can reduce or even waive a user fee under certain circumstances.[110] The circumstances include: when the fee will inhibit the development of an important drug for public health, when the amount of the fee will be greater than the FDA’s reviewing expenses, when the drug is substantially similar to a generic drug for which no user fees were required, or when the application is withdrawn before any significant work is completed on it. Second, small drug companies may be eligible for a reduced fee provision. Pharmaceutical sponsors with fewer than 500 employees can receive a 50% reduction in the fee and a one-year deferral before payment is due.[111] The FDA granted four small business reductions during the first two years of the PDUFA.[112]

B. Report Data

Each version of the PDUFA was accompanied by a letter outlining specific numerical goals for the FDA to meet. Congress developed these benchmarks to allow it and the public to accurately evaluate the user fee program. Although never stated explicitly, the accomplishment of these goals (many of them quite aggressive considering FDA performance in the years leading up to the Act) was to signal the end of the drug lag problem.

The goals center on four main areas. The first category is new drug applications, including product and establishment license applications (“PLAs” and “ELAs”). The second category concerns the quick handling of efficacy supplements to new drug applications. Efficacy supplements are submissions to the FDA asking permission for new dosage regimens, new claims of efficacy vis-à-vis a competitive drug, or some sort of change in the patient/consumer population, such as switching the drug to over-the-counter status.[113] The third major category involves the manufacturing supplements that the FDA requires when a drug company wants to alter a drug’s production process. This could entail a change in raw materials, manufacturing plant location, or machinery used in production.[114] The final major category tracks the FDA’s progress in quickly reviewing resubmitted NDAs, PLAs and ELAs. As the name suggests, these are applications that were submitted once and withdrawn after the FDA noticed problems or requested additional data. If the sponsor can resolve the problems, it will amend the application and resubmit it to the agency.

The PDUFA also required the FDA to meet certain secondary goals and collect certain information for later review. In the 1997 reauthorization, Congress required the agency to schedule meetings with drug companies and distribute minutes of those meetings in a timely fashion. At the passage of the PDUFA, the FDA had accumulated a large backlog of NDAs and efficacy/manufacturing supplements. The legislation required the agency to eliminate this backlog within a specific time frame. The Act also required the FDA to hire the additional staff that the user fees would fund by a certain date. Finally, the user fee act forced the FDA to track its workload, the amount of money that user fees generated, and how that money was being allocated.

The FDA must send annual reports to Congress that address all of the issues discussed above. A comprehensive review of these reports shows that the FDA has achieved staggering success. As the summaries and tables below indicate, the FDA met almost every goal laid out by Congress. In some cases, FDA performance ran years ahead of schedule. In most areas workload increased in the early years of the act but leveled off as the years progressed towards the present. The ultimate result is that since the beginning of the user fee program the average time for NDA approval has decreased from 27 months to about 12.5 months.[115]

New Drug Applications

Starting in fiscal year (FY) 1994, Congress devised specific goals for the completion of original new drug applications.[116] During FY1994, Congress required the FDA to review and act upon 55 percent of NDAs within 12 months. By comparison, in 1987 average NDA approval time was 29 months.[117] In FY95 the goal was increased to 70% of NDAs acted upon within 12 months and by FY96 that number went to 80%.

As Table 1.1 illustrates, the FDA performed extremely well. As of Sept. 30, 1995, the FDA had acted upon 93% of all NDAs from FY94 within the 12-month time frame. The report for FY95 shows that 95% of all submissions were reviewed within 12 months when the goal was only 70%. The following year FDA performance stayed ahead of schedule with 96% of NDAs completed on time, well over the goal of 80%.

Table 1.1 NDA Goals and Review Times for FY94-96[118]

Goal Actual Performance

FY94 55% within 12 months 93% within 12 months

FY95 70% within 12 months 95% within 12 months

FY96 80% within 12 months 96% within 12 months

Beginning in FY97, the goals outlined by Congress divide according to standard and priority new drug applications. For 1997, the FDA goal for standard NDAs reviewed within 12 months increased again to 90%. For priority NDAs Congress set the goal at 90% within 6 months. This goal for priority NDAs remained constant for the remaining fiscal years up to the present. For standard NDAs, however, Congress continued to raise the bar. For FY99, the 90% within 12 months goal remained, but Congress also required the FDA to review and act upon 30% of applications within only 10 months. In FY2000, the goal increased to 50% within 10 months, in FY2001 to 70% within 10 months, and in FY2002 to 90% within 10 months.

Once again, the FDA accomplished the goals created by Congress. Table 1.2 shows that 100% of standard applications were completed within 12 months during fiscal year 1997. The FDA achieved similar success with priority applications during FY97 with 96% reviewed on time. This number might have climbed to 100%, but the reports are unclear on the resolution of two applications. During fiscal year 1998, the FDA completed the review of all 120 standard and priority NDAs within the time frame required by the act. For FY99, all 31 priority applications were acted upon within the 6-month window. The FDA also reviewed all 94 standard applications within 12 months. Sixty-eight percent of standard applications were acted upon within 10 months. Finally, in fiscal year 2000, the FDA reviewed 97% of priority applications within 6 months. For standard applications, 97% were acted upon within the 12-month goal and 81% were acted upon within the 10-month goal. This is well ahead of the goal, which required only 50% during FY00.

Table 1.2 NDA Goals and Review Times for FY97-00

Goal Actual Performance

FY97 standard 90% within 12 months 100% within 12 months

priority 90% within 6 months 96% within 6 months

FY98 standard 90% within 12 months 100% within 12 months

priority 90% within 6 months 100% within 6 months

FY99 standard 90% within 12 months 100% within 12 months

30% within 10 months 68% within 10 months

priority 90% within 6 months 100% within 6 months

FY00 standard 90% within 12 months 97% within 12 months

50% within 10 months 81% within 10 months

priority 90% within 6 months 97% within 6 months

Efficacy Supplements

In FY94, Congress required the FDA to review and act upon 55% of efficacy supplements within 12 months of submission. For FY95, the PDUFA increases this goal to 70% and for FY96 to 80%. Once again, the FDA outstripped the stated goals considerably. In FY94, 73% of efficacy supplements were reviewed and acted upon within the 12 month time frame. For FY95, the FDA completed 93% of efficacy supplements within 12 months. Finally, in FY96, 96% of efficacy supplements were acted upon within 12 months, well above the goal of 80%.

Table 2.1 Efficacy Supplements Goals and Review Times for FY94-96

Goal Actual Performance

FY94 55% within 12 months 73% within 12 months

FY95 70% within 12 months 93% within 12 months

FY96 80% within 12 months 96% within 12 months

For fiscal year 1997 and beyond, the goals for efficacy supplements divide according to standard and priority submissions. For FY97 itself, the PDUFA requires that the FDA review and act upon 90% of priority efficacy supplements within 6 months and 90% of standard supplements within 12 months. For priority supplements this goal (90% within 6 months) remains static through 2002. For standard supplements, the PDUFA steadily makes the goal more aggressive. For FY99, 90% had to be acted upon within 12 months and 30% within 10 months of receipt. In FY00, the act requires 50% completion within 10 months and 90% completion within a year. The goal is raised to 70% within 10 months in FY2001, and 90% within 10 months in FY2002.

Table 2.2 illustrates the mostly successful results. In FY97, all priority supplements were reviewed on time and 99% of the 149 standard supplements submitted were reviewed and acted upon within 12 months. For FY98, the FDA acted upon only 80% of priority supplements within 6 months, but it reviewed 99% of standard supplements within the act’s guidelines. In FY99, the FDA once again fell short of its goal for priority supplements, completing only 88% within 6 months. The FDA acted on all standard supplements within 12 months and completed 86% within 10 months. This far exceeded the goal of 30% completed within 10 months. Finally, in FY00, the FDA improved in the priority supplement area, reviewing all 20 priority efficacy supplements within 6 months. Ninety-nine percent of standard supplements were reviewed within 12 months and 91% were reviewed within 10 months.

Table 2.2 Efficacy Supplement Goals and Review Times for FY97-00

Goal Actual Performance

FY97 standard 90% within 12 months 99% within 12 months

priority 90% within 6 months 100% within 6 months

FY98 standard 90% within 12 months 99% within 12 months

priority 90% within 6 months 80% within 6 months

FY99 standard 90% within 12 months 100% within 12 months

30% within 10 months 86% within 10 months

priority 90% within 6 months 88% within 6 months

FY00 standard 90% within 12 months 99% within 12 months

50% within 10 months 91% within 10 months

priority 90% within 6 months 100% within 6 months

Manufacturing Supplements

The third major set of goals created for the FDA by the user fee act concerns manufacturing supplements. For FY94, the Act set the goal at 55% of supplements reviewed and acted upon within 6 months of receipt. The Act steadily increased the goal to 70% in FY95, 80% in FY96, and 90% for FY97 and FY98.

Table 3.1 illustrates that the FDA enjoyed considerable success in meeting these goals. During FY94, the FDA reviewed 69% of all manufacturing supplements within the allotted six month time frame. By FY95, performance increased to 89%, and in subsequent years the number remained over 95%. The 89% figure for FY95 put the FDA only one percentage point shy of being two years ahead of schedule.


Table 3.1 Manufacturing Supplements Goals and Performance for FY94-97

Goal Actual Performance

FY94 55% within 6 months 69% within 6 months

FY95 70% within 6 months 89% within 6 months

FY96 80% within 6 months 96% within 6 months

FY97 90% within 6 months 99% within 6 months

For FY98, the goal for review and action on manufacturing supplements remained 90% within 6 months. The goals of the act become more aggressive beginning in FY99. For that year the FDA was required to review 90% of manufacturing supplements within six months and 30% within four months. The Act steadily increased the goal to 50% within four months in FY00, 70% within four months in FY01, and 90% within four months by FY02.

Once again, the FDA far exceeded the standards required of it. In fiscal year 1998, the FDA reviewed and acted upon 99% of all manufacturing supplements within the six month time frame. By FY99, 98% of all supplements were completed within 6 months and 76% within four months. This performance in 1999 exceeded the goal laid out by the act for FY01. Finally in FY00, the FDA continued its success by completing 97% of manufacturing supplements within six months and 79% within four months.

Table 3.2 Manufacturing Supplement Goals and Review Times for FY98-00

Goal Actual Performance

FY98 90% within 6 months 99% within 6 months

FY99 90% within 6 months 98% within 6 months

30% within 4 months 76% within 4 months

FY00 90% within 6 months 97% within 6 months

50% within 4 months 79% within 4 months

Resubmitted Applications

The final major category of goals enunciated for the FDA in the PDUFA concerns resubmitted applications. As mentioned previously, the FDA finds some new drug applications lacking and asks the sponsor to make changes and resubmit later. Since the FDA has already had a chance to review the initial application, the required review times are shorter than those discussed for NDAs. For fiscal year 1994, the act requires that the FDA review 55% of resubmitted applications within 6 months. This goal increases to 70% in FY95, 80% in FY96, and 90% for FY97.

Table 4.1 shows that the FDA achieved considerable success in meeting these goals. During FY94, the FDA reviewed and acted upon 81% of resubmitted applications within six months and in FY95 this number increased to 96%. Both of these achievements were over two years ahead of schedule. The figures for FY96 and FY97 were 99% within six months and 92% within six months respectively.

Table 4.1 Resubmitted Applications Goals and Performance for FY94-97

Goal Actual Performance

FY94 55% within 6 months 81% within 6 months

FY95 70% within 6 months 96% within 6 months

FY96 80% within 6 months 99% within 6 months

FY97 90% within 6 months 92% within 6 months

Beginning in fiscal year 1998, the goals for resubmitted applications became more aggressive and bifurcated between Class 1 and Class 2 applications. Class 1 resubmitted applications are those considered more urgent by the FDA. For FY98, the act required that 90% of all resubmitted applications be reviewed and acted upon within six months and 30% of class 1 resubmitted applications be reviewed within 2 months. For FY99, the act requires the FDA to complete 90% of class 2 applications within six months. It also requires the completion of 90% of class 1 resubmitted applications within four months and 50% within 2 months. The goal for class 2 resubmissions remains static at 90% within six months for the remainder of the statutory period. For class 1 the goals continue to become more aggressive. In FY00, the FDA must complete 70% of class 1 resubmissions inside two months (along with 90% within four months). In FY01 and FY02 the act requires the FDA to complete 90% of class 1 resubmissions within two months.

Through FY00, the FDA was meeting the aggressive goals laid out by the user fee act. In FY98, the FDA completed 79% of class 1 resubmissions within two months, and all were complete before six months had passed. All class 2 resubmissions during FY98 were reviewed and acted upon within the statutory period. During FY99, the FDA performed extremely well, completing 100% of class 1 resubmissions within two months and 100% of class 2 submissions within six months. Finally, in FY00, the FDA acted upon all class 1 resubmitted applications within four months and 96% within two months. Ninety-eight percent of class 2 resubmissions were reviewed and acted on within the statutory time frame.

Table 4.2 Resubmitted Applications Goals and Review Times for FY98-00

Goal Actual Performance

FY98 all 90% within 6 months 100% within 6 months

Class 1 30% within 2 months 79% within 2 months

FY99 Class 1 90% within 4 months -

50% within 2 months 100% within 2 months

Class 2 90% within 6 months 100% within 6 months

FY00 Class 1 90% within 4 months 100% within 4 months

70% within 2 months 96% within 2 months

Class 2 90% within 6 months 98% within 6 months

Workload

The annual reports also contain data on the FDA’s yearly workload. Congress wanted this information in order to accurately evaluate the FDA’s performance. The FDA met Congressional goals despite a steadily increasing workload during the early years of the Act. As the 1990s progressed however, the agency workload tended to remain high yet stable. Presumably this aided the agency in achieving the very aggressive goals required by Congress in the more recent years of the Act.

The workload pattern of new drug applications clearly illustrates this trend. The number of NDAs submitted increased during the first few years of the act, from 96 in FY93 to 133 by FY97. The number of submissions then hovered between 120 and 134 during 1998-2000. The preliminary data for 2001 suggests a large drop-off to only 101 new drug applications.

Table 5.1 NDA Workload During FY93-01

# of submissions Difference % increase/decrease

FY93 96 - -

FY94 98 +2 +2%

FY95 121 +23 +23%

FY96 115 -6 -5%

FY97 133 +18 +16%

FY98 120 -13 -10%

FY99 127 +7 +6%

FY00 134 +7 +5.5%

FY01 101 -33 -25%
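The “Difference” and “% increase/decrease” columns in Table 5.1 (and in Tables 5.2 through 5.4 below) appear to be the year-over-year change measured against the prior year’s submissions, rounded. Expressed as a formula, with the FY01 figures from Table 5.1 as a worked example:

\[
\%\ \text{change} = \frac{\text{submissions}_{\text{current FY}} - \text{submissions}_{\text{prior FY}}}{\text{submissions}_{\text{prior FY}}} \times 100\%
\]

\[
\frac{101 - 134}{134} \times 100\% \approx -25\% \qquad (\text{FY01})
\]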

For efficacy supplements the trend was somewhat similar; however, the workload increase was steadier and the reduction in 2001 was small. As Table 5.2 illustrates, the number of efficacy supplements hovered around 100 for the first four years of the PDUFA. In FY97, the number of efficacy supplements spiked from 113 to 162. The trend leveled off during fiscal years 1998 and 1999, with 138 and 145 supplements respectively. Another big jump occurred in FY00 with the number increasing to 187. The reports indicate a projected slight decrease to 168 for FY01.

Table 5.2 Efficacy Supplement Workload during FY93-01

# of submissions Difference % increase/decrease

FY93 100 - -

FY94 93 -7 -7%

FY95 87 -6 -6%

FY96 113 +26 +30%

FY97 162 +49 +43%

FY98 138 -24 -15%

FY99 145 +7 +5%

FY00 187 +42 +29%

FY01 168 -19 -10%

The number of manufacturing supplements refused to follow the trends of the other three categories. The numbers generally increased from 1,248 in FY93 to 1,600 in FY97. The upward trend continued in FY98, FY99, and FY00 with 1,834, 1,936 and 2,025 manufacturing supplements submitted to the FDA respectively. Unlike the other categories, the number of supplements did not decrease in FY01; rather, the FDA reported a modest gain.

Table 5.3 Manufacturing Supplement Workload during FY93-01

# of submissions Difference % increase/decrease

FY93 1248 - -

FY94 1058 -190 -15%

FY95 1519 +461 +43.5%

FY96 1479 -40 -3%

FY97 1600 +121 +8%

FY98 1834 +234 +15%

FY99 1936 +102 +5.5%

FY00 2025 +89 +4.5%

FY01 2069 +44 +2%

Finally, the number of NDA resubmissions followed the trend of upward movement and subsequent leveling off outlined above. During FY93, the FDA received only three resubmissions. This number grew during the next three years, increasing to 98 during FY96. In subsequent years, the number of resubmissions fell back and hovered between 71 (FY98) and 90 (FY97). FY2001 saw a decrease to 76, but as Table 5.4 indicates, this figure is not far out of line with the preceding years.

Table 5.4 Resubmissions Workload during FY93-01

# of submissions Difference % increase/decrease

FY93 3 - -

FY94 37 +34 +1100%

FY95 61 +24 +65%

FY96 98 +37 +61%

FY97 90 -8 -8%

FY98 71 -19 -21%

FY99 77 +6 +8%

FY00 89 +12 +15.5%

FY01 76 -13 -15%

Procedural/Processing Goals

The PDUFA created various secondary goals for the FDA to accomplish. One concerned recruitment of new staff. As discussed earlier, Congress hoped user fees would fund new drug review personnel who would facilitate faster review of agency submissions. This necessitated hiring goals during the initial stages of the Act. The goal was to add approximately 700 people to the review staff of the FDA at the Center for Biologics Evaluation and Research (“CBER”) and the Center for Drug Evaluation and Research (“CDER”). The effect of attrition makes this task more difficult; that is, the FDA must hire many more than 700 people to achieve a net gain of 700. Table 6.1 shows that as of September 30, 1995, the FDA was making progress towards its goal. Unfortunately, subsequent reports do not mention whether the goal was ultimately met. The silence, in addition to the FDA’s success in meeting numerical goals, might suggest that the FDA met its hiring goals.

Table 6.1 User Fee Recruitment (10/1/92-9/30/95)

New Hires Net Additions

Medical/Dental 122 65

Chemist 60 25

Consumer Safety Officer 102 64

Microbiologist/Biologist 166 52

Biostatistician 27 21

Pharmacologist 40 25

Other Scientist 99 45

Support 407 102

Total 1023 399

Another problem identified before the passage of the PDUFA was a consistent ‘backlog’ of submissions that prevented a quick turn-around for any newly submitted items. The PDUFA goals required the FDA to complete pre-act manufacturing and efficacy supplements within 18 months of the beginning of user fee payments. Therefore, the FDA had to review and act on 569 supplements by January 2, 1995. The agency achieved this goal. The FDA approved 388 (68 percent) of the efficacy and manufacturing supplements. Sponsors withdrew 97 of the remaining 181 submissions.

Table 6.2 Supplement Backlog Elimination (10/1/92-1/2/95)

Backlog at Passage Backlog as of 1/2/95

Efficacy 60 0

Manufacturing 509 0

Total 569 0

The Act also required the FDA to eliminate a backlog of NDAs, PLAs, ELAs and related documents. The Act gave the FDA exactly two years, making July 2, 1995 the deadline. The FDA achieved the goal in all categories. Of the 34 backlogged new drug applications, the FDA approved 21; five were withdrawn, four were resubmitted, and four were classified as ‘not approvable.’ The FDA also cleared 92 backlogged PLAs, ELAs and related supplements. Sponsors withdrew 55; the FDA approved 22, classified two as approvable, and classified 13 as ‘not approvable.’

Table 6.3 Application Backlog Elimination (10/1/92-7/2/95)

Backlog at Passage Backlog as of 7/2/95

NDAs 34 0

PLAs/ELAs 22 0

PLA Efficacy Supps. 6 0

ELA Manufacturing Supps. 64 0

Total 126 0

The Prescription Drug User Fee Act requires the FDA to submit financial statements to Congress outlining the amount of fees collected and how they were used. The reports show that the user fee system has been highly effective in generating millions of dollars in extra revenue for the FDA. The user fees began in FY1993 with the “full fee” (i.e., the fee for reviewing an NDA containing clinical data) set at $100,000.[119] As mentioned earlier, the current fee totals over three hundred thousand dollars. As Table 6.4 illustrates, the result has been well over $100 million per year in additional revenue for CDER and CBER in recent fiscal years. While the table does not make it completely clear, the FDA did not collect the expected amount of fees during FY01. This resulted in a much lower carryover balance than usual, which may endanger the FDA’s continuing adherence to PDUFA goals.

Table 6.4 User Fee Collections and Carryover Balances-- FY93-01[120]

FY Starting Carryover Net Collections Obligations Ending Carryover

93 - 28,531,996 8,949,000 19,582,996

94 19,582,996 53,730,244 39,951,020 33,362,220

95 33,362,220 70,953,500 74,064,015 30,251,705

96 30,251,705 82,318,400 85,053,030 27,517,075

97 27,517,075 93,234,125 84,289,046 36,462,154

98 36,462,154 132,671,143 101,615,000 67,518,297

99 67,518,297 126,580,456 122,515,000 71,583,753

00 71,583,753 133,060,339 147,276,000 57,368,092

01 57,368,092 138,761,294 160,713,000 35,416,386
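The carryover balances in Table 6.4 fit a simple balance relationship: each fiscal year’s ending carryover is that year’s starting carryover plus net collections minus obligations. A short worked example, using the FY99 figures drawn directly from the table, illustrates the arithmetic:

\[
\text{Ending carryover} = \text{Starting carryover} + \text{Net collections} - \text{Obligations}
\]

\[
67{,}518{,}297 + 126{,}580{,}456 - 122{,}515{,}000 = 71{,}583{,}753 \qquad (\text{FY99})
\]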

The first version of the PDUFA did not contain any goals related to the IND phase of drug development. After industry complaints that the FDA was often difficult to work with, the 1997 reauthorization included some modest procedural goals aimed at facilitating the relationship between drug sponsors and the FDA. The reauthorization required the FDA to expedite its correspondence with drug sponsors and to avoid unnecessary delays during the clinical phase. Even though drug sponsors and independent research facilities conduct a potential new drug’s clinical development, the FDA is often highly involved in the process. The drug sponsor wants to develop the data the FDA wants to see in order to increase the chances of a viable NDA.

The goals in this area divide into three separate issues. The first involves requests for a formal meeting by FDA-regulated entities. The general goal requires that the FDA notify the requestor of a formal meeting date within 14 days of the request, for 70% of requests in FY99, 80% in FY00, and 90% in FY01. PDUFA II (the 1997 reauthorization) also set goals for the amount of time between the request and the date of the actual meeting. Depending on the type of meeting, it must occur within 30, 60, or 75 days from the date of the request. The reports do not indicate a sliding scale of achievement as in other areas; it is unclear whether this means that the implicit goal was 100% compliance. Finally, Congress required the FDA to compile minutes for every meeting that clearly outline disagreements and issues for further discussion. The FDA must complete the minutes within 30 calendar days of the meeting. Once again there are no progressive percentage goals.

Generally, the FDA performed quite well. The percentage of responses to meeting requests within the 14-day window ran well ahead of the stated goals. In FY99, 88% of meeting notifications were on time; this increased to 90% in FY00 and 91% in FY01. The FDA was consistent in scheduling meetings within the time guidelines set by the Act. In each fiscal year from 1999 to 2001, 87% of the meetings were scheduled within the congressionally ordered time frame. The FDA did not perform as well in achieving the meeting minutes goals. Compliance fell from 83% in FY99 to 78% in FY00 to 74% by FY01.

Table 6.5 Percentage of Meeting Management Targets Met (FY99-01)

FY | Requests | Scheduling | Minutes
FY99 | 88% | 87% | 83%
FY00 | 90% | 87% | 78%
FY01 | 91% | 87% | 74%

The other procedural goals involve other aspects of the IND process. In the PDUFA reauthorization, Congress asked the FDA to respond to a drug sponsor’s complete response to a clinical hold within 30 days of receipt. For fiscal years 1999-2001, Congress set the goal at 90% of responses made within this time frame. Congress also began regulating the FDA’s response to a sponsor’s appeal of an agency decision. The Act requires the FDA to respond to these ‘major dispute resolution’ issues within 30 days of receipt from the drug sponsor. The target percentage of timely responses rises from 70% in FY99 to 80% in FY00 to 90% in FY01. Finally, the FDA must respond to a sponsor’s request to evaluate a clinical protocol within 45 days of such a request. The compliance target rises from 60% in FY99 to 70% in FY00 to 80% in FY01.

Table 6.6 shows that the FDA achieved most of the goals mentioned above. The percentage of clinical hold responses that occurred within the given time frame remained high. During FY99, 92% of responses occurred on time. In FY00 this number increased to 94%, but in FY01 it dropped slightly below target, to 89%. Appeal responses ran ahead of the outlined percentage goals. Seventy-one percent of responses occurred on time in FY99, and in both FY00 and FY01, 100% of responses were completed within 30 days. Finally, responses to requests for protocol design changes exceeded the required goals, with 97% of responses occurring in a timely fashion during FY99 and FY00 and 83% on time during FY01.

Table 6.6 Percentage of IND Procedural Targets Met (FY99-01)

Measure | FY99 | FY00 | FY01
Clinical Hold (Goal) | 90% | 90% | 90%
Clinical Hold (Actual) | 92% | 94% | 89%
Response to Appeals (Goal) | 70% | 80% | 90%
Response to Appeals (Actual) | 71% | 100% | 100%
Response to Evaluation Reqs. (Goal) | 60% | 70% | 80%
Response to Evaluation Reqs. (Actual) | 97% | 97% | 83%

Part V: Drug Lag Persists

By all internal measures the PDUFA was a spectacular success. The user fees generated millions of dollars, the FDA hired hundreds of new employees, and review times have radically decreased in almost all categories. Despite this, scholars continue to write articles about drug lag and empirical research suggests that major problems with the American drug approval system remain. How is this possible? Unfortunately, the FDA and Congress failed to address the entire set of issues that comprise the drug lag phenomenon. They conceived of drug lag as a problem of slow and inefficient review of NDAs. They believed that by expediting this process drug approval times in America would begin to mirror those in Europe and Asia and the drug lag issue would disappear. The FDA and Congress did not address the fact that drug lag also involves the time required for clinical development and, at the most general level, the total amount of time and money required to guide a drug through the entire process. The result has been a statute that accomplished many goals, yet has enjoyed only limited success in reducing clinical development times and has miserably failed to prevent the time and money required to develop new medicines from skyrocketing. Congress did not design the PDUFA to address these additional aspects of drug lag and therefore the statute cannot be the ultimate solution to the problem.

A. Legislative History

The legislative history of the Prescription Drug User Fee Act illustrates the myopic vision of Congress when it passed the statute. The drafters of the statute clearly felt that they were competently dealing with drug lag. The legislative history shows representatives using words like “groundbreaking”[121] and “historic.”[122] In the section entitled ‘Background and Need for the Legislation,’ the House report on the PDUFA states that “the subject of drug lag has been studied by academic experts, commissions, Congressional committees, and the Food and Drug administration.”[123] It goes on to state that “the public interest is served by more rapid approval of safe and effective drugs” and that as a result of the bill “patients will have access to new drug therapies much sooner.”[124]

The text of the statute is the first evidence of Congressional failure to deal with all of drug lag’s aspects. The section of the bill entitled ‘Findings’ states that:

Congress finds that (1) prompt approval of safe and effective new drugs is critical to the improvement of the public health so that patients may enjoy the benefits provided by these therapies to treat and prevent illness and disease; (2) the public health will be served by making additional funds available for the purpose of augmenting the resources of the Food and Drug Administration that are devoted to the process for review of human drug applications.[125]

Plainly, prompt approval of new drugs is important and the reduction of review time is a worthy goal. However, these findings begin by discussing the tail end of the drug development process. If preserving the public health requires prompt access to new drugs, then one must discuss the entire phase of drug development prior to NDA submission.

The comments of some individual legislators illustrated this curious failure to even consider earlier stages of drug development. During a hearing, Senator Coats stated that “(i)t takes roughly 12 years to bring a drug from infancy to market.”[126] He accurately called this situation “disturbing”[127] and “extremely important.”[128] In the same statement backing the PDUFA, Senator Coats mentioned that the bill was designed to reduce standard NDA review time by 8 months.[129] There is no evidence that the Senator realized that these two statements are in tension. If the true problem is the 12-year average, how can an eight-month reduction be a panacea? Certainly any amount of reduction is helpful. However, it is illogical to propose a bill as a solution to expanding drug development times when that proposed solution will only reduce times by a fraction of one year.

The legislative history also suggests that Congress never considered alternatives to a user fee system. The debate seems to operate on the premise that the only avenue for reform is the initiation of user fees. A statement by Congressman Waxman illustrates this point:

Mr. Speaker, we have been struggling for years in this country to find a way to speed up the process for the approval of breakthrough drugs. While many ideas have been proposed, in my view there are only two approaches that can work. We could water down the safety or efficacy standards applicable to drugs, but that would unacceptably undermine the public health. Or we could get the Food and Drug Administration more resources, but that has been almost impossible in recent years. The Prescription Drug User Fee Act of 1992 is a groundbreaking bill because it will increase FDA resources without using the traditional appropriations process. As a result, the public will benefit by getting access to lifesaving drugs sooner.[130]

Congressman Waxman fails to realize that, if the goal is getting drugs to consumers faster, other options are available. The last section of this paper discusses several steps Congress and the FDA could take to streamline the clinical phases and reduce persistent problems that delay drug development. The congressman’s comment rests on an incorrect assumption that improvement of the NDA process is the only avenue of reform. Senator Kennedy shared this fixation on giving the FDA more resources to the exclusion of other potential remedies for drug lag. While encouraging the bill’s passage, he argued that “(t)he need for the legislation is obvious. The current federal budget situation offers little prospect that adequate resources will be available to the FDA to do the job it should be doing in the years ahead.”[131] Certainly Senator Kennedy was correct that the FDA needs enough resources to perform its job. But if its job is to ensure the safety and efficacy of drugs in the most efficient manner possible, just giving the agency more money cannot be the entire solution.

Dr. David Kessler may bear some of the blame for Congress’ failure to consider alternatives to, or provisions to coincide with, user fee legislation. Dr. Kessler was the Commissioner of the FDA in 1992 when Congress initially passed the PDUFA. He testified at most of the hearings designed to investigate the user fee proposal. His statements at different hearings were generally quite similar and usually included a portion where he stated that, “if we really want to go to the next step, if this Nation really wants to get new drugs reviewed and on the market more quickly, the only answer is to provide more reviewers.”[132] By the 1997 reauthorization, the situation does not appear to have significantly improved. The lead deputy commissioner told a committee of congresspersons about the FDA’s many successes under the PDUFA.[133] No one appears to have realized that this commendable success left little room for continued improvement and made it even more critical that additional measures be discussed and considered.

B. Empirical Data

As one might expect, the failure to focus on the clinical phase of drug development has resulted in limited progress in this area. This, in turn, has limited the PDUFA’s success in increasing the overall speed of drug development. One of the first people to address the clinical issue was Kenneth Kaitin in 1997. His report focused on the development of new chemical entities (NCEs) during fiscal years 94-96. An NCE is “any new molecular compound not previously approved in the United States, excluding vaccines, diagnostic agents, and over the counter products.”[134] To compile the data, researchers sent user fee surveys to pharmaceutical and biotechnology companies during the years 1994-1996. Each year between 50 and 60 surveys were mailed, and response rates varied from approximately 40% in 1995 to 64% in 1994. The surveys asked the companies about all of their user fee compounds currently under review at the FDA. They also inquired about the type of applications that were pending and the dates on which various types of applications were filed.[135] Researchers supplemented survey data with data from the Tufts Center for the Study of Drug Development (CSDD), the FDA annual reports to Congress, and information from public sources such as the Federal Register.

The study did show that drugs subject to user fees navigated through the NDA process faster. Of the NCEs included in the study, the ones subject to user fees obtained approval letters 53% faster than non-user fee entities.[136] The user fee NCEs averaged 14.5 months from submission of the NDA to approval letter while the non-user fee NCEs averaged 31.0 months. This positive data was offset, however, by the realization that much of the efficiency gain at the FDA was being lost during the IND phase. During the 94-96 period, the average IND phase for user fee NCEs was 9% longer than for non-user fee NCEs. This made the overall process of drug development only slightly shorter for user fee drugs (102 months as opposed to 112.1 months).[137]
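
A quick back-of-the-envelope check of these figures (purely illustrative, and it assumes the IND phase is simply total development time minus NDA review time, a decomposition the study itself does not spell out):

# Averages reported for NCEs in the FY94-96 surveys, in months
user_fee_total, user_fee_nda = 102.0, 14.5
non_fee_total, non_fee_nda = 112.1, 31.0

# NDA review: user fee drugs were reviewed roughly 53% faster, as reported
print(round((1 - user_fee_nda / non_fee_nda) * 100))    # -> 53

# Implied IND phase: roughly 8-9% longer for user fee drugs, consistent with the 9% figure
user_fee_ind = user_fee_total - user_fee_nda             # 87.5 months
non_fee_ind = non_fee_total - non_fee_nda                # 81.1 months
print(round((user_fee_ind / non_fee_ind - 1) * 100))     # -> 8

On these figures, roughly 6 of the 16.5 months saved at the review stage were given back during clinical development, which is why the overall difference is only about 10 months.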

A subsequent study performed by the Tufts CSDD suggested that some progress had been made.[138] The sample of NCEs examined in the study suggested that during 1997-1998 average clinical phases had dropped by an impressive 17 months, from 81.4 months to 64.3 months.[139] One should be wary of drawing large conclusions from this, however. First, it is not clear if this drop in clinical phase time will become a trend. It could be an anomalous result attributable to an unknown factor. Secondly, the number of NCEs reported for the 97-98 period was considerably lower than for all other periods. The study included 90 NCEs during 94-96 and 105 during 1981-85.[140] By contrast, the study measured only 42 NCEs during 97-98. If this represents a reduction in the total number of NCEs reviewed by the FDA, the agency may have been more efficient due to a temporarily light workload. Thirdly, the second study found that clinical times for new biopharmaceuticals continued to rise. During 1997-98, the average clinical phase was 62.7 months, the highest it had been since 1981.[141]

More recent information has confirmed that clinical delays continue for various classes of drugs. In 2001, the CSDD reported that while clinical times have fallen for AIDS antivirals, anti-infectives, respiratory, and anti-cancer drugs, they have risen for analgesics, endocrine, central nervous system,[142] and cardiovascular drugs.[143] The CSDD also maintains that biotechnology clinical development times have steadily increased from 1982 to the present.[144] To the extent that clinical development times are falling, it is not clear that the PDUFA is the cause. Certainly, the procedural and meeting goals outlined in PDUFA II might be responsible. However, it could also be the result of companies, in response to continually rising costs, becoming more efficient and dealing with the FDA more effectively. In fact, research shows that the pharmaceutical industry tripled expenditures on research and development during the 1990s, from $10 billion to $30 billion, and industry executives have focused on fostering relationships with the FDA to facilitate drug development.[145] Even if one were to assume that clinical times are truly falling on a long-term basis, this still does not authorize anyone to declare the death of drug lag. With drug development still requiring 10-15 years and over $800 million, there is still considerable room for improvement.[146]

Some have argued that even on its own terms the PDUFA has had limited success. One of the primary goals of the legislation was to improve FDA performance versus that of other countries. However, in March of 2000, eight years into the user fee experiment, a CSDD study found that FDA approval times of biotech drugs were lagging behind those of the European Medicines Evaluation Agency (“EMEA”).[147] Average approval time for the FDA was 452 days while the EMEA averaged 417 days. The gap for recombinant DNA products was much larger with an average review time of 411 days in Europe and 548 days in the United States.[148]

The FDA has vigorously defended its drug review performance in the face of these allegations. In 1996, Commissioner Kessler wrote an article boldly claiming that “drug lag” did not exist. In the Journal of the American Medical Association, Kessler used drug approval data from the United States, the United Kingdom, Germany, and Japan to argue that Americans were getting important, life-saving drugs as fast or faster than people in comparable countries.[149]

Kessler looked at 214 drugs that came onto the world market between January 1990 and December 1994. The drug approval agencies of the four nations were asked to provide a marketing approval date for any of the 214 drugs that they had approved. “Information was supplied by the FDA in the United States, the Medicines Control Agency in the United Kingdom, Bundesinstitut fur Arzneimittel und Medizinprodukte and the Paul-Ehrlich-Institut in Germany, and the Ministry of Health and Welfare in Japan.”[150] For the purposes of this study an NCE was considered a ‘new drug.’ The study generated two primary comparisons. First, it compared the United States individually to each of the other three nations in the study. Second, researchers examined the data to determine the order in which drugs received approval from the four nations.

In the ‘head to head’ competition with Great Britain, Kessler’s study found that the U.S. and U.K. had both approved 58 of the 214 NCEs covered in the study. Of these, the U.S. was first to approve 30, a little over half. The average lead times favored the U.S. slightly as well; i.e. “the drugs approved first in the United States were approved an average of 17 months ahead of U.K. approval, while the 28 U.K.-first drugs were approved an average of 15.8 months ahead of U.S. approval.”[151] The study showed that 29 products had been approved in the U.K. and not the U.S. and 18 products had been approved in the U.S. and not the U.K. Kessler maintained that there was no significant drug in the U.K.-only group that the FDA considered important. “Two of the 29 U.K.-exclusive drugs were initially considered priority drugs by the FDA, but both were subsequently withdrawn from the worldwide market prior to U.S. approval primarily for safety reasons.”[152] Kessler considered the remaining U.K.-only drugs to be similar to drugs already on the U.S. market. In contrast, he maintained that nine of the U.S.-only drugs were compounds that the FDA considered ‘priority drugs.’ Among these were an HIV drug, a drug to treat Alzheimer’s disease, and a drug to treat previously unresponsive forms of epilepsy.[153]

In the comparison with Germany, the study found that both countries had approved 44 out of the 214 drugs measured. Of these, 31 were approved first in the United States. Again, the U.S. had a better lead time with the U.S.-first drugs approved an average of 17.9 months earlier than German approval and the Germany-first drugs approved only 10.8 months prior to U.S. approval. Germany approved 34 drugs that the U.S. did not, of which the FDA only considered one a priority. Kessler argued that the remaining 33 drugs had equivalents on the U.S. market. Conversely, the FDA designated half of the 32 U.S.-only drugs as priority drugs.[154]

Finally, the comparison between Japan and the United States revealed similar results. Both countries had approved only 14 drugs during the period covered by this study. Ten of them were approved first by the FDA. The U.S.-first drugs were approved 22.4 months ahead of Japanese approval while the Japan-first drugs were approved an average of 18.5 months before U.S. approval. The U.S. considered only three of the 82 Japan-only compounds priority drugs. In contrast, 33 of the 62 U.S.-only drugs were FDA-designated priority drugs. Among them were two anti-epileptic drugs, drugs for breast and ovarian cancer, and an AIDS drug.[155]

The data Kessler cites also looks at how many of a country’s new drugs were first approved in that country. For example, of the 76 NCEs approved in the U.S. during the studied five year period, 45 (59%) received approval first in the United States. The U.K. achieved similar success with 41 (47%) of its 87 new drugs approved before any of the other three countries. Germany did poorly with only 20 (26%) of its 78 new drugs approved before all of the other countries. Kessler disregarded the fact that Japan had approved 82% of its new drugs first since so many of the drugs Japan approved were not approved in any other country.[156]

While valiant, Kessler’s attempt to argue away the drug lag problem is not persuasive. Kessler stresses the fact that Americans consistently have access to drugs that the FDA deems a priority. This doesn’t prove very much, however. It makes sense that the FDA would approve drugs it considered important faster. Research would probably reveal that every industrialized nation quickly approves the drugs that it considers a priority. By focusing on these drugs Kessler effectively “stacked the deck” in his favor. Dr. Joe DiMasi of the Tufts University Center for the Study of Drug Development responded to the study by saying that “(i)t is only the FDA’s judgment that other drugs are not that important.”[157]

Even the drugs Kessler used for the study skewed the results. The time frame of the study “left out the many drugs first introduced in Europe in the late 80’s and then submitted for U.S. approval in the 90s, as well as drugs still on hold here (the U.S.) from the 80s.”[158] Again, this allowed the FDA to use a set of data that it knew would contain favorable results. The study also frequently used median figures rather than mean (or average) calculations. Because a handful of exceptionally long reviews barely move the median, reporting the middle value rather than the average allowed the FDA to mask its lengthiest reviews.

The Pharmaceutical Research and Manufacturers Association was not impressed by Kessler’s analysis either. They maintained that the U.S., as of 1996, was still lagging well behind other industrialized nations in drug development time. “(A)ccording to PHRMA’s figures, more than 60% of the drugs approved by the FDA between 1980-1994 were products that had been marketed in other countries first.”[159] During 1994 alone, two-thirds of new molecular entities approved by the FDA were previously available in foreign markets. In some cases “Americans were decades late in gaining access to these medicines.”[160]

Finally, Kessler’s analysis does not discuss the overall length of drug development in the United States versus the other three countries. Just because a drug made it to market in the U.S. before the U.K. does not mean U.S. consumers are better off. That same drug may have been in the pipeline in the U.S. years prior to its development in the U.K. Furthermore, Kessler’s preoccupation with comparisons with other countries again highlights his and the FDA’s myopic vision with regard to drug lag. As previously argued, the ultimate problem facing the drug approval regime is the length of time between first development and the permitted marketing of a drug. Even if every drug were marketed in the U.S. first, there could still be situations where important drugs were delayed for years due to needless testing. That a former head of the agency failed to recognize the complexity of the problem bodes ill for the FDA’s ability to solve drug lag.

The drug AZT provides an excellent example of this issue. Scientists first isolated the human immunodeficiency virus as the cause of AIDS in 1984.[161] The National Cancer Institute (NCI) and the National Institutes of Health (NIH) asked private drug companies to make drugs available for testing to determine if the retrovirus would respond to any of them. Drug companies submitted a variety of entities that they had developed but not put on the market for various reasons (lack of observed efficacy, limited market potential, failure to obtain an approved NDA, etc.). Burroughs-Wellcome submitted azidothymidine (AZT), a drug that was developed in the 1960s as a possible anti-tumor agent but proved ineffective.[162]

Once the drug demonstrated efficacy in fighting HIV, the FDA dramatically expedited the entire process. Researchers recorded the first positive test results in February 1985, and by September 1986 completed Phase II trials were yielding significant results. The FDA initiated a treatment IND, and by March 1987 it approved the drug “only two and a half years after the...initial request for drugs.”[163] Victims of the AIDS epidemic needed new drugs quickly. While they certainly were concerned about drugs being approved in foreign countries, the primary concern was with expediting drug development. In employing the PDUFA, the FDA has forgotten this lesson. The agency has been overly concerned with comparisons with other nations and has forgotten that people simply want quick access to new drugs. If AZT had been approved first in the U.S. after eight years, that would have been very little consolation to the AIDS lobby and the thousands of people who would have died during drug development.


Part VI: Potential Solutions


A. Finding Clinical Patients

One of the first things the FDA could do to reduce drug lag is to increase patient awareness of clinical trials. For some time there has been a shortage of patients willing and able to participate in clinical trials.[164] This slows the IND phase of drug development and delays ultimate marketing of a drug. Many patients simply aren’t aware of the trials that are occurring. Robert Comis, the president of the Coalition of National Cancer Cooperative Groups, maintains that “clinical trials represent the very best care we have to offer for many cancer patients, yet many do not take advantage of the opportunity because of lack of awareness or misconceptions of what a clinical trial really is.”[165]

A survey conducted by Harris Interactive suggested that Mr. Comis is correct. The survey polled 6,000 cancer patients and found that approximately 85% of them were unaware or unsure about their ability to participate in clinical trials. Seventy-five percent of those surveyed said they would have been willing to participate in a clinical trial had they known about one. The survey found that more than 80% of cancer patients believe that clinical trials are either “essential” or “very important” and that “all new prescription drugs or other new treatments should be tested on human beings in clinical trials before they are approved for general use.”[166] Of the 16% of survey respondents who were aware of clinical trials, three-quarters were dissuaded from participating for various reasons. Roughly one third thought that the medical care they would receive in a trial would be less effective than standard care, another third was concerned about the risks of receiving a placebo, and the remaining respondents cited concerns about being treated like a “guinea pig” and insurance companies’ unwillingness to cover treatment costs.[167]

The lack of knowing and willing participants for clinical trials not only slows drug development, it also undermines the reliability of the trials researchers do conduct. The smaller the sample of the population that participates, the lower the chance that doctors can accurately observe side effects and levels of efficacy. The bulk of the population will be using these drugs even though they were only tested on a very small subset. One manifestation of this problem has been the difficulty researchers have encountered in ensuring sufficient participation by racial minorities and seniors. A study published in the New England Journal of Medicine recorded how many persons over 65 participated in clinical cancer trials conducted by the Southwest Oncology Group from 1993 to 1996.[168] The study found that seniors were only 25% of all clinical patients despite accounting for 63% of cancer cases nationwide. For breast cancer in particular, only 9% of clinical patients were over 65 even though 49% of breast cancer victims nationwide are seniors. The study concluded that “too many doctors...assume that older patients (will) not be able to tolerate or benefit from many of the most promising treatments under study” and that this could be “disastrous” in the future as the American population continues to age. For racial minorities, the study found similar results. A recent trial of a new breast cancer drug attracted so few African-American women that the results of the study could not be confidently applied to this group. This is because “African American women are known to have estrogen-receptor-negative tumors more often than white women and to develop the disease at a younger age.”[169]

The FDA could attempt to alleviate this problem in several ways. Obviously, the FDA needs to do a better job of informing disease victims about clinical trials. A relatively easy way would be to set up an internet database, on the FDA website and on other health sites like those of the NCI and NIH, that would list clinical trials and their locations. The database could be searchable by disease and location to accommodate victims’ preferences. Patients also need to have more general information about clinical trials. Recently the American Association of Health Plans released a pamphlet entitled “Should I Enter a Clinical Trial?” The pamphlet includes information about how to find trials that need patients, the phases of a trial, the importance of informed consent, and the risks of participating in a trial.[170] More information like this should be available and given openly to victims of diseases like cancer and AIDS. In addition, health organizations need to encourage doctors to tell their patients about clinical trials, especially older and minority patients.

B. Privatization

A regulatory solution that many scholars have suggested is some level of privatization of FDA functions. Privatization could take two forms. One would be a system in which the decisions of foreign drug review bodies are given legal effect in the United States. This option will be discussed later. The other option would allow certain private for-profit or not-for-profit entities to take over much of the FDA’s reviewing function. Many argue that this would lead to increased efficiency in the drug development process without sacrificing safety or efficacy.

Dr. Henry I. Miller offered one suggestion of this second kind of privatization.[171] Dr. Miller proposes that Congress give “drug certifying bodies” (“DCBs”) many of the day-to-day duties of the current FDA. A DCB would be composed of personnel similar to those the FDA currently employs: doctors, pharmacologists, biologists, and the like, along with sufficient managerial staff. The DCBs would undergo a stringent accreditation process by the FDA. This process could mirror the “agency’s current scrutiny of the qualifications of the clinical investigators and institutions” that perform clinical studies or the “accreditation of third parties for the review of medical devices” that is currently allowed.[172]

Miller proposes allowing the drug certifying bodies to oversee clinical testing of a drug and ultimately approve an NDA at the conclusion of the process. The DCB would then submit a detailed report on the approved new drug to the FDA. The FDA would then have final say over whether or not the drug was approved.[173] Miller envisions enough drug certifying bodies such that each one could be highly involved with each drug under its supervision. The DCB would work with the sponsor from the time of drug development through initial testing, award the IND, and help the sponsor craft the appropriate testing that would ultimately lead to NDA approval. The DCBs would be funded by user fees that would be governed by contracts between the boards and the drug sponsors.[174]

The principal benefit of this system would be to reduce the current incentives that slow the drug approval process. Many argue that the FDA has a strong impulse to be extra-cautious when approving new drugs. When the FDA reviews a drug there are four potential outcomes: 1) a drug that should get approved receives approval, 2) a drug that should not get approved does not receive approval, 3) a drug that should get approved does not receive approval, and 4) a drug that should not get approved does receive approval. These four possibilities are represented in the figure below.

Figure 1.1 Potential Outcomes of the FDA Review Process[175]

Outcome | Drug is safe and effective | Drug is unsafe or ineffective
FDA grants the NDA | Correct decision | Type II error
FDA denies the NDA | Type I error | Correct decision

Many scholars believe that the FDA would much rather commit a Type I error than a Type II error. People may needlessly suffer because of a Type I error, but usually those victims are unaware that the FDA unwisely denied approval to a promising new therapy. Type II errors become front page newspaper stories, alarm the public, and subject the FDA to considerable criticism.[176] Having multiple DCBs would create competition among them to efficiently and effectively get NDAs through to the FDA and drugs onto the market. The incentive to avoid letting dangerous medicines on the market will surely remain, but it would be tempered by competition and the profit motive.[177]

Miller also cites a host of other potential benefits from his proposed new system. The early involvement of the drug review boards would facilitate identification and rectification of problems that would otherwise arise during, and therefore hinder, the IND phase. Miller argues that sponsors will not have to endure “arbitrary, unexpected regulatory obstacles early in the clinical testing of a new product.”[178] For example, in the past the FDA has required single-dose-only Phase I studies, required that clinical studies begin with inappropriately low doses, or even required that foreign trials be completed before U.S. trials can commence. Elimination of these possibilities would expedite the entire IND process, an area where significant gains are still possible. The involvement of the DCBs would also allow the review of data during the clinical trials. Essentially this would lead to a “rolling NDA,” which would reduce the ultimate amount of time needed to complete the process. Finally, the DCB could help a sponsor recognize as early as possible that a line of research should be cancelled, sparing the sponsor wasted time and millions of research dollars.[179]

The other form of “privatization” often suggested involves the United States recognizing the decisions of foreign drug monitoring agencies. This would alleviate pressure on the FDA and could produce competition between the agencies that could lead to some of the benefits discussed above. The European Medicines Evaluation Agency (“EMEA”) provides an example of this sort of system. While it is still in its infancy and has significant hurdles to overcome, it offers a concrete example of an alternative drug review structure.

The EMEA was established in 1993 by the Maastricht Treaty and became active on the first day of 1995.[180] As the name suggests, the eighteen countries and approximately 370 million people of the European Union (EU) are participating in the EMEA harmonization experiment.[181] Under the system, three methods of drug approval are available to drug sponsors. A company can submit an application directly to the EMEA which then makes a decision within 300 days that is binding on all EU members.[182] Second, a company can apply to the drug regulatory agency of a specific country and then forward a copy of the application to the other EU states. “If the drug application is approved by the first nation, the other nations are required to either recognize the new drug for sale within their borders or to file a formal objection for adjudication by the EC.”[183] Finally, a drug sponsor can submit a drug application for approval in one specific country and make no attempt to obtain approval in additional countries.

The United States could join this experiment in a couple of different ways. One proposal is that the FDA treat EMEA approval as “substantial evidence” of drug efficacy. The EMEA requires an efficacy showing that could probably satisfy the FDA’s standard.[184] Congress could also construct a mutual recognition agreement between the FDA and the EMEA. This process could work such that sponsors would submit drug applications to both entities and once one approved the drug the other would have a certain amount of time to concur or disagree. Finally, the most drastic measure would be for the U.S. to join the EMEA and give its decisions the effect of law in this country. While this does seem highly radical today, it may be the eventual end-point of the recent movement toward international harmonization.

Certainly the EMEA solution is not perfect. There have been complications and disagreements among the 18 nations of the European Union. Adding the United States would only add to the potential for conflict. It may be unwise for the U.S. to legally bind itself to the decision of every European nation. British performance tends to be faster yet equally safe, if not safer, for consumers.[185] It is not clear, however, that the same would hold for Portugal, Greece or Italy.[186] Historically, the United States drug approval process has also been thought of as the ‘gold standard.’ This undoubtedly gives U.S. industry and the American consumer some measure of advantage in the world market. Joining or allying with the EMEA might give away this valuable asset in return for a benefit that would primarily accrue to foreign drug manufacturers.

C. The British System

Another possibility for U.S. reform is movement to a regime that closely mirrors the drug approval process of the United Kingdom. Britain provides an interesting comparison point for the U.S. because the system, in many ways, is quite similar to our own. Key differences exist, however, that could provide a useful example. Since the system has similar goals as well, the American public might be amenable to the British drug approval regime.

British laws concerning medicines date back to the Ordinances of Pepperers of Soper Lane passed in 1316.[187] “This early ordinance forbade the mixing of different quality wares and the subsequent adulteration of such products.”[188] For many of the subsequent centuries, British food and drug law mainly dealt with adulteration. The first law concerned with safety and efficacy, the Therapeutic Substances Act, was passed in 1925. The main purpose of the act was to control the quality and manufacture of biologicals, i.e. vaccines, serums, toxins, and antibiotics.[189] The Medicines Act of 1968 governs the modern British drug approval process.[190] Agencies with various names have had the responsibility of reviewing drug applications under this act. The current administrative actor is called the Medicines Control Agency (“MCA”).[191] Formed in 1989 after an administrative reorganization, the MCA “currently operates as the sole statutory authority in the U.K. holding the responsibility of overseeing the new drug approval process.”[192]

As previously mentioned, many aspects of the system would look familiar to a United States FDA employee. Testing of drugs begins with animal trials and then proceeds through a series of progressively larger clinical trials. The British system mandates six-month chronic toxicity studies in two species of animal for drugs that are designed for long-term use.[193] A drug must demonstrate safety, effectiveness, and quality before the MCA grants the ‘product license’ that allows the marketing of a drug. The British also have a system that mirrors the recent development of treatment INDs in America. Experimental drugs aren’t required to navigate through the entire approval process before a doctor can prescribe them. This system turns out to be somewhat more flexible than the U.S. treatment IND system, so more patients have access to cutting-edge drugs faster.[194]

Despite the similarities, three informative differences exist between the British and American systems. First, the British system relies much more heavily on post-marketing data. Much of the inquiry concerning safety and effectiveness relies on data obtained from patients using the drug after it has been released onto the market. The MCA oversees a “yellow card system” by which physicians record adverse drug reactions and report the information back to the agency.[195] Subsequent remedial steps include changing a drug’s information sheet, issuing public warnings, or even rescinding the drug’s marketing license. This system allows consumers faster access to drugs since some of the costly and time-consuming trials that the FDA requires can be avoided. Secondly, the efficacy standard has been interpreted differently in the two countries. The efficacy standard in the U.S. is formally codified, while in the U.K. there are no truly universal efficacy standards.[196] “(D)ecisions on efficacy in Great Britain are made on an individual, drug-by-drug basis.” This, coupled with a general preference for post-marketing surveillance, leads the MCA to find efficacy sooner in the process than the FDA would.[197]

Finally, the British system of reviewing new drug applications proceeds from what many call a “top down” approach. When a drug sponsor submits a new drug application to the MCA, it is allowed to use ‘summary tables’ of data and include executive summaries of the value of the drug.[198] The MCA does not require that the drug company submit every ounce of data. The FDA takes the opposite, ‘bottom up’ approach. The sponsor of a NDA must submit almost all the raw data on the drug from the time of first development in the laboratory. This includes computer data tapes of case report forms, case report tabulations, and narratives of clinical study reports.[199] The fundamental difference is that American NDAs are now tremendous in size compared to those submitted in other nations like Britain. It is not uncommon for an NDA submitted to the FDA to be 200,000 pages long and weigh over 5,000 pounds.[200] While the FDA has certainly improved the speed of NDA review, it probably could be even better if the FDA didn’t have so much paper to manage.

While one might assume that the ‘shortcuts’ the MCA takes would negatively affect safety, there is little evidence to substantiate this. The British system generally achieves similar safety results while allowing consumers broader access to new medicines.[201] A 1995 study compared the rates of drug withdrawals due to safety in the U.S. and the U.K. The study found a 3% withdrawal rate in the United States and a 4% rate in Britain.[202] The study also found that American consumers were receiving access to a considerably smaller number of drugs. Of the 104 new medicines in the study, 26.9% were approved only in the U.K. but only 17.3% were approved only in the U.S.[203]


D. Efficacy

Finally, a radical, yet likely effective, regulatory change would be for Congress to completely eliminate the efficacy requirement. As discussed earlier, prior to the 1962 Kefauver-Harris amendments the FDA did not require proof of drug effectiveness, just safety. Before the amendments, the agency was bound by a Supreme Court decision that ruled that the 1906 Pure Food and Drugs Act did not give the FDA any authority over assuring efficacy.[204] During the early 20th Century, randomized clinical trials had not been developed anyway, so efficacy testing was not a terribly productive enterprise to begin with. After the thalidomide tragedy, the national mood shifted and people were much more willing to give the FDA considerable power. Section 355(d) of the amended Food, Drug, and Cosmetic Act placed on drug sponsors the burden of gathering scientific evidence to establish a new compound’s effectiveness.[205] The general rationale for the effectiveness standard was that people suffering from serious illnesses would injure themselves by using a new ineffective medicine rather than a previously approved efficacious medicine. Many also reasoned that ineffective remedies for minor conditions might displace older effective drugs, essentially leaving the public without an appropriate medicine on the market.[206]

The statutory standard the FDA uses is that a drug must show “substantial evidence,” from “adequate and well controlled investigations,” that it is effective for its intended use.[207] To meet this standard, the FDA almost always requires that at least two independent clinical trials substantiate a claim of effectiveness. “This so called ‘replication requirement’ embodies the scientific truism that positive findings from any single study are credible only to the extent that they are confirmed by subsequent research.”[208]
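
To see why replication matters, consider a purely illustrative calculation (it is not drawn from the FDA or the cited source): if each trial is judged at the conventional 5 percent significance level, a single trial will falsely credit an ineffective drug roughly one time in twenty, while two independent positive trials will do so only about 0.05 × 0.05 = 0.0025 of the time, or one in four hundred.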

Despite the logic advanced by the FDA, many people have been quite critical of the onerous efficacy requirement. The replication and effectiveness requirements generate contention between the FDA and drug sponsors. Even large and very complicated clinical studies often do not meet the FDA’s standards for statistical and clinical significance. Many critics have also maintained that the FDA applies the effectiveness requirement inconsistently. The FDA has occasionally approved drugs that treat diseases with significant and politically powerful victims after only one clinical trial. In addition, some allege that the FDA tends to be less demanding when evaluating biologics, even when they are the functional and legal equivalent of drugs produced in a lab.[209] Most basically, the efficacy requirement dramatically increases the amount of time and money required to bring a product to market. Part II of this paper described the lengthy and complicated procedure that the efficacy requirement created. Consumers bear this cost in the form of higher drug prices and delayed new medicines. The cost of assuring efficacy is especially burdensome to small companies that don’t have the cash flow to fund two complicated and expensive studies.

This cost might be justifiable if the efficacy requirement were logical, but it is not. For many people with serious illnesses, treatments are unavailable or ineffective. These people do not benefit from anything but the briefest look into efficacy. For those suffering from serious illnesses for which several courses of treatment are available, using an ineffective new drug could cause harm. This danger is often overstated, however. People with serious, life threatening diseases are often under the close supervision of doctors who are experts in the given disease. These expert doctors will have a good idea of whether a new drug might work for a patient and will understand the risks of foregoing other treatment options. In a regime with limited or no efficacy testing, patients would not be left alone to treat their malignant tumors with ‘snake oil.’ Finally, the danger from people taking ineffective drugs for minor ailments seems quite minimal. An ineffective product probably could not displace an effective one that enjoys established market share. Even if effective over the counter headache medicines could be driven off the market by an ineffective competitor, market conditions would soon be ripe for an effective compound to reappear. With an NDA already approved, the original compound itself could make a relatively painless resurgence.

The prescribing practices of American doctors also make the efficacy requirement largely irrelevant. When the FDA evaluates an NDA it is looking for efficacy in treating a certain specified condition. However, once this threshold is met and the drug receives approval, a physician can prescribe the drug for any condition.[210] This phenomenon is known as off-label prescribing. An example of this is the antibiotic amoxicillin. This drug was rigorously tested for efficacy in treating respiratory tract infections. After it was approved, doctors and scientists began attributing many stomach ulcers to bacterial infection. Antibiotics like amoxicillin became very effective ulcer treatments. Today, amoxicillin is a textbook treatment for stomach ulcers even though it never was, and probably never will be, tested for this application. The commonly accepted standard of treatment for many ailments now involves off-label use of FDA approved drugs.[211] This state of affairs suggests that rigorous efficacy testing for a stated use is largely irrelevant in many areas of medicine.


Part VII: Conclusion

The aim of this paper is not to portray the PDUFA as a complete failure. The user fees initiated by this legislation have increased the speed and efficiency of the FDA review process. The annual reports to Congress consistently show that the agency is meeting aggressive new goals year after year. Most importantly, the average NDA review time has been more than cut in half, from 27.5 months to 12. The reduction in any unnecessary delay in the drug approval process benefits drug companies and consumers.

The true solution to drug lag is not contained in the PDUFA, however. Clinical development times refuse to fall in many areas. Average drug development time has continued to rise unchecked. Whether due to lack of vision or lack of willingness, Congress did not draft the PDUFA to effectively deal with these issues. The statute proceeds from a position that only post-NDA submission reforms can facilitate faster access to new drugs. The FDA itself has been preoccupied with comparing its own performance with that of other industrialized nations. Unfortunately, Americans will continue to die needlessly until Congress finds a way to improve the IND phase of drug development.

The solutions proposed by this paper are certainly not perfect. Efforts to find more clinical participants might help, but it is not clear how much. Research did not reveal any concrete estimates of how much the shortage of clinical patients slows drug development. Furthermore, a gigantic influx of potential clinical patients could create administrative problems that might hinder trials’ progress. Privatization is always a contentious issue in American politics. The American people might not favor a measure to give private entities the sort of power that the FDA possesses. The gains that might be achieved from DCBs are largely speculative and the competition motive may prove insufficient to counteract the incentive to commit Type I errors. Many would argue that the American public has become too risk averse to move to a British-like system of post-marketing review or to accept an FDA without the power to assure efficacy.

The fact that these solutions are not perfect does not matter, however. What matters is that other options exist that Congress should explore. Drug lag will continue until Congress and the FDA stop patting themselves on the back about the PDUFA’s successes and start thinking about its failures. Until this happens, significant reductions in clinical development times or drug development costs will not occur.


[1] Tufts Center for the Study of Drug Development, Backgrounder: How New Drugs Move through the Development and Approval Process. November 1, 2001, available at http://csdd.tufts.edu/NewsEvents/RecentNews.asp?newsid=4.

[2] STEPHEN CECCOLI, THE POLITICS OF NEW DRUG APPROVALS IN THE UNITED STATES AND GREAT BRITAIN 102 (UMI Dissertation Services 1998).

[3] HENRY MILLER, M.D., TO AMERICA’S HEALTH: A PROPOSAL TO REFORM THE FOOD AND DRUG ADMINISTRATION 12 (Hoover Institution Press 2000).

[4] Id.

[5] Ceccoli, supra note 2 at 102.

[6] Id.

[7] Miller, supra note 3 at 11.

[8] Id.

[9] Ceccoli, supra note 2 at 102.

[10] Id.

[11] Id.

[12] MARK MATHIEU ED., NEW DRUG DEVELOPMENT: A REGULATORY OVERVIEW 129 (OMEC International 1987).

[13] Id.

[14] Ceccoli, supra note 2 at 104.

[15] Veronica Henry, Problems with Pharmaceutical Regulation in the United States: Drug Lag and Orphan Drugs , JOURNAL OF LEGAL MEDICINE , December 1993, at 619.

[16] OMEC, supra note 12 at 129.

[17] Henry, supra note 15 at 619.

[18] PETER BARTON HUTT & RICHARD A. MERRILL, FOOD AND DRUG LAW: CASES AND MATERIALS 476 (Foundation Press 1991).

[19] Id. at 12.

[20] Id. at 476.

[21] Miller, supra note 3 at 13.

[22] Id.

[23] RITA RICARDO CAMPBELL, DRUG LAG: FEDERAL GOVERNMENT DECISION MAKING 5 (Hoover Institution Press 1976).

[24] March of Dimes, Quick Reference and Fact Sheets: Thalidomide. November 1998, available at http://www.marchofdimes.com.

[25] Campbell, supra note 23 at 5.

[26] Miller, supra note 3 at 15.

[27] Hutt, supra note 18 at 477.

[28] OMEC, supra note 12 at 133.

[29] Id. at 12.

[30] RICHARD A. GUARINO ED., NEW DRUG APPROVAL PROCESS 5 (Marcel Dekker 1993).

[31] Id.

[32] Id.

[33] OMEC, supra note 12 at 22-23.

[34] Id. at 24-25.

[35] Guarino, supra note 30 at 39.

[36] Id.

[37] Hutt, supra note 18 at 515.

[38] Id.

[39] Guarino, supra note 30 at 88.

[40] Id.

[41] Id. at 225.

[42] Id.

[43] 21 C.F.R. § 312.21 (2003).

[44] Hutt, supra note 18 at 516.

[45] Id.

[46] OMEC, supra note 12 at 68.

[47] Id.

[48] Id. at 71.

[49] Hutt, supra note 18 at 516.

[50] OMEC, supra note 12 at 71.

[51] Id. at 72.

[52] Guarino, supra note 30 at 162.

[53] Id. at 162, 168.

[54] Id. at 165.

[55] Id. at 166.

[56] Id. at 167.

[57] Hutt, supra note 18 at 516.

[58] Id.

[59] OMEC, supra note 12 at 74.

[60] Id.

[61] Hutt, supra note 18 at 516.

[62] Id.

[63] Id. at 519.

[64] Guarino, supra note 30 at 270-83.

[65] Hutt, supra note 18 at 519.

[66] Id.

[67] Guarino, supra note 30 at 267-68.

[68] SAM PELTZMAN, REGULATION OF PHARMACEUTICAL INNOVATION: THE 1962 AMENDMENTS 13 (American Enterprise Institute for Public Policy Research 1974).

[69] Id.

[70] Id. at 16.

[71] Id. at 17.

[72] Id.

[73] Ceccoli, supra note 2 at 107-9. (Senator Estes Kefauver had been concerned about excessive profits generated by the drug industry for many years before the 1962 amendments. During the 1950s he initiated hearings on drug pricing and efficacy and even submitted a bill to the Senate. Senator Kefauver made little progress on the issue, or the legislation, until the thalidomide tragedy.)

[74] Peltzman, supra note 68 at 48.

[75] Id.

[76] Id. at 81.

[77] Id.

[78] Daniel B. Klein, Time to end America’s Drug Lag , CONSUMERS RESEARCH MAGAZINE , April 1, 2002, at 3.

[79] Id.

[80] Hutt, supra note 18 at 552.

[81] Id.

[82] 21 C.F.R. § 312.34 (2003). See also Vivian I Orlando, The FDA’s Accelerated Approval Process: Does the Pharmaceutical Industry Have Adequate Incentives for Self-Regulation? , 25 AM . J.L. & MED . 543, 546 (1999).

[83] Myron Marlin, Treatment INDs: A Faster Route To Drug Approval? , 39 AM. U.L. REV . 171, 185 (1989).

[84] Julie C. Relihan, Expediting FDA Approval of AIDS Drugs: An International Approach , 13 B.U. INT’L L.J. 229, 234 (1995).

[85] Hutt, supra note 18 at 553.

[86] Id.

[87] Mary T. Griffin, AIDS Drugs & the Pharmaceutical Industry: A Need for Reform , 17 AM. J.L. & MED . 363, 379 (1991).

[88] Relihan, supra note 84 at 236.

[89] Id.

[90] Hutt, supra note 18 at 560.

[91] Griffin, supra note 87 at 381.

[92] Id.

[93] Hutt, supra note 18 at 561.

[94] Relihan, supra note 84 at 236.

[95] Henry, supra note 15 at 628-9.

[96] Id. at 629.

[97] 96 Stat. 2049 (1983).

[98] Id. at 630.

[99] Id. at 631.

[100] Id.

[101] Dan Kidd, The International Conference on Harmonization of Pharmaceutical Regulations, the European Medicines Evaluation Agency, and the FDA: Who’s Zooming Who? , INDIANA JOURNAL OF GLOBAL LEGAL STUDIES , Fall 1996, at 198.

[102] Id.

[103] Sheila Shulman & Kenneth Kaitin, The Prescription Drug User Fee Act of 1992: A 5-Year Experiment for Industry and the FDA , PharmacoEconomics, February 1996, at 123.

[104] Id.

[105] Id.

[106] Id. at 124.

[107] Id.

[108] Notice, 67 Fed. Reg. 2223 (Jan. 16, 2002).

[109] Shulman, supra note 103 at 123.

[110] Id. at 125.

[111] Id. at 125-6.

[112] Id. at 126.

[113] Department of Health and Human Services, Application to Market a New Drug, Biologic, or an Antibiotic Drug for Human Use, September 2002, available at http://forms.psc.gov/forms/MSWFDA/FDA-356h.doc.

[114] Center For Drug Evaluation and Research, Report to the Nation: 1999, March 8, 2001, available at http://www.fda.gov/cder/reports/rtn99-2.htm.

[115] United States Food and Drug Administration, New Drug Approval Time: The Facts, February 22, 2002, available at http://www.fda.gov/oc/pdufa/thefacts/questions.html#question3.

[116] Although the term new drug application is used, the numbers discussed in the text and used in the tables reflect FDA performance on NDAs, PLAs, and ELAs combined.

[117] FDA Third Annual Performance Report: Prescription Drug User Fee Act at 2 (1995). (available at http://www.fda.gov/cder/pdufa/default.htm ).

[118] This table, in addition to the ones involving efficacy supplements, manufacturing supplements, resubmissions, and workload, contains data compiled from the FDA Annual Performance Reports to Congress from years 1995-2001. The reports themselves generally contain complete data for only a couple of years; i.e., the latest report does not summarize FDA performance over the entire period that the PDUFA has been in effect. The reports are available at http://www.fda.gov/cder/pdufa/default.htm.

[119] Shulman, supra note 103 at 124.

[120] FDA FY 2001 PDUFA Financial Report at 18 (February 2002) (available at http://www.fda.gov/oc/pdufa/finreport2001/financial-fy2001.htm).

[121] 138 CONG. REC. H9095 (daily ed. September 22, 1992) (statement by Rep. Waxman).

[122] 138 CONG. REC. H9099 (daily ed. September 22, 1992) (statement by Rep. Dingell).

[123] H.R. REP. NO. 102-895 (1992).

[124] Id.

[125] Prescription Drug User Fee Act of 1992, Pub. L. No. 102-571, §102, 106 Stat. 4491, 4492 (1992).

[126] Prescription Drug User Fee Act of 1992: Hearing on H.R. 6181 Before the Senate Comm. on Labor and Human Resources, 102nd Cong. 12 (1992) (statement of Sen. Dan Coats).

[127] Id. at 7.

[128] Id. at 8.

[129] Id.

[130] Waxman, supra note 121.

[131] 138 CONG. REC. S17234, 17238 (daily ed. October 7, 1992) (joint statement by Senators Kennedy and Hatch).

[132] Prescription Drug User Fee Act of 1992: Hearing on H.R. 6181 Before the House Comm. on Energy and Commerce, 102nd Cong. 4 (1992) (statement of Dr. David Kessler, Commissioner of Food and Drugs) (emphasis added).

[133] Reauthorization of the Prescription Drug User Fee Act and FDA Reform: Hearing on H.R. 1411 Before the House Subcomm. on Health and Environment, 105th Cong. 14-17 (1997) (statement of Michael A. Friedman, Lead Deputy Commissioner of Food and Drugs).

[134] Kenneth I. Kaitin, The Prescription Drug User Fee Act of 1992 and the New Drug Development Process, AMERICAN JOURNAL OF THERAPEUTICS, 1997, at 168.

[135] Id.

[136] Id. at 169.

[137] Id.

[138] Janice M. Reichert & Jennifer Chee, The Effects of the Prescription Drug User Fee Act and the Food and Drug Modernization Act on the Development and Approval of Therapeutic Medicines, Drug Information Journal, 2001.

[139] Id. at 92.

[140] Id.

[141] Id. at 91.

[142] Note: central nervous system drugs include certain pain medications, anesthetics, sedatives, asthma drugs, and anti-convulsants.

[143] Tufts Center for the Study of Drug Development, Outlook 2001, available at http://csdd.tufts.edu/InfoServices/OutlookPDFs/Outlook2001.pdf.

[144] Tufts Center for the Study of Drug Development, Outlook 2003, available at http://csdd.tufts.edu/InfoServices/OutlookPDFs/Outlook2003.pdf.

[145] Id.

[146] Tufts, supra note 1.

[147] Tufts Center for the Study of Drug Development, Impact Report: European Approval of New Biotech Drugs Outpaces US Approval, March 2000, available at http://www.tufts.edu/med/research/csdd.

[148] Id.

[149] David A. Kessler, Approval of New Drugs in the United States: Comparison With the United Kingdom, Germany, and Japan, JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION, December 11, 1996.

[150] Id.

[151] Id.

[152] Id.

[153] Id.

[154] Id.

[155] Id.

[156] Id.

[157] Julie DeFalco, The FDA vs. Reform, INVESTOR’S BUSINESS DAILY, June 7, 1996, available at http://www.cei.org/gencon/025,01454.cfm.

[158] Id.

[159] Ken Rankin, Is Kessler Using a Stacked Deck in Apparent Drug-Lag Denial, DRUG STORE NEWS, January 8, 1996, at 9.

[160] Id.

[161] Griffin, supra note 87 at 387.

[162] Id. at 388.

[163] Id.

[164] Susan J. Landers, Guide Addresses Questions on Clinical Trials, Health & Science, available at http://www.ama-assn.org/sci-pubs/amnews/pick_02/hlsc0527.htm.

[165] BioMed Central, US FDA User Fees May Have Accelerated New Drug Development but a Recent Survey Suggests Difficulties in Obtaining Volunteers for Human Clinical Trials Still Slow Progress, January 30, 2001, available at http://www.biomedcentral.com/news/20010130/04/.

[166] Landers, supra note 164.

[167] Id.

[168] Willis-Knighton Department of Radiation Oncology, Taking Advantage of Clinical Trials, August 7, 2000, available at http://www.wkhs.com/cancerftr/CancerNews/080700.html.

[169] Id.

[170] Landers, supra note 164.

[171] Miller, supra note 3 at 90-101.

[172] Id. at 94.

[173] Id. at 90.

[174] Id. at 91.

[175] Ceccoli, supra note 2 at 44.

[176] Elizabeth C. Price, Teaching the Elephant to Dance: Privatizing the FDA Review Process, 51 FOOD DRUG L.J. 651, 655 (1996).

[177] Miller, supra note 3 at 98.

[178] Id. at 97.

[179] Id. at 97-8.

[180] Price, supra note 176 at 667.

[181] Kidd, supra note 101 at 188.

[182] FDA Reform and the European Medicines Evaluation Agency, 108 HARV. L. REV. 2009, 2018 (1995).

[183] Id.

[184] Id. at 2019.

[185] Price, supra note 176 at 670.

[186] Id.

[187] Ceccoli, supra note 2 at 88.

[188] Id.

[189] Id. at 89.

[190] Relihan, supra note 84 at 241.

[191] Ceccoli, supra note 2 at 101.

[192] Id.

[193] Relihan, supra note 84 at 241.

[194] Id.

[195] Id.

[196] Ceccoli, supra note 2 at 163.

[197] Id.

[198] Id. at 169-70.

[199] Id. at 169.

[200] Id. at 170.

[201] Price, supra note 176 at 670.

[202] Klein, supra note 78.

[203] Id.

[204] Jennifer Kulynych, Will FDA Relinquish the “Gold Standard” for New Drug Approval? Redefining “Substantial Evidence” in the FDA Modernization Act of 1997, 54 FOOD DRUG L.J. 127, 132 (1999).

[205] Id.

[206] Id.

[207] Hutt, supra note 18 at 525.

[208] Kulynych, supra note 204 at 129.

[209] Id. at 137-8.

[210] Klein, supra note 78 at 4.

[211] Id.