First Do No Harm: History of Treatment and Pharmaceuticals (Jacalyn Duffin)

If the whole materia medica, as now used, could be sunk to the bottom of the sea, it would be all the better for mankind and all the worse for the fishes.

– Oliver Wendell Holmes (1883)

In June 1991 the body of American President Zachary Taylor was exhumed for a medico-legal examination. He had been dead since 1850 – officially of diarrhea – but a question of poisoning had been raised. The president may not have been murdered, the papers claimed, but he had been killed by his physicians.

Stories like this one irritate me. The writers presume that the patient was not seriously ill until he accepted treatment, and that the illness did not contribute to his demise. Without denying the dangers of past therapy, I find these tales disturbing, because they hit close to hematologic home. Chemotherapy makes people vomit and lose their hair; it also reduces their immunity – and, by the way, it shrinks tumours. One of my teachers used to call it ‘poison with anti-cancer side effects.’ We hope that safer and more effective treatments will be found in the future. In the meantime, however, we give these potentially lethal drugs – not to kill our patients but to help them live longer and better. Are we deluding ourselves?

The history of therapeutics is the latest frontier of medical history, partly because it is so embarrassing. Until recently, ridicule was the goal of the few who wrote about past medicines. Current practices are assumed to be rational and scientific, while those of predecessors are not; what we do now is flawless, and what they did back then could not possibly have worked. Many recent examples of this literature exist; none appear in the bibliography for this chapter. History conducted through such a prejudicial lens (called ‘presentism’ by historians; see chapter 16) will be limited and insensitive, even if it is lively and well written. Only recently have historians begun to examine why certain drugs now thought dangerous were once endorsed by orthodox medicine. Others study the parallel folk medicines of self-help – a much more difficult task, lacking sources that can easily be identified.

Most therapies were discovered by empirical means – observation, and trial and error. Eating or doing something was followed by improvement. But the empirical methods do not prevent ‘reasoning’ from helping to transform observation into medical dogma. And they rely on at least two prior conditions: agreement on what constitutes the disease (i.e. the need) and an opportunity to observe. For pharmacology, these conditions comprise the Pastorian ‘preparation of the mind’ before a treatment is discovered (see chapter 3). A rationale for why the treatment was presumed to work will always have been applied – in many cases, after the drug’s benefits were noticed – and it will be subject to historical vogue in science.

Change in a disease concept can alter the rationale without necessarily changing the treatment. By the same token, a drug’s mechanism of action can shift without refuting its benefits. For example, in the 1970s, hydrochlorothiazide was thought to lower blood pressure through its diuretic and saluretic effect; now it is thought to have some additional effect on the smooth muscle of blood vessels. A similar revision can be applied to digitalis, as we will see below.

Undesirable side effects can also lead to new applications. For example, minoxidil was introduced in the mid-1970s as a powerful antihypertensive, with the depressing side effect of hirsutism; now it is prescribed for external use as a treatment for baldness. Similarly, the adrenergic drug methylphenidate (Ritalin) was originally used as a stimulant, but its paradoxically calming side effect on children with hyperactivity (now called ‘attention deficit disorder’) became its principal application.

Defunct medical practices were neither irrational nor unscientific in their heyday; the rationale was reconciled with prevailing science and concepts of disease. For example, when medicine looked for acids or bases in urine, remedies were selected to alter urinary acidity toward health; when fevers were associated with too much blood, bloodletting made sense. When syphilis first appeared as an import from the New World, the wood product guaiacum was thought to ‘work’ for two reasons: first, like the ailment, it came from America; second, the naturally occurring, spontaneous remissions in the disease could be attributed to whatever intervention had preceded them. Similarly, the colour red was a therapy for smallpox since at least the tenth century in Japan and Europe: red clothes, red rooms, red food, and red light – erythrotherapy. This idea persisted into the twentieth century with the work of the 1903 Nobel laureate Niels R. Finsen.

Therapeutic rationale depends on the time and place. All medical systems appeal to reasoning, including the past of orthodox medicine and the present of ‘unorthodox,’ or ‘alternative,’ medicine. In homeopathy, invented in the late eighteenth century by Samuel Hahnemann, the dominant assumption is that ‘like cures like’ – often expressed in the Latin similia similibus curantur. The best remedy will be the one which, if taken in large doses, produces symptoms similar to the disease; treatment then consists of giving that remedy in tiny, ‘homeopathic doses.’ Therapeutic rationale also changes with perceptions of disease.

For example, when peptic ulcer was associated with personality, stress, hyperacidity, and disordered motility, the correct treatments dealt with those problems. But in the late 1970s, histamine-2 antagonists dramatically altered prescribing practices for ulcer. By the early 1990s, a microbial explanation came to the fore, and management of the now-infectious condition changed accordingly.

Therapeutic rationale may vary by patient, and perceptions of people can also alter through time. In the medical journals of the 1950s and 1960s, tranquillizers promised to help women cope with the strain of housework. The idea that increased opportunities for outside employment might offer a better solution was not a therapeutic consideration. Since then, tranquillizer remedies have not been discarded, but their target population has altered in conjunction with cultural norms of health and behaviour.

Finally, some medically sanctioned treatments once hailed as miracle cures turned out to be useless or harmful. In the last century, increasing awareness of this possibility led to legislation designed to protect professionals and their patients from unjustified claims and unforeseen side effects. Pharmaceutical literature changed accordingly, and the ‘small print’ increased with time. Advertisements from a century ago contain few such warnings about composition, side effects, drug interactions, and contraindications. The less-than-noble therapeutic past has itself become a mover in the history of pharmacology. Nevertheless, wild swings in acceptable treatments still occur. Since the first edition of this book, several top remedies have vanished, as we will see below. They were shown to have unacceptable side effects on the heart and other organs.

Mysticism, Religion, and Magic: Do They Work?

Since prehistoric times, doctors have been making recommendations for therapeutic intervention. Ancient remedies ‘worked,’ and some still work, including magic, prayer, and divine supplication. Sick people are among the pilgrims who flock to shrines such as Lourdes in France, Fatima in Portugal, Santiago de Compostela in Spain, and the Oratoire St-Joseph, Cap-de-la-Madeleine, and Ste-Anne-de-Beaupré in Quebec. Sites of divine healing enjoy a charisma akin to that of medical meccas such as the Mayo Clinic. In our time, however, physicians leave the prescribing of pilgrimages to other professionals.

The spiritualistic or vitalistic aspect of treatment has been reified in the concept of ‘placebo’ (from the Latin ‘I shall please’). The term ‘placebo’ long signified the administration of harmless but inert compounds; however, in the mid-twentieth century, a placebo effect was found in virtually every form of intervention and for any kind of disease. The last decade has witnessed an explosion of interest in its nature, uses, and history.

An Experiment: Shifting Therapeutic Claims

Go to the library (not the Web!) and examine drug advertisements in the medical journals of the past (the online archives often omit advertising). You will find:

  • drugs that are no longer used because they are now considered dangerous
  • drugs that have been replaced by others with completely different actions, because our idea of the disease has changed (e.g., anxiolytics and antispasmodics for ulcers)
  • drugs for problems no longer considered diseases (e.g., agents to promote weight gain)
  • drugs to help women cope with housework and school meetings
  • advertisements that seem tasteless, humourless, or corny because aesthetic standards have changed too.

The older the journal, the more curiosities will be encountered; however, even recent publications contain advertisements for products that are now considered inefficacious or harmful.

Finally, imagine how advertisements for present practice might appear to observers fifty or one hundred years from now.

Greco-Roman Treatments and Medical Botany

In Greco-Roman medicine, diseases arose from an imbalance in the humours (see chapter 3); therefore, treatment consisted of trying to re-establish the balance. Modifications of diet and lifestyle were intended to alter the relative proportions of the elemental substances. After spiritual therapies, they are probably the oldest forms of medicine.

The Hippocratic treatises of the fifth century B.C. refer to many non-drug remedies, such as bloodletting, special diets, baths, exercise or rest, and applications of heat or cold. In addition, more than 300 medications are cited, most of plant origin; they could be administered either externally or internally by mouth, rectum, vagina, and other orifices. Hippocratic doctors tended to be conservative in their philosophy. They believed in the healing power of nature (vis medicatrix naturae), which governed the body’s response to illness. Medicine was to help the body heal itself; it was not supposed to hurt, but the Hippocratics readily acknowledged that, sometimes, it could. ‘To help, or at least to do no harm’ (Epidemics I, 11) is a saying often written in Latin as primum non nocere (first, do no harm).

‘Expectant’ treatment – patiently waiting for nature’s cure – has wandered in and out of fashion since the fifth century B.C. In their eagerness to glorify Hippocrates, some historians may have projected more caution onto their Greek predecessors than the texts would justify. At the time of writing, expectant medicine is no longer in vogue, although an energetic public lobby presses for its rediscovery. Folk medicines, herbal remedies, and natural products now compete for the market with the purveyors of gleaming capsules. Because medical practice tends to follow demand, orthodoxy may eventually bend toward la médecine douce (gentle medicine); some physicians are expanding their practice to include these areas, but most medical schools do not require instruction in them. Certainly, since the year 2000, pharmacies have embraced the so-called natural trend with rapidly expanding sections for these over-the-counter remedies that would not be recommended by doctors, mostly because doctors know little or nothing about them.

The Greek word for drug, pharmakon, from which pharmacology is derived, means a drug, a remedy, and a poison. In the earliest classifications, drugs were either toxins or antidotes; the antidotes were medicinal. In the first century B.C., King Mithridates VI of Pontus in Asia Minor feared being murdered by the Romans, with whom he was often at war. He is thought to have experimentally immunized himself against poisons by drinking the blood of ducks fed on toxic substances. A universal antidote bore his name. Ironically, the king was later unable to commit suicide with poison and had to ask a servant to polish him off with a sword.

Another ancient antidote was theriac, which was developed to counteract animal poisons. The word ‘theriac’ is derived from the Greek word therion (wild beast) and reflects the composition of the remedy as well as its purpose. Depending on which of the many recipes was followed, theriac contained up to seventy
ingredients, including the flesh of vipers. Both theriac and mithridates were used to treat infectious diseases, conceived of as ‘pests,’ or poisons. These remedies enjoyed almost mystical stature into the nineteenth century, and medical museums display magnificent faience jars for their keeping. The nineteenth-century physiologist Claude Bernard worked in a pharmacy in his youth, where he saw theriac made by mixing the dregs of all the other preparations in a vat.

Galen, of the second century A.D., was a successful therapist. Among his many medications were vegetable derivatives, which came to be known as galenicals, or simples. His treatments could be aggressive, but he knew of the placebo effect and that his patients’ confidence in his reputation could help him to effect cures. He was ready to take credit for the healing accomplished by nature or by stealth.

A Galenic Therapeutic Strategy: Winning Confidence

I completely won the admiration of the philosopher Glaucon by the diagnosis which I made in the case of one of his friends… Observing on the windowsill a vessel containing a mixture of hyssop and honey, I made up my mind that the patient, who was himself a physician, believed that the malady from which he was suffering was a pleurisy … Placing my hand on the patient’s right side … I remarked: ‘This is the spot where the disease is located.’ He … replied with a look which plainly expressed admiration mingled with astonishment.

– Galen, De locis affectis, cited in L. Clendening, Source Book of Medical History (New York: Dover, 1960), 45-7

Galen’s pharmacopoeia embraced the therapies of his predecessor, Dioscorides, a first-century Greek surgeon who served the army of the Roman Emperor Nero. Dioscorides’ medical botany described more than 600 plants, animals, and their derivatives. He classified his remedies by their physical qualities: oils, animals, cereals, herbs, and wines. Wine made with mandragora (mandrake root) was a potion and anesthetic (see figure 5.1). Mandrake’s anthropoid appearance may have had something to do with its legendary powers. Humans daring to pull it from the ground would be killed by its screams; a dog should be tied to the root and tempted to ‘harvest’ it with a nearby dish of meat.

Dioscorides’ botany remained the most influential book on materia medica (medical substances) for 1,400 years. Most other medical botanies were simply commentaries on his work. The first medical book printed by the German inventor Johannes Gutenberg was the 1457 Laxierkalender, a collection of laxative remedies. Some herbals were written first in Greek, translated into Arabic, then Latin, and finally a modern language. Often the texts are garbled with missing passages. None of the original Greek illustrations have survived. Illustrations in later manuscripts or books are highly stylized or mismatched; sometimes they describe plants that are difficult to identify or no longer exist, as in the illustrated commentary of Pietro Andrea Matthioli of 1554. Research on the ancient writings of Dioscorides and other botanists continues; its success relies on accurate translation and knowledge of manuscript sources as well as the plants.

Advent of Metals

Copper had been mentioned in the Hippocratic treatises, but it was not until the late fifteenth century that metals were widely used as medical therapy. By then, the Greco-Roman element ‘earth’ was thought to have expanded to include three new elements: mercury, salt, and sulphur. Among the proponents of medical metals was Theophrastus Bombastus von Hohenheim, who called himself Paracelsus. Born in 1493 in the German-speaking Swiss mining town of Einsiedeln, he deplored the fact that minerals were not used in pharmacy. Influenced by alchemists, he thought that plants and minerals contained specific healing properties called arcana. For every disease, he maintained, a specific remedy must exist, and he proposed that diseases be classified by the drugs that cured them, a notion that has currency today in the concept of the therapeutic trial. Paracelsus expounded his ideas with elaborate demonstrations, including public burnings of the works of Galen and Avicenna. Such behaviour did little to help him find and keep employment, and he wandered over Europe for much of his career.

To today’s student and many earlier observers, Paracelsus’s writings seem confused and incoherent. For his bombastic style, he has been portrayed as a ridiculous villain whose legacy is easily dismissed. Recently, however, scholars in history, medicine, and even administration have reconsidered his work to find that his impact may have been greater than previously thought, if only because he dared to challenge the established authority of ancient writers.

New substances, including mercury, sulphur, and antimony, became the wonder drugs of the late Renaissance. In the early sixteenth century, Girolamo Fracastoro recommended mercury to treat the ‘new’ European epidemic of syphilis. Mercury causes gastrointestinal disturbances, gum swelling, salivation, and neurological toxicity, but it does appear to have been an effective treatment for syphilis.

Similarly, antimony compounds produce nausea, vomiting, purging, and cardiovascular collapse. This toxicity led to a ban on antimony at several medical faculties, including Heidelberg and Paris. In the form of tartar emetic, however, the drug was said to cure almost everything, and the ban was overturned by popular demand after it was credited with saving the French king, Louis XIV, from typhoid fever in 1657. In the nineteenth century, high-dose tartar emetic was used for pneumonia; clinical statistics testified to its efficacy, but toxicity led to its disappearance once again. One of my own research projects, with Dr Pierre René of Montreal’s Royal Victoria Hospital, demonstrated that – its toxicity notwithstanding – tartar emetic has bactericidal properties.

Apothecaries and the Persistence of Plants

Botany was a standard subject in medical education until about 1900. Medical schools and hospitals maintained botanical gardens, not only for teaching but also for a reliable supply of remedies. A few of these gardens remain, such as the Chelsea Physic Garden in London; others have been recreated. But doctors were not the experts; an apothecary tradition in Europe was well established as a distinct guild. Training was by apprenticeship, and a wide variety of standards and forms of practice arose – a situation referred to as ‘pluralism.’ Inheriting an apothecary practice was one of the earliest professional opportunities for women. Gradually the role of the English apothecary shifted to become the basis of general practice (see chapter 14). The Worshipful Society of Apothecaries of London has roots going back to the twelfth century. Still in existence, it operates an excellent site and runs many fascinating courses, including a prestigious course in medical history and training in conflict and catastrophe medicine.

Reflecting the intimate relationship between medicine and growing plants, the first Europeans to cultivate land in New France are said to have been the family of the apothecary-settler, Louis Hébert of Paris, who arrived in 1617. Some eighty years later, the first herbarium of North American plants was collected by the French physician and surgeon Michel Sarrazin, also of Quebec; he corresponded with scientists in France and is said to have collected over 800 specimens of native North American flora, among them the pitcher plant (Sarracenia purpurea). The identities of the first apothecaries of colonial America are unknown. Wise women and autodidacts served this function, as did politicians and preachers; many early doctors, especially those in rural areas, were obliged to prepare their own remedies. The earliest apothecary stores were attached to medical practices as dispensaries: the 1698 account book of Bartholomew Browne of Salem, Massachusetts, attests to this activity. By 1721, Boston is said to have had fourteen apothecaries, and the Irish immigrant, Christopher Marshall, opened his Philadelphia shop in 1729. During the Revolutionary War, the military position of Apothecary General for the rebels was filled by the twenty-one-year-old Scot, Andrew Craigie. The first college of pharmacy in the United States was established in Philadelphia in 1821.

Many drugs still in use today were originally derived from plants, although most are now synthesized in laboratories for commercial distribution. Some have been around for a long time. Senna has been known as a laxative since at least 1550 B.C.; castor oil comes from the garden plant ricinus, which also was known to the Egyptians; foxglove has provided digitalis since at least the eighteenth century; and an aspirin-like substance is found in the bark of willow trees and low birch. The benefits of some vegetable remedies are rediscovered with much fanfare, as was the case with the gastrointestinal effects of bran, promoted by Denis P. Burkitt in 1973. Similarly, the cholesterol-lowering value of oat bran was widely publicized in the 1980s. Modern treatments originally derived from plants include the leukemia drug vincristine, found in the Madagascar periwinkle; podophyllotoxins (VP-16, or etoposide), derived from the root of the mayapple; and the breast cancer agent taxol, first extracted from ancient yew trees of Japan and the Pacific Northwest.

Effective remedies extracted from complex plants challenge scientists to imagine other miracle cures lurking in the bushes. In the twentieth century, Parke Davis became one of the first drug companies to sponsor a systematic search of the jungle for new remedies. The 1960s fascination with the psychedelic plants known to aboriginal peoples also brought ethnobotany to the attention of scientists. More recently, the destruction of the rain forest has led to a certain panic over the potential extinction of three-quarters of the world’s plant species, with a presumed loss of thousands of potential remedies, some of which may be known to indigenous peoples. The Journal of Ethnopharmacology, founded in 1979, provides a forum for investigators. Several projects, both botanical and anthropological, are under way to survey, identify, and analyse the medical potential of plants, some of which are already used by the peoples of Amazonia.

Botanist John Thor Arnason of the University of Ottawa identified and studied the pharmacological properties of plant products known to the native peoples of North America. Like the classicists who study antiquity, ethnobotanists need language skills to interpret myriad dialects and oral traditions; much information has already been lost. For example, early accounts of European settlement tell how the men of Jacques Cartier’s 1535-6 winter encampment at Stadacona (Quebec) were healed of ‘great disease,’ probably scurvy, by a so-called white cedar (or spruce) tea given them by the natives. By the winter of 1605-6, when Samuel de Champlain founded the habitation at Port-Royal (Annapolis Royal, Nova Scotia), the remedy could no longer be identified, although Champlain knew of Cartier’s experience. Evergreen needles contain vitamin C, but dialect discrepancies over the precise name mean that scientists have been unable to trace the exact cure.

Classification and Therapeutic Change

The earliest classifications sorted drugs into poisons and antidotes. Other classifications were based on their physical properties (Dioscorides) and, later, their physiological effects. For example, poppy juice (containing opium) and nightshade (containing atropine) were both classified as sleep-inducing narcotics, although the latter is no longer thought of in that context. Willow bark, which contains salicylic acid, was an ‘astringent’ that dried secretions, explaining its effect on gout. Substances that produced vomiting were emetics. Those that caused diarrhea were laxatives, cathartics, or purges, depending on their ferocity. Sudorifics made patients sweat. Stimulants woke them up. Diuretics made them urinate. The classification followed the description of the physiological effects, whether or not the effects were the reason for administering the drug. To a certain extent, we still view drugs in this way, but now we tend to explain the side effects through a chemical rationale.

For example, digitalis was first thought to be a diuretic because it reduced peripheral swelling and increased urinary flow. It is now a heart-strengthening drug or cardiotonic, but still it reduces edema and increases urinary output. In other words, the rationale has changed, but the benefits are constant. In his treatise of 1785, William Withering brought digitalis into medical orthodoxy, described its harmful effects, and reported his experiments on poor patients. He had learned of foxglove leaf, he said, from a secret remedy belonging to ‘an old woman in Shropshire, who sometimes made cures after the more regular practitioners had failed.’ Sadly, the identity of this woman is unknown, although some say that her name was Hutton. Medical history holds many other unknown progenitors, while experiments conducted on disadvantaged people without consent continued well into the twentieth century (see chapters 6, 7, 11 and 13).

During the nineteenth century, the pharmacopoeias of Europe
and North America contained drugs that are now considered poisons: mercury in the form of calomel; antimony in the form of tartar emetic; jalap, a powerful cathartic; strychnine to stimulate appetite and bowel action; opium and laudanum for pain and sleep; alcohol as a stimulant. Combined with restrictive diets, vicious enemas or clysters, and various means of bleeding, such as phlebotomy, leeches, and cupping, this style of interventionist therapy has been called drastic, or heroic. Not everyone took it lying down – hence, the famous
artistic and literary lampoons of Molière, Thomas Rowlandson, James Gilray, Honoré Daumier, and G.B. Shaw. The word ‘heroic,’ which normally signifies admiration, became a pejorative term in medicine. Originating from the vigorous last-ditch attempts to save lives, it implies overdrugging, overdosing, and overreacting.

Wonder Drug, 1665

SGANARELLE: What, sir, you are a heathen about medicine as well? … You mean you don’t believe in senna or cassia or emetic wine? … You must have a very unbelieving soul. But look what a reputation emetic wine has got in the last years. Its wonders have won over the most sceptical. Why, only three weeks ago, I saw a wonderful proof myself…. A man was at the point of death for six whole days. They didn’t know what to do for him. Nothing had any effect. Then suddenly they decided to give him a dose of emetic wine.

DON JUAN: And he recovered?

SGANARELLE: No. He died…. Could anything be more effective?

– Molière, Don Juan, Act 3 (1665); from Don Juan and Other Plays by Molière, trans. Ian Maclean and George Graveley (Oxford: Oxford University Press, 1998), 60

Medical therapeutics has undergone greater change in the last two hundred years than in the preceding two thousand. Why? A number of reasons can be offered. No doubt, the fashion of period and place had an influence. For example, in postrevolutionary France, things associated with the old order were rejected because they were old. Reflecting this ideal, the physiologist François Magendie hoped that physicians would abandon the complex derivatives of the past in favour of new, chemically pure drugs. In eight editions of the formulary for the Hôtel-Dieu hospital in Paris between 1821 and 1834, he recommended purified chemicals over the older ‘simples’ (the Galenic plant-based precursors), morphine over opium, quinine over cinchona bark. He referred to his animal tests of new alkaloids, such as codeine and bromide. Some scientists favoured therapeutic nihilism, but the degree to which it was actually practised is difficult to determine.

Three other reasons may account for the decline in drastic remedies. First was the rise of surgery following the advent of anesthesia and antisepsis. Why give a nasty pill forever if an operation will cure the problem in an instant? Second, the wide acceptance of germ theory in the 1880s (see chapter 4) and the discovery of hormones soon after caused doctors to turn from modifying disease
to finding a set of ‘magic bullets’ to eliminate the causes of disease. Third, pressures from homeopathy and other medical competition may have pushed medicine toward less drastic therapy. Using a computer-assisted analysis of prescription records from two urban hospitals, John Harley Warner (Therapeutic Perspective, 1997) elucidated a change in doctors’ prescriptions between 1820 and 1885: side-effect-ridden ‘heroics’ were replaced by more gentle therapies. Among other factors, Warner related the change to issues of professional identity between doctors (allopaths) and unorthodox practitioners whose remedies were less harmful and more attractive to patients. Other historians suggest that resistance to homeopathy prompted the professional organization of American physicians (see chapter 6).

Magic Bullets: Antibiotics, Hormones, and Twentieth-Century Optimism

When microorganisms became accepted as a cause of disease, research focused initially on producing vaccines to heighten natural immunity (see chapter 4); only secondarily were agents sought to attack bacteria. Cinchona (or Jesuit bark) had been used to prevent and treat malaria since the seventeenth century, long before the Plasmodium organism had been visualized; its ‘rationale’ was as a ‘tonic’ that heightened resistance to the noxious atmospheres thought to cause malaria. Discovery of the parasite in 1880 by the future Nobel laureate Charles Laveran provided an entirely new rationale for the still effective quinine. The conscious quest for agents that kill germ invaders yet leave a living, healthy patient has been called ‘the search for the magic bullet.’

The first two magic bullets were developed by Paul Ehrlich: the dye, trypan red, for experimental trypanosomiasis (1903); and the arsenic-containing Salvarsan, for human syphilis (1910). Ehrlich worked with dyes and stains that had a special affinity for bacteria, hoping that they would selectively carry a toxin into the invading cell. His 1908 Nobel Prize was awarded for theoretical work on immunity, although his work on drugs is better known.

Sulpha drugs also formed part of the magic bullet agenda. Gerhard Domagk, working for the Bayer laboratories in Elberfeld, Germany, developed the first sulpha drug, Prontosil. Having proved that it was effective against streptococcal infections in rats, Domagk conducted his first human trial on his own daughter, who suddenly developed septicemia in December 1933. She recovered. Domagk was awarded the Nobel Prize in 1939, but he was arrested and jailed by the Gestapo for having attracted undue foreign approbation. He did not receive his award until 1947, when the prize money was no longer available. Few present-day physicians have heard of Domagk, possibly because of the wartime hostilities with Germany and possibly because he worked for a big pharmaceutical firm.

The most famous magic bullet is penicillin. Schoolchildren are taught the story of Alexander Fleming, who, in culturing bacteria, rejected plates that had been infected with mould – until he was struck by the significance. But historians have shown that Fleming’s 1928 ‘discovery’ that penicillium mould kills bacteria had been published earlier by others (notably, Bartolomeo Gosio of Rome in 1896 and E. Duchesne of Lyons in 1897). Montreal mycologist Jules Brunel also reported that elderly Québécois had long used moulds on jam as therapy for respiratory ailments. Fleming recognized the potential of his findings but did not pursue applications, nor did he cite his predecessors. The Oxford researchers, Howard W. Florey and Ernst Chain, extracted, purified, and manufactured penicillin, which was released for use more than a decade after Fleming’s observation. Fleming, Florey, and Chain shared the Nobel Prize in 1945.

Interferon is not really a magic bullet, because it does not kill directly so much as it helps the body to do so by stimulating the immune system. It is a cytokine that occurs naturally in response to foreign proteins. As a result, it is used for treatment of viral infections, cancer, and autoimmune diseases, as it encourages the body to destroy or control invasions of viruses, malignant cells, or irrational antibodies that attack the self. Discovered by Japanese scientists in the 1950s, it was scarce until 1980, when the techniques of recombinant DNA enabled manufacture on a large scale, making it one of the first drugs to result from genetic engineering. The many different types now available are applied to infectious and non-infectious diseases, including hepatitis C and multiple sclerosis.

Hormones and vitamins do not kill invading organisms, but they too act as magic bullets when they specifically target and replace deficiencies. (On vitamins, see chapter 13.) The isolation and elaboration of several hormones early in the twentieth century contributed to a rising medical optimism (see chapter 3). Frederick G. Banting, a practitioner in London, Ontario, became convinced from his reading that the cause of diabetes mellitus was in the pancreas. In the summer of 1921 he borrowed laboratory space from J.J.R. Macleod at the University of Toronto to work with medical student Charles Best on experimentally induced diabetes in dogs. The rapid isolation and purification of the hormone was the elegant work of biochemist J.B. Collip. Within a short time, insulin was the first hormone to be developed as specific replacement therapy for this widespread and previously fatal disease. The 1923 Nobel committee overlooked Best and Collip and gave the prize to Banting and Macleod, who shared it with the other two.

Hormones were soon applied to the treatment of tumours, fuelling the growing quest for substances that could not only replace deficiencies but cure all disease. Several hormone discoveries and treatments followed in succession. P.S. Hench and E.C. Kendall of the Mayo Clinic found the hormone of the adrenal cortex in 1949; in keeping with the buoyant mood of the time, their Nobel Prize was awarded the following year. Soon after their achievement had been announced, an awestruck clinician rushed up to historian E.H. Ackerknecht to tell him that he was a lucky man: all diseases would soon be wiped out and the only professor left in the medical faculty would be the historian (Ackerknecht, Therapeutics, 1973, 2). One of the byproducts of this overwhelming enthusiasm would be an effect on history itself – toward further ridicule of the past.

Clinical Trials

Historical comparisons with untreated human groups had long been made to introduce new treatments. Deliberate clinical testing began in the early nineteenth century in parallel with the development of statistical methods (see chapter 4). For example, P.C.A. Louis applied his numerical medicine to cast doubt on the value of bleeding. Animal trials, much used by Magendie and Bernard, continued to precede trials on humans.

In response to the many pharmacological discoveries of the early twentieth century, committees were formed to develop standards to ensure that results could be ascribed only to the drug and not to other extraneous factors (e.g., the British MRC Therapeutics Trial Committee of 1931). The active recruiting of concurrent, untreated ‘controls’ was a conscious development of the twentieth century: trials began with self or alternate controls (ca 1900); randomized controls came later (ca 1940). The practice of ‘blinding’ observers as well as subjects increased after 1940 as a means of dealing with the powerful placebo effect. Standardization meant that drugs were carefully tested on ‘the seventy-kilogram man’; effects on women, pregnant women, and racial minorities were often ignored. The zeal to investigate trod on patients’ rights, sometimes with disastrous results; the postwar Nuremberg Code was devised to address these abuses and clarify the process of informed consent (see chapter 15). The first randomized controlled trial (RCT) is often said to have been the Medical Research Council (MRC) study of streptomycin in tuberculosis (British Medical Journal 2 [1948]: 769-88); however, other contenders for this honour have been identified, including a 1944 MRC-funded trial of patulin for the common cold. Throughout the 1950s many clinical trials were used in cancer medicine.
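The logic of concurrent controls, randomization, and blinding can be illustrated with a small simulation. The sketch below is a minimal illustration only, not a reconstruction of any historical protocol; the function name, the patient labels, and the fifty-fifty allocation are invented assumptions for the example.

```python
import random

def randomize(patients, seed=42):
    """Allocate each patient to 'treatment' or 'control' by coin flip
    (simple randomization), and return blinded codes so that the
    assessors recording outcomes need not know who received the drug."""
    rng = random.Random(seed)
    allocation = {}
    for i, patient in enumerate(patients):
        arm = "treatment" if rng.random() < 0.5 else "control"
        code = f"P{i:03d}"  # blinded label that replaces the patient's name
        allocation[code] = {"patient": patient, "arm": arm}
    return allocation

# Hypothetical example with ten invented patient identifiers.
patients = [f"patient_{k}" for k in range(10)]
for code, record in randomize(patients).items():
    print(code, record["arm"])
```

Randomization of this kind guards against the selection bias that dogged historical comparisons with untreated groups, and the coded labels stand in for the blinding described above.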

Randomized controlled trials and the evidence-based medicine (EBM) movement are indelibly associated with Scottish epidemiologist Archie L. Cochrane. He claimed his ‘first, worst, and most successful’ trial was a 1941 study on the nutritional value of yeast and vitamins conducted on himself and twenty other starving prisoners of war. After the war, he did field research in Wales for the Medical Research Council. In his influential book of 1972 (Effectiveness and Efficiency), he complained that, despite many years of RCTs, most treatments were prescribed without good evidence of benefit. In his name, an international program was founded in 1993: the Cochrane Collaboration endeavours to collate all available RCT information in various areas of practice through systematic reviews.

‘Evidence-based medicine’ is a term coined in 1991 by Gordon Guyatt of McMaster University, Canada. He pioneered the movement with colleague David Sackett; they conducted research into improving methodologies, and Sackett later directed a special centre at Oxford University. Another leader in this field was Iain Chalmers, who began to direct the Perinatal Epidemiology Unit of the United Kingdom in 1978 (see chapter 11). He became the founding director of the Cochrane Collaboration and was knighted in 2000.

The persuasive arguments of EBM proponents resulted in widespread acceptance of these principles for advancing and changing therapeutic practice as well as medical education. Keating and Cambrosio (2007) argue that it has created a new form of practice. Its history has yet to be written. But most medical historians are allergic to the term: they claim that it relies on static diagnostic categories and tends to imply (albeit unintentionally) that our predecessors did not consider evidence at all. Medline is now rife with articles showing that the great but forgotten ‘so-and-so’ wielded EBM principles first and long ago. As P.R. Rangachari aptly put it, EBM is ‘old French wine with a new Canadian label’ (J Royal Soc Med 90 [1997]: 280-4). Social critics of medicine argue that the turn to EBM forces individuals to conform to group norms. EBM proponents protest that these criticisms unfairly address unintended consequences or unrealistic applications of the method.

Recent Scepticism: Is There No Magic Bullet?

The mid-twentieth-century optimism was premature, if understandable. Aside from their many side effects, magic bullets created magic microbes. We now have drug-resistant malaria and gonorrhea, while methicillin-resistant Staphylococcus aureus (MRSA) stalks the literature and the wards, colonizing unsuspecting patients, especially in chronic care facilities. And we worry about penicillin-resistant syphilis. Dreadful nosocomial infections lurk in the antibiotic-ridden ferment of hospitals, where few but resistant strains can survive. In his Medical Nemesis (1975), Ivan Illich suggested that the medical establishment had become a serious threat to health. More recently, the blunt title of Allan Brandt’s history of venereal disease, No Magic Bullet, expressed the postmodern disillusionment with the goal of universal disease eradication.

Antibiotics have certainly saved individual lives, but did they prolong life expectancy? Few historians were prepared to assess the possibility that the new drugs, so effective in individual cases, might not be good for the collective. People live longer now than they did two hundred years ago, but how much of that enhanced longevity is actually due to medicine? For example, we now know that mortality from the leading killer, tuberculosis, began to decline before the advent of vaccination and antituberculous drugs. In other words, hygiene, diet, wealth, and lifestyle probably counted as much for the decline, if not more. The rise of tuberculosis in parts of North America during the 1990s coincided with a decline in wealth, living conditions, and nutrition. A spate of novel infections beginning in the late 1990s only enhanced the scepticism (see chapter 7).

Thalidomide

The story of thalidomide provides a powerful example of innovation gone awry; it shook confidence in medicine on a global scale. A highly effective sedative developed in the late 1950s, thalidomide was often prescribed for the morning sickness of pregnancy. Used in Germany from 1957 and in England from 1958, it was linked to birth defects in late 1961 by Australian William McBride and German geneticist Widukind Lenz after nearly a year of studying what seemed to be a rise in gross limb abnormalities in infants (phocomelia). Its removal in Canada came five months later, in April 1962; the first affected Canadians were born in Saskatoon in February and June 1962 (CMAJ 87 [1962], 412, 670). Because the drug was teratogenic in the earliest weeks of pregnancy, the full scope of the tragedy was not known until nine months later. A total of 10,000 children in at least twenty-five countries were affected: 5,000 in Germany, 540 in Britain, 300 in Japan, 125 in Canada, 107 in Sweden. Probably many more pregnancies ended in miscarriage. The tragedy was averted in the United States because the drug had not yet been fully licensed, owing to the hesitation of physician-scientist Frances Oldham Kelsey of the Food and Drug Administration (FDA). In 1962, President Kennedy presented her with the award for Distinguished Federal Civilian Service. In 2005, she retired at age 90 from the FDA, a much lauded national hero.

Affected people were of normal intelligence and eager to work despite their missing limbs and other disabilities. The German drug company, Grünenthal, was sued, but an out-of-court settlement resulted in the company contributing 100 million DM, matched by the government, into a trust fund for pensions. A similar trust began in Great Britain in 1973. In Canada, affected people received no compensation until September 1992, when they were thirty years old. Because the drug had been properly licensed, the government was held liable, not the pharmaceutical industry and not the medical practitioners who had prescribed it. Ironically, and some would argue inappropriately, thalidomide has now been reintroduced for the management of various skin conditions, including graft-versus-host disease, an iatrogenic disorder. The memory of the tragedy meant that these newer uses met with vigorous opposition.

Thalidomide is an extreme example of therapeutic disaster; even its victims understand that their deformities were unintended. But it should not be forgotten. Thalidomide reminds us that good intentions do not prevent medicine from being harmful; it helps to account for the complicated licensing procedures that are so often criticized for slowing innovation. Animal rights activists point out that animal testing has greatly increased since the tragedy; yet they argue that it is often futile as a tool for studying humans: for example, the original tests of thalidomide on pregnant animals had been negative. The thalidomide story also renews and explains the public’s continued mistrust of the medical establishment. The negative image guarantees a market for the dissenting literature and for products of the largely unregulated folk-medicine and health-food industry, an industry whose net worth is difficult to determine, but whose annual market in the United States alone is said to exceed $60 billion.

Rational Derivatives

Magic bullets were extracted from the living tissues of animals, plants, and moulds, and they could also be synthesized in the laboratory. They were designed to repair the biological causes of infections and deficiencies. But in the early twentieth century, many other diseases were defined in a chemical or molecular sense. Attempts to ‘design’ rational remedies are based on an understanding of the precise biochemical error producing the disease. For example, in Parkinson’s disease, chemicals that appear to be deficient in the brain become the medicines administered by doctors. Other examples abound: histamine antagonists to reduce gastric acid secretion; beta-blockers to prevent transmission of certain nervous impulses; calcium channel blockers for ischemic heart disease. In the majority of these cases, the ‘designer drug’ emerged out of trials as the most effective and least toxic of a series of related compounds created in a laboratory to solve a chemical problem.

The 1988 Nobel Prize was awarded to James Black, Gertrude B. Elion, and George H. Hitchings for the development of ‘rational methods for designing’ treatments; most of the drugs that they ‘discovered’ or developed are still widely used: cimetidine, propranolol, 6-mercaptopurine, 6-thioguanine, allopurinol, and trimethoprim. In each case, the drug was a chemical device invented to solve a chemical problem.

By the year 2000, the human genome project showed how many diseases could be defined by molecules; therefore, virtually any genetically defined problem could become a call for a rational derivative. Relying on the Nobel-winning technology of hybridoma research, ‘genetic engineering,’ so much discussed in the twentieth century, finally began to cash out. First used in 1977, the word ‘drugable’ reflects the pharmaceutical potential of molecular discovery (see figure 5.2). The best but not the only examples of drugable products are the ‘-mabs’ – monoclonal antibodies targeted directly against enzymes and tumour antigens: rituximab for lymphoma; trastuzumab for breast cancer; and, among related targeted agents, the small-molecule imatinib for chronic leukemia; dozens of others are in the pipeline. These wonder drugs offer hope to millions, but they are very expensive to make and to use. (On biotechnology see chapter 9.)

The Pharmaceutical Industry

Since the late 1800s, when specific chemical agents were isolated and characterized, the need for standardization and synthesis of natural substances favoured the development of a drug industry. For more than a century, drug companies have engaged in and supported research with funds and laboratories. Not only does the industry have power over the sales and distribution of remedies, it controls more than 70 per cent of the funds spent on drug research even when that research is done in universities. Privately funded research is often extremely productive. Like Domagk of sulpha fame, the 1988 Nobel laureates were employees of major pharmaceutical firms.

Medical and public reaction to drug companies is ambivalent. Their discoveries are welcome, and for expensive new treatments their financial input is essential. But their big profits and their research grants are sources of discomfort. Critics worry that drug-funded research is ethically compromised or that accepting sponsorship is a form of advertising; scholars show that it skews publication. They also point to the financial gain from selling drugs, claiming that the industry is not motivated to cure disease; chronic illness is good for business. These worries are heightened by cases like that of Toronto’s Dr Nancy Olivieri, whose 1996 expression of concern over the side effects of a trial drug resulted in withdrawal of industry support for the trial and serious personal and professional harassment from her academic colleagues.

Drug patents have a long history extending back to the early modern period when they reflected royal approbation. Since the late eighteenth century, patent protections meant that the contents of remedies could remain secret for a period of time, usually seventeen years. Historically speaking, the term is often used to refer to nostrums and ‘quack’ remedies, but all drugs were eligible for patents, and all newly developed medications still seek them. The complex history of patents is written in precedents from case law and legislation in various countries, where swings between private protection of investments and public distrust of monopolies result in modifications and changes. Furthermore, wide cultural variation exists in different developed nations about the appropriateness of taking pills: Japan and France lead the world in taking medicines.

In the 1970s various procedures were implemented to allow Canadian pharmacists to replace expensive brand-name drugs with the least expensive substitute, often a copy with the same composition manufactured by a ‘generic’ company that had not invested in developing the drug. These policies were unpopular with the research industry because they ignored its heavy investment in developing products. The situation created problems with international trading partners. For example, in 1987 and again in 1993, Canada’s patent laws were amended to satisfy the demands of the General Agreement on Tariffs and Trade (GATT). Similar changes took place in other countries. The new laws guaranteed patent owners up to twenty years of exclusive sales of their product (seventeen years in the United States). In return, the pharmaceutical industry was obliged to increase its spending on research and development (R&D). It has complied, and private funding of academic research has increased, but it continues to lobby against drug substitution policies as cost-control measures.

But because of, or in spite of, the changes, drug prices rose. In countries with universal health insurance, the higher costs take a larger slice out of the tax dollar, since drug spending for seniors and welfare recipients is a covered benefit. These governments were motivated to regulate costs. Therefore, around the mid-1990s, as laws changed to afford longer patent protection, national regulatory bodies for controlling the prices of patent drugs were established in Canada, France, Germany, Italy, Sweden, the United Kingdom, and elsewhere. Their methods vary. In Canada the agency can intervene to ensure that prices rise no higher than the consumer price index.

The United States does not regulate drug prices. Also around 1993, some countries, led by Britain, set up bodies to establish and monitor ‘codes of practice’ concerning ethics and safety in the promoting and selling of drugs. In developing countries, the protected drugs are so expensive that they are unavailable, and international efforts focus on new regulations to allow generic substitutes or charitable donations.

Without regulation, prices in the United States rose much higher than in other countries. Critics argued that the sick were being forced to pay for the heavier advertising in direct-to-consumer practices that are illegal elsewhere; the sick must also bear the higher financial burden imposed by the for-profit health insurance industry and a litigious culture that spawns expensive lawsuits. In 2003, a grassroots movement of Americans organized for action: comprising mostly seniors living in border states such as Maine and Michigan, they travelled to Canada in busloads, or they ordered medications by mail. Internet suppliers leapt to the fore and Canadian physicians were pressured by ‘friends of friends’ to prescribe for people whom they had never met. The legality of the matter and the quality of Canadian medications were called into question by the U.S. FDA, although most of the products sought were identical to those sold (and approved) in the United States. In 2007, some clarity was established when legislation allowed American pharmacies to import drugs; however, interpretations have been variable and the U.S. Senate rejected a bill in late 2009.

Sensitive to the criticism of making money during a time of fiscal restraint and feeling unfairly blamed for the high costs of innovations, pharmaceutical manufacturers began to defend themselves in the early 1990s through their professional associations. In aggressive campaigns for public information, they argued that research into newer and better drugs for the management of illness helps to control health-care costs by keeping people out of hospital; it also supports academic inquiry and provides jobs. The contributions to research meant that by 2003, according to JAMA (289, 454-65), at least a quarter of medical scientists in North America had financial affiliations with industry and two-thirds of universities held equity in companies – a growing problem of conflict of interest. In 2005 the approximately fifty member companies of the Canadian pharmaceutical organization donated $86 million to charity and invested almost C$1.2 billion in R&D. In 2006, the equivalent group in Britain gave £3.9 billion to R&D, while in the United States the figure was US$43 billion. With the 2003 amendments to the agreement governing intellectual property (TRIPS), the World Trade Organization has attempted to make low-cost remedies available to poor countries while continuing to provide patent protection in rich nations. Canada’s first shipment of generic anti-HIV drugs went to Rwanda in September 2008.

The Industry View and Some Questions

British doctors are still reluctant to prescribe new medicines – clinicians in other countries are far more likely to prescribe medicines that have come on to the market in the past five years.

– Association of the British Pharmaceutical Industry (ABPI), http://www.abpi.org.uk/ (accessed 3 November 2008)

According to the same website, eight of the ten top-selling drugs in Britain were launched within the preceding decade (average time since launch 7.9 years). Of the fifty top-selling drugs in Britain, only two had been on the market for twenty or more years (L-thyroxine and Zoladex).

Is the relative reluctance of British doctors to prescribe new drugs a bad thing in your opinion? In the opinion of the ABPI? Why?

Why have most best-selling drugs been on the market for less than twenty years?

What is the purpose of the ABPI?

The pharmaceutical industry also exercises considerable, though not exclusive control over drug information. Doctors are usually unable or ill-equipped to examine the research literature. As a result, they tend to learn about new drugs from roving representatives, advertisements in medical journals, and conferences, all vulnerable to industry influence. Continuing Medical Education initiatives of medical schools and professional bodies are working to improve the situation by keeping the onus for disseminating news of innovations and dangers in the hands of supposedly impartial practitioners. Rare scholars, such as Joel Lexchin and Jeremy Greene, try to sort out the relationships of doctors and industry now and in the past.

The Life Cycle of Innovations in Treatment

By 1954 Ernest Jawetz had shown that medical approval follows a pattern. At first, the use of a new remedy rises quickly in a period of optimism; then some untoward side effect is noted, and the approval drops rapidly to a low based on mistrust and fear; finally, use stabilizes at a moderate level – swings that have been called ‘from panacea to poison to pedestrian’ (see figure 5.3). The Jawetz model certainly fits the life cycle of chloramphenicol, which was developed in 1948 as an effective antibiotic. By 1967, it was found to cause aplastic anemia in one of every 30,000 recipients. Sales fell dramatically, and its manufacturer, Parke Davis, was forced to merge with Warner Lambert. Since then, chloramphenicol use has risen slowly to a stable but lower level.
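The shape Jawetz described – a rapid rise, a sharp fall once the side effect is recognized, and stabilization at a lower level – can be sketched as a schematic curve. The plot below is purely illustrative; the three logistic terms and every number in it are invented for the sketch and carry no clinical or historical meaning.

```python
import numpy as np
import matplotlib.pyplot as plt

# Schematic 'panacea to poison to pedestrian' curve in arbitrary units:
# rapid adoption, a sharp fall when the side effect is recognized,
# then stabilization at a modest plateau.
years = np.linspace(0, 30, 300)
optimism = 100 / (1 + np.exp(-(years - 4)))      # early enthusiasm
scare = -70 / (1 + np.exp(-2 * (years - 12)))    # fall after side effects noted
recovery = 20 / (1 + np.exp(-(years - 20)))      # slow return to limited use
use = optimism + scare + recovery

plt.plot(years, use)
plt.xlabel("Years since introduction")
plt.ylabel("Relative use (arbitrary units)")
plt.title("Schematic Jawetz curve")
plt.show()
```

Real adoption data would of course be far noisier; the point is only the three-phase shape of the curve.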

Jawetz’s curve has been applied to the natural history of other remedies, including thalidomide and digitalis; the latter suffered a long period of unpopularity. For digitalis, the margin between therapeutic and toxic is narrow; levels high enough to be of benefit are close to those causing side effects. Only when dosage could be stabilized was medical approval stabilized too (Estes 1979).

Legislation and careful drug testing are intended to level off peaks and troughs, but the Jawetz curve is unlikely ever to become a straight line. Increasingly careful drug testing may eliminate the precipitous drops due to unexpected side effects, but gradual decline in a drug’s use will always occur as one remedy is replaced by safer and more effective products, or as the disease in question becomes something else (see chapter 4). Dips in the curve are generated not only by the recognized side effects but also by what disease happens to be in fashion and who comprises the target population.

The most used or most sold drugs have changed markedly over the past two centuries (see table 5.1). But information for recent years is difficult to gather, and it is even more difficult to draw meaningful comparisons across time and space. Some information is reported in terms of retail sales, some in terms of frequency of use (compare the 1997 and 2004 columns in table 5.1). A top-selling drug is one that generates the most money – it is not necessarily the one prescribed most often. Once a patent expires, generic companies can begin to sell versions of the same drug. As a result, research-based pharmaceutical companies are constantly looking for new ‘follow-on’ drugs to address the same problems. Some critics argue that the newer products differ only slightly from the substances that they replace, and that they might not be better.

At the time of writing, the list of best-selling drugs in Canada, Britain, and the United States is dominated by agents for treating the risk factors (rather than the symptoms) of heart disease – hypertension and high cholesterol – and for asthma, heartburn, mental disorders, and arthritis: all chronic problems, many the product of diet and lifestyle, conditions that can be treated unto death. The list is yet another sign of an aging society in which neither patients nor practitioners are particularly enamoured of the concept of disease prevention. The target diseases depend on place: for example, comparatively more asthma products are top-sellers in Britain; more heartburn remedies are top-sellers in the United States. Is this because of differences in disease incidence, or is it because of geographic differences in rates of diagnosis and in willingness to take medication?

Over-the-counter remedies are not always included in these reports, though user surveys suggest that they are probably the most frequently used medications of all. Similarly, because they generate less income, generic versions of older, off-patent remedies occupy lower places on the best-seller lists; but those lists do not reflect frequency of use, nor do they say anything about effectiveness.

The pharmaceutical industry also participates in the creation of disease to create larger markets for its products. The launch of Viagra was accompanied by an advertising campaign that was packaged as ‘raising awareness.’ But it also raised the status of the condition, changing its name from the weakling ‘impotence’ to the manly ‘ED’ (erectile dysfunction), and implying that one did not have to have a real disease to benefit from the pill: ‘E.D. is more common than you think.’ Similarly, the 1986 launch of Prozac, patented in 1977, was accompanied by a vigorous campaign, again packaged as ‘raising awareness,’ that, at its peak, resulted in the drug being taken for shyness and nervousness and netted the company $3 billion in the year 2000; being alive, it seemed, was a Prozac-deficient state. At its official website, Eli Lilly now boasts that Prozac is ‘the most widely prescribed antidepressant medication in history’ and has been prescribed to ‘over 54 million people worldwide.’ However, this statement belies the serious problems that the company suffered in early 2002 when it finally lost its fight to extend the patent. (For more on SSRI drugs like Prozac, see chapter 12.)

The clock starts ticking on a patent as soon as it is filed, and drug development can take another ten years before the product can be sold. The generic version of fluoxetine sold for approximately 10 per cent of the Prozac price. Lilly lost 90 per cent of its Prozac market in a year; its net worth fell by $35 billion in a single day. Although fluoxetine is not among the best-selling drugs when measured by sales, its generic versions are used by millions of people every day. This example shows how a successful company must reckon with the Jawetz curve and the limited duration of patents: it must plan to replace its new drugs even as they are launched, meaning that the search for something better is also driven by a parallel search for something different – a search that ties up economic and intellectual resources in finding another version of the same thing at the expense of finding solutions to rare diseases or putting useful remedies into the hands of poor people and poor countries.
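A back-of-the-envelope sketch of this patent arithmetic may help. The twenty-year term is the standard patent rule, and the ten years of development, the 10 per cent generic price, and the 90 per cent loss of market come from the text; the starting revenue figure below is an invented, purely illustrative assumption.

```python
# Rough patent-lifecycle arithmetic for a hypothetical brand-name drug.
# The 20-year term is the standard patent rule; the development time,
# generic price, and switch rate follow the text; the revenue is invented.

PATENT_TERM_YEARS = 20          # protection runs from the filing date
DEVELOPMENT_YEARS = 10          # years of development before sales can begin

exclusive_years = PATENT_TERM_YEARS - DEVELOPMENT_YEARS
print(f"Years of market exclusivity: {exclusive_years}")            # -> 10

brand_annual_revenue = 2.5e9    # hypothetical peak revenue, in dollars
generic_price_fraction = 0.10   # generics sell at roughly 10% of the brand price
share_switching = 0.90          # share of prescriptions that move to generics

# After expiry the brand keeps only the non-switching share at the old price;
# patients who switch now pay the much lower generic price.
brand_revenue_after = brand_annual_revenue * (1 - share_switching)
total_spending_after = brand_revenue_after + (
    brand_annual_revenue * share_switching * generic_price_fraction
)
print(f"Brand revenue after expiry: ${brand_revenue_after:,.0f}")   # -> $250,000,000
print(f"Total spending on the drug: ${total_spending_after:,.0f}")  # -> $475,000,000
```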

More spectacular shifts in drug popularity were to come. In July 2002, JAMA published a clinical trial showing that hormone replacement therapy was associated with a higher risk of heart disease and cancer. Overnight, the widespread practice of giving prophylactic estrogen to all menopausal women virtually disappeared, with the predicted economic consequences. In September 2004, another clinical trial implicated a ‘coxib’ drug, designed to treat arthritis, in causing heart disease. Merck, the company making a related drug (but not the one studied), withdrew its product (Vioxx) from the market; overnight, its stock collapsed; personal injury lawsuits mushroomed and are still being fought. Many earlier trials had attested to the benefits of these remedies.

Swings in drug fortunes can go the other way. Trastuzumab (Herceptin) is one of the elegant drugs designed through molecular medicine to treat a specific antigenic type of aggressive breast cancer (see chapter 8). When a clinical trial in 2005 showed that it was also beneficial at an early stage of the disease, women took to the streets, generated petitions, lobbied their governments, and demanded that the drug be made available to all who qualified, despite a cost of roughly $50,000 per patient. At the time of writing, whether adjuvant Herceptin is ‘the right thing to do’ depends on nationality, private insurance, or personal wealth.

As pharmacology becomes more and more precise, the wild swings in fortune and practice seem no less extreme. Achieving the promise of clinical trials must lie ahead, because it cannot be found in the past.

Discussion Questions

1. Duffin talks about the tendency for historians of medicine to use “presentism” in their analyses: ridiculing the traditions and therapies of the past and assuming that the knowledge we have today is superior to what we had before. How do you think presentism comes into our understanding of the history of medicine today? How do you think future generations will look upon the medical therapies of the current day?

2. What single development in the history of therapeutics do you find most intriguing from this article? Has reading this article led you to think differently in any way about the role of drugs in modern medicine?
