Medicine History

Disease, Early Concepts

The good clinician thus knew his patients, but he also knew his diseases. Eighteenth-century practitioners trod in the footsteps of Thomas Sydenham and ultimately of Hippocrates, amassing comprehensive empirical case records, especially of epidemic disorders. Sydenham was much admired in England.
`The English Hippocrates' had served as captain of horse for the Parliamentarian army in the Civil War. In 1647, he went to Oxford and from 1655 practised in London. A friend of Robert Boyle and John Locke, he stressed observation rather than theory in clinical medicine, and instructed physicians to distinguish specific diseases and find specific remedies.
He was a keen student of epidemic diseases, which he believed were caused by atmospheric properties (what he called the `epidemic constitution') that determined which kind of acute disease would be prevalent at any season. Following Sydenham's teachings, a Plymouth doctor, John Huxham, published extensive findings about disease profiles in his On Fevers (1750); and a Chester practitioner, John Haygarth, undertook analysis of smallpox and typhus epidemics. John Fothergill, a Yorkshireman and a Quaker who built up a lucrative London practice, was another avid follower of Sydenham.
In Observations of the Weather and Diseases of London (1751-4), Fothergill gave a valuable description of
diphtheria (`epidemic' sore throat), which was then growing more common especially among the urban poor. His friend and fellow Quaker, John Coakley Lettsom, was the driving-force behind the clinical investigations pioneered by the Medical Society of London, founded in 1778.
Such medical gatherings, developing also in the provinces, collected clinical data and exchanged news. The birth of medical journalism also helped pool experience and spread information.
Systematic epidemiological and pathological research programmes did not develop until the nineteenth century; yet many valuable observations on diseases were made before 1800. In 1776, Matthew Dobson demonstrated that the sweetness of the urine in diabetes was due to sugar; in 1786, Lettsom published a fine account of alcoholism; Thomas Beddoes and others conducted investigations into tuberculosis, which was already becoming the great `white plague' of urban Europe. But no decisive breakthroughs followed in disease theory. Questions as to true causation (vera causa) remained highly controversial. Many kinds of sickness were still attributed to personal factors - poor stock or physical endowment, neglect of hygiene, overindulgence, and bad lifestyle. This `constitutional' or physiological concept of disease, buttressed by traditional humoralism, made excellent sense of the uneven and unpredictable scatter of sickness: with infections and fevers, some individuals were afflicted, some were not, even within a single household.
It also drew attention to personal moral responsibility and pointed to strategies of disease containment through
self-help. This personalization of illness had attractions and pitfalls that are still debated today.
Theories that disease spread essentially by contagion were also in circulation. These had much common experience in their favour. Certain disorders, such as syphilis, were manifestly transmitted person-to-person.
Smallpox inoculation, introduced in the eighteenth century, offered proof of
contagiousness. But contagion hypotheses had their difficulties as well: if diseases were contagious, why didn't everyone catch them?
Such misgivings explain the popularity of long-entrenched miasmatic thinking - the conviction that sickness
typically spread not by personal contact but through emanations given off by the environment. After all, everyone knew that some locations were healthier, or more dangerous, than others.
With intermitting fevers like `ague' (malaria), it was common knowledge that those living close to marshes and
creeks were especially susceptible. Low and spotted fevers (typhus) were recognized as infecting populations in the overcrowded slum quarters of great towns, just as they also struck occupants of gaols, barracks, ships, and workhouses.
It was thus plausible to suggest that disease lay in poisonous atmospheric exhalations, given off by putrefying
carcases, food and faeces, waterlogged soil, rotting vegetable remains, and other filth in the surroundings. Bad
environments, the argument ran, generated bad air (signalled by stenches), which, in turn, triggered disease.
Late in the century, reformers directed attention to the `septic' diseases - gangrene, septicaemia, diphtheria,
erysipelas, and puerperal fever - especially rampant in slum quarters and in ramshackle gaols and hospitals. The Hôtel Dieu in Paris had an atrocious reputation as a hotbed of fevers.
Disease theory greatly benefited from the rise of pathological anatomy. The trail was blazed by the illustrious
Italian, Giovanni Battista Morgagni, professor of anatomy at Padua, who built on earlier postmortem studies by
Johann Wepfer and Théophile Bonet. In 1761, when close to the age of eighty, Morgagni published his great work De Sedibus et Causis Morborum, which surveyed the findings of some 700 autopsies he had carried out. It quickly became famous, being translated into English in 1769 and German in 1774. It was Morgagni's aim to show that diseases were located in specific organs, that disease symptoms tallied with anatomical lesions, and that pathological organ changes were responsible for disease manifestations. He gave lucid accounts of many disease conditions, being the first to delineate syphilitic tumours of the brain and tuberculosis of the kidney. He grasped that where only one side of the body is stricken with paralysis, the lesion lies on the opposite side of the brain. His explorations of the female genitals, of the glands of the trachea, and of the male urethra also broke new ground.
Others continued his work. In 1793, Matthew Baillie, a Scot and a nephew of William Hunter practising in London, published his Morbid Anatomy. Illustrated with superb copperplates by William Clift (they depicted, among other things, the emphysema of Samuel Johnson's lungs), Baillie's work was more of a textbook than Morgagni's, describing in succession the morbid appearances of each organ.
He was the first to give a clear idea of cirrhosis of the liver, and in his second edition he developed the idea of
`rheumatism of the heart' (rheumatic fever).
Pathology was to yield an abundant harvest in early nineteenth-century medicine, thanks to the publication in 1800 of the Traité des Membranes by François Xavier Bichat, who focused particularly on the histological changes produced by disease.
Morgagni's pathology had concentrated on organs; Bichat shifted the focus. The more one observes diseases and opens cadavers, he declared, the more one will be convinced of the necessity of considering local diseases not from the aspect of the complex organs but from that of the individual tissues.
Born in Thoirette in the French Jura, Bichat studied at Lyon and Paris, where he settled in 1793 at the height of the Terror. From 1797 he taught medicine, working at the Hôtel Dieu. His greatest contribution was his perception that the diverse organs of the body contain particular tissues or what he called `membranes'; he described twenty-one, including connective, muscle, and nerve tissue.
Performing his researches with great fervour - he undertook more than 600 post mortems - Bichat formed a bridge between the morbid anatomy of Morgagni and the later cell pathology of Rudolf Virchow.

Medicine in the 19th Century

The seventeenth century had launched the New Science; the Enlightenment propagandized on its behalf. But it was the nineteenth century that was the true age of science, with the state and universities promoting and funding it systematically. For the first time, it became essential for any ambitious doctor to acquire a scientific training.
Shortly after 1800, medical science was revolutionized by a clutch of French professors, whose work was shaped by the opportunities created by the French Revolution for physicians to use big public hospitals for research. Among physicians, they acquired a heroic status, not unlike Napoleon himself.
Perhaps the most distinguished was René-Théophile-Hyacinthe Laënnec, a pupil of François Bichat. In 1814, he became physician to the Salpêtrière Hospital and two years later chief physician to the Hôpital Necker. In 1816, Laënnec invented the stethoscope. Here is how he described his discovery: In 1816 I was consulted by a young woman presenting general symptoms of disease of the heart. Owing to her
stoutness little information could be gathered by application of the hand and percussion. The patient's age and sex did not permit me to resort to the kind of examination I have just described (direct application of the ear to the chest). I recalled a well-known acoustic phenomenon: namely, if you place your ear against one end of a wooden beam the scratch of a pin at the other extremity is distinctly audible. It occurred to me that this physical property might serve a useful purpose in the case with which I was then dealing.
Taking a sheet of paper I rolled it into a very tight roll, one end of which I placed on the precordial region, whilst I put my ear to the other. I was both surprised and gratified at being able to hear the beating of the heart with much greater clearness and distinctness than I had ever before by direct application of my ear.
I saw at once that this means might become a useful method for studying, not only the beating of the heart, but likewise all movements capable of producing sound in the thoracic cavity, and that consequently it might serve for the investigation of respiration, the voice, rales and possibly even the movements of liquid effused into the pleural cavity or pericardium.
Through experiment, his instrument evolved into a simple wooden cylinder about 23 centimetres (9 inches) long that could be unscrewed for carrying in the pocket. It was monaural (only later, in 1852, were two earpieces added - by the American George P. Cammann - for binaural sound). The stethoscope was the most important diagnostic innovation before the discovery of X-rays in the 1890s.
On the basis of his knowledge of the different normal and abnormal breath sounds, Laënnec diagnosed a
multiplicity of pulmonary ailments: bronchitis, pneumonia, and, above all, pulmonary tuberculosis (phthisis or
consumption). His outstanding publication, Traité de l'Auscultation médiate (1819), included clinical and
pathological descriptions of many chest diseases. Ironically, Laënnec himself died of tuberculosis.
Laënnec's investigations paralleled those of his colleague, Gaspard Laurent Bayle, who in 1810 published a classic monograph on phthisis, on the basis of more than 900 dissections. Bayle's outlook was different from Laënnec's. He was more interested in taxonomy, and distinguished six distinct types of pulmonary phthisis.
Laënnec had no interest in classification; rather, his ability to hear and interpret breath sounds made him primarily interested in the course of the diseases he examined. Like other contemporary French hospital physicians, he was accused of showing greater concern for diagnosis than for therapy - but this stemmed not from indifference to the sick but from a deep awareness of therapeutic limitations.
Translations of Laënnec's book spread the technique of stethoscopy, as did the foreign students drawn to Paris. A man with a stethoscope draped round his neck became the prime nineteenth-century image of medicine: the instrument had the word science written on it. Laënnec remains the one famous name amongst the generation of post-1800 French physicians who insisted that medicine must become a science and who believed that scientific diagnosis formed its pith and marrow. At the time, however, the most illustrious was Pierre Louis, whose writings set out the key agenda of the new `hospital medicine'.
Graduating in Paris in 1813, Louis spent seven years practising in Russia. On returning home, he plunged into the wards of the Pitié hospital and published the results of his experiences in a massive book on tuberculosis (1825), followed four years later by another on fever.
Louis' Essay on Clinical Instruction (1834) set the standards for French hospital medicine. He highlighted not only bedside diagnosis but also systematic investigation into the patient's circumstances, history, and general health. He deemed the value of the patient's symptoms (that is, what the patient felt and reported) secondary, stressing the far more significant signs (that is, what the doctor's examination ascertained).
On the basis of such signs, the lesions of the pertinent organs could be determined, and they were the most definite guides to identifying diseases, devising therapies and making prognoses. For Louis, clinical medicine was an observational rather than an experimental science. It was learned at the bedside and in the morgue by recording and interpreting facts.
Medical training lay in instructing students in the techniques of interpreting the sights, sounds, feel, and smell of disease: it was an education of the senses. Clinical judgement lay in astute explication of what the senses perceived. Louis was, furthermore, a passionate advocate of numerical methods - the culmination of an outlook that had begun in the Enlightenment. Louis' mathematics were little more than simple arithmetic - quantitative categorizations of symptoms, lesions, and diseases, and (most significantly) application of numerical methods to test his therapies. To some degree, Louis sought to use medical arithmetic to discredit existing therapeutic practices: he was thus a pioneer of clinical trials. Only through the collection of myriad instances, he stressed, could doctors hope to formulate general laws.
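The flavour of this numerical method can be conveyed with a short, deliberately artificial sketch. The Python fragment below uses invented counts rather than Louis' own figures; it simply shows how tallying outcomes under two regimens, rather than trusting impressions of individual cases, lets the arithmetic decide between therapies.

```python
# A minimal sketch of the spirit of Louis' numerical method.
# The counts are hypothetical, not Louis' data; the point is only that
# comparing tallied outcomes is what makes a therapy testable.

def mortality(deaths, treated):
    """Deaths as a proportion of all patients placed on a regimen."""
    return deaths / treated

if __name__ == "__main__":
    # Hypothetical ledgers for two regimens given to comparable patients.
    regimens = {
        "regimen A (e.g. early, vigorous bloodletting)": {"treated": 50, "deaths": 22},
        "regimen B (e.g. a more expectant course)": {"treated": 50, "deaths": 13},
    }
    for name, ledger in regimens.items():
        rate = mortality(ledger["deaths"], ledger["treated"])
        print(f"{name}: {rate:.0%} mortality")
```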
Overall, the leading lights among French hospital doctors were more confident about diagnosis than cure, although Laënnec highlighted the Hippocratic concept of the healing power of nature - the power of the body to restore itself to health. But in the French school, therapeutics remained subordinate to pathological anatomy and diagnosis.
The meticulousness with which Laënnec, Louis, Bayle, and others delineated disease reinforced the nosological concept that diseases were discrete entities, real things. The move from reliance upon symptoms (which were variable and subjective) to constant and objective lesions (the sign) supported their idea that diseased states were fundamentally different from normal ones.
The `Paris school' was not a single cohesive philosophy of medical investigation. Nevertheless, there was something distinguished about Paris medicine; and during the first half of the nineteenth century students from Europe and North America flocked to France.

Young men who studied in Paris returned home to fly the flag for French medicine. Disciples in London, Geneva, Vienna, Philadelphia, Dublin, and Edinburgh followed the French in emphasizing physical diagnosis and pathological correlation.
They often also took back with them knowledge and skills in basic sciences such as chemistry and microscopy. Several leading English stethoscopists, including Thomas Hodgkin (of Hodgkin's disease), learned the technique directly from Laënnec himself.
Imitating the French example, medical education everywhere grew more systematic, more scientific. Stimulated by teachers who had studied in Paris, medical teaching in London expanded: by 1841, St George's Hospital had 200 pupils, St Bartholomew's 300.
There were hundreds of students in other London hospital schools as well, and from the 1830s London also boasted a teaching university, with two colleges, University and King's, each with medical faculties and purpose-built hospitals.
London became a major centre of scientific medicine. Amongst the most eminent investigators was Thomas
Addison, who became the leading medical teacher and diagnostician at Guy's Hospital where he collaborated with Richard Bright and identified Addison's disease (insufficiency of the suprarenal capsules) and Addison's anaemia (pernicious anaemia).
Bright for his part was a member of the staff at Guy's Hospital from 1820. His Reports of Medical Cases (1827-31) contain his description of kidney disease (Bright's disease), with its associated dropsy and protein in the urine.
Vienna also grew in eminence. The University of Vienna had well-established traditions: the old medical school
had bedside teaching on the model espoused by Herman Boerhaave in the early eighteenth century, but decay had set in towards 1800.
However, new teaching was introduced by the Paris-inspired Carl von Rokitansky, who made pathological anatomy compulsory. The age's most obsessive dissector (supposedly performing some 60,000 autopsies in all), Rokitansky had a superb mastery of anatomy and pathological science, and left notable studies of congenital malformations and reports of numerous conditions, including pneumonia, peptic ulcer, and valvular heart disease.
In the USA, by contrast, high-quality medical schools and clinical investigations developed more slowly. In its laissez-faire, business-dominated atmosphere, many schools were blatantly commercial, inadequately staffed, and offered cut-price degrees.

Medical Breakthroughs, 20th Century

Building on the developments of the nineteenth century, the past hundred years have brought unparalleled
developments in biology, chemistry, and physiology and the opening up of new specialities within medical science. It would be quite impossible even to list here all the main twentieth-century breakthroughs in medical science, but a few fields and salient advances may be outlined.

Immunology

The microbiological research promoted by Louis Pasteur and Robert Koch led to the creation around 1900 of
immunology. The word `immunity' - exemption from a particular disease - was popularized as researchers grew more familiar with the enigmatic relations of infection and resistance.
Fascinated by the nutritional requirements of microorganisms, Pasteur had suggested a nutritional dimension to the resistance of a host and the attenuation of a parasite: the microorganism lost its power to infect because it could no longer flourish and reproduce.
Pasteur was more concerned with vaccine production than with the theoretical reasons why vaccines protected (or immunized). In 1884, however, a Russian zoologist, Elie Metchnikoff, observed in the water flea (Daphnia) a phenomenon he termed phagocytosis (cell-eating), subsequently developing his observations into a comprehensive cellular view of resistance.
Metchnikoff saw amoeba-like cells in these lower organisms apparently ingesting foreign substances like vegetable matter. He deduced that these amoeba-like cells in Daphnia might be comparable to the pus cells visible in higher creatures. Microscopic examination of animals infected with various pathogens, including the anthrax bacillus, showed white blood cells assaulting and appearing to digest the disease germs.
Metchnikoff likened white blood cells to an army that was `fighting infection'. Extrapolating from these
hypotheses, Metchnikoff subsequently turned into a scientific guru, expounding striking beliefs on diet, constipation, ageing, and humanity's biological future. He became noted for his advocacy of eating yoghurt, arguing that the bacilli used in producing it inhibited the bacteria in the gut that caused harmful putrefactive by-products. Metchnikoff's cellular theory of immunity gained prominence within the French scientific community; in an era of tense scientific rivalry, German researchers proposed chemical theories. Robert Koch's scepticism about the immunological significance of phagocytosis carried great weight in Germany, and two of his younger colleagues, Emil Adolf von Behring and Paul Ehrlich, argued that immunological warfare was waged less by the white blood cells than in the blood serum.
Their chemical hypothesis had important factors in its favour. It was known that the cell-free serum of immunized creatures could destroy lethal bacteria, and that protection could be transmitted via serum from animal to animal: this implied there was more to immunity than the operation of white blood cells alone.
Moreover, two of Pasteur's own pupils, Emile Roux and Alexandre Yersin, showed in 1888 that cultures of
diphtheria bacilli were toxic even when the cells themselves had been filtered out. This seemed to suggest that it was not necessarily the bacterial cell itself that bred disease but rather some chemical toxin the cell manufactured. On the strength of these observations serum therapy was developed. Working with a Japanese associate, Shibasaburo Kitasato, Behring claimed in 1890 that the whole blood or serum of an animal, rendered immune to tetanus or diphtheria by injecting the relevant toxin, could treat another creature exposed to an otherwise fatal dose of the bacilli.
Serum therapy had some genuine triumphs, but it never proved a wonder cure - not least because epidemic diseases such as diphtheria were notoriously variable in their virulence. Nevertheless, serum therapies grew in popularity after 1890, and antitoxins were prepared for diseases other than tetanus and diphtheria, including pneumonia, plague, and cholera.
Many, however, remained convinced of the superior protective possibilities of vaccines. Vaccines developed from the treated organisms of plague and cholera were introduced around 1900 by the Russian-born bacteriologist, Waldemar Haffkine.
From the 1880s Ehrlich had been exploring the physiological and pharmacological properties of various dyes,
demonstrating, for example, the affinity of the newly discovered malaria parasite for methylene blue.
Applying the stereochemical ideas of Emil Fischer and other organic chemists, Ehrlich devised a `side-chain'
notion to explain how antigens and antibodies interacted. His formulation was essentially a chemical interpretation of immunity, part of a molecular vision of reality that included the possibility of pharmacological `magic bullets', the ultimate aim of chemotherapy.
Ideas of immunity linked in various ways with the study of the relations between nutrition and health. Nutrition
studies had various traditions on which to draw. Back in the eighteenth century, the problem of scurvy aboard ship had led to conjectures connecting diet and disease and to the first clinical trials by the Scottish doctor James Lind.

Digestion and Nourishment

The researches of Justus von Liebig in Germany helped put the organic chemistry of digestion and nourishment on a sound footing. Liebig's pupils explored the creation of energy out of food and launched the idea of dietary balance. Notable work was done by German physiologist Wilhelm Kühne, a professor at Heidelberg from 1871, who introduced the term `enzyme' to describe organic substances that activate chemical changes. There was a long tradition of explaining sickness in terms of absolute lack of food.
Around 1900, however, a new concept was emerging: the idea of deficiency disease - the notion that a healthy diet required certain very specific chemical components. Crucial were the investigations of Christiaan Eijkman into beriberi in the Dutch East Indies. The first to produce a dietary deficiency disease experimentally (in chickens and pigeons), Eijkman proposed the concept of `essential food factors', or roughly what would later be called vitamins.
He demonstrated that the substance (now known as vitamin B1) that gives protection against beriberi was
contained in the husks of grains of rice - precisely the element removed when rice is polished. Through clinical
studies on prisoners in Java, he determined that unpolished rice would cure the disorder.
Eijkman's researches were paralleled by the Cambridge biochemist Frederick Gowland Hopkins, who similarly
discovered that very small amounts of certain substances found in food (his name for them was `accessory food factors') were requisite for the body to utilize protein and energy for growth.
An American physiologist, Elmer Verner McCollum, showed that certain fats contained an essential ingredient for normal growth: this provided the basic research for the understanding of what became known as vitamins A and D.
In 1928, Albert von Szent-Györgyi isolated vitamin C from the adrenal glands, and it became recognized as the element in lemon juice that acted as an antiscorbutic. The idea of deficiency disease proved highly fruitful. In 1914, Joseph Goldberger of the US Public Health Service concluded that pellagra, with its classic pot-bellied symptoms, was not an infectious disorder but was rather caused by poor nutrition. Goldberger was able to relieve pellagra sufferers in the southern States by feeding them protein-rich foods.
By the 1930s the pellagra-preventing factor was proved to be nicotinic acid (niacin), part of the B vitamin complex. Study of nutrition could broadly be seen as part of the programme of research into the `internal environment' launched by Claude Bernard. So, too, was another new specialty - endocrinology, the investigation of internal secretions. The concept was that of the hormone, which arose out of the energetic research programme in proteins and enzymes pursued at University College London by William Bayliss and Ernest Starling. In 1902, an intestinal substance called secretin, which activates the pancreas to liberate digestive fluids, was the first specifically to be named a hormone (from the Greek for `I excite or arouse'). It opened up a new field: the study of the chemical messengers travelling in the bloodstream from particular organs (ductless or endocrine glands) to other parts of the body.
The relations between the thyroid gland, goitre (an enlargement of the gland), and cretinism (defective functioning of the gland) were early established, and surgical procedures followed. The pancreas, ovaries, testes, and the adrenals were all recognized to be endocrine glands, like the thyroid.
Researchers sought to ascertain precisely what metabolic processes they controlled, and which diseases followed from their imbalances. Once it was discovered that the pancreas releases into the circulation a material contributing to the control of the blood sugar, it became clear that diabetes was a hormone-deficiency disease. With a view to treating diabetes, a race followed to extract the active substance (called `insuline' by Edward Sharpey-Schafer) produced by the `islets of Langerhans' in the pancreas.

Pituitary Gland 

Attention was also given to the pituitary gland, which was recognized to secrete growth hormone. In his The
Pituitary Body and its Disorders (1912), an American surgeon, Harvey Cushing, showed that its abnormal
functioning produced obesity (he described the sufferer as a tomato head on a potato body with four matches as limbs). As with thyroid disease, surgery was used to remove the adrenal gland or the pituitary tumour.
Further endocrinological researches led to the isolation of the female sex hormone, oestrone. By the 1930s the family of the oestrogens had been elucidated, as had the male sex hormone, testosterone. Twenty years later, on the basis of these discoveries, an oral contraceptive for women was developed.
Some of the most fundamental advances in the biomedical sciences have arisen with the progress of neurology. Their potential significance for medical practice is still imperfectly understood. From René Descartes onwards, the importance of the nervous system for the regulation of behaviour was acutely recognized, but speculation long outran experimentation.

Neurophysiology

Experimental neurophysiology made great strides during the nineteenth century. The series of major studies
stretching from Charles Bell (after whom Bell's palsy is named) to Charles Sherrington cannot be described here.
Sherrington's book, The Integrative Action of the Nervous System (1906), which is often called the `Bible of neurology', clearly established that the operation of nerve cells involved two neurones with a junction between one cell and the next - the synapse - across which the impulse could pass with differing degrees of ease.
What remained a subject of passionate debate was how the nerve currents, identified in the work of David Ferrier, Sherrington, and others, were transmitted from nerve to nerve, across synapses, to their targets. Evidence began to accumulate that chemical as well as electrical processes were at work.
The English physiologist and pharmacologist, Henry Hallett Dale, found a substance in 1914 in ergot (a fungus), which he called acetylcholine. This affected muscle response at certain nerve junctions.
In 1929, Dale isolated acetylcholine from the spleens of freshly killed horses, and showed it was secreted at nerve endings after electric stimulation of motor nerve fibres. Acetylcholine was thus the chemical agent through which the nerves worked on the muscles. This was the first neurotransmitter to be identified.
Meanwhile in 1921, the German physiologist, Otto Loewi, was investigating the chemical basis of the muscular actions of the heart. He was to record that In the night of Easter Saturday, 1921, I awoke, and jotted down a few notes on a tiny slip of paper. Then I fell asleep again. It occurred to me at six o'clock in the morning, that during the night I had written down something most important, but I was unable to decipher the scrawl.
That Sunday was the most desperate day in my whole scientific life. During the night, however, I woke again, and I remembered what it was. This time I did not take any risk; I got up immediately, went to the laboratory, made the experiment on the frog's heart ... and at five o'clock the chemical transmission of nerve impulses was conclusively proved.
Loewi's experiments showed that the stimulated nerves of the heart secreted a substance directly responsible for the resulting muscular actions: this `Vagusstoff' was later identified with acetylcholine, its action being terminated by the enzyme cholinesterase, a chemical inhibitor that interrupted the acetylcholine stimulus and so shaped the pattern of the nerve impulse.
Further work brought to light numerous other chemical agents that were found at work in the nervous system. At Harvard University, Walter Cannon identified the stimulative role of adrenaline, and this led to a classification of nerves according to their transmitter substances.
More research provided evidence of monoamines in the central nervous system, including noradrenaline, dopamine, and serotonin.
The transmitter-inhibitor pattern thus became known, stimulating fresh work on controlling or correcting basic
problems in brain function. For instance, the action of tetanus and botulism on the nervous system could for the first time be explained.
Parkinson's disease, a degenerative nervous condition identified in the nineteenth century, was considered largely untreatable until it was associated with chemical transmission in the nervous system.
In the late 1960s, however, it was discovered that the condition could be treated with L-dopa, a drug that enhances dopamine - itself the precursor of noradrenaline - in the central nervous system, dopamine being presumed to be the deficient transmitter substance.
Every further development in the understanding of neurotransmission and the chemicals involved therein opens
new prospects for the control and cure of neurological disorders.

Genetics 

One other dimension of modern science and its medical applications that must be mentioned here is genetics. The establishment of Darwin's theory of evolution by natural selection inevitably gave prominence to the component of inheritance in human development.
But Darwin himself lacked a satisfactory theory of inheritance, and specious concepts of degenerationism and
eugenics achieved great and sometimes lethal consequence before modern genetics became soundly established from the 1930s.
Valuable advances were achieved, early in the twentieth century, in demonstrating the hereditary component
of metabolic disorders. Archibald Edward Garrod, a physician at St Bartholomew's Hospital in London, investigated what he first called Inborn Errors of Metabolism (1909), using as a model for this concept alkaptonuria, an inherited metabolic disorder in which an acid is excreted in quantity in the urine.
The real breakthrough came when the infant subdiscipline of molecular biology paved the way for the elucidation of the double-helical structure of DNA in 1953 by Francis Crick and James Watson, working at the Medical Research Council's laboratory in Cambridge.
The cracking of the genetic code has in turn led to the Human Genome Project, set up in 1986 with the goal of
mapping all human genetic material. Opinion remains divided as to whether this project will reveal that more
diseases than conventionally thought have a genetic basis.
Many believe that the next enormous medical breakthroughs will lie in the field of genetic engineering. In the
meantime, a combination of clinical studies and laboratory research has firmly established the genetic component in disorders such as cystic fibrosis and Huntington's chorea. The latter was shown to run in families as long ago as 1872 by the American physician, George Huntington.

Clinical Science in the 20th Century

It is clear that the scientific pursuit of medical knowledge has undergone structural shifts during the past hundred years. Early nineteenth-century French medical science developed in the hospital, and German medical science pioneered the laboratory. New sites have emerged in more recent times to create and sustain clinical science.
In some cases, this has meant special units set up by philanthropic trusts or by government. A key initiative
in encouraging clinical research in the USA was the foundation in 1904 of the Rockefeller Institute for Medical
Research in New York.
Although the institute was at first entirely devoted to basic scientific studies, from the start the intention was to set up a small hospital alongside it, to be devoted to research in the clinic. The hospital was opened in 1910.
A vital influence on clinical research in the USA was Abraham Flexner's report on medical education, published in 1910. Flexner, educationalist brother of Simon Flexner, the first director of the Rockefeller Institute, drew attention to the parlous situation of many medical schools.
An enthusiastic supporter of the German model then developing at Johns Hopkins in Baltimore, Flexner considered that there were only five US institutions that could be regarded as true centres of medical research - Harvard and Johns Hopkins, and the universities of Pennsylvania, Chicago, and Michigan. Soon after the publication of the Flexner report, the Rockefeller Foundation made funds available to Johns Hopkins for the establishment of full-time chairs in clinical subjects.
This innovation spread throughout the USA, so that by the mid-1920s there were twenty institutions that could
match the best in Europe. The system received a further boost with the foundation in 1948 of the National Institutes of Health. Research grants were awarded to clinical departments, which grew enormously.
Since the First World War, American clinical research has been notable both for quantity and for quality. The
award of Nobel Prizes may be taken as some index. No British clinical research worker has won a Nobel Prize since Sir Ronald Ross, who won it in 1902 for the discovery of the role of the mosquito in the transmission of malaria.
Nevertheless, numerous British individuals have made internationally recognized contributions to clinical research in the twentieth century, among them James Mackenzie, who pioneered the use of the polygraph for recording the pulse and its relationship to cardiovascular disease. His work was particularly important in distinguishing atrial fibrillation and in treating this common condition with digitalis. His Diseases of the Heart (1908) summarized his vast experience, although he never properly appreciated the
possibilities of the electrocardiograph, then being taken up by the more technologically minded Thomas Lewis.
Thomas Lewis has been dubbed the architect of British clinical research. Born in Cardiff, Lewis went in 1902 to
University College Hospital (London), where he remained as student, teacher, and consultant until his death. He was the first completely to master the use of the electrocardiogram.
Through animal experiments he was able to correlate the various electrical waves recorded by an
electrocardiograph with the sequence of events during a contraction of the heart, which enabled him to use the
instrument as a diagnostic tool when the heart had disturbances of its rhythm, damage to its valves, or changes due to high blood pressure, arteriosclerosis, and other conditions.
In later life, Lewis turned his attention to the physiology of cutaneous blood vessels and the mechanisms of pain, conducting experiments on himself in an attempt to work out the distribution of pain fibres in the nervous system and to understand patterns of referred pain.
Lewis fought for full-time clinical research posts to investigate what he called `clinical science', a broadening of his interests signalled when in 1933 he changed the name of the journal he had founded in 1909 from Heart to Clinical Science.
By the early 1930s, Lewis had become the most influential figure in British clinical research, and his department at University College Hospital was the Mecca for aspiring clinical research workers. He claimed that `Clinical science has as good a claim to the name and rights of self-subsistence of a science as any other department of biology'.
Britain lagged behind the USA in the funding and organization of medical research. Before the First World War, the medical schools, especially in London, were private and rather disorganized institutions, and there was little encouragement of clinical research.
A Royal Commission on University Education in London initiated changes that led to the establishment of modern academic departments in clinical subjects with an emphasis on research. By 1925 five chairs of medicine were established among the twelve medical schools in London.
In the UK, financing of clinical research has come from two main sources - a government-funded agency, the Medical Research Council, and medical charities such as the Imperial Cancer Research Fund, the British Heart Foundation, and the Wellcome Trust.
From its foundation in 1913, the Medical Research Committee - to become the Medical Research Council (MRC) in 1920 - sought to encourage `pure' science and also clinical research and experimental medicine. The MRC also made other major contributions to clinical research, supporting, for example, Thomas Lewis in London.
In the immediate post-war era, the MRC was involved in two vital innovations in clinical research. The first was
the introduction of the randomized controlled clinical trial. Advised by Austin Bradford Hill, professor of medical statistics and epidemiology at the London School of Hygiene and Tropical Medicine, in 1946 the council set up a trial of the efficacy of streptomycin in the treatment of pulmonary tuberculosis.
The drug was in short supply and it was considered ethically justifiable to carry out a trial in which one group
received streptomycin whereas a control group was treated with traditional methods. The MRC trial emphasized the importance of randomization in selecting subjects for study. This, the first randomized controlled trial to be reported in human subjects, served as a model for other such studies.
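The principle at stake can be illustrated with a short sketch. The Python fragment below uses hypothetical patient identifiers and group sizes and makes no claim to reproduce the MRC's actual allocation procedure; it simply shuffles a cohort and splits it into a treatment arm and a control arm, so that neither physician nor patient determines who receives the scarce drug.

```python
# A minimal sketch of random allocation in a controlled trial.
# Patient identifiers and numbers are hypothetical; this illustrates the
# general principle of randomization, not the MRC's 1946-8 design.
import random

def randomize(patients, seed=None):
    """Shuffle the patient list and split it into two equal-sized arms."""
    rng = random.Random(seed)
    shuffled = patients[:]                     # copy, leaving the caller's list intact
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]    # (treatment arm, control arm)

if __name__ == "__main__":
    cohort = [f"patient_{i:03d}" for i in range(1, 101)]   # 100 hypothetical patients
    treatment, control = randomize(cohort, seed=1948)
    print(f"{len(treatment)} allocated to streptomycin, {len(control)} to bed-rest control")
```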
The second major development was the application of epidemiology to the analysis of clinical problems. The MRC set up a conference to discuss rising mortality from lung cancer and enlisted the aid of Bradford Hill, who in 1948 recruited the young Richard Doll, later Regius Professor of Medicine at Oxford University, to join him in analysing possible causes of lung cancer.
Their painstaking survey of patients from twenty London hospitals showed that smoking is a factor, and an
important factor, in the production of cancer of the lung. They went on to establish that the same conclusion applied nationally and, in an important study of members of the medical profession, they demonstrated that mortality from the disease fell if individuals stopped smoking.
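The arithmetic underlying such a case-control survey can be sketched briefly. The counts in the Python fragment below are invented for illustration and are not Doll and Hill's data; the calculation merely shows how comparing the smoking habits of cases and controls yields an odds ratio, and how a value well above one points to smoking as a factor in the disease.

```python
# A hedged illustration of case-control arithmetic; the counts below are
# invented for demonstration and are NOT Doll and Hill's figures.

def odds_ratio(cases_exposed, cases_unexposed, controls_exposed, controls_unexposed):
    """Odds ratio for a 2x2 case-control table: (a/b) / (c/d)."""
    return (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

if __name__ == "__main__":
    # Hypothetical table: rows = lung-cancer cases / hospital controls,
    # columns = smokers / non-smokers.
    value = odds_ratio(cases_exposed=180, cases_unexposed=20,
                       controls_exposed=120, controls_unexposed=80)
    print(f"odds ratio = {value:.1f}")   # well above 1: smoking is associated with the disease
```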
These observations were not only important in showing the cause of a commonly occurring cancer in Britain, and subsequently in other countries such as the USA, but also in establishing the position of epidemiology in clinical research. As this last example shows, medical science now knows no bounds; its methods and scope sweep from the laboratory to the social survey, in helping to forge an understanding of the wider parameters of disease.

