
Category Archives: Molecular Genetics

Genetic Testing for the Healthy – Harvard Medical School (registration)

Posted: July 2, 2017 at 6:47 pm

Whole genome sequencing involves the analysis of all three billion pairs of letters in an individual's DNA and has been hailed as a technology that will usher in a new era of predicting and preventing disease.

However, the use of genome sequencing in healthy individuals is controversial because no one fully understands how many patients carry variants that put them at risk for rare genetic conditions and how they, and their doctors, will respond to learning about these risks.


In a new paper published June 26 in the Annals of Internal Medicine, investigators at Harvard Medical School and Brigham and Women's Hospital, along with collaborators at Baylor College of Medicine, report the results of the four-year, NIH-funded MedSeq Project, the first-ever randomized trial conducted to examine the impact of whole genome sequencing in healthy primary care patients.

In the MedSeq Project, 100 healthy individuals and their primary care physicians were enrolled and randomized so that half of the patients received whole genome sequencing and half did not.

Nearly 5,000 genes associated with rare genetic conditions were expertly analyzed in each sequenced patient, and co-investigators from many different disciplines, including clinical genetics, molecular genetics, primary care, ethics and law, were involved in analyzing the results.

Researchers found that among the 50 healthy primary care patients who were randomized to receive genome sequencing, 11 (22 percent) carried genetic variants predicted to cause previously undiagnosed rare disease.

Two of these patients were then noted to have signs or symptoms of the underlying conditions, including one patient who had variants causing an eye disease called fundus albipunctatus, which impairs night vision.

This patient knew he had difficulty seeing in low-light conditions but had not considered the possibility that his visual problems had a genetic cause.

Another patient was found to have a genetic variant associated with variegate porphyria, which finally explained the patient's and family members' mysterious rashes and sun sensitivity.

The other nine participants had no evidence of the genetic diseases for which they were predicted to be at risk. For example, two patients had variants that have been associated with heart rhythm abnormalities, but their cardiology workups were normal. It is possible, but not at all certain, that they could develop heart problems in the future.

Sequencing healthy individuals will inevitably reveal new findings for that individual, only some of which will have actual health implications, said lead author Jason Vassy, an HMS assistant professor of medicine at Brigham and Women's and primary care physician at the VA Boston Healthcare System.

This study provides some reassuring evidence that primary care providers can be trained to manage their patients' sequencing results appropriately, and that patients who receive their results are not likely to experience anxiety connected to those results. Continued research on the outcomes of sequencing will be needed before the routine use of genome sequencing in the primary care of generally healthy adults can be medically justified, Vassy said.

Primary care physicians received six hours of training at the beginning of the study regarding how to interpret a specially designed, one-page genome testing report summarizing the laboratory analysis.

Consultation with genetic specialists was available, but not required. Primary care physicians then used their own judgment about what to do with the information, and researchers monitored the interactions for safety and tracked medical, behavioral and economic outcomes.

The researchers noted that they analyzed variants from nearly 5,000 genes associated with rare genetic diseases. These are single genes in which variants confer a significantly higher risk for rare disorders than the low-risk variants for common disorders reported by direct-to-consumer genetic testing companies. No prior study has ever examined healthy individuals for pathogenic (high-risk) variants in so many rare disease genes.

We were surprised to see how many ostensibly healthy individuals are carrying a risk variant for a rare genetic disease, said Heidi Rehm, HMS associate professor of pathology at Brigham and Women's and director of the Laboratory for Molecular Medicine at Brigham and Women's.

We found that about one-fifth of this sample population carried pathogenic variants, and this suggests that the potential burden of rare disease risk throughout our general population could be far higher than previously suspected, said Rehm, a co-investigator on the study who directed the genome analysis. However, the penetrance, or likelihood that persons carrying one of these variants will eventually develop the disease, is not fully known.

Additionally, investigators compared the two arms of the study and found that patients who received genome sequencing results did not show higher levels of anxiety. They did, however, undergo a greater number of medical tests and incurred an average of $350 more in health care expenses in the six months following disclosure of their results. The economic differences were not statistically significant with the small sample size in this study.

Because participants in the MedSeq Project were randomized, we could carefully examine levels of anxiety or distress in those who received genetic risk information and compare it to those who did not, said Amy McGuire, director of the Center for Medical Ethics and Health Policy at Baylor College of Medicine.

While many patients chose not to participate in the study out of concerns about what they might learn, or with fears of future insurance discrimination, those who did participate evinced no increase in distress, even when they learned they were carrying risk variants for untreatable conditions, said McGuire, who supervised the ethical and legal components of the MedSeq Project.

There has also been great concern in the medical community about whether primary care physicians can appropriately manage these complicated findings. But when a panel of expert geneticists reviewed how well the primary care physicians managed the patients with possible genetic risk variants, the experts determined that only two of the 11 cases were managed inappropriately and that no harm had come to these patients.

MedSeq Project investigators note that the study's findings should be interpreted with caution because of the small sample size and because the study was conducted at an academic medical center where neither the patients nor the primary care physicians are representative of the general population. They also stressed that carrying a genetic risk marker does not mean that patients have or will definitely get the disease in question. Critical questions remain about whether discovering such risk markers in healthy individuals will actually provide health benefits, or will generate unnecessary testing and subsequent procedures that could do more harm than good.

Integrating genome sequencing and other -omics technologies into the day-to-day practice of medicine is an extraordinarily exciting prospect with the potential to anticipate and prevent diseases throughout an individual's lifetime, said senior author Robert C. Green, HMS professor of medicine at Brigham and Women's Hospital, associate member of the Broad Institute of Harvard and MIT and leader of the MedSeq Project. But we will need additional rigorously designed and well-controlled outcomes studies like the MedSeq Project with larger sample sizes and with outcomes collected over longer periods of time to demonstrate the full potential of genomic medicine.

The MedSeq Project is one of the sites in the Clinical Sequencing Exploratory Research Consortium and was funded by the National Human Genome Research Institute, part of the National Institutes of Health.

The Genomes2People Research Program at Brigham and Women's Hospital, the Broad Institute and Harvard Medical School conducts empirical research in translational genomics and health outcomes. NIH-funded research within G2P seeks to understand the medical, behavioral and economic impact of using genetic risk information to inform future standards. The REVEAL Study has conducted several randomized clinical trials examining the impact of disclosing genetic risk for Alzheimer's disease. The Impact of Personal Genomics (PGen) Study examined the impact of direct-to-consumer genetic testing on over 1,000 consumers of two different companies. The MedSeq Project has conducted the first randomized clinical trial to measure the impact of whole genome sequencing on the practice of medicine. The BabySeq Project is recruiting families of both healthy and sick newborns into a randomized clinical trial where half will have their baby's genome sequenced. Green directs the Program.

Adapted from a Brigham and Women's news release.

Link:
Genetic Testing for the Healthy - Harvard Medical School (registration)

Posted in Molecular Genetics | Comments Off on Genetic Testing for the Healthy – Harvard Medical School (registration)

Molecular Genetics Service – Great Ormond Street Hospital …

Posted: November 16, 2016 at 3:46 pm

Diagnostic, carrier and predictive testing is offered for a comprehensive range of single gene disorders as well as a DNA banking service whereby samples can be forwarded to external laboratories for approved requests providing funding is available.

A complete list of testing services offered is provided on this website; see:

Molecular Genetics Tests

or is available to download as a service pack:

Click here to download the price list for NHS patients

It is the responsibility of the patient's clinician to request a laboratory service/test and to ensure that all samples are correctly labelled and request forms completed to a minimum standard.

Consent is not required for DNA storage. It is the responsibility of the clinician to obtain consent before requesting a genetic test.

Click here for a copy of our Test Request Form

Click here for a copy of the Delay-Seizure (EIEE) panel proforma

Click here for a copy of the Hearing Loss panel proforma

Click here for a copy of the Immunodeficiency (PID/SCID) panel proforma

Click here for a copy of the Inflammatory Bowel Disease panel proforma

Click here for information about new NIPD tests

5ml venous blood in plastic EDTA bottles (>1ml from neonates)

Sample must be labelled with:

Tissue type and date of biopsy should be clearly documented on the referral information.

In the case of twins, special attention must be given to the identity of each sample.

Minimum criteria:

The Association for Clinical Genetic Science (ACGS)guidelines recommend at least two pieces of identifying information on every sample tube.


The rest is here:
Molecular Genetics Service - Great Ormond Street Hospital ...

Posted in Molecular Genetics | Comments Off on Molecular Genetics Service – Great Ormond Street Hospital …

Molecular Genetics – mmrl.edu

Posted: November 12, 2016 at 1:45 am

Genetics can seem intimidating, but in its purest sense it is rather simple: the basis of genetics is DNA => RNA => protein.

DNA, or deoxyribonucleic acid, is a long molecule that contains our unique genetic code. Nearly every cell in a person's body has the same DNA. Most DNA is located in the cell nucleus (where it is called nuclear DNA), but a small amount of DNA can also be found in the mitochondria (where it is called mitochondrial DNA or mtDNA).

The information in DNA is stored as a code made up of four chemical bases: adenine (A), guanine (G), cytosine (C), and thymine (T). Human DNA consists of about 3 billion of these bases, and more than 99 percent of those bases are the same in every person. The order, or sequence, of these bases determines the information available for building and maintaining an organism.

DNA bases pair up with each other, A with T and C with G, to form units called base pairs. Each base is also attached to a sugar molecule and a phosphate molecule. Together, a base, sugar, and phosphate are called a nucleotide. Nucleotides are arranged in two long strands that form a spiral called a double helix. The structure of the double helix is somewhat like a ladder, with the base pairs forming the ladders rungs and the sugar and phosphate molecules forming the vertical sidepieces of the ladder.
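The pairing rule means that one strand of the double helix fully determines the other. The short Python sketch below is our own illustration (the example sequence and function names are not from the original article): it builds the partner strand by swapping A with T and C with G, then reverses it, since the two strands of the helix run antiparallel.

PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    # Pair each base with its partner (A with T, C with G).
    return "".join(PAIRING[base] for base in strand)

def reverse_complement(strand: str) -> str:
    # The partner strand runs antiparallel, so read it in reverse.
    return complement(strand)[::-1]

strand = "ATGGTTCA"
print(complement(strand))          # TACCAAGT
print(reverse_complement(strand))  # TGAACCAT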

Ribonucleic acid (RNA) is very similar to DNA, but differs in a few important structural details: RNA nucleotides contain ribose sugars while DNA contains deoxyribose, and RNA predominantly uses uracil instead of the thymine present in DNA. RNA is transcribed (synthesized) from DNA by enzymes called RNA polymerases and further processed by other enzymes. Different RNAs play different roles: messenger RNA serves as the template for translating genes into proteins, transfer RNA carries amino acids to the ribosome, and ribosomal RNA forms part of the machinery that assembles the protein.

RNAs serve as the working set of blueprints for a gene. Each gene is read, and then the messenger RNAs are sent to the molecular factories (ribosomes) that build proteins. These factories read the blueprints and use the information to make the appropriate protein. When the cell no longer needs to make any more of that protein, the RNA blueprints are destroyed, but because the master copy in the DNA remains intact, the cell can always go back to the DNA and make more RNA copies when it needs more of the encoded protein.

An example would be the sun's UV light activating the genes in your skin cells to tan you. The gene is read and the RNA takes the message or blueprint to the ribosomes where melanin, the pigment that tans your skin, is made.

As we discussed, each gene is made up of a series of bases and those bases provide instructions for making a single protein. Any change in the sequence of bases may be considered a mutation. Most of the mutations are naturally-occurring. For example, when a cell divides, it makes a copy of its DNA and sometimes the copy is not quite perfect. That small difference from the original DNA sequence is a mutation.

Mutations can also be caused by exposure to specific chemicals, metals, viruses, and radiation. These have the potential to modify the DNA. This is not necessarily unnatural; even in the most isolated and pristine environments, DNA breaks down. Nevertheless, when the cell repairs the DNA, it might not do a perfect job of the repair. So the cell would end up with DNA slightly different than the original DNA and hence, a mutation.

Some mutations have little or no effect on the protein, while others cause the protein not to function at all. Other mutations may create a new effect that did not exist before. Many diseases are a result of mutations in certain genes. One example is the gene for sickle cell anemia. The mutation causing the blood disorder sickle cell anemia is a single nucleotide substitution (A to T) at base number 17 of the gene's 438 coding bases (As, Ts, Cs and Gs). That change alters a single amino acid, with the result that the red blood cells are no longer round but sickle-shaped and carry less oxygen.
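To make the sickle cell example concrete, here is a minimal Python sketch of the idea. It is our own illustration, using the well-known GAG-to-GTG change in the affected codon; the two-entry codon table is deliberately tiny and stands in for the full genetic code.

CODON_TABLE = {  # only the two codons needed for this example
    "GAG": "Glu (glutamic acid)",
    "GTG": "Val (valine)",
}

def point_mutation(seq: str, position: int, new_base: str) -> str:
    # Return a copy of seq with the base at `position` (0-based) replaced.
    return seq[:position] + new_base + seq[position + 1:]

normal_codon = "GAG"                                 # the normal codon
mutant_codon = point_mutation(normal_codon, 1, "T")  # A -> T at the middle base

print(normal_codon, "->", CODON_TABLE[normal_codon])  # GAG -> Glu (glutamic acid)
print(mutant_codon, "->", CODON_TABLE[mutant_codon])  # GTG -> Val (valine)

A single substituted base changes the codon, and with it the amino acid placed in the protein, which is exactly the kind of change described above.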

Some of these changes occur in cells of the body such as in skin cells as a result of sun exposure. Fortunately these types of changes are not passed on to our children. However, other types of errors can occur in the DNA of cells that produce the eggs and sperm. These errors are called germ line mutations and can be passed from parent to child. If a child inherits a germ line mutation from their parents, every cell in their body will have this error in their DNA. Germ line mutations are what cause diseases to run in families, and are responsible for hereditary diseases.

Sudden cardiac death (SCD) is a widespread health problem with several known inherited causes. Inherited SCD generally occurs in healthy individuals who do not have other conventional cardiac risk factors. Mutations in the genes in charge of creating the electrical activity of the heart have been found to be responsible for most arrhythmias, among them Short QT Syndrome, Long QT Syndrome, Brugada Syndrome, Familial Bundle Branch Block, Sudden Infant Death Syndrome and Sudden Unexpected Death Syndrome.

As researchers discover the role genes play in disease, there will be more genetic tests available to help doctors make diagnoses and pinpoint the cause of the disease. For example, heart disease can be caused either by a mutation in certain genes, or by environmental factors such as diet or exercise to name a few.

Physicians can easily diagnose a person with heart disease once they present symptoms. However, physicians cannot easily identify the cause of the heart disease in each person. Thus, most patients receive the same treatment regardless of the underlying cause of the disease.

In the future, a panel of genetic tests for heart disease might reveal the specific genetic factors that are involved in a given person. People with a specific mutation may be able to receive treatment that is directed to that mutation, thereby treating the cause of the disease, rather than just the symptoms.

The ultimate goal of the MMRL's Molecular Genetics Program is to identify the factors that are responsible for these diseases. This knowledge will facilitate the development of gene-specific therapies and cures for arrhythmias and identify individuals at risk for sudden cardiac death.

With the addition of the Molecular Biology and Molecular Genetics programs, MMRL is now integrally involved in both basic and clinical research, and is among the relatively few institutions worldwide with a consistent and concerted focus on bridging basic and clinical science. With an eye toward designing specific treatments and cures for disease, the Laboratory's research has the potential to affect us all.

More here:
Molecular Genetics - mmrl.edu

Posted in Molecular Genetics | Comments Off on Molecular Genetics – mmrl.edu

Molecular biology – Wikipedia

Posted: November 12, 2016 at 1:45 am

Molecular biology concerns the molecular basis of biological activity between biomolecules in the various systems of a cell, including the interactions between DNA, RNA and proteins and their biosynthesis, as well as the regulation of these interactions.[1][2] Writing in Nature in 1961, William Astbury described molecular biology as:

"...not so much a technique as an approach, an approach from the viewpoint of the so-called basic sciences with the leading idea of searching below the large-scale manifestations of classical biology for the corresponding molecular plan. It is concerned particularly with the forms of biological molecules and [...] is predominantly three-dimensional and structuralwhich does not mean, however, that it is merely a refinement of morphology. It must at the same time inquire into genesis and function."[3]

Researchers in molecular biology use specific techniques native to molecular biology but increasingly combine these with techniques and ideas from genetics and biochemistry. There is not a defined line between these disciplines. A schematic in the original article depicts one possible view of the relationship between the fields.

Much of the work in molecular biology is quantitative, and recently much work has been done at the interface of molecular biology and computer science in bioinformatics and computational biology. As of the early 2000s, the study of gene structure and function, molecular genetics, has been among the most prominent sub-fields of molecular biology. Increasingly, many other areas of biology focus on molecules, either directly studying their interactions in their own right such as in cell biology and developmental biology, or indirectly, where the techniques of molecular biology are used to infer historical attributes of populations or species, as in fields in evolutionary biology such as population genetics and phylogenetics. There is also a long tradition of studying biomolecules "from the ground up" in biophysics.[citation needed]

Since the late 1950s and early 1960s, molecular biologists have learned to characterize, isolate, and manipulate the molecular components of cells and organisms.

These components include DNA, the repository of genetic information; RNA, a close relative of DNA whose functions range from serving as a temporary working copy of DNA to actual structural and enzymatic functions as well as a functional and structural part of the translational apparatus, the ribosome; and proteins, the major structural and enzymatic type of molecule in cells.[citation needed]

One of the most basic techniques of molecular biology to study protein function is molecular cloning. In this technique, DNA coding for a protein of interest is cloned (using PCR and/or restriction enzymes) into a plasmid (known as an expression vector). A vector has 3 distinctive features: an origin of replication, a multiple cloning site (MCS), and a selective marker (usually antibiotic resistance). The origin of replication will have promoter regions upstream from the replication/transcription start site.

This plasmid can be inserted into either bacterial or animal cells. Introducing DNA into bacterial cells can be done by transformation (via uptake of naked DNA), conjugation (via cell-cell contact) or by transduction (via viral vector). Introducing DNA into eukaryotic cells, such as animal cells, by physical or chemical means is called transfection. Several different transfection techniques are available, such as calcium phosphate transfection, electroporation, microinjection and liposome transfection. DNA can also be introduced into eukaryotic cells using viruses or bacteria as carriers, the latter is sometimes called bactofection and in particular uses Agrobacterium tumefaciens. The plasmid may be integrated into the genome, resulting in a stable transfection, or may remain independent of the genome, called transient transfection.

In either case, DNA coding for a protein of interest is now inside a cell, and the protein can now be expressed. A variety of systems, such as inducible promoters and specific cell-signaling factors, are available to help express the protein of interest at high levels. Large quantities of a protein can then be extracted from the bacterial or eukaryotic cell. The protein can be tested for enzymatic activity under a variety of situations, the protein may be crystallized so its tertiary structure can be studied, or, in the pharmaceutical industry, the activity of new drugs against the protein can be studied.

Polymerase chain reaction (PCR) is an extremely versatile technique for copying DNA. In brief, PCR allows a specific DNA sequence to be copied or modified in predetermined ways. The reaction is extremely powerful and under perfect conditions could amplify one DNA molecule to become 1.07 billion molecules in less than 2 hours. The PCR technique can be used to introduce restriction enzyme sites to ends of DNA molecules, or to mutate (change) particular bases of DNA, the latter a method referred to as site-directed mutagenesis. PCR can also be used to determine whether a particular DNA fragment is found in a cDNA library. PCR has many variations, like reverse transcription PCR (RT-PCR) for amplification of RNA, and, more recently, quantitative PCR, which allows for quantitative measurement of DNA or RNA molecules.
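The 1.07 billion figure is simply the arithmetic of doubling: 2 to the 30th power is roughly 1.07 x 10^9, so about 30 cycles suffice. The Python sketch below is our own illustration of that arithmetic; the 3.5-minute cycle time is an assumption, and real reactions eventually plateau rather than doubling forever.

copies = 1
minutes_per_cycle = 3.5   # assumed; actual cycle times depend on the protocol
cycles = 30

for _ in range(cycles):
    copies *= 2           # under ideal conditions every template is copied each cycle

print(copies)                       # 1073741824, i.e. about 1.07 billion
print(cycles * minutes_per_cycle)   # 105.0 minutes, under 2 hours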

Gel electrophoresis is one of the principal tools of molecular biology. The basic principle is that DNA, RNA, and proteins can all be separated by means of an electric field and size. In agarose gel electrophoresis, DNA and RNA can be separated on the basis of size by running the DNA through an electrically charged agarose gel. Proteins can be separated on the basis of size by using an SDS-PAGE gel, or on the basis of size and their electric charge by using what is known as a 2D gel electrophoresis.

The terms northern, western and eastern blotting are derived from what initially was a molecular biology joke that played on the term Southern blotting, after the technique described by Edwin Southern for the hybridisation of blotted DNA. Patricia Thomas, developer of the RNA blot which then became known as the northern blot, actually didn't use the term.[4] Further combinations of these techniques produced such terms as southwesterns (protein-DNA hybridizations), northwesterns (to detect protein-RNA interactions) and farwesterns (protein-protein interactions), all of which are presently found in the literature.

Named after its inventor, biologist Edwin Southern, the Southern blot is a method for probing for the presence of a specific DNA sequence within a DNA sample. DNA samples before or after restriction enzyme (restriction endonuclease) digestion are separated by gel electrophoresis and then transferred to a membrane by blotting via capillary action. The membrane is then exposed to a labeled DNA probe that has a base sequence complementary to the sequence on the DNA of interest. Most original protocols used radioactive labels; however, non-radioactive alternatives are now available. Southern blotting is less commonly used in laboratory science due to the capacity of other techniques, such as PCR, to detect specific DNA sequences from DNA samples. These blots are still used for some applications, however, such as measuring transgene copy number in transgenic mice, or in the engineering of gene knockout embryonic stem cell lines.

The northern blot is used to study the expression patterns of a specific type of RNA molecule as relative comparison among a set of different samples of RNA. It is essentially a combination of denaturing RNA gel electrophoresis, and a blot. In this process RNA is separated based on size and is then transferred to a membrane that is then probed with a labeled complement of a sequence of interest. The results may be visualized through a variety of ways depending on the label used; however, most result in the revelation of bands representing the sizes of the RNA detected in sample. The intensity of these bands is related to the amount of the target RNA in the samples analyzed. The procedure is commonly used to study when and how much gene expression is occurring by measuring how much of that RNA is present in different samples. It is one of the most basic tools for determining at what time, and under what conditions, certain genes are expressed in living tissues.

Antibodies to most proteins can be created by injecting small amounts of the protein into an animal such as a mouse, rabbit, sheep, or donkey (polyclonal antibodies) or produced in cell culture (monoclonal antibodies). These antibodies can be used for a variety of analytical and preparative techniques.

In western blotting, proteins are first separated by size, in a thin gel sandwiched between two glass plates in a technique known as SDS-PAGE (sodium dodecyl sulfate polyacrylamide gel electrophoresis). The proteins in the gel are then transferred to a polyvinylidene fluoride (PVDF), nitrocellulose, nylon, or other support membrane. This membrane can then be probed with solutions of antibodies. Antibodies that specifically bind to the protein of interest can then be visualized by a variety of techniques, including colored products, chemiluminescence, or autoradiography. Often, the antibodies are labeled with enzymes. When a chemiluminescent substrate is exposed to the enzyme it allows detection. Using western blotting techniques allows not only detection but also quantitative analysis. Analogous methods to western blotting can be used to directly stain specific proteins in live cells or tissue sections. However, these immunostaining methods, such as FISH, are used more often in cell biology research.

The Eastern blotting technique is used to detect post-translational modification of proteins.[5] Proteins blotted on to the PVDF or nitrocellulose membrane are probed for modifications using specific substrates.

A DNA microarray is a collection of spots attached to a solid support such as a microscope slide where each spot contains one or more single-stranded DNA oligonucleotide fragment. Arrays make it possible to put down large quantities of very small (100 micrometre diameter) spots on a single slide. Each spot has a DNA fragment molecule that is complementary to a single DNA sequence (similar to Southern blotting). A variation of this technique allows the gene expression of an organism at a particular stage in development to be qualified (expression profiling). In this technique the RNA in a tissue is isolated and converted to labeled cDNA. This cDNA is then hybridized to the fragments on the array and visualization of the hybridization can be done. Since multiple arrays can be made with exactly the same position of fragments they are particularly useful for comparing the gene expression of two different tissues, such as a healthy and cancerous tissue. Also, one can measure what genes are expressed and how that expression changes with time or with other factors. For instance, the common baker's yeast, Saccharomyces cerevisiae, contains about 7000 genes; with a microarray, one can measure qualitatively how each gene is expressed, and how that expression changes, for example, with a change in temperature. There are many different ways to fabricate microarrays; the most common are silicon chips, microscope slides with spots of ~ 100 micrometre diameter, custom arrays, and arrays with larger spots on porous membranes (macroarrays). There can be anywhere from 100 spots to more than 10,000 on a given array. Arrays can also be made with molecules other than DNA. For example, an antibody array can be used to determine what proteins or bacteria are present in a blood sample.

Allele-specific oligonucleotide (ASO) is a technique that allows detection of single base mutations without the need for PCR or gel electrophoresis. Short (20-25 nucleotides in length), labeled probes are exposed to the non-fragmented target DNA. Hybridization occurs with high specificity due to the short length of the probes and even a single base change will hinder hybridization. The target DNA is then washed and the labeled probes that didn't hybridize are removed. The target DNA is then analyzed for the presence of the probe via radioactivity or fluorescence. In this experiment, as in most molecular biology techniques, a control must be used to ensure successful experimentation. The Illumina Methylation Assay is an example of a method that takes advantage of the ASO technique to measure one base pair differences in sequence.[citation needed]
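The logic of ASO detection can be sketched in a few lines of Python. This is our own toy model, not the assay's software, and the probe sequences are made up: hybridization of a short probe is treated as an exact substring match, so even a single mismatched base prevents "binding".

def probe_binds(target_dna: str, probe: str) -> bool:
    # Toy model: a short allele-specific probe "hybridizes" only on a perfect match.
    return probe in target_dna

wild_type_probe = "ACGGTCAAGTCCTGAAGCTA"   # hypothetical 20-mer matching the normal allele
mutant_probe    = "ACGGTCAAGTACTGAAGCTA"   # same probe with one substituted base

sample_dna = "TTGA" + "ACGGTCAAGTCCTGAAGCTA" + "GGCT"   # sample carrying the normal allele

print(probe_binds(sample_dna, wild_type_probe))  # True  -> normal-allele probe binds
print(probe_binds(sample_dna, mutant_probe))     # False -> one-base change blocks binding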

In molecular biology, procedures and technologies are continually being developed and older technologies abandoned. For example, before the advent of DNA gel electrophoresis (agarose or polyacrylamide), the size of DNA molecules was typically determined by rate sedimentation in sucrose gradients, a slow and labor-intensive technique requiring expensive instrumentation; prior to sucrose gradients, viscometry was used. Aside from their historical interest, it is often worth knowing about older technology, as it is occasionally useful to solve another new problem for which the newer technique is inappropriate.

While molecular biology was established in the 1930s, the term was coined by Warren Weaver in 1938. Weaver was the director of Natural Sciences for the Rockefeller Foundation at the time and believed that biology was about to undergo a period of significant change given recent advances in fields such as X-ray crystallography. He therefore channeled significant amounts of (Rockefeller Institute) money into biological fields.

Clinical research and medical therapies arising from molecular biology are partly covered under gene therapy[citation needed]. The use of molecular biology or molecular cell biology approaches in medicine is now called molecular medicine. Molecular biology also plays an important role in understanding the formation, action, and regulation of various parts of cells, which can be used to efficiently target new drugs, diagnose disease, and understand the physiology of the cell.

Visit link:
Molecular biology - Wikipedia

Posted in Molecular Genetics | Comments Off on Molecular biology – Wikipedia

Human Molecular Genetics – amazon.com

Posted: November 12, 2016 at 1:45 am

Tom Strachan is Scientific Director of the Institute of Human Genetics and Professor of Human Molecular Genetics at Newcastle University, UK, and is a Fellow of the Academy of Medical Sciences and a Fellow of the Royal Society of Edinburgh. Tom's early research interests were in multigene family evolution and interlocus sequence exchange, notably in the HLA and 21-hydroxylase gene clusters. While pursuing the latter, he became interested in medical genetics and disorders of development. His most recent research has focused on developmental control of the vertebrate cohesion regulators Nipbl and Mau-2.

Andrew Read is Emeritus Professor of Human Genetics at the University of Manchester, UK and a Fellow of the Academy of Medical Sciences. Andrew has been particularly concerned with making the benefits of DNA technology available to people with genetic problems. He established one of the first DNA diagnostic laboratories in the UK over 20 years ago (it is now one of two National Genetics Reference Laboratories), and was founder chairman of the British Society for Human Genetics, the main professional body in this area. His own research is on the molecular pathology of various hereditary syndromes, especially hereditary hearing loss.

Drs. Strachan and Read were recipients of the European Society of Human Genetics Education Award.

Follow this link:
Human Molecular Genetics - amazon.com

Posted in Molecular Genetics | Comments Off on Human Molecular Genetics – amazon.com

Molecular Genetics (Stanford Encyclopedia of Philosophy)

Posted: October 30, 2016 at 9:51 pm

The term molecular genetics is now redundant because contemporary genetics is thoroughly molecular. Genetics is not made up of two sciences, one molecular and one non-molecular. Nevertheless, practicing biologists still use the term. When they do, they are typically referring to a set of laboratory techniques aimed at identifying and/or manipulating DNA segments involved in the synthesis of important biological molecules. Scientists often talk and write about the application of these techniques across a broad swath of biomedical sciences. For them, molecular genetics is an investigative approach that involves the application of laboratory methods and research strategies. This approach presupposes basic knowledge about the expression and regulation of genes at the molecular level.

Philosophical interest in molecular genetics, however, has centered, not on investigative approaches or laboratory methods, but on theory. Early philosophical research concerned the basic theory about the make-up, expression, and regulation of genes. Most attention centered on the issue of theoretical reductionism. The motivating question concerned whether classical genetics, the science of T. H. Morgan and his collaborators, was being reduced to molecular genetics. With the rise of developmental genetics and developmental biology, philosophical attention has subsequently shifted towards critiquing a fundamental theory associated with contemporary genetics. The fundamental theory concerns not just the make-up, expression, and regulation of genes, but also the overall role of genes within the organism. According to the fundamental theory, genes and DNA direct all life processes by providing the information that specifies the development and functioning of organisms.

This article begins by providing a quick review of the basic theory associated with molecular genetics. Since this theory incorporates ideas from the Morgan school of classical genetics, it is useful to sketch its development from Morgan's genetics. After reviewing the basic theory, I examine four questions driving philosophical investigations of molecular genetics. The first question asks whether classical genetics has been or will be reduced to molecular genetics. The second question concerns the gene concept and whether it has outlived its usefulness. The third question regards the tenability of the fundamental theory. The fourth question, which hasn't yet attracted much philosophical attention, asks why so much biological research is centered on genes and DNA.

The basic theory associated with classical genetics provided explanations of the transmission of traits from parents to offspring. Morgan and his collaborators drew upon a conceptual division between the genetic makeup of an organism, termed its genotype, and its observed manifestation called its phenotype (see the entry on the genotype/phenotype distinction). The relation between the two was treated as causal: genotype in conjunction with environment produces phenotype. The theory explained the transmission of phenotypic differences from parents to offspring by following the transmission of gene differences from generation to generation and attributing the presence of alternative traits to the presence of alternative forms of genes.

I will illustrate the classical mode of explanatory reasoning with a simple historical example involving the fruit fly Drosophila melanogaster. It is worth emphasizing that the mode of reasoning illustrated by this historical example is still an important mode of reasoning in genetics today, including what is sometimes called molecular genetics.

Genes of Drosophila come in pairs, located in corresponding positions on the four pairs of chromosomes contained within each cell of the fly. The eye-color mutant known as purple is associated with a gene located on chromosome II. Two copies of this gene, existing either in mutated or normal wild-type form, are located at the same locus (corresponding position) in the two second-chromosomes. Alternative forms of a gene occurring at a locus are called alleles. The transmission of genes from parent to offspring is carried out in a special process of cellular division called meiosis, which produces gamete cells containing one chromosome from each paired set. The half set of chromosomes from an egg and the half set from a sperm combine during fertilization, which gives each offspring a copy of one gene from each gene pair of its female parent and a copy of one gene from each gene pair of its male parent.

Explanations of the transmission of traits relate the presence of alternative genes (genotype) to the presence of alternative observable traits (phenotype). Sometimes this is done in terms of dominant/recessive relations. Purple eye-color, for example, is recessive to the wild-type character (red eye-color). This means that flies with two copies of the purple allele (the mutant form of the gene, which is designated pr) have purple eyes, but heterozygotes, flies with one copy of the purple allele and one copy of the wild-type allele (designated +), have normal wild-type eyes (as do flies with two copies of the wild-type allele). See Table 1.

To see how the classical theory explains trait transmission, consider a cross of red-eyed females with purple-eyed males that was carried out by Morgan's collaborators. The offspring all had red eyes. So the trait of red eyes was passed from the females to all their offspring even though the offspring's male parents had purple eyes. The classical explanation of this inheritance pattern proceeds, as do all classical explanations of inheritance patterns, in two stages.

The first stage accounts for the transmission of genes and goes as follows (Figure 1): each offspring received one copy of chromosome II from each parent. The maternally derived chromosomes must have contained the wild-type allele (since both second-chromosomes of every female parent used in the experiment contained the wild-type allele -- this was known on the basis of previous experiments). The paternally derived chromosomes must have contained the purple allele (since both second-chromosomes of every male parent contained the purple allele -- this was inferred from the knowledge that purple is recessive to red eye-color). Hence, all offspring were heterozygous (pr / +). Having explained the genetic makeup of the progeny by tracing the transmission of genes from parents to offspring, we can proceed to the second stage of the explanation: drawing an inference about phenotypic appearances. Since all offspring were heterozygous (pr / +), and since purple is recessive to wild-type, all offspring had red eye-color (the wild-type character). See Figure 1.
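The two-stage reasoning in this cross can be mirrored in a short Python sketch. This is our own illustration of the explanation above, not code from the source: first the transmission of alleles from +/+ mothers and pr/pr fathers, then the phenotype read off from the fact that purple is recessive to wild type.

from itertools import product

def gametes(genotype):
    # Meiosis: each parent passes on one of its two alleles.
    return set(genotype)

def phenotype(genotype):
    # Purple is recessive: only pr/pr flies show purple eyes.
    return "purple eyes" if set(genotype) == {"pr"} else "red eyes (wild type)"

mother = ("+", "+")    # red-eyed females used in the cross
father = ("pr", "pr")  # purple-eyed males

for maternal, paternal in product(gametes(mother), gametes(father)):
    offspring = (maternal, paternal)
    print(offspring, "->", phenotype(offspring))
# Every offspring is ('+', 'pr') -> red eyes (wild type), matching the observed result.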

Notice that the reasoning here does not depend on identifying the material make-up, mode of action, or general function of the underlying gene. It depends only on the ideas that copies of the gene are distributed from generation to generation and that the difference in the gene (i.e., the difference between pr and +), whatever this difference is, causes the phenotypic difference. The idea that the gene is the difference maker needs to be qualified: differences in the gene cause phenotypic differences in particular genetic and environmental contexts. This idea is so crucial and so often overlooked that it merits articulation as a principle (Waters 1994):

Difference principle: differences in a classical gene cause uniform phenotypic differences in particular genetic and environmental contexts.

It is also worth noting that the difference principle provides a means to explain the transmission of phenotypic characteristics from one generation to the next without explaining how these characteristics are produced in the process of an organism's development. This effectively enabled classical geneticists to develop a science of heredity without answering questions about development.

The practice of classical genetics included the theoretical analysis of complicated transmission patterns involving the recombination of phenotypic traits. Analyzing these patterns yielded information about the basic biological processes such as chromosomal mechanics as well as information about the linear arrangement of genes in linkage groups. These theoretical explanations did not depend on ideas about what genes are, how genes are replicated, what genes do, or how differences in genes bring about differences in phenotypic traits.

Research in molecular biology and genetics has yielded answers to the basic questions left unanswered by classical genetics about the make-up of genes, the mechanism of gene replication, what genes do, and the way that gene differences bring about phenotypic differences. These answers are couched in terms of molecular level phenomena and they provide much of the basic theory associated with molecular genetics.

What is a gene? This question is dealt with at further length in section 4 of this article, but a quick answer suffices for present purposes: genes are linear sequences of nucleotides in DNA molecules. Each DNA molecule consists of a double chain of nucleotides. There are four kinds of nucleotides in DNA: guanine, cytosine, thymine, and adenine. The pair of nucleotide chains in a DNA molecule twist around one another in the form of a double helix. The two chains in the helix are bound by hydrogen bonds between nucleotides from adjacent chains. The hydrogen bonding is specific so that a guanine in one chain is always located next to cytosine in the adjacent chain (and vice-versa) and thymine in one chain is always located next to adenine (and vice-versa). Hence, the linear sequence of nucleotides in one chain of nucleotides in a DNA molecule is complementary to the linear sequence of nucleotides in the other chain of the DNA molecule. A gene is a segment of nucleotides in one of the chains of a DNA molecule. Of course, not every string of nucleotides in DNA is a gene; segments of DNA are identified as genes according to what they do (see below).

How are genes replicated? The idea that genes are segments in a DNA double helix provides a straightforward answer to this question. Genes are faithfully replicated when the paired chains of a DNA molecule unwind and new chains are formed alongside the separating strands by the pairing of complementary nucleotides. When the process is complete, two copies of the original double helix have been formed and hence the genes in the original DNA molecule have been effectively replicated.

What do genes do? Roughly speaking, genes serve as templates in the synthesis of RNA molecules. The result is that the linear sequence of nucleotides in a newly synthesized RNA molecule corresponds to the linear sequence of nucleotides in the DNA segment used as the template. Different RNA molecules play different functional roles in the cell, and many RNA molecules play the role of template in the synthesis of polypeptide molecules. Newly synthesized polypeptides are linear sequences of amino acids that constitute proteins and proteins play a wide variety of functional roles in the cell and organism (and environment). The ability of a polypeptide to function in specific ways depends on the linear sequence of amino acids of which it is formed. And this linear sequence corresponds to the linear sequence of triplets of nucleotides in RNA (codons), which in turn corresponds to the linear sequence of nucleotides in segments of DNA, and this latter segment is the gene for that polypeptide.

How do differences in genes bring about differences in phenotypic traits? The modest answer given above to the question What do genes do? provides the basis for explaining how differences in genes can bring about differences in phenotypic traits. A difference in the nucleotide sequence of a gene will result in the difference in the nucleotide sequence of RNA molecules, which in turn can result in a difference in the amino acid sequence of a polypeptide. Differences in the linear sequences of amino acids in polypeptides (and in the linear sequences of nucleotides in functional RNA molecules) can affect the roles they play in the cell and organism, sometimes having an effect that is observable as a phenotypic difference. The mutations (differences in genes) identified by the Morgan group (e.g., the purple mutation) have been routinely identified as differences in nucleotide sequences in DNA.

The modest answer to the question What do genes do? is that they code for or determine the linear sequences in RNA molecules and polypeptides synthesized in the cell. (Even this modest answer needs to be qualified because RNA molecules are often spliced and edited in ways that affect the linear sequence of amino acids in the eventual polypeptide product.) But biologists have offered a far less modest answer as well. The bolder answer is part of a sweeping, fundamental theory. According to this theory, genes are fundamental entities that direct the development and functioning of organisms by producing proteins that in turn regulate all the important cellular processes. It is often claimed that genes provide the information, the blueprint, or the program for an organism. It is useful to distinguish this sweeping, fundamental theory about the allegedly fundamental role of genes from the modest, basic theory about what genes do with respect to the synthesis of RNA and polypeptides.

Philosophers of science have been intrigued by ideals of reductionism and the grand scheme that all science will one day be reduced to a universal science of fundamental physics (see the entry on inter-theory relations in physics for philosophical and scientific concepts of reductionism in the context of physical science). Philosophical reductionists believe that scientific knowledge progresses when higher-level sciences (e.g., chemistry) are reduced to lower-level sciences (e.g., physics). The so-called received view of scientific knowledge, codified in Nagel (1961) and Hempel (1966), promoted reductionism as a central ideal for science, and confidently asserted that much progress had been made in the reduction of chemistry to physics. Nagel constructed a formal model of reduction and applied it to illuminate how the science of thermodynamics, which was couched in terms of higher-level concepts such as pressure and temperature, was allegedly reduced to statistical mechanics, couched in terms of the lower-level concepts of Newtonian dynamics such as force and mean kinetic energy. In 1969, Schaffner claimed that the same kind of advance was now taking place in genetics, and that the science of classical genetics was being reduced to an emerging science of molecular genetics. Schaffner's claim, however, was quickly challenged by Hull. Other philosophers of biology developed Hull's anti-reductionist arguments and a near consensus developed that classical genetics was not and would not be reduced to molecular genetics. Although the philosophical case for anti-reductionism has been challenged, many philosophers still assume that the anti-reductionist account of genetics provides an exemplar for anti-reductionist analyses of other sciences.

Reductionism has many meanings. For example, the phrase genetic reductionism concerns the idea that all biological phenomena are caused by genes, and hence presupposes an ontological sense of reductionism according to which one kind of micro-entity (in this case, gene) exclusively causes a variety of higher-level phenomena (in this case, biological features, cultural phenomena, and so forth). But this is not the meaning of reductionism at issue in the philosophical literature about the reduction of classical genetics. This literature is more concerned with epistemology than metaphysics. The concept of reductionism at issue is Nagel's concept of theoretical reduction. (See Sarkar 1992 and Schaffner 1993 for discussions of alternative conceptions of reduction.) According to Nagel's concept, the reduction of one science to another science entails the reduction of the central theory of one science to the central theory of the other. Nagel believed that this kind of theoretical reduction led to progressive changes in scientific knowledge. He formulated two formal requirements for theoretical reductions.

Nagel's first formal requirement was that the laws of the reduced theory must be derivable from the laws and associated coordinating definitions of the reducing theory. This deducibility requirement was intended to capture the idea that the explanatory principles (or laws) of the reducing theory ought to explain the explanatory principles (or laws) of the reduced theory. Nagel's second formal requirement, the connectability requirement, was that all essential terms of the reduced theory must either be contained within or be appropriately connected to the terms of the reducing theory by way of additional assumptions. The connectability requirement is presupposed by the derivability requirement, but making it explicit helps emphasize an important task and potential stumbling block for carrying out theoretical reduction.

Schaffner (1969) modified Nagel's model by incorporating the idea that what the reducing theory actually derives (and hence explains) is a corrected version of the reduced theory, not the original theory. He argued that this revised model better captured reductions in the physical sciences. He claimed his revised model could also be used to show how a corrected version of classical genetics was being reduced to a new theory of physicochemical science called molecular genetics. Hull (1974) countered that classical genetics was not being reduced, at least not according to the model of reduction being applied by Schaffner. Hull argued that genetics did not exemplify Nagelian reduction because the fundamental terms of classical genetics could not be suitably connected to expressions couched in terms of DNA.

Most philosophers writing on genetics and reductionism have argued that molecular genetics has not and will not reduce classical genetics (e.g., see Wimsatt 1976a, Darden and Maull 1977, Kitcher 1984, Rosenberg 1985 and 1994, Dupré 1993, and Burian 1996). Two objections to Schaffner's reductionist thesis have been most persuasive: the unconnectability objection and the gory details objection. The unconnectability objection claims that the terminology of classical genetics cannot be redefined at the molecular level in terms of DNA. This objection effectively claims that Nagel's second formal requirement, the connectability requirement, cannot be satisfied. The gory details objection alleges that molecular genetics cannot and will not explain classical genetics or better explain the phenomena that are already explained by the classical theory. This objection relates to Nagel's first formal requirement, the derivability requirement. But the gory details objection goes philosophically deeper because it implies that even if the explanatory principles of classical genetics could be derived from the explanatory principles of molecular genetics, the derivations would not be explanatory.

The most rigorous formulation of the unconnectability objection can be found in the early writings of Rosenberg who once contended that there is an unbridgeable conceptual gap between the classical and molecular theories of genetics (1985, 1994). In support of this contention, Rosenberg argued that relations between the gene concept of classical genetics and the concepts of molecular genetics are hopelessly complicated many-many relations that will forever frustrate any attempt to systematically connect the two theories. Rosenberg began his analysis by pointing out that in classical genetics, genes are identified by way of their phenotypic effects. Classical geneticists identified the gene for purple eye-color, for example, by carrying out carefully orchestrated breeding experiments and following the distribution of eye-color phenotypes in successive generations of a laboratory population. The reason classical genetics will never be reduced to a molecular-level science, according to Rosenberg (1985), is that there is no manageable connection between the concept of a Mendelian phenotype and that of a molecular gene:

The pathway to red eye pigment production begins at many distinct molecular genes and proceeds through several alternative branched pathways. The pathway from the [molecular] genes also contains redundant, ambiguous, and interdependent paths. If we give a biochemical characterization of the gene for red eye color either by appeal to the parts of its pathway of synthesis, or by appeal to the segments of DNA that it begins with, our molecular description of this gene will be too intricate to be of any practical explanatory upshot. (Rosenberg 1985, p. 101)

Rosenberg concluded that since the relation between molecular genes and Mendelian phenotypes is exceedingly complex, the connection between any molecular concept and the Mendelian gene concept must also be exceedingly complex, thereby blocking any systematic, reductive explanation of classical genetics in terms of molecular-level theory.

The gory details objection can be traced back to the writings of Putnam (1965) and Fodor (1968) who argued against reductionism of the mental on the basis that psychological functions are multiply realized. This objection against reductionism was further developed in the context of genetics, most thoroughly by Kitcher (e.g., see Kitcher 1984, 1999, 2001). Following Hull, Kitcher assumes that classical genetics is transmission genetics. The classical theory, according to Kitcher, explains the transmission of phenotypic traits, not the development of phenotypic traits in individual organisms. And transmission phenomena, on Kitcher's account, are best explained at the level of cytology: The distribution of genes to gametes is to be explained, not by rehearsing the gory details of the reshuffling of the molecules, but through the observation that chromosomes are aligned in pairs just prior to the meiotic division, and that one chromosome from each matched pair is transmitted to each gamete. (Kitcher 1984, p. 370). Kitcher suggests that the pairing and separation of chromosomes belong to a natural kind of pair separation processes which are heterogeneous from the molecular perspective because different kinds of forces are responsible for bringing together and pulling apart different paired entities. The separation of paired entities, he claims, may occur because of the action of electromagnetic forces or even nuclear forces; but it is easy to think of examples in which the separation is effected by the action of gravity. (Kitcher 1984, p. 350)

The image of genetics that emerges from the anti-reductionist literature is of a two-tiered science composed of two discrete theoretical discourses, one grounded in principles about entities at the cytological level (such as chromosomes) and the other grounded in principles about entities at the molecular level (such as nucleotide sequences in DNA). Anti-reductionists believe some phenomena, including transmission of genes, are best explained by a theory grounded at the cytological level and other phenomena, including the expression of genes, are best explained by a theory grounded at the molecular level. Although Kitcher argues that classical genetics provides the best explanation in an objective sense, some anti-reductionists (e.g., Rosenberg 1985, 1994) believe that the obstacles to reduction are merely practical. Rosenberg (1985, 1994) appealed to the concept of supervenience to argue that in principle, molecular genetics would provide the best explanations. But he argued that in practice, classical genetics provides the best explanation of transmission phenomena, in the sense that this is the best explanation available to creatures with our cognitive limitations. Subsequently, however, Rosenberg changed his position on this issue, largely on the grounds that technological advances in information storage and processing "may substantially enhance our capacity to understand macromolecular processes and their combinations" (Rosenberg 2006, p. 14).

Despite philosophically significant differences in their views about the ultimate basis of the irreducibility of classical genetics, the image of biological knowledge that emerges from the antireductionists' writings is similar. The biological world consists of different domains of phenomena and each domain is best explained at a particular level of theoretical discourse. Hence, the ideal structure for biology is akin to a layer-cake, with tiers of theories, each of which provides the best possible account of its domain of phenomena. Biological sciences such as classical genetics that are couched in terms of higher levels of organization should persist, secure from the reductive grasp of molecular science, because their central theories (or patterns of reasoning) explain domains of phenomena that are best explained at levels higher than the molecular level.

The anti-reductionist consensus has not gone unchallenged (see Sarkar 1989, 1992 and 1998, Schaffner 1993, and Waters 1990 and 2000). According to critics, the chief objections supporting the consensus are mistaken. The unconnectability objection rests on the assumption that classical genetics took the relationships between genes and phenotypic traits to be simple one-to-one relationships. But classical geneticists knew better. Consider what Sturtevant, one of Morgan's star students and collaborators, had to say about genes and eye color:

The difference between normal red eyes and colorless (white) ones in Drosophila is due to a difference in a single gene. Yet red is a very complex color, requiring the interaction of at least five (and probably of very many more) different genes for its production. And these genes are quite independent, each chromosome bearing some of them. Moreover, eye-color is indirectly dependent upon a large number of other genes such as those on which the life of the fly depends. We can then, in no sense identify a given gene with the red color of the eye, even though there is a single gene differentiating it from the colorless eye. So it is for all characters (my emphasis, quoted from Carlson 1988, p. 69)

This quotation suggests that the relationship between gene and eye-color in classical genetics exhibited the same complexity that Rosenberg discussed at the molecular level (compare this quotation to the passage from Rosenberg 1985 quoted in section 3.2). According to this critique of the unconnectability objection, it is not the case that genotype-phenotype relationships appear simple and uniform at the level of classical genetics and complicated and dis-unified at the molecular level. The situation appears similarly complex at both levels of analysis (Waters 1990).

Classical genetics nevertheless finds a simple way to explain transmission phenomena by appealing to the difference principle, according to which particular differences in particular genes cause particular differences in phenotypic traits in particular contexts (see section 2.1). Sturtevant alludes to this principle in the first sentence of the quotation above and again in the emphasized clause. So the question arises, can this relationship be captured at the molecular level? And the answer is yes. The differences used by classical geneticists to explain inheritance patterns have been routinely identified at the molecular level by contemporary geneticists.

According to this critique, the gory details objection also fails. This objection claims that biologists cannot improve upon the classical explanations of transmission phenomena by citing molecular details. The cytological level allegedly provides the best level of explanation because explanations at this level uniformly account for a wide range of cases that would look heterogeneous from a molecular perspective. Consider Kitcher's formulation of this objection. Kitcher believes that to explain is to unify (1989). It follows that the best explanation of a class of phenomena is the explanation that accounts for the class in a uniform way. Kitcher claims meiosis exemplifies this kind of situation. The uniformity of pair-separation processes is evident at the cytological level, but is lost in the gory details at the molecular level where the process may occur because of the action of electromagnetic forces or even of nuclear forces (Kitcher 1984, p. 350). But it is unclear what Kitcher could have in mind. The molecular mechanisms underlying the pairing and separation of chromosomes are remarkably uniform in creatures ranging from yeast to human beings; it is not the case that some involve electromagnetic forces and others involve nuclear forces. Kitcher's claim that it is easy to think of examples in which the separation is effected by the action of gravity has no basis in what molecular biologists have learned about the pairing and separation of chromosomes.

Meiosis is an unpromising candidate to illustrate the idea that what appears uniform at the level of classical genetics turns out to be heterogeneous at the molecular level. But this idea is illustrated by other genetic phenomena. Consider the phenomenon of genetic dominance. In classical genetics, all examples of complete dominance are treated alike for the purposes of explaining transmission phenomena. But contemporary genetics reveals that there are several very different mechanisms underlying different instances of dominance. According to Kitcher's unificationist theory of scientific explanation, the classical account of dominance provides an objectively better basis for explaining transmission phenomena because it provides a more unified organization of the phenomena. But this would imply that the shallow explanations of classical genetics are objectively preferable to the deeper explanations provided by the molecular theory (Waters 1990).

Although Nagel's concept of theoretical reduction marks a common starting point for discussions about the apparent reduction of classical genetics, much of the literature on reduction is aimed at seeking a better understanding of the nature of reduction by seeking to replace Nagel's concept with a more illuminating one. This is true of the anti-reductionists, who seek to clarify why molecular genetics cannot reduce classical genetics, as well as those who have been more sympathetic to reductionism. Hence, there are two levels of discourse in the literature examining the question of whether molecular genetics is reducing classical genetics. One level concerns what is happening in the science of genetics. The other concerns more abstract issues about the nature of (epistemological) reduction.

The abstract level of discourse began with Schaffner's idea that what is reduced is not the original theory, but rather a corrected version of the original theory. Wimsatt (1976a) offers a more ambitious modification. He rejects the assumption that scientific theories are sets of law-like statements and that explanations are arguments in which the phenomena to-be-explained are derived from laws. Instead of relying on these assumptions, Wimsatt uses Salmon's account of explanation (Salmon 1971) to examine claims that molecular genetics offered reductive explanations. Kitcher (1984) also rejects the account of theorizing underlying Nagel's concept of reduction. He constructs a new concept of reductive explanation based on his own idea of what effectively constitutes a scientific theory and his unificationist account of scientific explanation (1989). Likewise, Sarkar (1998) rejects the account of theories and explanation presupposed in Nagel's concept of reduction. In fact, he explicitly avoids relying on any particular account of scientific theories or theoretical explanation. Instead, he assumes that reductive explanations are explanations without specifying what an explanation is, and then seeks to identify the features that set reductive explanations apart from other explanations.

Wimsatt, Kitcher, and Sarkar seek to replace Nagel's conception of reduction with a conception that does not assume that scientific explanation involves subsumption under universal laws. Weber (2005), however, seeks to replace Nagel's conception with one that retains the idea that reductive explanation involves subsumption under laws of the reducing science. What Weber rejects is the idea that reductionism in biology involves explaining higher-level biological laws. He argues that, with some rare exceptions, biological sciences don't have laws. He contends that reductionism in biology involves explaining biological phenomena directly in terms of physical laws. Hence, he rejects the "layer-cake" conception of reduction implicit in Nagel's account.

The literature about reduction and molecular genetics has influenced philosophers' thinking about reduction in other sciences. For example, Kitcher's concept of reduction, which he uses to explain why molecular genetics cannot reduce classical genetics, has subsequently been employed by Hardcastle (1992) in her examination of the relationship between psychology and neuroscience. On the other side, Sober develops and extends the criticism of Kitcher's gory details objection (section 3.3) by re-examining the arguments of Putnam (1967, 1975) and Fodor (1968, 1975) on multiple-realizability.

Sober (1999) argues that higher-level sciences can describe patterns that are invisible at lower levels, and hence might offer more general explanations. But he insists that description should not be confused with explanation. He maintains that although physics might not be able to describe all the patterns, it can nevertheless explain any singular occurrence that a higher-level science can explain. Higher-level sciences might provide more "general" explanations, but physics provides "deeper" ones. He suggests that which explanation is better is in the eye of the beholder.

The discussion has gone full circle. The multiple-realizability argument being criticized by Sober was based on abstract considerations in the context of philosophy of mind. Philosophers of biology drew on this literature to construct the gory details objection against the idea that molecular genetics is reducing classical genetics. Other philosophers argued that this objection did not stand up to a careful analysis of the concrete situation in genetics. Sober has developed lessons from the discussion about genetics to critique the original multiple-realizability argument and draw general conclusions about reductionism.

Wimsatt's writings on reduction (1976a, 1976b, and 1979) emphasize the fruitfulness of attempting to achieve a reduction, even when a reduction is not achieved. He argues, for instance, that efforts to discover the molecular make-ups of entities identified at higher levels are often fruitful, even when identities between levels cannot be found. In addition, Wimsatt points out that the costs of working out reductive explanations of the many particulars already explained at a higher level are relevant to the question of why there is not a full-scale replacement of higher level explanations with lower level ones. Perhaps the fact that molecular genetics has not replaced classical genetics can be explained on the basis of high costs rather than lack of epistemic merit.

While Schaffner still maintains that molecular genetics can in principle reduce classical genetics, he has conceded that attempts to carry out the reduction would be peripheral to the advance of molecular genetics. One might respond, along the lines of Hull (1977), that the success of molecular genetics seems to be reductive in some important sense. Hence, the failure to illuminate this success in terms of reduction reveals a conceptual deficiency. That is, one might argue that Schaffner's peripherality thesis indicates that his conception of reduction is not the epistemically relevant one because it cannot illuminate the fruitfulness of reductive inquiry in molecular genetics.

In fact, a general shortcoming in the debate about the reduction of classical genetics is that it concerns only a fragment of scientific reasoning. It is based almost exclusively on an analysis of explanatory or theoretical reasoning and largely ignores investigative reasoning. The philosophical literature on the alleged reduction of classical genetics focuses on how geneticists explain or try to explain phenomena, not how they manipulate or investigate phenomena. This is even true of Wimsatt's (1976a) account of heuristics, which stresses heuristics for explanation.

Vance (1996) offers a more thorough shift in attention from theory to investigative practice. He asserts that there is only one contemporary science of genetics and describes how investigative methods of classical genetics are an essential part of the methodology of what is called molecular genetics. He concludes that reductionism fails because contemporary genetics still depends on methods of classical genetics involving breeding experiments. Vance's picture of genetics is compelling. The laboratory methods of classical genetics do indeed persist, even as they are greatly extended, augmented, and often replaced by techniques involving direct intervention on DNA. But Vance's picture does not match the anti-reductionist image of a two-tiered science and the contention that classical genetics will remain aloof from the reductive grasp of molecular biology.

A different image emerges from viewing genetics as an investigative science involving an interplay of methodological and explanatory reasoning (Waters 2004a). This image is not of a two-tiered science, one (classical genetics) aimed at investigating and explaining transmission phenomena and another (molecular genetics) aimed at investigating and explaining developmental phenomena. Instead, there is one science that retains much of the investigative and explanatory reasoning of classical genetics by re-conceptualizing its theoretical basis in molecular terms and by retooling its basic investigative approach by integrating methodologies of classical genetics with physically-based methods of biochemistry and new methods based on recombinant DNA and RNA interference technologies.

A common claim in the philosophical literature about molecular genetics is that genes cannot be conceived at the molecular level. Of course, philosophers do not deny that biologists use the term gene, but many philosophers believe gene is a dummy term, a placeholder for many different concepts. Different responses to gene skepticism illustrate a variety of philosophical aims and approaches. One kind of response is to analyze explanations closely tied to experimental practice (rather than sweeping generalizations of a fundamental theory) in order to determine whether there are uniform patterns of reasoning about genes that could (a) be codified into clear concepts, and/or (b) used to establish the reference of the term. Another kind of response is to propose new gene concepts that will better serve the expressed aims of practicing biologists. A third kind of response is to implement survey analysis, rather than conduct traditional methods of philosophical analysis. A fourth kind of response is to embrace the (allegedly) necessary vagueness of the gene concept(s) and to examine why use of the term gene is so useful.

Gene skeptics claim that there is no coherence to the way gene is used at the molecular level and that this term does not designate a natural kind; rather, gene is allegedly used to pick out many different kinds of units in DNA. DNA consists of coding regions that are transcribed into RNA, different kinds of regulatory regions, and in higher organisms, a number of regions whose functions are less clear and perhaps in some cases non-existent. Skepticism about genes is based in part on the idea that the term is sometimes applied to only parts of a coding region, sometimes to an entire coding region, sometimes to parts of a coding region and to regions that regulate that coding region, and sometimes to an entire coding region and regulatory regions affecting or potentially affecting the transcription of the coding region. Skeptics (e.g., Burian 1986, Portin 1993, and Kitcher 1992) have concluded, as Kitcher succinctly puts it, that "a gene is whatever a competent biologist chooses to call a gene" (Kitcher 1992, p. 131).

Biological textbooks contain definitions of gene and it is instructive to consider one in order to show that the conceptual situation is indeed unsettling. The most prevalent contemporary definition is that a gene is the fundamental unit that codes for a polypeptide. One problem with this definition is that it excludes many segments that are typically referred to as genes. Some DNA segments code for functional RNA molecules that are never translated into polypeptides. Such RNA molecules include transfer RNA, ribosomal RNA, and RNA molecules that play regulatory and catalytic roles. Hence, this definition is too narrow.

Another problem with this common definition is that it is based on an overly simplistic account of DNA expression. According to this simple account, a gene is a sequence of nucleotides in DNA that is transcribed into a sequence of nucleotides making up a messenger RNA molecule that is in turn translated into a sequence of amino acids that forms a polypeptide. (Biologists talk as if genes produce the polypeptide molecules or provide the information for the polypeptide.) The real situation of DNA expression, however, is often far more complex. For example, in plants and animals, many mRNA molecules are processed before they are translated into polypeptides. In these cases, portions of the RNA molecule, called introns, are snipped out and the remaining segments, called exons, are spliced together before the RNA molecule leaves the cellular nucleus. Sometimes biologists call the entire DNA region, that is, the region that corresponds to both introns and exons, the gene. Other times, they call only the portions of the DNA segment corresponding to the exons the gene. (This means that some DNA segments that geneticists call genes are not continuous segments of DNA; they are collections of discontinuous exons. Geneticists call these split genes.) Further complications arise because the splicing of exons in some cases is executed differentially in different tissue types and at different developmental stages. (This means that there are overlapping genes.) The problem with the common definition that genes are DNA segments that code for polypeptides is that the notion of coding for a polypeptide is ambiguous when it comes to actual complications of DNA expression. Gene skeptics argue that it is hopelessly ambiguous (Burian 1986, Fogle 1990 and 2000, Kitcher 1992, and Portin 1993).
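The ambiguity can be made concrete with a small illustrative sketch in Python. The sequences, exon/intron boundaries, and splicing patterns below are invented, not taken from any real gene; the point is only that one transcribed region can yield several different products, so "the DNA that codes for the polypeptide" can be drawn in more than one way.

    # Hypothetical example: one transcribed DNA region, several possible products.
    # Sequences and exon/intron boundaries are invented for illustration only.
    segments = [
        ("exon",   "ATGGCCTTT"),
        ("intron", "GTAAGTCCCAG"),
        ("exon",   "GGGAAA"),
        ("intron", "GTAAGTTTTAG"),
        ("exon",   "TGTTAA"),
    ]

    def primary_transcript(segs):
        # The pre-mRNA corresponds to the whole region: exons plus introns.
        return "".join(seq for _, seq in segs)

    def mature_rna(segs, keep_exons):
        # Splicing removes introns; alternative splicing can also skip exons,
        # so different tissues may produce different products from one region.
        exons = [seq for kind, seq in segs if kind == "exon"]
        return "".join(exons[i] for i in keep_exons)

    print(primary_transcript(segments))     # gene for the pre-mRNA: exons and introns
    print(mature_rna(segments, [0, 1, 2]))  # gene for one mature RNA: exons only
    print(mature_rna(segments, [0, 2]))     # an alternatively spliced, overlapping product

Depending on which product one asks about, the "gene" picks out the whole region, the exons alone, or only some of the exons, which is just the ambiguity the skeptics press.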

Clearly, this definition, which is the most common and prominent textbook definition, is too narrow to be applied to the range of segments that geneticists commonly call genes and too ambiguous to provide a single, precise partition of DNA into separate genes. Textbooks include many definitions of the gene. In fact, philosophers have often been frustrated by the tendency of biologists to define and use the term gene in a number of contradictory ways in one and the same textbook. After subjecting the alternative definitions to philosophical scrutiny, gene skeptics have concluded that the problem isn't simply a lack of analytical rigor. The problem is that there simply is no such thing as a gene at the molecular level. That is, there is no single, uniform, and unambiguous way to divide a DNA molecule into different genes. Gene skeptics have often argued that biologists should couch their science in terms of DNA segments such as exons, introns, promoter regions, and so on, and dispense with the term gene altogether (most forcefully argued by Fogle 2000).

It has been argued, against gene skepticism, that biologists have a coherent, precise, and uniform way to conceive of genes at the molecular level. The analysis underlying this argument begins by distinguishing between two different ways contemporary geneticists think about genes. Classical geneticists often conceived of genes as the functional units in chromosomes, differences in which cause differences in phenotypes. Today, in contexts where genes are identified by way of observed phenotypic differences, geneticists still conceive of genes in this classical way, as the functional units in DNA whose differences are causing the observed differences in phenotypes. This way of conceiving of genes is called the classical gene concept (Waters 1994). But contemporary geneticists also think about genes in a different way by invoking a molecular-level concept. The molecular gene concept stems from the idea that genes are units in DNA that function to determine linear sequences in molecules synthesized via DNA expression. According to this analysis, both concepts are at work in contemporary genetics. Moss (2003) also distinguishes between two contemporary gene concepts, which he calls genes-P (preformationist) and genes-D (developmental). He argues that conflation of these concepts leads to erroneous thinking in genetics.

Much confusion concerning the classical way to think about genes is due to the fact that geneticists have sometimes talked as if classically conceived genes are for gross phenotypic characters (phenotypes) or as if individual genes produce phenotypes. This talk was very misleading on the part of classical geneticists and continues to be misleading in the context of contemporary genetics. The production of a gross phenotypic character, such as purple eye-color, involves all sorts of genetic and extra-genetic factors including various cellular enzymes and structures, tissue arrangements, and environmental factors. In addition, it is not clear what, if any, gross phenotypic level functions can be attributed to individual genes. For example, it is no clearer today than it was in Morgan's day that the function of the purple gene discussed in section 2.1 is to contribute to the production of eye color. Mutations in this gene affect a number of gross phenotypic level traits. Legitimate explanatory reasoning invoking the classical gene concept does not depend on any baggage concerning what genes are for or what function a gene might have in development. What the explanatory reasoning depends on is the difference principle, that is, the principle that some difference in the gene causes certain phenotypic differences in particular genetic and environmental contexts (section 2.1). Many gene-based explanations in contemporary biology are best understood in terms of the classical gene concept and the difference principle.

Perhaps the reason gene skeptics overlooked the molecular gene concept is that they were searching for the wrong kind of concept. The concept is not a purely physicochemical concept, and it does not provide a single partition of DNA into separate genes. Instead, it is a functional concept that provides a uniform way to think about genes that can be applied to pick out different DNA segments in different investigative or explanatory contexts. The basic molecular concept, according to this analysis, is the concept of a gene for a linear sequence in a product of DNA expression:

A gene g for linear sequence l in product p synthesized in cellular context c is a potentially replicating nucleotide sequence, n, usually contained in DNA, that determines the linear sequence l in product p at some stage of DNA expression (Waters 2000)

The concept of the molecular gene can be presented as a 4-tuple: <g, l, p, c>. This analysis shows how geneticists can consistently include introns as part of a gene in one epistemic context and not in another. If the context involves identifying a gene for a primary, preprocessed RNA molecule, then the gene includes the introns as well as the exons. If the context involves identifying the gene for the resulting polypeptide, then the gene includes only the exons. Hence, in the case of DNA expression that eventually leads to the synthesis of a given polypeptide, geneticists might talk as if the gene included the introns (in which case they would be referring to the gene for the primary, preprocessed RNA) and yet also talk as if the gene excluded the introns (in which case they would be referring to the gene for the mature RNA or polypeptide). Application of the molecular gene concept is not ambiguous; in fact, it is remarkably precise provided one specifies the values for the variables in the expression gene for linear sequence l in product p synthesized in cellular context c.
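A minimal way to regiment the four-place schema is sketched below; the class and all the example values are hypothetical placeholders introduced only for illustration, not an established formalization from the literature.

    from dataclasses import dataclass

    # Toy rendering of the schema: gene g for linear sequence l in product p
    # synthesized in cellular context c. All field values below are invented.
    @dataclass(frozen=True)
    class MolecularGene:
        g: str  # the DNA segment(s) picked out as the gene (split or continuous)
        l: str  # the linear sequence the segment determines
        p: str  # the product: pre-mRNA, mature mRNA, polypeptide, tRNA, ...
        c: str  # the cellular context: tissue type, developmental stage, ...

    # The same chromosomal region answers to different genes for different products:
    gene_for_pre_mrna = MolecularGene(
        g="exon1 + intron1 + exon2", l="unspliced transcript sequence",
        p="pre-mRNA", c="larval salivary gland")
    gene_for_polypeptide = MolecularGene(
        g="exon1 + exon2", l="amino acid sequence of the product",
        p="polypeptide", c="larval salivary gland")

Fixing the product p and the context c fixes which segments count as the gene, which is why including or excluding introns is context-relative rather than ambiguous.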

Gene skeptics have suggested that there is a lack of coherence in gene talk because biologists often talk as if genes code for polypeptides, but then turn around and talk about genes for RNA molecules that are not translated into polypeptides (including genes for transfer RNA [tRNA], ribosomal RNA [rRNA], and interference RNA [iRNA]). This account shows that conceiving of genes for rRNA involves the same idea as conceiving of genes for polypeptides. In both cases, the gene is the segment of DNA, split or not, that determines the linear sequence in the molecule of interest.

An advantage of this analysis is that it emphasizes the limitations of gene-centered explanations while clarifying the distinctive causal role genes play in the syntheses of RNA and polypeptides: genes determine the linear sequences of primary RNA transcripts and often play a distinctive, though not exclusive, role in determining the sequence of amino acids in polypeptides.

Weber (2005) examines the evolution of the gene concept by tracing changes in the reference of the term gene through the history of genetics. The reference or extension of a term is the set of objects to which it refers. Weber adopts a mixed theory of reference. According to mixed theories, the reference of a term is determined by how the relevant linguistic community causally interacts with potential referents as well as by how they describe potential referents. This theory leads Weber to pay close attention, not just to how geneticists theorized about genes or used the concept to explain phenomena, but also to how they conducted their laboratory investigations. Following Kitcher (1978, 1982), he examines ways in which modes of reference changed over time.

Weber identifies six different gene concepts, beginning with Darwin's pangene concept (1868) and ending with the contemporary concept of molecular genetics. He distinguishes the contemporary molecular concept from the classical (or neoclassical) one on the basis of how geneticists described their functional role (RNA/protein coding versus general function unit), their material basis (RNA/DNA versus chromosome), and their structure (discontinuous linear -- with introns and exons versus continuous linear) as well as on the basis of the criteria experimentalists used to identify genes (by gene product versus complementation test).

Weber examines how the investigation of several particular Drosophila genes changed as the science of genetics developed. His study shows that the methods of molecular genetics provided new ways to identify genes that were first identified by classical techniques. The reference of the term changed, not simply as a result of theoretical developments, but also as a result of the implementation of new methods to identify genes. He concludes that unlike concepts of physical science that have been analyzed by philosophers, the gene concept has a nonessentialistic character that allows biologists to lay down different natural classifications, depending on the investigative methods available as well as on theoretical interests (Weber 2005, p. 228). Weber calls this feature floating references.

Neumann-Held (2001) proposes a new way to think about genes in the context of developmental genetics. She says that in this context, interest in genes is largely focused on the regulated expression of polypeptides. She notes that textbook definitions of gene often acknowledge this interest and quotes the following definition from a scientific textbook:

A combination of DNA segments that together constitute an expressible unit, expression leading to the formation of one or more specific functional gene products that may lead to either RNA molecules or polypeptides. The segments of a gene include (1) the transcribed unit and any regulatory segments included in the transcription unit, and (2) the regulatory sequences that flank the transcription unit and are required for specific expression. (Singer and Berg 1991, p. 41).

This definition emphasizes that regulatory sequences as well as coding regions are required for specific expression. Only a small proportion of coding sequences are transcribed in a given cell at a particular time, and whether a particular sequence is transcribed depends in part on regulatory regions external to the coding region.

Neumann-Held points out that if the aim is to specify what is necessary for regulated synthesis of polypeptides, then one must include even more than what is located in the DNA. This follows from the fact that processes such as differential splicing (and RNA editing processes such as methylation that I have not discussed in this article) involve entities outside of DNA such as splicing agents. She suggests that it is appropriate, at least in the context of developmental genetics, to reconceive genes as processes, and she proposes a process molecular gene concept on which a gene is the whole molecular process, rather than a DNA segment alone, that leads to the production of a particular polypeptide.

Neumann-Held argues that this conception provides the clearest basis for understanding how DNA sequences are used in the processes of polypeptide production. She points out that the process molecular gene concept allows for the inclusion of coding sequences in DNA, regulatory sequences in DNA and also entities not located in DNA, all of which are causally involved in the production of polypeptides. Neumann-Held's concept excludes transcription processes and coding regions of DNA that lead to functional RNA molecules that are not translated into polypeptides. Hence, according to her account, there are not process molecular genes for tRNA (transfer RNA), rRNA (ribosomal RNA) or snRNA (small nuclear RNA). This feature of Neumann-Held's definition does not match the textbook definition that she quotes to motivate her account (presented above). Furthermore, the exclusion of these coding regions does not track with recent discoveries about the important functions played by non-coding RNA molecules such as snRNAs. Her definition could easily be revised to accommodate these regions and processes. In any case, Neumann-Held believes using this concept in developmental genetics, rather than DNA-centered gene concepts, will help avoid the view that genes are the most important explanatory factors in biology because of their unique causal powers (Neumann-Held 2001, p. 80).

Stotz and Griffiths (2004) believe that the variety of gene concepts used throughout the biological sciences calls for a more systematic and explicitly empirical approach. They point out that individual philosophers cannot grasp all the intricacies of the different contexts across the broad range of biological sciences in which gene concepts are employed. They have embarked upon an ambitious project to survey practicing scientists in an attempt to help identify how scientists actually conceive of genes. Their interest extends far beyond understanding molecular genetics. They hope to learn about the concepts employed in many different areas and contexts of biology by spotting differences in the way biologists from different areas (and biologists in different age groups, sexes, etc.) answer sophisticated questionnaires.

An initial motivation behind Stotz and Griffiths' project was to test philosophical accounts of the gene concept. As Griffiths asked, if their survey-based study revealed that scientists don't actually think of genes in the way set out by a philosophical account, then what value could the account possibly have? There are, however, a number of daunting, practical difficulties with using a questionnaire to learn how a person is thinking, especially if the person's thinking involves the use of multiple concepts and/or is sometimes or somewhat muddled (Waters 2004b). It is also difficult to survey appropriate and representative samples of scientists. Griffiths and Stotz are aware of these difficulties and have refined their project through successive surveys.

Even if Stotz and Griffiths' survey succeeds in identifying how scientists in different areas of biology actually think about genes in different contexts, it does not follow that their findings would provide an appropriate test of the classical, molecular, or process molecular gene concepts. The aim of the proponents of these concepts is to re-interpret the knowledge of contemporary genetics by replacing sloppy thinking based on unclear concepts with more rigorous thinking in terms of precise concepts. Showing that scientists' actual thinking does not align with the precise application of these concepts would not refute the analysis supporting the classical gene or molecular gene concepts and it would not undermine the argument motivating the proposal for the new process molecular gene concept.

Although it appears that survey-based findings would not provide an appropriate test of philosophical analyses of gene concepts, they might provide, as Stotz and Griffiths claim, important information relevant to those conducting philosophical research on gene concepts. For example, if such surveys find significant differences in the way evolutionary biologists and developmental geneticists answer questions about what counts as a gene, philosophers might examine whether the contexts in which these biologists practice call for different gene concepts. Survey results could provide a useful heuristic for conducting concept analyses.

Gene skeptics such as Burian, Portin, and Fogle claim that the term gene has outlived its usefulness. They argue that the term is both too vague and too restrictive. It is too vague, they believe, because it does not provide a unique parsing of the genome. Borders between genes are overlapping and allegedly ambiguous. It is not clear, they argue, whether genes include or exclude introns, regulatory regions, and so forth. The term is allegedly too restrictive because it obscures the diversity of molecular elements playing different roles in the expression and regulation of DNA. In addition, any attempt to resolve the ambiguities, these skeptics argue, will make the term even more restrictive.

Keller's account of the history of twentieth century genetics seems to reinforce gene skepticism. For example, she argues that the question about what genes are for has become increasingly difficult to answer (Keller 2000). By the end of the twentieth century, she says, biological findings had revealed a complexity of developmental dynamics that makes it impossible to conceive of genes as distinct causal agents in development. Keller emphasizes that words have power and devotes a good deal of attention to the way loose gene talk has affected biological research by reinforcing the assumption that the gene is the core explanatory concept of biological structure and function (Keller 2000, p. 9), an assumption with which she strongly disagrees. Yet Keller does not endorse the view of gene skeptics who argue that biology would be improved if biologists stopped talking about genes and restricted themselves to terms designating molecular units such as nucleotide, codon, coding region, promoter region, and so on. Keller maintains that the term gene continues to have obvious and undeniable uses.

One use of the term gene, according to Keller, is that its vagueness, the very feature that troubles philosophers, makes it possible for biologists to be flexible, to communicate across disciplinary boundaries, and to think in new ways:

The meaning of an experimental effect depends on its relation to other effects, and the use of language too closely tied to particular experimental practices would, by its very specificity, render communication across different experimental contexts effectively impossible. (Keller 2000, p. 140).

Keller identifies a second reason that gene talk is useful. The term gene applies to entities that can be experimentally manipulated to produce definite and reproducible effects (though given Keller's criticism of gene concepts, it is unclear to what entities she thinks the term refers). She suggests that genes are short-term causes. She points out, however, that this does not mean genes are long-term causes or that genes are the fundamental causal agents of development. Rather, what it means (and Keller thinks this is an important reason why gene talk will continue) is that genes can be used as handles to manipulate biological processes (also see Waters 2000). And for these two reasons, Keller concludes, gene talk will and should continue to play an important role in biological discourse.

The science called molecular genetics is associated with a fundamental theory according to which genes and DNA direct all basic life processes by providing the information specifying the development and functioning of organisms. The genome is said to specify the developmental program, master plan, or blueprint for development while other elements provide the materials (e.g., Bonner 1965, Jacob and Monod 1961, Mayr 1961, Maynard Smith 2000, Rosenberg 2006). Although the idea that the chromosomes contain a code-script for the development and functioning of an organism was famously expressed by Schrödinger (1944) before the era of molecular genetics, today it is often expressed in explicitly molecular terms. The information of development and function, which is passed down from one generation to the next, is allegedly encoded in the nucleotide sequences comprising genes and DNA. This so-called genetic information is first transcribed into RNA, then translated into proteins, and finally expressed in the development and functioning of organisms.

The concept of genetic information has a prominent place in the history of molecular genetics, beginning with Watson and Crick's observation that since any sequence of nucleotide base pairs could fit into the structure of any DNA molecule, "in a long molecule many different permutations are possible, and it therefore seems likely that the precise sequence of the bases is the code which carries the genetic information" (Watson and Crick 1953). As Downes (2005) recounts, the geneticists Jacob and Monod reinforced the use of information language, as did those who sought to crack the genetic code. By the early 1960s, the language of information was well-entrenched in the field of molecular genetics.

Philosophers have generally criticized the theory that genes and DNA provide all the information and have challenged the use of sweeping metaphors such as master plan and program which suggest that genes and DNA contain all developmental information. Critics have taken a number of different positions. Most seem to accept the notion that biological systems or processes contain information, but they deny the idea that DNA has an exceptional role in providing information. Some are content to argue that under various existing theories of information, such as causal theories or standard teleosemantic theories, information is not restricted to DNA. But others contend that understanding what genes do requires a new conception of biological information. One approach is to retreat to a narrow conception of coding specifically aimed at clarifying the sense in which DNA provides information for the synthesis of polypeptides, but not for higher-level traits (e.g. Godfrey-Smith 2000). Another approach is to construct a new, broad conception of biological information and use this conception to show that the informational role of genes is not exclusive (Jablonka 2002). A different approach is to abandon information talk altogether and explain the investigative and explanatory reasoning associated with genetics and molecular biology in purely causal terms.

The fundamental theory that says the role of DNA is to provide the information for development has been criticized on many grounds. Keller (2000) points out that the idea flounders on an ambiguity. Does DNA provide the program or the data? Others have argued that information for development flows from a vast number of resources, not just genetic resources. Oyama (1985) suggests that it is a mistake to think information is contained within static entities such as DNA. She believes that information exists in life-cycles. Other criticisms challenge applications of particular conceptions or theories of information, including applications of the causal and teleosemantic conceptions.

Griffiths (2001) distinguishes between two ways to conceive of information, causal and intentional, and then argues that under either conception, information is not restricted to DNA. Causal theories of information, based on Dretske (1981), are related to Shannon's mathematical theory of information (1948). Dretske distinguishes between a source variable and background or channel conditions. On Griffiths' (2001) reading of Dretske's theory, a source variable, X, carries information about variable Y if the value of X is correlated with the value of Y. Griffiths describes the causal interpretation of this idea as follows:

There is a channel between two systems when the state of one is systematically causally related to the other, so that the state of the sender can be discovered by observing the state of the receiver. The causal information is simply the state of affairs with which it reliably correlates at the other end of the channel. Thus, smoke carries information about fire and disease phenotypes carry information about disease genes. (Griffiths 2001, p. 397)

To capture the conventional ideas about genetic information under this theory, genes are treated as source variables and environments are treated as channel conditions. It follows that genes carry information about phenotypes because phenotypic values reliably correlate with genotypic values. But as Griffiths points out, nothing stops one from treating environmental conditions as source variables and genes as channel conditions. Under this application of the causal theory, environmental conditions carry information about phenotypes. Griffiths and others have concluded that the idea that genes provide the information while other causal factors merely provide material cannot be sustained under causal theories of information.
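The parity point can be illustrated with a small computation. The joint distribution below is fabricated, and "mutual information" here is just the standard Shannon quantity; the sketch shows only that the correlational formalism treats genotype and environment symmetrically as sources, which is the substance of Griffiths' objection.

    import math
    from collections import Counter

    # A fabricated population of (genotype, environment, phenotype) triples in
    # which the phenotype depends on both factors.
    population = [
        ("G1", "warm", "tall"), ("G1", "cold", "short"),
        ("G2", "warm", "short"), ("G2", "cold", "short"),
    ] * 25

    def mutual_information(pairs):
        # I(X;Y) = sum over observed (x, y) of p(x,y) * log2(p(x,y) / (p(x) * p(y)))
        n = len(pairs)
        joint = Counter(pairs)
        px = Counter(x for x, _ in pairs)
        py = Counter(y for _, y in pairs)
        return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
                   for (x, y), c in joint.items())

    print(mutual_information([(g, ph) for g, _, ph in population]))  # genotype and phenotype
    print(mutual_information([(e, ph) for _, e, ph in population]))  # environment and phenotype
    # Both values come out positive: on the correlational notion, the environment
    # "carries information" about the phenotype in the same sense the genes do.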

Griffiths argues that the idea that genes and DNA provide all the information fares no better under intentional theories of information. Intentional theories are aimed at capturing the sense of semantic information that human thoughts and utterances allegedly contain (Godfrey-Smith 1999). The version of intentional theory favored by philosophers of biology is teleosemantic. According to teleosemantic theories, a signal represents whatever it was selected to represent (in the process of evolution). Under this idea, one might say that DNA contains information about development because DNA's effects on development were selected for in the process of evolution. But as Griffiths and Gray (1997) point out, this idea applies to a wide range of entities involved in development, not just DNA.

Weber (2005) challenges Maynard Smith's (2000) teleosemantic account. Maynard Smith draws an analogy between information in a programmed computer and information in DNA. Computers execute algorithms programmed by human beings and organisms express DNA that has been programmed by natural selection. The information programmed in a computer is intentional in that one could determine the intentions of the human programmer by analyzing the algorithm. Maynard Smith argues that the information programmed in DNA by natural selection is intentional in the same sense. Weber offers two arguments against this view. First, he points out that DNA might contain nucleotide sequences that have arisen from chance mutations that happen to be beneficial. If natural selection has not yet operated on them, then Maynard Smith's teleosemantic theory implies they do not contain information. Yet, causally, such a nucleotide sequence would influence development in the same way as sequences that have been selected for. Weber's second criticism of Maynard Smith's account stems from a closer examination of the intentionality associated with computer programs. Weber claims that intentional states associated with computers are actually states of the human engineers who write the programs, not states of the computers themselves: "A computer program is a string of symbols that acquires a meaning only in the context of a community of engineers who understand what the program does and what it can be used for" (Weber 2005, p. 252). The analogue to human programmers in Maynard Smith's account is natural selection. But natural selection does not have intentional states. Hence, Weber concludes, the teleosemantic approach fails to save the idea that DNA contains information in the intentional sense.

It is tempting to think that information talk is impotent in this context and indeed, some philosophers have argued that such talk is misleading and should be abandoned (e.g., Sarkar 1996, Weber 2005, and possibly Rosenberg 2006). But others have taken the view that more careful thinking about concepts of information could lead to important insights (see next section).

Jablonka's aim is to construct a general definition of information that recognizes different types of information associated with different ways of acquiring, replicating, and transmitting information through space and time (Jablonka 2002). One of her concerns is that discussions about the meaning (or non-meaning) of information talk in biology are biased by the assumption that the genetic system should serve as the prototype for thinking about biological information. She believes that a general definition of information, one designed to capture the senses of information exemplified in environmental cues, man-made instructions, and evolved biological signals, as well as the sense of information in hereditary material, will lead to more useful generalizations and perspectives.

Jablonka says that the sense of information in all these situations involves a source, a receiver system (organism or organism-designed system), and a special type of reaction of the receiver to the source. She conceives of the receiver's reaction as a complex, regulated chain of events leading to a response. Variations in the form of the source lead to variations in response. That is, the nature of the reaction depends on the way the source is organized. In addition, she points out, reactions in these situations are beneficial for the receiver over an appropriate period of time (in the case of organisms, over evolutionary time). Jablonka stresses that the benefit, or function, in the case of organisms should be understood in terms of evolution, with the focus on the evolution of the reaction system, not on the evolution of the source or the evolution of the final outcome of the reaction.

Jablonka's concept of information is intentional, and is related to the teleosemantic conceptions discussed above. According to standard teleosemantic conceptions, signals have information because the production of the signal was selected for in evolutionary history. According to Jablonka's view, however, an entity has information, not because it was selected for, but because the receiver's response to it was selected for. Whether something counts as information depends on whether entities respond to it in a (proper) functional way.

Jablonka summarizes her general account in the following definition:

A source (an entity or process) can be said to have information when a receiver system reacts to this source in a special way. The reaction of the receiver to the source has to be such that the reaction can actually or potentially change the state of the receiver in a (usually) functional manner. Moreover, there must be a consistent relation between variations in the form of the source and the corresponding changes in the receiver. (Jablonka 2002, p. 582)

Jablonka points out that according to this definition, genes do not have a theoretically privileged status; they are one among many sources of information. In addition, she insists the focus should be on the interpretive system of the receiver of the information, not on the source.

Jablonka argues that the information in DNA has little in common with the information in an alarm call, a cloudy sky, or a chemical signal in a bacterial colony. In the latter cases, the receivers' reactions (or responses) to the source are adaptive for the receiver: "an alarm warns the bird there are predators around; the cloudy sky alerts the ape to the coming storm; the chemical alerts the bacteria to imminent starvation" (p. 585). But in the case of DNA, the receiver does not seem to react in a way that adapts the cell to anything in particular. Rather, DNA is simply read by the cell, so it is not information in the same sense: DNA is information about the cell or the organism, rather than for the cell or the organism (Jablonka 2002, p. 585). Nevertheless, Jablonka claims that her concept applies to genes even if it doesn't apply to DNA in general:

However, if instead of thinking about DNA in general we think about a particular locus with a particular allele, it is not difficult to think about the functional role of this particular allele in a particular set of environmental circumstances. Hence we can say for all types of information, including alarm calls and pieces of DNA, a source S (allele, alarm call, cloudy sky, etc.) carries information about a state E for a receiver R (an organism or an organism-designed product), if the receiver has an interpretation system that reacts to S in a way that usually ends up adapting R (or its designer, if R is humanly designed) to E. (Jablonka 2002, p. 585, my stress)

Given that Jablonka says that DNA in general is not information in the same sense as the alarm call and cloudy sky (and that this is the sense specified in the statement above), it is puzzling why she claims that the statement quoted above applies to all types of information. Furthermore, her claim that the statement above applies to particular alleles (and apparently not to DNA in general) is not straightforward. Jablonka's original account provides an illuminating way to think about information in biological processes such as cellular signaling processes. But her account does not substantiate the idea that genes and DNA contain information or help elucidate the role of genes and DNA.

Another approach to elucidating the role of genes and DNA is to replace loose information talk with concrete causal descriptions grounded in an explicit understanding of causation (Waters 2000, and forthcoming). This approach is premised on the idea that the basic theory and laboratory methods associated with molecular genetics can be understood in purely causal terms. The basic theory and methodology concerns the syntheses of DNA, RNA, and polypeptide molecules, not the alleged role of DNA in "programming" or "directing" development (section 2.3). The causal role of molecular genes in the syntheses of these molecules can be understood in terms of causally specific actual difference making. This involves two causal concepts, actual difference making and causal specificity. These concepts can be explicated in terms of the manipulability account of causation.

The concept of actual difference making applies in the context of an actual population containing entities that actually differ with respect to some property. In such a population, there might be many potential difference makers. That is, there may be many factors that could be manipulated to alter the relevant property of the entities in the population. But the actual difference makers are (roughly speaking) the potential difference makers that actually differ, and whose actual differences bring about the actual differences in the property in the population.

The concept of actual difference making can be illustrated with the difference principle of classical genetics (section 2.1). According to this principle, genes can be difference makers with respect to phenotypic differences in particular genetic and environmental contexts. So, it identifies potential difference makers. When this principle is used to explain an actual hereditary pattern, it is applied to genes that actually differed in the population exhibiting the pattern (often an experimental population). In such cases, an actual difference in the gene among the organisms in the population caused the actual phenotypic differences in that population (see Gifford 1990). That is, the gene was the actual difference maker, not just a potential difference maker (in that population).

The concept of actual difference making can be applied to molecular genetics as follows. In an actual cell, where the unprocessed RNA molecules in a population differ with respect to linear sequence, the question arises: what causes these differences? The answer is that differences in genes in the cell cause the actual differences in the linear sequences of the unprocessed RNA molecules, and also in populations of processed RNA molecules and polypeptides. But genes are not the only actual difference makers with respect to the linear sequences of these molecules. And this brings us to the second causal concept, causal specificity.

Causal specificity has been analyzed by Lewis (2000). The basic idea is that a causal relationship between two variables is specific when many different values of the causal variable bring about correspondingly different values of the resultant variable (the causal relationship instantiates something like a mathematical function). An on/off switch is not specific in this technical sense because the causal variable has only two values (on and off). A dimmer switch is causally specific in this sense. Genes can be specific difference makers because many specific differences in the sequences of nucleotides in DNA result in specific differences in RNA molecules. This is not the case with many other actual difference makers, such as polymerases, which are more like on/off switches (with respect to differences in linear sequences). Biologists have discovered, however, the existence of other actual difference makers, besides genes and DNA, that are causally specific with respect to the linear sequences of processed RNA and polypeptides, to some degree at least. For example, in some cells splicing complexes called spliceosomes actually differ in multiple ways that result in multiple, specific differences in the linear sequences of processed RNA molecules.
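The dimmer-switch analogy can be sketched schematically. The base-pairing rule below is standard; treating transcription as a simple string operation and the polymerase as a single on/off variable are simplifications made only for the illustration, and the template sequences are invented.

    # Complementary base pairing used in transcription (DNA template to RNA).
    PAIR = {"A": "U", "T": "A", "G": "C", "C": "G"}

    def transcribe(template, polymerase_present):
        # RNA output as a function of two causal variables.
        if not polymerase_present:
            return ""  # switch-like cause: only two outcomes, product or no product
        return "".join(PAIR[base] for base in reversed(template))

    # Varying the template (very many possible values) yields specifically different RNAs:
    print(transcribe("TACGGT", True))   # ACCGUA
    print(transcribe("TACGAT", True))   # AUCGUA
    print(transcribe("TTTGGT", True))   # ACCAAA

    # Varying the polymerase variable only toggles the output on or off:
    print(transcribe("TACGGT", False))  # (no product)

The template is a causally specific difference maker, since fine-grained changes in its value map onto fine-grained changes in the product, whereas the polymerase variable, in this toy rendering, behaves like the on/off switch.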

Originally posted here:
Molecular Genetics (Stanford Encyclopedia of Philosophy)

Posted in Molecular Genetics

Molecular evolution – Wikipedia

Posted: October 23, 2016 at 11:42 pm

Molecular evolution is the process of change in the sequence composition of cellular molecules such as DNA, RNA, and proteins across generations. The field of molecular evolution uses principles of evolutionary biology and population genetics to explain patterns in these changes. Major topics in molecular evolution concern the rates and impacts of single nucleotide changes, neutral evolution vs. natural selection, origins of new genes, the genetic nature of complex traits, the genetic basis of speciation, evolution of development, and ways that evolutionary forces influence genomic and phenotypic changes.

The content and structure of a genome are the product of the molecular and population-genetic forces which act upon that genome. Novel genetic variants arise through mutation and spread and are maintained in populations due to genetic drift or natural selection.

Mutations are permanent, transmissible changes to the genetic material (DNA or RNA) of a cell or virus. Mutations result from errors in DNA replication during cell division, from exposure to radiation, chemicals, and other environmental stressors, and from the activity of viruses and transposable elements. Most mutations are single nucleotide polymorphisms which modify single bases of the DNA sequence, resulting in point mutations. Other types of mutations modify larger segments of DNA and can cause duplications, insertions, deletions, inversions, and translocations.

Most organisms display a strong bias in the types of mutations that occur, with a strong influence on GC-content. Transitions (A ↔ G or C ↔ T) are more common than transversions (purine (adenine or guanine) ↔ pyrimidine (cytosine or thymine, or in RNA, uracil))[1] and are less likely to alter amino acid sequences of proteins.
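
The transition/transversion distinction lends itself to a few lines of code. Below is a minimal sketch (Python); the function name `classify_substitution` and the example bases are illustrative, not from the article:

```python
# Minimal sketch: classify a single-nucleotide substitution as a
# transition (purine<->purine or pyrimidine<->pyrimidine) or a
# transversion (purine<->pyrimidine), as described above.
PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def classify_substitution(ref: str, alt: str) -> str:
    ref, alt = ref.upper(), alt.upper()
    if ref == alt:
        return "no change"
    same_class = ({ref, alt} <= PURINES) or ({ref, alt} <= PYRIMIDINES)
    return "transition" if same_class else "transversion"

print(classify_substitution("A", "G"))  # transition
print(classify_substitution("A", "T"))  # transversion
```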

Mutations are stochastic and typically occur randomly across genes. Mutation rates at single nucleotide sites are very low for most organisms, roughly 10⁻⁹ to 10⁻⁸ per site per generation, though some viruses have higher mutation rates on the order of 10⁻⁶ per site per generation. Among these mutations, some will be neutral or beneficial and will remain in the genome unless lost via genetic drift, and others will be detrimental and will be eliminated from the genome by natural selection.

Because mutations are extremely rare, they accumulate very slowly across generations. While the number of mutations which appear in any single generation may vary, over very long time periods they will appear to accumulate at a regular pace. Using the mutation rate per generation and the number of nucleotide differences between two sequences, divergence times can be estimated effectively via the molecular clock.
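
As a rough illustration of the molecular-clock idea described above, the sketch below estimates a divergence time in generations from a per-site mutation rate and a count of observed differences; the function name and the numbers plugged in are hypothetical:

```python
# Minimal sketch of a molecular-clock estimate: divergence time is
# roughly (differences per site) / (2 * rate per site per generation),
# because changes accumulate independently along both lineages.
def divergence_time_generations(num_differences: int, num_sites: int,
                                rate_per_site_per_gen: float) -> float:
    d = num_differences / num_sites          # observed differences per site
    return d / (2.0 * rate_per_site_per_gen)

# Hypothetical numbers: 30 differences over 10,000 compared sites,
# with a rate of 1e-8 per site per generation.
print(divergence_time_generations(30, 10_000, 1e-8))  # ~150,000 generations
```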

Recombination is a process that results in genetic exchange between chromosomes or chromosomal regions. Recombination counteracts physical linkage between adjacent genes, thereby reducing genetic hitchhiking. The resulting independent inheritance of genes results in more efficient selection, meaning that regions with higher recombination will harbor fewer detrimental mutations, more selectively favored variants, and fewer errors in replication and repair. Recombination can also generate particular types of mutations if chromosomes are misaligned.

Gene conversion is a type of recombination that is the product of DNA repair, in which nucleotide damage is corrected using a homologous genomic region as a template. Damaged bases are first excised, the damaged strand is then aligned with an undamaged homolog, and DNA synthesis repairs the excised region using the undamaged strand as a guide. Gene conversion is often responsible for homogenizing the sequences of duplicate genes over long time periods, reducing nucleotide divergence.

Genetic drift is the change of allele frequencies from one generation to the next due to the stochastic effects of random sampling in finite populations. Some existing variants have no effect on fitness and may increase or decrease in frequency simply due to chance. "Nearly neutral" variants, whose selection coefficient is close to the threshold value of 1/Nₑ (the inverse of the effective population size), will be affected by chance as well as by selection and mutation. Many genomic features have been ascribed to the accumulation of nearly neutral detrimental mutations as a result of small effective population sizes.[2] With a smaller effective population size, a larger variety of mutations will behave as if they are neutral due to the inefficiency of selection.

Selection occurs when organisms with greater fitness, i.e. greater ability to survive or reproduce, are favored in subsequent generations, thereby increasing the frequency of the underlying genetic variants in a population. Selection can be the product of natural selection, artificial selection, or sexual selection. Natural selection is any selective process that occurs due to the fitness of an organism to its environment. In contrast, sexual selection is a product of mate choice and can favor the spread of genetic variants which act counter to natural selection but increase desirability to the opposite sex or increase mating success. Artificial selection, also known as selective breeding, is imposed by an outside entity, typically humans, in order to increase the frequency of desired traits.

The principles of population genetics apply similarly to all types of selection, though in fact each may produce distinct effects due to the clustering of genes with different functions in different parts of the genome, or due to different properties of genes in particular functional classes. For instance, sexual selection could be more likely to affect the molecular evolution of the sex chromosomes due to the clustering of sex-specific genes on the X, Y, Z, or W chromosomes.

Selection can operate at the gene level at the expense of organismal fitness, resulting in a selective advantage for selfish genetic elements in spite of a host cost. Examples of such selfish elements include transposable elements, meiotic drivers, killer X chromosomes, selfish mitochondria, and self-propagating introns. (See Intragenomic conflict.)

Genome size is influenced by the amount of repetitive DNA as well as the number of genes in an organism. The C-value paradox refers to the lack of correlation between organism 'complexity' and genome size. Explanations for the so-called paradox are two-fold. First, repetitive genetic elements can comprise large portions of the genome for many organisms, thereby inflating the DNA content of the haploid genome. Second, the number of genes is not necessarily indicative of the number of developmental stages or tissue types in an organism. An organism with few developmental stages or tissue types may have large numbers of genes that influence non-developmental phenotypes, inflating gene content relative to developmental gene families.

Neutral explanations for genome size suggest that when population sizes are small, many mutations become nearly neutral. Hence, in small populations repetitive content and other 'junk' DNA can accumulate without placing the organism at a competitive disadvantage. There is little evidence to suggest that genome size is under strong widespread selection in multicellular eukaryotes. Genome size, independent of gene content, correlates poorly with most physiological traits and many eukaryotes, including mammals, harbor very large amounts of repetitive DNA.

However, birds likely have experienced strong selection for reduced genome size in response to the energetic demands of flight. Birds, unlike humans, produce nucleated red blood cells, and larger nuclei lead to lower levels of oxygen transport. Bird metabolism is far higher than that of mammals, due largely to flight, and oxygen needs are high. Hence, most birds have small, compact genomes with few repetitive elements. Indirect evidence suggests that the non-avian theropod dinosaur ancestors of modern birds[3] also had reduced genome sizes, consistent with endothermy and the high energetic needs of running speed. Many bacteria have also experienced selection for small genome size, because replication time and energy consumption are tightly correlated with fitness.

Transposable elements are self-replicating, selfish genetic elements which are capable of proliferating within host genomes. Many transposable elements are related to viruses, and share several proteins in common.

DNA transposons are cut-and-paste transposable elements which excise DNA and move it to other sections of the genome. Other major classes include non-LTR retrotransposons, LTR retrotransposons, and Helitrons. Alu elements, which are short non-autonomous repeat sequences, comprise over 10% of the human genome.

The number of chromosomes in an organism's genome also does not necessarily correlate with the amount of DNA in its genome. The ant Myrmecia pilosula has only a single pair of chromosomes[4] whereas the adder's-tongue fern Ophioglossum reticulatum has up to 1260 chromosomes.[5] Ciliate genomes house each gene on an individual chromosome, resulting in a genome which is not physically linked. Reduced linkage through the creation of additional chromosomes should effectively increase the efficiency of selection.

Changes in chromosome number can play a key role in speciation, as differing chromosome numbers can serve as a barrier to reproduction in hybrids. Human chromosome 2 was created by the fusion of two ancestral ape chromosomes and still contains vestigial telomere sequences near its center as well as a vestigial second centromere. Polyploidy, especially allopolyploidy, which occurs often in plants, can also result in reproductive incompatibilities with parental species. Agrodiatus blue butterflies have diverse chromosome numbers ranging from n=10 to n=134 and additionally have one of the highest rates of speciation identified to date.[6]

Different organisms house different numbers of genes within their genomes as well as different patterns in the distribution of genes throughout the genome. Some organisms, such as most bacteria, Drosophila, and Arabidopsis have particularly compact genomes with little repetitive content or non-coding DNA. Other organisms, like mammals or maize, have large amounts of repetitive DNA, long introns, and substantial spacing between different genes. The content and distribution of genes within the genome can influence the rate at which certain types of mutations occur and can influence the subsequent evolution of different species. Genes with longer introns are more likely to recombine due to increased physical distance over the coding sequence. As such, long introns may facilitate ectopic recombination, and result in higher rates of new gene formation.

In addition to the nuclear genome, endosymbiont-derived organelles contain their own genetic material, typically as circular DNA molecules. Mitochondrial and chloroplast DNA content varies across taxa, but membrane-bound proteins, especially electron transport chain constituents, are most often encoded in the organelle. Chloroplasts and mitochondria are maternally inherited in most species, as the organelles must pass through the egg. In a rare departure, some species of mussels are known to inherit mitochondria from father to son.

New genes arise from several different genetic mechanisms including gene duplication, de novo origination, retrotransposition, chimeric gene formation, recruitment of non-coding sequence, and gene truncation.

Gene duplication initially leads to redundancy. However, duplicated gene sequences can mutate to develop new functions or specialize so that the new gene performs a subset of the original ancestral functions. In addition to duplicating whole genes, sometimes only a domain or part of a protein is duplicated so that the resulting gene is an elongated version of the parental gene.

Retrotransposition creates new genes by copying mRNA to DNA and inserting it into the genome. Retrogenes often insert into new genomic locations, and often develop new expression patterns and functions.

Chimeric genes form when duplication, deletion, or incomplete retrotransposition combine portions of two different coding sequences to produce a novel gene sequence. Chimeras often cause regulatory changes and can shuffle protein domains to produce novel adaptive functions.

De novo origin. Novel genes can also arise from previously non-coding DNA.[7] For instance, Levine and colleagues reported the origin of five new genes in the D. melanogaster genome from noncoding DNA.[8][9] Similar de novo origin of genes has also been shown in other organisms, such as yeast,[10] rice[11] and humans.[12] De novo genes may evolve from transcripts that are already expressed at low levels.[13] Mutation of a stop codon to a regular codon, or a frameshift, may produce an extended protein that includes a previously non-coding sequence.

Molecular systematics is the product of the traditional fields of systematics and molecular genetics. It uses DNA, RNA, or protein sequences to resolve questions in systematics, i.e., about the correct scientific classification or taxonomy of organisms from the point of view of evolutionary biology.

Molecular systematics has been made possible by the availability of techniques for DNA sequencing, which allow the determination of the exact sequence of nucleotides or bases in either DNA or RNA. At present it is still a long and expensive process to sequence the entire genome of an organism, and this has been done for only a few species. However, it is quite feasible to determine the sequence of a defined area of a particular chromosome. Typical molecular systematic analyses require the sequencing of around 1000 base pairs.
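
A minimal sketch of the kind of sequence comparison such analyses start from is shown below: the p-distance, i.e. the proportion of aligned sites that differ between two sequences. The function name and the toy sequences are made up for illustration:

```python
# Minimal sketch: the simplest sequence comparison used in molecular
# systematics is the p-distance, the proportion of aligned sites that
# differ between two sequences (gap positions are ignored here).
def p_distance(seq1: str, seq2: str) -> float:
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to the same length")
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    diffs = sum(1 for a, b in pairs if a != b)
    return diffs / len(pairs)

# Hypothetical aligned fragments:
print(p_distance("ACGTACGTAC", "ACGTATGTAC"))  # 0.1
```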

Depending on the relative importance assigned to the various forces of evolution, three perspectives provide evolutionary explanations for molecular evolution.[14]

Selectionist hypotheses argue that selection is the driving force of molecular evolution. While acknowledging that many mutations are neutral, selectionists attribute changes in the frequencies of neutral alleles to linkage disequilibrium with other loci that are under selection, rather than to random genetic drift.[15] Biases in codon usage are usually explained with reference to the ability of even weak selection to shape molecular evolution.[16]

Neutralist hypotheses emphasize the importance of mutation, purifying selection, and random genetic drift.[17] The introduction of the neutral theory by Kimura,[18] quickly followed by King and Jukes' own findings,[19] led to a fierce debate about the relevance of neodarwinism at the molecular level. The neutral theory of molecular evolution proposes that most mutations in DNA occur at locations not important to function or fitness. These neutral changes drift towards fixation within a population. Positive changes will be very rare, and so will not greatly contribute to DNA polymorphisms.[20] Deleterious mutations will also not contribute very much to DNA diversity because they negatively affect fitness and so will not stay in the gene pool for long.[21] This theory provides a framework for the molecular clock.[20] The fate of neutral mutations is governed by genetic drift, and they contribute to both nucleotide polymorphism and fixed differences between species.[22][23]

In the strictest sense, the neutral theory is not accurate.[24] Subtle changes in DNA very often have effects, but sometimes these effects are too small for natural selection to act on.[24] Even synonymous mutations are not necessarily neutral,[24] because codon usage is not uniform. The nearly neutral theory expanded the neutralist perspective, suggesting that many mutations are nearly neutral, which means that both random drift and natural selection are relevant to their dynamics.[24] The main difference between the neutral theory and the nearly neutral theory is that the latter focuses on weakly selected mutations rather than strictly neutral ones.[21]

Mutationist hypotheses emphasize random drift and biases in mutation patterns.[25] Sueoka was the first to propose a modern mutationist view. He proposed that the variation in GC content was not the result of positive selection, but a consequence of GC mutational pressure.[26]

Protein evolution describes the changes over time in protein shape, function, and composition. Through quantitative analysis and experimentation, scientists have strived to understand the rate and causes of protein evolution. Using the amino acid sequences of hemoglobin and cytochrome c from multiple species, scientists were able to derive estimates of protein evolution rates. What they found was that the rates were not the same among proteins.[21] Each protein has its own rate, and that rate is constant across phylogenies (i.e., hemoglobin does not evolve at the same rate as cytochrome c, but hemoglobins from humans, mice, etc. do have comparable rates of evolution). Not all regions within a protein mutate at the same rate; functionally important areas mutate more slowly, and substitutions involving similar amino acids occur more often than dissimilar substitutions.[21] Overall, the level of polymorphism in proteins seems to be fairly constant. Several species (including humans, fruit flies, and mice) have similar levels of protein polymorphism.[20]

Protein evolution is inescapably tied to changes in and selection of DNA polymorphisms and mutations, because protein sequences change in response to alterations in the DNA sequence. Amino acid sequences and nucleic acid sequences do not mutate at the same rate. Because the genetic code is degenerate, bases can change without affecting the amino acid sequence; for example, there are six codons that code for leucine. Thus, despite the difference in mutation rates, it is essential to incorporate nucleic acid evolution into the discussion of protein evolution. At the end of the 1960s, two groups of scientists, Kimura (1968) and King and Jukes (1969), independently proposed that a majority of the evolutionary changes observed in proteins were neutral.[20][21] Since then, the neutral theory has been expanded upon and debated.[21]
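
The degeneracy mentioned above can be made concrete in a few lines. A minimal sketch (Python; `LEUCINE_CODONS` and `is_synonymous` are illustrative names, and only the leucine family of the standard code is hard-coded):

```python
# Minimal sketch illustrating codon degeneracy: leucine is encoded by
# six codons in the standard genetic code, so many single-base changes
# within this family are synonymous (silent).
LEUCINE_CODONS = {"UUA", "UUG", "CUU", "CUC", "CUA", "CUG"}

def is_synonymous(codon1: str, codon2: str, codon_family=LEUCINE_CODONS) -> bool:
    # True if both codons fall in the same amino-acid family (here, leucine).
    return codon1 in codon_family and codon2 in codon_family

print(len(LEUCINE_CODONS))           # 6
print(is_synonymous("CUU", "CUC"))   # True: silent third-position change
print(is_synonymous("CUU", "UUU"))   # False: UUU encodes phenylalanine
```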

There are sometimes discordances between molecular and morphological evolution, which are reflected in molecular and morphological systematic studies, especially of bacteria, archaea and eukaryotic microbes. These discordances can be categorized as two types: (i) one morphology, multiple lineages (e.g. morphological convergence, cryptic species) and (ii) one lineage, multiple morphologies (e.g. phenotypic plasticity, multiple life-cycle stages). Neutral evolution possibly could explain the incongruences in some cases.[27]

The Society for Molecular Biology and Evolution publishes the journals "Molecular Biology and Evolution" and "Genome Biology and Evolution" and holds an annual international meeting. Other journals dedicated to molecular evolution include Journal of Molecular Evolution and Molecular Phylogenetics and Evolution. Research in molecular evolution is also published in journals of genetics, molecular biology, genomics, systematics, and evolutionary biology.

Continue reading here:
Molecular evolution - Wikipedia

Posted in Molecular Genetics | Comments Off on Molecular evolution – Wikipedia

MCW: Microbiology and Molecular Genetics Department

Posted: October 21, 2016 at 6:44 am

The mission of our faculty is to conduct innovative and impactful research in Microbiology, Immunology, and Molecular Genetics and to train students and postdoctoral fellows for careers as biomedical scientists. Our faculty also instruct in the Graduate School of Biomedical Sciences and the Medical School and often collaborate with clinical scientists to facilitate the translation of bench to bedside therapies to treat human diseases. Our students acquire professional training while carrying out independent research projects in microbial pathogenesis and physiology, the immune response, and host interactions with microbial pathogens. Our administrative and research staff strive to support the research, teaching and service activities of our students and faculty.

Contact information for faculty members in the department, including email addresses and room numbers, can be found on the faculty pages.

Medical College of Wisconsin Department of Microbiology and Molecular Genetics BSB - 2nd Floor - Room 273 8701 Watertown Plank Road Milwaukee, WI 53226

(414) 955-8253 | (414) 955-6535 (fax)

The department is located on the second floor of the Basic Science Building at 8701 W. Watertown Plank Road.

More here:
MCW: Microbiology and Molecular Genetics Department

Posted in Molecular Genetics | Comments Off on MCW: Microbiology and Molecular Genetics Department

Newcastle Hospitals – Molecular Genetics

Posted: October 21, 2016 at 6:44 am

Contact: (0191) 241 8600 - Dr David Bourn, Head of Laboratory, Molecular Genetics

The molecular laboratory service provides genetic diagnosis for those families suffering from inherited conditions caused by mutation of specific single genes. Testing is performed using a variety of DNA analysis techniques to identify causative mutations or to track defective genes through families.

The Molecular Genetics Laboratory operates within the Professional Guidelines of the Clinical Molecular Genetics Society (CMGS).

The laboratory is accredited by Clinical Pathology Accreditation.

Clinical scientists and MLSO staff are State Registered with the Health Professions Council after the required period of training.

The Molecular Genetics Laboratory participates in the following external quality assurance schemes:

Northern Genetics Service Institute of Genetic Medicine Central Parkway Newcastle upon Tyne NE1 3BZ

Tel: 0191 241 8600

The laboratory operates Monday to Friday between the hours of 08.30 and 17.00. For the receipt and analysis of very urgent samples outside these hours, please make special arrangements with the laboratory.

Head of Laboratory

Dr David Bourn

telephone: 0191 241 8600

Read this article:
Newcastle Hospitals - Molecular Genetics

Posted in Molecular Genetics | Comments Off on Newcastle Hospitals – Molecular Genetics

Molecular Genetics – DNA, RNA, & Protein

Posted: October 20, 2016 at 1:44 am

MOLECULAR GENETICS - the molecular basis of inheritance. Genes ---> Enzymes ---> Metabolism (phenotype). Central Dogma of Molecular Biology: DNA --transcription--> RNA --translation--> Protein. (Concept Activity 17.1 - Overview of Protein Synthesis: information flow.)

What is a GENE? DNA is the genetic material [though RNA viruses such as HIV (a retrovirus) and TMV carry RNA instead]. A gene is a discrete piece of deoxyribonucleic acid: a linear polymer of repeating nucleotide monomers (A adenine, C cytosine, T thymine, G guanine) joined into a polynucleotide.

Technology with a Twist - Understanding Genetics

INFORMATION PROCESSING & THE CENTRAL DOGMA. The letters of the genetic alphabet are the nucleotides A, T, G, & C of DNA. The unit of information is the CODON, the genetic 'word': a triplet sequence of nucleotides (such as 'CAT') in a polynucleotide; 3 nucleotides = 1 codon (word) = 1 amino acid in a polypeptide, so a (codon) word is defined as an amino acid. Size of the human genome: 3,000,000,000 base pairs, or about 1.5 billion bases in a single strand, giving roughly 500,000,000 possible codons (words or amino acids). An average page of your textbook holds approximately 850 words, so the human genome is equal to about 588,000 pages, or 470 copies of a biology textbook; reading at 3 bases/sec, it would take you about 47.6 years at 8 h/day, 7 days/week. Extreme nanotechnology. Mice & humans (indeed, most or all mammals, including dogs, cats, rabbits, monkeys, & apes) have roughly the same number of nucleotides in their genomes -- about 3 billion bp. It is estimated that 99.9% of the 3 billion nucleotides of the human genome are the same from person to person.
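
The reading-time figure quoted in the notes can be checked with a short back-of-the-envelope calculation; the sketch below simply reproduces that arithmetic (1.5 billion bases at 3 bases per second, 8 hours a day, 7 days a week):

```python
# Back-of-the-envelope check of the reading-time estimate in the notes.
bases = 1_500_000_000
rate_per_sec = 3
seconds = bases / rate_per_sec
hours = seconds / 3600
days = hours / 8            # 8 reading-hours per day
years = days / 365          # 7 days a week, so divide by 365
print(round(years, 1))      # ~47.6 years
```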

Experimental Proof of DNA as Genetic Material...

1. Transformation experiments of Fred Griffith (1920s): Streptococcus pneumoniae, pathogenic S strain and benign R strain; the transforming 'principle' (converting R cells to S cells) is the genetic element.
2. Oswald Avery, Colin MacLeod, & Maclyn McCarty (1940s): suggest the transforming substance is DNA, but the result was not widely accepted.
3. Alfred Hershey & Martha Chase's 1952 bacteriophage experiments: viral replication (phage infection and the lytic/lysogenic cycles) is a genetically controlled biological activity. Their novel experiment was one of the first real uses of radioisotopes in biology. CONCLUSION: DNA is the genetic material, because the 32P-labeled nucleic acid, not the 35S-labeled protein, directs viral replication. (Sumanas, Inc. animation - Lifecycle of the HIV virus.)

Structure of DNA - discovery of the double helix. Nobel prize to J.D. Watson, Francis Crick, and Maurice Wilkins [with key contributions from Erwin Chargaff and Rosalind Franklin]; see Watson's book and "Life Story" (Race for the Double Helix), a BBC dramatization of the discovery. Two approaches were used to decipher the structure: 1. model building (are the bases in or out; are the sugar-phosphates in or out?) and 2. x-ray diffraction patterns, which favor a DNA helix of constant diameter. We know now that DNA is a double-stranded, helical polynucleotide made of 4 nucleotides (A, T, G, C; purines & pyrimidines) in 2 polynucleotide strands (polymer chains) with head-tail polarity [5'--->3']; the strands run antiparallel and are held together by weak H-bonds and complementary base pairing. Chargaff's rule: A:T and G:C, so (A + G) / (T + C) = 1.0. (Figures: sugar-phosphate backbone, base pairing, dimensions, models of DNA structure; john kyrk's animation of DNA and a Quicktime movie of DNA structure; literature references and myDNAi timeline.)
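
Chargaff's rule as stated above is easy to verify computationally. A minimal sketch (Python; `chargaff_ratio` is an illustrative helper, and the toy duplex is made up):

```python
# Minimal sketch of Chargaff's rule for double-stranded DNA: A pairs with T
# and G pairs with C, so A=T, G=C and (A+G)/(T+C) should be ~1.0.
from collections import Counter

def chargaff_ratio(double_stranded_seq: str) -> float:
    counts = Counter(double_stranded_seq.upper())
    return (counts["A"] + counts["G"]) / (counts["T"] + counts["C"])

# Hypothetical duplex written as one strand plus its complement:
strand = "ATGCGCTTAA"
complement = strand.translate(str.maketrans("ATGC", "TACG"))
print(chargaff_ratio(strand + complement))  # 1.0
```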

Replication of DNA (Arthur Kornberg, 1959 Nobel; died 10/26/07). Copying DNA into DNA is suggested by the structure itself. Patterns of replication: conservative, semi-conservative, and dispersive. Matt Meselson & Frank Stahl (1958) designed an experiment to distinguish them: can we separate 15N-DNA from 14N-DNA (old DNA from new DNA)? Sedimentation of DNAs (sucrose gradients ---> CsCl gradients) lets each possible outcome be predicted and tested. (Sumanas, Inc. animation - Meselson-Stahl DNA replication.)

The model of replication is bacterial, using DNA polymerase III. Several enzymes form a replication complex (replisome), including:
helicase - untwists the DNA
topoisomerase [DNA gyrase] - removes supercoils
single-strand binding proteins - stabilize the replication fork
primase - makes the RNA primer
DNA polymerase III (Pol III) - synthesizes the new DNA strands
DNA polymerase I (Pol I) - removes the RNA primer one base at a time and replaces it with DNA bases
DNA ligase - repairs Okazaki fragments (seals the open 3' holes on the lagging strand)
DNA polymerase III copies both strands simultaneously as the DNA is threaded through the replisome, a "replication machine" that may be held stationary by anchoring in the nuclear matrix; continuous and discontinuous replication occur simultaneously on the two strands. (Concept Activity - DNA Replication Review.)

EVENTS:
1. DNA pol III binds at the origin of replication site on the template strand.
2. DNA is unwound by the replisome complex using helicase and topoisomerase.
3. All polymerases require a preexisting strand (PRIMER) to start replication; primase adds a single short primer to the LEADING strand and many primers to the LAGGING strand.
4. DNA pol III is a dimer, adding new nucleotides to both strands from the primers; the direction of reading is 3' ---> 5' on the template and the direction of synthesis of the new strand is 5' ---> 3'; the rate of synthesis is substantial, about 400 nucleotides/sec.
5. DNA pol I removes the primer at the 5' end, replacing it with DNA bases and leaving a 3' hole.
6. DNA ligase seals the 3' holes between Okazaki fragments on the lagging strand.
Rates of DNA synthesis: native polymerase, 400 bases/sec with 1 error per 10⁹ bases; artificial (phosphoramidite method, Marvin Caruthers, U. Colorado), single-stranded DNA synthesis on a polystyrene bead at 1 base per 300 sec with an error rate of 1/100 bases.
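
The copying chemistry in step 4 (template read 3'->5', new strand built 5'->3' by A-T and G-C pairing) can be sketched in a few lines; the helper name and example template below are hypothetical:

```python
# Minimal sketch: the polymerase reads the template 3'->5' and builds the
# new strand 5'->3' by pairing A with T and G with C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def synthesize_new_strand(template_3_to_5: str) -> str:
    # Template given 3'->5'; the product below is written 5'->3'.
    return "".join(PAIR[base] for base in template_3_to_5.upper())

template = "TACGGT"                     # read 3'->5'
print(synthesize_new_strand(template))  # ATGCCA, written 5'->3'
```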

GENE EXPRESSION: the Central Dogma of Molecular Biology depicts the flow of genetic information. Transcription is the copying of a DNA sequence into RNA; translation is the copying of an RNA sequence into protein. DNA sequence ---> RNA sequence ---> amino acid sequence (e.g., TAC ---> AUG ---> Met); a triplet sequence in DNA corresponds to a codon in mRNA and to an amino acid in protein. Information: the triplet sequence in DNA is the genetic word [codon]. Compare the events in prokaryotes vs. eukaryotes (separation of labor), and note the differences between DNA and RNA (bases and sugars, and RNA is single-stranded). Flow of gene information: one gene - one enzyme (Beadle & Tatum). (18.3 Overview: Control of Gene Expression.)

Transcription - RNA polymerase (Concept Activity 17.2 - Transcription). In bacteria, the sigma factor of RNA polymerase binds the promoter and initiates copying; transcription factors are needed to recognize specific DNA sequences [motifs] and bind to the promoter region [activators & transcription factors]. The polymerase makes a complementary copy of one of the two DNA strands (the template), yielding an RNA that matches the sense strand. (Quicktime movie of transcription; Roger Kornberg's movie of transcription, 2006 Nobel.) Kinds of RNA: tRNA - small, about 80 nucleotides, a single strand with secondary structure carrying the anticodon sequence; its function is to pick up an amino acid and transport it to the ribosome. rRNA - three individual pieces of RNA that make up the organelle called the ribosome; the primary transcript is processed into the separate rRNA pieces (recall the structure of the ribosome).
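
As a small illustration of transcription as described above, the sketch below copies a template strand into RNA, substituting U for T; strand-direction bookkeeping is omitted, and the helper name and sequence are made up:

```python
# Minimal sketch of transcription: build the RNA complementary to the
# template strand, using U in place of T, so the mRNA matches the sense
# (coding) strand of the gene.
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_strand: str) -> str:
    return "".join(DNA_TO_RNA[base] for base in template_strand.upper())

print(transcribe("TACGGCATT"))  # AUGCCGUAA
```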

Other classes of RNA: small nuclear RNAs (in snRNPs) play structural and catalytic roles in the spliceosome; five snRNPs make up a spliceosome [U1, U2, U4, U5, & U6], and they participate in several RNA-RNA and RNA-protein interactions.

SRP (signal recognition particle) - srpRNA is a component of the protein-RNA complex that recognizes the signal sequence of polypeptides targeted to the ER.

small nucleolar RNA (snoRNA) - aids in processing of pre-rRNA transcripts for ribosome subunit formation in the nucleolus

microRNAs (miRNA) - also called antisense RNA and interfering RNA: short (20-24 nucleotide) RNAs that bind to mRNA and inhibit it. They are present in model eukaryotic organisms such as roundworms, fruit flies, mice, humans, and plants (Arabidopsis), and appear to help regulate gene expression by controlling the timing of developmental events via mRNA action and by inhibiting translation of target mRNAs; e.g., siRNA ---> [Barr body].

TRANSLATION - making a protein: the process of making a protein with a specific amino acid sequence from a unique mRNA sequence. Polypeptides are built on the ribosome, often on a polysome. Sequence of 4 steps in translation:
1. ACTIVATION - add an amino acid to tRNA ---> aa-tRNA
2. INITIATION - assemble the players [ribosome, mRNA, aa-tRNA]
3. ELONGATION - add new amino acids via peptidyl transferase
4. TERMINATION - stop the process
(Concept Activity 17.4 - Events in Translation; review initiation, elongation, & termination.)

GENETIC CODE: the code is the sequence of nucleotides in DNA, but it is routinely shown as an mRNA code; it specifies the sequence of amino acids to be linked into the protein. Coding ratio - how many nucleotides specify 1 amino acid: 1 nucleotide gives 4 singlets, 2 give 16 doublets, 3 give 64 triplets. S. Ochoa (1959 Nobel): polynucleotide phosphorylase can make SYNTHETIC mRNA (Np-Np-Np-Np <---> Np-Np-Np + Np). Marshall Nirenberg (1968 Nobel): synthetic mRNAs used in an in vitro system; 5'-UUU-3' = Phe, and a U + C mixture yields UUU, UUC, UCC, CCC, UCU, CUC, CCU, CUU. The genetic code comprises 64 triplet codons [61 specify amino acids & 3 are stop codons]; it is universal (with some anomalies), has 1 initiator codon (AUG), is redundant but non-ambiguous, and exhibits "wobble".
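
Reading the mRNA code in triplets, as described above, can be sketched as follows; only a handful of the 64 codons are included in the illustrative table, and the helper name is made up:

```python
# Minimal sketch of reading mRNA 5'->3' in triplets until a stop codon.
# Only a small subset of the standard codon table is included here.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "UUC": "Phe",
    "UCA": "Ser", "UUA": "Leu", "GGC": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list[str]:
    protein = []
    for i in range(0, len(mrna) - 2, 3):           # read codons 5'->3'
        aa = CODON_TABLE.get(mrna[i:i + 3], "?")   # '?' = codon not in the subset
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

print(translate("AUGUUUUCAUAA"))  # ['Met', 'Phe', 'Ser']
```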

GENETIC CHANGE - a change in the DNA nucleotide sequence (and hence in the mRNA); two significant routes: mutation and recombination.
1. MUTATION - a permanent change in an organism's DNA that can result in a different codon and thus a different amino acid sequence. Point mutations change one to a few nucleotides: deletions, insertions, and frame-shift mutations [CAT], or single nucleotide base substitutions. A nonsense mutation changes a codon to no amino acid (a STOP codon), e.g., UCA ---> UAA (Ser to stop); a missense mutation gives a different amino acid, e.g., UCA ---> UUA (Ser to Leu). Sickle cell anemia is a missense mutation (with pleiotropic effects); thalassemia is another point-mutation blood disease. Effects range from no effect to detrimental (lethal), altered functionality, or beneficial.
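
The nonsense/missense distinction drawn above (UCA ---> UAA vs. UCA ---> UUA) can be expressed as a small classifier; the codon subset and helper name below are illustrative only:

```python
# Minimal sketch classifying a single-codon substitution as silent,
# missense, or nonsense, using a small subset of the codon table.
CODON_TABLE = {
    "UCA": "Ser", "UCU": "Ser", "UUA": "Leu",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def mutation_effect(original_codon: str, mutated_codon: str) -> str:
    before = CODON_TABLE.get(original_codon, "?")
    after = CODON_TABLE.get(mutated_codon, "?")
    if after == "STOP":
        return "nonsense"
    if before == after:
        return "silent"
    return "missense"

print(mutation_effect("UCA", "UAA"))  # nonsense (Ser -> stop)
print(mutation_effect("UCA", "UUA"))  # missense (Ser -> Leu)
print(mutation_effect("UCA", "UCU"))  # silent   (Ser -> Ser)
```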

2. RECOMBINATION (recombinant DNA) - newly combined DNAs that can change genotype via insertion of new (foreign) DNA molecules into a recipient cell:
1. fertilization - a sperm is inserted into the recipient egg cell ---> zygote [n + n = 2n]
2. exchange of homologous chromatids via crossing over = new gene combinations
3. transformation - absorption of 'foreign' DNA by recipient cells changes the cell
4. bacterial conjugation - involves DNA plasmids (F+ and R = resistance); conjugation may be a primitive sex-like reproduction in bacteria [Hfr]
5. viral transduction - insertion via a viral vector (lysogeny & transduction); general transduction - pieces of bacterial DNA are packaged with viral DNA during viral replication; restricted transduction - a temperate phage goes lytic, carrying adjacent bacterial DNA into the virus particle
6. designer genes - man-made recombinant DNA molecules

Designer Genes - Genetic Engineering - Biotechnology

RECOMBINANT DNA TECHNOLOGY: a collection of experimental techniques that allow for the isolation, copying, and insertion of new DNA sequences into host (recipient) cells by a number of laboratory protocols and methodologies.

Restriction endonucleases make staggered (unequal) cuts at unique DNA sequences, mostly palindromes ("never odd or even"). EcoRI, for example:
5' GAATTC 3'  --->  5' G     + AATTC 3'
3' CTTAAG 5'        3' CTTAA +     G 5'
DNAs cut this way have STICKY (complementary) ENDS and can be reannealed or spliced with other DNA molecules to produce new gene combinations, then sealed via DNA ligase.
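
The palindromic EcoRI site and the fragments its staggered cut produces can be illustrated with a short sketch; the helper names and the toy sequence are made up:

```python
# Minimal sketch: check that the EcoRI site (GAATTC) is a palindrome
# (equal to its own reverse complement) and show the fragments left by
# cutting between G and AATTC on the top strand.
SITE = "GAATTC"

def reverse_complement(seq: str) -> str:
    return seq.translate(str.maketrans("ATGC", "TACG"))[::-1]

def ecori_fragments(seq: str) -> list[str]:
    # EcoRI cuts after the G on each strand, leaving 5' AATT overhangs.
    return seq.replace(SITE, "G/AATTC").split("/")

print(SITE == reverse_complement(SITE))          # True: the site reads the same both ways
print(ecori_fragments("AAAGAATTCTTTGAATTCAAA"))  # ['AAAG', 'AATTCTTTG', 'AATTCAAA']
```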

Procedures of Biotechnology [genome biology research]. A. Technology involved in cloning a gene (making copies of gene DNA):
1. via a plasmid (e.g., human shotgun plasmid cloning)
2. libraries (genomic libraries and BACs)
3. probes (cDNA, reverse transcriptase, and DNA probe hybridization; a cDNA library and a labeled probe are used to find a gene of interest within a library)
4. the polymerase chain reaction (PCR): a reaction protocol for "Xeroxing" DNA using Taq polymerase (see the sketch below)
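
PCR as "Xeroxing" DNA (point 4 above) rests on simple doubling arithmetic: each cycle roughly doubles the number of target molecules. A minimal sketch with a hypothetical helper name and example numbers:

```python
# Minimal sketch: after n PCR cycles, each starting template yields about
# 2**n copies (ignoring efficiency losses in later cycles).
def pcr_copies(starting_molecules: int, cycles: int) -> int:
    return starting_molecules * 2 ** cycles

print(pcr_copies(1, 30))   # 1,073,741,824 copies from a single template
print(pcr_copies(10, 25))  # 335,544,320
```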

Go here to see the original:
Molecular Genetics - DNA, RNA, & Protein

Posted in Molecular Genetics | Comments Off on Molecular Genetics – DNA, RNA, & Protein
