
Home – Weatherall Institute of Molecular Medicine

Posted: October 20, 2016 at 1:44 am

The mission of the MRC Weatherall Institute of Molecular Medicine (WIMM) is to undertake internationally competitive research into the processes underlying normal cell and molecular biology and to determine the mechanisms by which these processes are perturbed in inherited and acquired human diseases. It is also our mission to translate this research to improve human health. The WIMM is uniquely placed among biomedical institutes throughout the world in its pioneering vision of combining outstanding clinical research with excellent basic science. The WIMM Faculty currently includes an equal mixture of scientists and clinicians working together and in collaboration with the National Institute for Health Research, the NHS and commercial companies with the aim of improving the diagnosis and treatment of human diseases. The major topics of current research include haematology, immunology, stem cell biology, oncology and inherited human genetic diseases. The Institute benefits from strategic support from the MRC.

The Institute values communication with members of the broader scientific community and the general public and with the support of the Medical Research Council (MRC) we have commissioned three short videos to explain our mission.

This month, Dr Iztok Urbančič joined Christian Eggeling's lab in the MRC Human Immunology Unit, supported by a prestigious Marie Skłodowska-Curie postdoctoral fellowship. Iztok completed his studies in physics in 2009 at the University of Ljubljana, Slovenia, and obtained his PhD in biophysics in 2013 from the University of Maribor, Slovenia. During his 2-year fellowship here at the MRC WIMM, he will work on further improving super-resolution ...

The Lister Institute was founded in 1891 as a research institute studying vaccines and antitoxins, and over its impressive 125-year history has developed into one of the most prestigious funders of scientific research in the UK. Scientists supported by the Lister Institute have been involved in some of the most pivotal scientific and medical discoveries over the past century, including development of the UK's first diphtheria vaccine, ...

News Archive

We are seeking to appoint a Junior Group Leader in Computational Biology and Bioinformatics within the Human Immunology Unit (HIU), Investigative Medicine at the Weatherall Institute of Molecular Medicine, John Radcliffe Hospital, Oxford (www.imm.ox.ac.uk/mrc-human-immunology-unit). Reporting to the Director of the HIU, you will be required to add value to the ongoing programmes within the Unit as well as establish your own programme as Junior ...

Other Vacancies

Seeing is believing: what does your DNA look like in 3D?

Clue: it's a bit more complicated than a bendy ladder. Over the past year, scientists working in the Computational Biology Research Group and the MRC Molecular Haematology Unit at the MRC WIMM have been collaborating with Goldsmiths, University of London to produce CSynth: new interactive software which allows users to visualize DNA structures in three dimensions. The team took the technology to New Scientist Live in September this year, and wowed hundreds of people with this incredible new tool. In this blog post, Bryony Graham describes the science behind the technology, and how the team managed to explain some pretty complex genomics to thousands of people using some pieces of string, a few fluffy blood cells and a couple of touchscreens, all whilst working under a giant inflatable E. coli suspended from the ceiling. Of course.

WIMM Blog Archive

Read the rest here:
Home - Weatherall Institute of Molecular Medicine

Posted in Molecular Medicine | Comments Off on Home – Weatherall Institute of Molecular Medicine

Molecular Genetics – DNA, RNA, & Protein

Posted: October 20, 2016 at 1:44 am

MOLECULAR GENETICS
You Are Here: the molecular basis of inheritance
Genes ---> Enzymes ---> Metabolism (phenotype)
Central Dogma of Molecular Biology: DNA --transcription--> RNA --translation--> Protein
Concept Activity 17.1 - Overview of Protein Synthesis - INFORMATION FLOW

What is a GENE? DNA is the genetic material... [but note that some viruses, such as HIV (a retrovirus) and TMV, contain RNA instead]
- a discrete piece of deoxyribonucleic acid
- a linear polymer of repeating nucleotide monomers
nucleotides --> A adenine, C cytosine, T thymine, G guanine --> polynucleotide

Technology with a Twist - Understanding Genetics

INFORMATION PROCESSING & the CENTRAL DOGMA
- the letters of the genetic alphabet are the nucleotides A, T, G, & C of DNA
- the unit of information is the CODON = genetic 'word', a triplet sequence of nucleotides (e.g. 'CAT') in a polynucleotide
- 3 nucleotides = 1 codon (word) = 1 amino acid in a polypeptide
- the definition of a (codon) word = an amino acid
- Size of the Human Genome: 3,000,000,000 base pairs, or 1.5 billion bases in a single strand of DNA genes = 500,000,000 possible codons (words or amino acids)
- an average page of your textbook = approx 850 words; thus the human genome equals about 588,000 pages, or 470 copies of the bio textbook
- reading at 3 bases/sec, it would take you about 47.6 years @ 8 h/day, 7 days/week... WOW, extreme nanotechnology
- Mice & humans (indeed, most or all mammals, including dogs, cats, rabbits, monkeys, & apes) have roughly the same number of nucleotides in their genomes -- about 3 billion bp.
- It is estimated that 99.9% of the 3 billion nucleotides of the human genome are the same from person to person.
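The notes' back-of-the-envelope figures can be reproduced in a few lines of Python (a sketch using the notes' own assumptions: 1.5 billion bases counted once, about 850 words per page, and reading at 3 bases/sec for 8 hours a day):

```python
# Reproduce the notes' back-of-the-envelope genome arithmetic.
bases = 1_500_000_000                 # single-strand base count used in the notes
codons = bases // 3                   # 3 nucleotides = 1 codon (word)
pages = codons / 850                  # approx 850 words per textbook page
seconds = bases / 3                   # reading at 3 bases per second
years = seconds / (8 * 3600) / 365    # 8 hours/day, 7 days/week

print(f"{codons:,} codons")           # 500,000,000
print(f"{round(pages):,} pages")      # ~588,235
print(f"{years:.1f} years")           # 47.6
```

The numbers come out exactly as the notes state, which confirms the figures are internally consistent with the 1.5-billion-base starting point.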

Experimental Proof of DNA as Genetic Material...

1. Transformation experiments of Fred Griffith (1920s): Streptococcus pneumoniae - pathogenic S strain & benign R strain; the transforming 'principle' (converting R cells to S cells) is the genetic element
2. Oswald Avery, Colin MacLeod, & Maclyn McCarty (1940s): suggest the transforming substance is DNA molecules, but...
3. Alfred Hershey & Martha Chase's 1952 bacteriophage experiments: VIRAL REPLICATION [phage infection & the lytic/lysogenic cycles] is a genetically controlled biological activity (viral reproduction); their novel experiment was the 1st real use of radioisotopes in biology
CONCLUSION - DNA is the genetic material, because the (32P) nucleic acid, not the (35S) protein, guides viral replication
Sumanas, Inc. animation - Life cycle of the HIV virus

Structure of DNA... Discovery of the Double Helix
Watson's book; Nobel Prize - J.D. Watson, Francis Crick, & Maurice Wilkins, but [also Erwin Chargaff & Rosalind Franklin]
"Race for the Double Helix" / "Life Story" - a BBC dramatization of the discovery of DNA
They used two approaches to decipher the structure:
1. model building (are the bases in or out? are the sugar-phosphates in or out?)
2. x-ray diffraction patterns, which favor a DNA helix of constant diameter
We now know: DNA is a double-stranded, helical polynucleotide, made of...
- 4 nucleotides - A, T, G, C (purines & pyrimidines) in 2 polynucleotide strands (polymer chains)
- head-tail polarity [5'-----3'] - the strands run antiparallel
- held together via weak H-bonds & complementary pairing - Chargaff's rule: A:T, G:C, and (A + G)/(T + C) = 1.0
Figures: sugar-phosphate backbone, base pairing, dimensions, models of DNA structure; John Kyrk's animation of DNA & a QuickTime movie of DNA structure; literature references & myDNAi timeline
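Chargaff's rule and complementary pairing are easy to verify on a toy duplex (the sequence and helper names below are made up for illustration):

```python
# Complementary pairing & Chargaff's rule on a toy duplex (made-up sequence).
PAIR = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}

def complement(strand):
    """Return the antiparallel partner strand, read 5'->3'."""
    return ''.join(PAIR[b] for b in reversed(strand))

strand = "ATGGCGTAC"                  # hypothetical 5'->3' sequence
duplex = strand + complement(strand)  # pool the bases of both strands

a, t = duplex.count('A'), duplex.count('T')
g, c = duplex.count('G'), duplex.count('C')
assert a == t and g == c              # Chargaff: A pairs T, G pairs C
assert (a + g) == (t + c)             # so (A + G)/(T + C) = 1.0
```

Any double-stranded sequence passes these checks, because every A on one strand is matched by a T on the other, and likewise for G and C.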

Replication of DNA... (Arthur Kornberg - 1959 Nobel; died 10/26/07)
Is the copying of DNA into DNA structurally obvious??? [figure]
Patterns of Replication: conservative, semi-conservative, & dispersive
Matt Meselson & Frank Stahl, 1958 - experimental design: can we separate 15N-DNA from 14N-DNA (OLD DNA from NEW DNA)?
sedimentation of DNAs (sucrose gradients --> CsCl gradients & picture)
we can predict the results... figure & overview & all possible results
Sumanas, Inc. animation - Meselson-Stahl DNA Replication

Model of Replication is bacterial, with DNA polymerase III... several enzymes form a Replication Complex (Replisome) & include:
helicase - untwists DNA
topoisomerase [DNA gyrase] - removes supercoils
single-strand binding proteins - stabilize the replication fork
primase - makes the RNA primer
DNA pol III - synthesizes new DNA strands
DNA polymerase I - removes the RNA primer 1 base at a time, adds DNA bases
DNA ligase - repairs Okazaki fragments (seals the lagging strand's open 3' holes)
Concept Activity - DNA Replication Review
Structure of DNA polymerase III: it copies both strands simultaneously, as DNA is threaded through a Replisome, a "replication machine", which may be stationary by anchoring in the nuclear matrix
Continuous & discontinuous replication occur simultaneously in the two strands

EVENTS:
1. DNA pol III binds at the origin of replication site in the template strand
2. DNA is unwound by the replisome complex using helicase & topoisomerase
3. all polymerases require a preexisting strand (PRIMER) to start replication; thus primase adds a single short primer to the LEADING strand and many primers to the LAGGING strand
4. DNA pol III is a dimer, adding new nucleotides to both strands' primers; the direction of reading is 3' ---> 5' on the template, the direction of synthesis of the new strand is 5' ---> 3', and the rate of synthesis is substantial: 400 nucleotides/sec
5. DNA pol I removes the primer at the 5' end, replacing it with DNA bases, and leaves a 3' hole
6. DNA ligase seals the 3' holes of Okazaki fragments on the lagging strand
the sequence of events in detail & DNA Repair
Rates of DNA synthesis: myDNAi movie of replication
native polymerase: 400 bases/sec with 1 error per 10^9 bases
artificial: the phosphoramidite method (Marvin Caruthers, U. Colorado); ssDNA synthesis on a polystyrene bead @ 1 base/300 sec with an error rate of 1/100 bases

GENE EXPRESSION
the Central Dogma of Molecular Biology depicts the flow of genetic information
Transcription - copying of a DNA sequence into RNA
Translation - copying of an RNA sequence into protein
DNA sequence -------> RNA sequence -----> amino acid sequence
TAC                    AUG                 MET
triplet sequence in DNA --> codon in mRNA --> amino acid in protein
Information: the triplet sequence in DNA is the genetic word [codon]
Compare events: Prokaryotes vs. Eukaryotes = separation of labor
Differences, DNA vs. RNA (bases & sugars), and RNA is single-stranded
Flow of Gene Information (FIG) - One Gene, One Enzyme (Beadle & Tatum)
18.3 - Overview: Control of Gene Expression
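The DNA --> mRNA --> protein flow in the TAC/AUG/MET example can be sketched with a tiny, illustrative codon table (not the full genetic code; the function names are made up for this sketch):

```python
# Central dogma in miniature: template DNA -> mRNA codon -> amino acid.
TRANSCRIBE = {'A': 'U', 'T': 'A', 'G': 'C', 'C': 'G'}       # template base -> mRNA base
CODON_TABLE = {'AUG': 'Met', 'UUU': 'Phe', 'UAA': 'STOP'}   # tiny subset, for illustration

def transcribe(template):
    """Copy a DNA template strand into complementary mRNA."""
    return ''.join(TRANSCRIBE[b] for b in template)

def translate(mrna):
    """Read mRNA three bases at a time, one amino acid per codon."""
    return [CODON_TABLE[mrna[i:i + 3]] for i in range(0, len(mrna), 3)]

mrna = transcribe("TAC")        # "AUG", the initiator codon
print(mrna, translate(mrna))    # AUG ['Met'], matching the notes' example
```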

Transcription - RNA polymerase
Concept Activity 17.2 - Transcription
RNA polymerase - in bacteria, the sigma factor binds the promoter & initiates copying [PNPase]
transcription factors are needed to recognize a specific DNA sequence [motif]; they bind to the promoter DNA region [activators & transcription factors]
makes a complementary copy of one of the two DNA strands [the sense strand]
QuickTime movie of transcription; myDNAi; Roger Kornberg's movie of transcription (2006 Nobel)
Kinds of RNA [table]:
tRNA - small, ~80 n, anticodon sequence, a single strand with secondary structure; function = picks up an amino acid & transports it to the ribosome
rRNA - 3 individual pieces of RNA make up the organelle = the RIBOSOME; the primary transcript is processed into the 3 pieces of rRNA (picture) & recall the structure of the ribosome

Other classes of RNA: small nuclear RNAs (snRNPs) play a structural and catalytic role in the spliceosome; there are 5 snRNPs making a spliceosome [U1, U2, U4, U5, & U6], and they participate in several RNA-RNA and RNA-protein interactions

SRP (signal recognition particle) - srpRNA is a component of the protein-RNA complex that recognizes the signal sequence of polypeptides targeted to the ER - figure*

small nucleolar RNA (snoRNA) - aids in processing of pre-rRNA transcripts for ribosome subunit formation in the nucleolus

microRNAs (miRNA) - also called antisense RNA & interfering RNA; c7-fig 19.9. Short (20-24 nucleotide) RNAs that bind to mRNA, inhibiting it (figure). Present in MODEL eukaryotic organisms such as roundworms, fruit flies, mice, humans, & plants (Arabidopsis); they seem to help regulate gene expression by controlling the timing of developmental events via mRNA action, and also inhibit translation of target mRNAs. ex: siRNA --> [BARR body]

TRANSLATION - Making a Protein
the process of making a protein with a specific amino acid sequence from a unique mRNA sequence... [E.M. picture]
polypeptides are built on the ribosome (pic) on a polysome [animation]
Sequence of 4 Steps in Translation... [glossary]
1. add an amino acid to tRNA --> aa-tRNA - ACTIVATION
2. assemble the players [ribosome, mRNA, aa-tRNA] - INITIATION
3. adding new aa's via peptidyl transferase - ELONGATION
4. stopping the process - TERMINATION
Concept CD Activity 17.4 - Events in Translation
Review the processes - initiation, elongation, & termination
myDNAi real-time movie of translation & QuickTime movie of translation
Review figures & parts: summary fig [components, locations, A-site, & advanced animation] [Nobel Committee static animations of the Central Dogma]

GENETIC CODE...
...is the sequence of nucleotides in DNA, but routinely shown as an mRNA code
...specifies the sequence of amino acids to be linked into the protein
coding ratio - the # of nucleotides... how many nucleotides specify 1 aa: 1 n = 4 singlets, 2 n = 16 doublets, 3 n = 64 triplets
Student CD Activity 11.2 - Triplet Coding
S. Ochoa (1959 Nobel) - polynucleotide phosphorylase can make SYNTHETIC mRNA: Np-Np-Np-Np <----> Np-Np-Np + Np
Marshall Nirenberg (1968 Nobel) - synthetic mRNAs used in an in vitro system: 5'-UUU-3' = phe; U + C --> UUU, UUC, UCC, CCC, UCU, CUC, CCU, CUU
the Genetic CODE - 64 triplet codons [61 = aa & 3 stop codons]; universal (but with some anomalies), 1 initiator codon (AUG), redundant but non-ambiguous, and exhibits "wobble".
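The coding-ratio arithmetic (4 singlets, 16 doublets, 64 triplets; 61 amino-acid codons plus 3 stops) can be enumerated directly:

```python
# How many genetic "words" can n nucleotides spell?  4^n.
from itertools import product

BASES = 'AUGC'
for n in (1, 2, 3):
    print(n, len(BASES) ** n)          # 1 -> 4 singlets, 2 -> 16 doublets, 3 -> 64 triplets

triplets = [''.join(p) for p in product(BASES, repeat=3)]
STOP = {'UAA', 'UAG', 'UGA'}           # the 3 stop codons
coding = len(triplets) - len(STOP)     # 61 codons specify amino acids
```

This is why a triplet code is the minimum: with 20 amino acids to encode, doublets (16 words) fall short, while triplets (64 words) leave room for redundancy and stop signals.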

GENETIC CHANGE - a change in the DNA nucleotide sequence (= a change in mRNA) - in 2 significant ways: mutation & recombination [glossary]
1. MUTATION - a permanent change in an organism's DNA that results in a different codon = a different amino acid sequence
Point mutation - a change of a single to a few nucleotides...
- deletions, insertions, frame-shift mutations [CAT]
- single nucleotide base substitutions:
  non-sense = change to no amino acid (a STOP codon): UCA --> UAA, ser to none
  mis-sense = a different amino acid: UCA --> UUA, ser to leu
Sickle Cell Anemia - a mis-sense mutation... (SCA - pleiotropy); another point-mutation blood disease: thalassemia
- Effects = no effect, detrimental (lethal), +/- functionality, beneficial
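The non-sense vs. mis-sense distinction can be sketched with a minimal classifier over a hand-picked subset of the codon table (the function name and table are illustrative, not a real library):

```python
# Classify a codon substitution (illustrative subset of the genetic code).
CODONS = {'UCA': 'Ser', 'UUA': 'Leu', 'UAA': 'STOP'}

def classify(old, new):
    before, after = CODONS[old], CODONS[new]
    if after == 'STOP':
        return 'nonsense'             # amino acid -> stop codon, chain terminates
    return 'missense' if before != after else 'silent'

print(classify('UCA', 'UAA'))   # nonsense: ser -> stop (the notes' UCA -> UAA example)
print(classify('UCA', 'UUA'))   # missense: ser -> leu (the notes' UCA -> UUA example)
```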

2. RECOMBINATION (Recombinant DNA) - newly combined DNAs that [glossary] can change the genotype via insertion of NEW (foreign) DNA molecules into a recipient cell
1. fertilization - sperm inserted into a recipient egg cell --> zygote [n + n = 2n]
2. exchange of homologous chromatids via crossing over = new gene combos
3. transformation - absorption of 'foreign' DNA by recipient cells changes the cell
4. BACTERIAL CONJUGATION - involves DNA plasmids (F+ & R = resistance); conjugation may be a primitive sex-like reproduction in bacteria [Hfr]
5. VIRAL TRANSDUCTION - insertion via a viral vector (lysogeny & TRANSDUCTION)
  general transduction - pieces of bacterial DNA are packaged with viral DNA during viral replication
  restricted transduction - a temperate phage goes lytic, carrying adjacent bacterial DNA into the virus particle
6. DESIGNER GENES - man-made recombinant DNA molecules

Designer Genes - Genetic Engineering - Biotechnology

RECOMBINANT DNA TECHNOLOGY... a collection of experimental techniques which allow for the isolation, copying, & insertion of new DNA sequences into host-recipient cells by a number of laboratory protocols & methodologies

Restriction Endonucleases [glossary]... make staggered (unequal) cuts at unique DNA sequences (EcoRI figure), mostly at palindromes... [never odd or even]
5' GAATTC 3'        5' G . . . . .   +   AATTC 3'
3' CTTAAG 5'        3' CTTAA . . . . .       G 5'
campbell 7/e movie
DNAs cut this way have STICKY (complementary) ENDS & can be reannealed or spliced with other DNA molecules to produce new gene combos, then sealed via DNA ligase. myDNAi movie of restriction enzyme action
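The EcoRI cut pattern above can be sketched in a few lines (a toy digest function, not a bioinformatics library; the input sequence is made up):

```python
# EcoRI-style staggered cut at GAATTC, leaving complementary "sticky" ends.
PAIR = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}
SITE = "GAATTC"   # EcoRI recognition sequence
# The site reads the same 5'->3' on both strands: a palindrome.
assert SITE == ''.join(PAIR[b] for b in reversed(SITE))

def digest(dna):
    """Cut a 5'->3' strand after the G of every GAATTC site."""
    fragments, start = [], 0
    i = dna.find(SITE)
    while i != -1:
        fragments.append(dna[start:i + 1])   # ...G
        start = i + 1                        # AATTC... exposes the AATT overhang
        i = dna.find(SITE, i + 1)
    fragments.append(dna[start:])
    return fragments

print(digest("TTGAATTCAA"))   # ['TTG', 'AATTCAA']
```

Because both strands are cut at the same staggered offset, every fragment end carries the same single-stranded AATT overhang, which is what lets any two EcoRI-cut molecules reanneal and be ligated together.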

Procedures of Biotechnology [Genome Biology Research]
A. Technology involved in Cloning a Gene... [animation & the tools of genetic analysis] - making copies of a gene's DNA
1. via a plasmid [A.E. fig, human shotgun plasmid cloning, & myDNAi movie]
2. Libraries... [library figure, BACs, & Sumanas animation - DNA fingerprint library]
3. Probes... [cDNA & reverse transcriptase & DNA probe hybridization... cDNA figure, cDNA library, a probe for a gene of interest, finding a gene with a probe among a library]
4. Polymerase Chain Reaction & figure 20.7 & animations (+ Sumanas, Inc. animation); the PCR song; PCR reaction protocol; Xeroxing DNA & Taq polymerase
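The power of PCR ("Xeroxing DNA") comes from geometric doubling: each thermal cycle ideally doubles the number of target molecules. A minimal sketch (the efficiency parameter is an illustrative extension, not part of the notes):

```python
# PCR amplification: each thermal cycle ideally doubles the target DNA.
def pcr_copies(start_copies, cycles, efficiency=1.0):
    """Copies after n cycles; efficiency < 1.0 models imperfect doubling."""
    return start_copies * (1 + efficiency) ** cycles

print(pcr_copies(1, 30))   # one molecule -> 2^30, roughly a billion copies
```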

Go here to see the original:
Molecular Genetics - DNA, RNA, & Protein

Posted in Molecular Genetics | Comments Off on Molecular Genetics – DNA, RNA, & Protein

Alternative medicine – Wikipedia, the free encyclopedia

Posted: October 20, 2016 at 1:44 am

Alternative or fringe medicine is any practice claimed to have the healing effects of medicine that is proven not to work, has no scientific evidence showing that it works, or is solely harmful.[n 1][n 2][n 3] Alternative medicine is not a part of medicine,[n 1][n 4][n 5][n 6] or of science-based healthcare systems.[1][2][4] It consists of a wide variety of practices, products, and therapies, ranging from those that are biologically plausible but not well tested, to those with known harmful and toxic effects.[n 4][5][6][7][8][9] Despite significant costs in testing alternative medicine, including $2.5 billion spent by the United States government, almost none have shown any effectiveness beyond that of false treatments (placebo).[10][11] Perceived effects of alternative medicine are caused by the placebo effect, decreased effects of functional treatment (and thus also decreased side-effects), and regression toward the mean, where spontaneous improvement is credited to alternative therapies.

Complementary medicine or integrative medicine is when alternative medicine is used together with functional medical treatment, in a belief that it "complements" (improves the efficacy of) the treatment.[n 7][13][14][15][16] However, significant drug interactions caused by alternative therapies may instead negatively influence the treatment, making treatments less effective, notably in cancer therapy.[17][18] CAM is an abbreviation of complementary and alternative medicine.[19][20] It has also been called sCAM or SCAM for "so-called complementary and alternative medicine" or "supplements and complementary and alternative medicine".[21][22] Holistic health or holistic medicine is a similar concept that claims to take into account the "whole" person, including spirituality, in its treatments. Due to its many names the field has been criticized for intense rebranding of what are essentially the same practices: as soon as one name is declared synonymous with quackery, a new one is chosen.

Alternative medical diagnoses and treatments are not included in the science-based treatments taught in medical schools, and are not used in medical practice where treatments are based on scientific knowledge. Alternative therapies are either unproven, disproved, or impossible to prove,[n 8][5][13][24][25] and are often based on religion, tradition, superstition, belief in supernatural energies, pseudoscience, errors in reasoning, propaganda, or fraud.[5][26][6][13] Regulation and licensing of alternative medicine and health care providers varies between and within countries. Marketing alternative therapies as treating or preventing cancer is illegal in many countries including the United States and most parts of the European Union.

Alternative medicine has been criticized for being based on misleading statements, quackery, pseudoscience, antiscience, fraud, or poor scientific methodology. Promoting alternative medicine has been called dangerous and unethical.[n 9][28] Testing alternative medicines that have no scientific basis has been called a waste of scarce medical research resources.[29][30] Critics have said "there is really no such thing as alternative medicine, just medicine that works and medicine that doesn't",[31] and the problem is not only that it does not work, but that the "underlying logic is magical, childish or downright absurd".[32] There have also been calls that the concept of any alternative medicine that works is paradoxical, as any treatment proven to work is simply "medicine".[33]

Alternative medicine consists of a wide range of health care practices, products, and therapies. The shared feature is a claim to heal that is not based on the scientific method. Alternative medicine practices are diverse in their foundations and methodologies.[1] Alternative medicine practices may be classified by their cultural origins or by the types of beliefs upon which they are based.[5][26][1][13] Methods may incorporate or be based on traditional medicinal practices of a particular culture, folk knowledge, superstition, spiritual beliefs, belief in supernatural energies (antiscience), pseudoscience, errors in reasoning, propaganda, fraud, new or different concepts of health and disease, and any bases other than being proven by scientific methods.[5][26][6][13] Different cultures may have their own unique traditional or belief based practices developed recently or over thousands of years, and specific practices or entire systems of practices.

Alternative medicine, such as using naturopathy or homeopathy in place of conventional medicine, is based on belief systems not grounded in science.[1]

Homeopathy is a system developed in a belief that a substance that causes the symptoms of a disease in healthy people cures similar symptoms in sick people.[n 10] It was developed before knowledge of atoms and molecules, and of basic chemistry, which shows that repeated dilution as practiced in homeopathy produces only water, and that homeopathy is scientifically implausible.[36][37][38][39] Homeopathy is considered quackery in the medical community.[40]

Naturopathic medicine is based on a belief that the body heals itself using a supernatural vital energy that guides bodily processes,[41] a view in conflict with the paradigm of evidence-based medicine.[42] Many naturopaths have opposed vaccination,[43] and "scientific evidence does not support claims that naturopathic medicine can cure cancer or any other disease".[44]

Alternative medical systems may be based on traditional medicine practices, such as traditional Chinese medicine, Ayurveda in India, or practices of other cultures around the world.[1]

Traditional Chinese medicine is a combination of traditional practices and beliefs developed over thousands of years in China, together with modifications made by the Communist party. Common practices include herbal medicine, acupuncture (insertion of needles in the body at specified points), massage (Tui na), exercise (qigong), and dietary therapy. The practices are based on belief in a supernatural energy called qi, considerations of Chinese astrology and Chinese numerology, traditional use of herbs and other substances found in China, a belief that the tongue contains a map of the body that reflects changes in the body, and an incorrect model of the anatomy and physiology of internal organs.[5][45][46][47][48][49]

The Chinese Communist Party Chairman Mao Zedong, in response to a lack of modern medical practitioners, revived acupuncture and had its theory rewritten to adhere to the political, economic, and logistic necessities of providing for the medical needs of China's population.[50][page needed] In the 1950s the "history" and theory of traditional Chinese medicine was rewritten as communist propaganda, at Mao's insistence, to correct the supposed "bourgeois thought of Western doctors of medicine". Acupuncture gained attention in the United States when President Richard Nixon visited China in 1972, and the delegation was shown a patient undergoing major surgery while fully awake, ostensibly receiving acupuncture rather than anesthesia. Later it was found that the patients selected for the surgery had both a high pain tolerance and received heavy indoctrination before the operation; these demonstration cases were also frequently receiving morphine surreptitiously through an intravenous drip that observers were told contained only fluids and nutrients.[45] Cochrane reviews found acupuncture is not effective for a wide range of conditions.[52] A systematic review of systematic reviews found that for reducing pain, real acupuncture was no better than sham acupuncture.[53] However, other reviews have found that acupuncture is successful at reducing chronic pain, whereas sham acupuncture was not found to be better than a placebo or than no-acupuncture groups.[54]

Ayurvedic medicine is a traditional medicine of India. Ayurveda believes in the existence of three elemental substances, the doshas (called Vata, Pitta and Kapha), and states that a balance of the doshas results in health, while imbalance results in disease. Such disease-inducing imbalances can be adjusted and balanced using traditional herbs, minerals and heavy metals. Ayurveda stresses the use of plant-based medicines and treatments, with some animal products, and added minerals, including sulfur, arsenic, lead, and copper sulfate.[citation needed]

Safety concerns have been raised about Ayurveda, with two U.S. studies finding about 20 percent of Ayurvedic Indian-manufactured patent medicines contained toxic levels of heavy metals such as lead, mercury and arsenic. Other concerns include the use of herbs containing toxic compounds and the lack of quality control in Ayurvedic facilities. Incidents of heavy metal poisoning have been attributed to the use of these compounds in the United States.[8][57][58][59]

Bases of belief may include belief in existence of supernatural energies undetected by the science of physics, as in biofields, or in belief in properties of the energies of physics that are inconsistent with the laws of physics, as in energy medicine.[1]

Biofield therapies are intended to influence energy fields that, it is purported, surround and penetrate the body.[1] Writers such as noted astrophysicist and advocate of skeptical thinking (Scientific skepticism) Carl Sagan (1934-1996) have described the lack of empirical evidence to support the existence of the putative energy fields on which these therapies are predicated.

Acupuncture is a component of traditional Chinese medicine. Proponents of acupuncture believe that a supernatural energy called qi flows through the universe and through the body, and helps propel the bloodand that blockage of this energy leads to disease.[46] They believe that inserting needles in various parts of the body, determined by astrological calculations, can restore balance to the blocked flows and thereby cure disease.[46]

Chiropractic was developed in the belief that manipulating the spine affects the flow of a supernatural vital energy and thereby affects health and disease.

In the western version of Japanese Reiki, practitioners place their palms on the patient near Chakras that they believe are centers of supernatural energies, and believe that these supernatural energies can transfer from the practitioner's palms to heal the patient.

Bioelectromagnetic-based therapies use verifiable electromagnetic fields, such as pulsed fields, alternating-current, or direct-current fields in an unconventional manner.[1] Magnetic healing does not claim existence of supernatural energies, but asserts that magnets can be used to defy the laws of physics to influence health and disease.

Mind-body medicine takes a holistic approach to health that explores the interconnection between the mind, body, and spirit. It works under the premise that the mind can affect "bodily functions and symptoms".[1] Mind-body medicine includes healing claims made in yoga, meditation, deep-breathing exercises, guided imagery, hypnotherapy, progressive relaxation, qi gong, and tai chi.[1]

Yoga, a method of traditional stretches, exercises, and meditations in Hinduism, may also be classified as an energy medicine insofar as its healing effects are believed to be due to a healing "life energy" that is absorbed into the body through the breath, and is thereby believed to treat a wide variety of illnesses and complaints.[61]

Since the 1990s, tai chi (t'ai chi ch'uan) classes that purely emphasise health have become popular in hospitals, clinics, and community and senior centers. This has occurred as the baby boomer generation has aged and the art's reputation as a low-stress training method for seniors has become better known.[62][63] There has been some divergence between those who say they practice t'ai chi ch'uan primarily for self-defence, those who practice it for its aesthetic appeal (see wushu below), and those who are more interested in its benefits to physical and mental health.

Qigong, chi kung, or chi gung, is a practice of aligning body, breath, and mind for health, meditation, and martial arts training. With roots in traditional Chinese medicine, philosophy, and martial arts, qigong is traditionally viewed as a practice to cultivate and balance qi (chi) or what has been translated as "life energy".[64]

Substance-based practices use substances found in nature such as herbs, foods, non-vitamin supplements and megavitamins, animal and fungal products, and minerals, including use of these products in traditional medical practices that may also incorporate other methods.[1][11][65] Examples include healing claims for nonvitamin supplements, fish oil, Omega-3 fatty acid, glucosamine, echinacea, flaxseed oil, and ginseng.[66] Herbal medicine, or phytotherapy, includes not just the use of plant products, but may also include the use of animal and mineral products.[11] It is among the most commercially successful branches of alternative medicine, and includes the tablets, powders and elixirs that are sold as "nutritional supplements".[11] Only a very small percentage of these have been shown to have any efficacy, and there is little regulation as to standards and safety of their contents.[11] This may include use of known toxic substances, such as use of the poison lead in traditional Chinese medicine.[66]

Manipulative and body-based practices feature the manipulation or movement of body parts, such as is done in bodywork and chiropractic manipulation.

Osteopathic manipulative medicine, also known as osteopathic manipulative treatment, is a core set of techniques of osteopathy and osteopathic medicine distinguishing these fields from mainstream medicine.[67]

Religion based healing practices, such as use of prayer and the laying of hands in Christian faith healing, and shamanism, rely on belief in divine or spiritual intervention for healing.

Shamanism is a practice of many cultures around the world, in which a practitioner reaches an altered state of consciousness in order to encounter and interact with the spirit world or channel supernatural energies in the belief they can heal.[68]

Some alternative medicine practices may be based on pseudoscience, ignorance, or flawed reasoning.[69] This can lead to fraud.[5]

Practitioners of electricity and magnetism based healing methods may deliberately exploit a patient's ignorance of physics to defraud them.[13]

"Alternative medicine" is a loosely defined set of products, practices, and theories that are believed or perceived by their users to have the healing effects of medicine,[n 2][n 4] but whose effectiveness has not been clearly established using scientific methods,[n 2][n 3][5][6][23][25] whose theory and practice is not part of biomedicine,[n 4][n 1][n 5][n 6] or whose theories or practices are directly contradicted by scientific evidence or scientific principles used in biomedicine.[5][26][6] "Biomedicine" is that part of medical science that applies principles of biology, physiology, molecular biology, biophysics, and other natural sciences to clinical practice, using scientific methods to establish the effectiveness of that practice. Alternative medicine is a diverse group of medical and health care systems, practices, and products that originate outside of biomedicine,[n 1] are not considered part of biomedicine,[1] are not widely used by the biomedical healthcare professions,[74] and are not taught as skills practiced in biomedicine.[74] Unlike biomedicine,[n 1] an alternative medicine product or practice does not originate from the sciences or from using scientific methodology, but may instead be based on testimonials, religion, tradition, superstition, belief in supernatural energies, pseudoscience, errors in reasoning, propaganda, fraud, or other unscientific sources.[n 3][5][6][13] The expression "alternative medicine" refers to a diverse range of related and unrelated products, practices, and theories, originating from widely varying sources, cultures, theories, and belief systems, and ranging from biologically plausible practices and products and practices with some evidence, to practices and theories that are directly contradicted by basic science or clear evidence, and products that have proven to be ineffective or even toxic and harmful.[n 4][7][8]

Alternative medicine, complementary medicine, holistic medicine, natural medicine, unorthodox medicine, fringe medicine, unconventional medicine, and new age medicine are used interchangeably as synonyms in some contexts,[75][76][77] but may have different meanings in other contexts: for example, unorthodox medicine may refer to biomedicine that differs from what is commonly practiced, and fringe medicine may refer to biomedicine that is based on fringe science, which may be scientifically valid but is not mainstream.

The meaning of the term "alternative" in the expression "alternative medicine" is not that it is an actual effective alternative to medical science, although some alternative medicine promoters may use the loose terminology to give the appearance of effectiveness.[5] Marcia Angell stated that "alternative medicine" is "a new name for snake oil. There's medicine that works and medicine that doesn't work."[78] Loose terminology may also be used to suggest that a dichotomy exists when it does not, e.g., the use of the expressions "western medicine" and "eastern medicine" to suggest that the difference is a cultural difference between the Asiatic east and the European west, rather than that the difference is between evidence-based medicine and treatments that don't work.[5]

"Complementary medicine" refers to use of alternative medical treatments alongside conventional medicine, in the belief that it increases the effectiveness of the science-based medicine.[79][80][81] An example of "complementary medicine" is use of acupuncture (sticking needles in the body to influence the flow of a supernatural energy), along with using science-based medicine, in the belief that the acupuncture increases the effectiveness or "complements" the science-based medicine.[81] "CAM" is an abbreviation for "complementary and alternative medicine".

The expression "integrative medicine" (or "integrated medicine") is used in two different ways. One use refers to a belief that medicine based on science can be "integrated" with practices that are not. Another use refers only to a combination of alternative medical treatments with conventional treatments that have some scientific proof of efficacy, in which case it is identical with CAM.[16] "Holistic medicine" (or holistic health) is an alternative medicine practice that claims to treat the "whole person" and not just the illness.

"Traditional medicine" and "folk medicine" refer to prescientific practices of a culture, not to what is traditionally practiced in cultures where medical science dominates. "Eastern medicine" typically refers to prescientific traditional medicines of Asia. "Western medicine", when referring to modern practice, typically refers to medical science, and not to alternative medicines practiced in the west (Europe and the Americas). "Western medicine", "biomedicine", "mainstream medicine", "medical science", "science-based medicine", "evidence-based medicine", "conventional medicine", "standard medicine", "orthodox medicine", "allopathic medicine", "dominant health system", and "medicine", are sometimes used interchangeably as having the same meaning, when contrasted with alternative medicine, but these terms may have different meanings in some contexts, e.g., some practices in medical science are not supported by rigorous scientific testing so "medical science" is not strictly identical with "science-based medicine", and "standard medical care" may refer to "best practice" when contrasted with other biomedicine that is less used or less recommended.[n 11][84]

Prominent members of the science[31][85] and biomedical science community[24] assert that it is not meaningful to define an alternative medicine that is separate from a conventional medicine, and that the expressions "conventional medicine", "alternative medicine", "complementary medicine", "integrative medicine", and "holistic medicine" do not refer to anything at all.[24][31][85][86] Their criticisms of trying to make such artificial definitions include: "There's no such thing as conventional or alternative or complementary or integrative or holistic medicine. There's only medicine that works and medicine that doesn't;"[24][31][85] "By definition, alternative medicine has either not been proved to work, or been proved not to work. You know what they call alternative medicine that's been proved to work? Medicine;"[33] "There cannot be two kinds of medicine: conventional and alternative. There is only medicine that has been adequately tested and medicine that has not, medicine that works and medicine that may or may not work. Once a treatment has been tested rigorously, it no longer matters whether it was considered alternative at the outset. If it is found to be reasonably safe and effective, it will be accepted;"[24] and "There is no alternative medicine. There is only scientifically proven, evidence-based medicine supported by solid data or unproven medicine, for which scientific evidence is lacking."[86]

Others in both the biomedical and CAM communities point out that CAM cannot be precisely defined because of the diversity of theories and practices it includes, and because the boundaries between CAM and biomedicine overlap, are porous, and change. The expression "complementary and alternative medicine" (CAM) resists easy definition because the health systems and practices it refers to are diffuse, and its boundaries poorly defined.[7][n 12] Healthcare practices categorized as alternative may differ in their historical origin, theoretical basis, diagnostic technique, therapeutic practice and in their relationship to the medical mainstream. Some alternative therapies, including traditional Chinese medicine (TCM) and Ayurveda, have ancient origins in East or South Asia and are entirely alternative medical systems;[91] others, such as homeopathy and chiropractic, have origins in Europe or the United States and emerged in the eighteenth and nineteenth centuries. Some, such as osteopathy and chiropractic, employ manipulative physical methods of treatment; others, such as meditation and prayer, are based on mind-body interventions. Treatments considered alternative in one location may be considered conventional in another.[94] Thus, chiropractic is not considered alternative in Denmark, and likewise osteopathic medicine is no longer thought of as an alternative therapy in the United States.[94]

One common feature of all definitions of alternative medicine is its designation as "other than" conventional medicine. For example, the widely referenced descriptive definition of complementary and alternative medicine devised by the US National Center for Complementary and Integrative Health (NCCIH) of the National Institutes of Health (NIH) states that it is "a group of diverse medical and health care systems, practices, and products that are not generally considered part of conventional medicine."[1] Even when such a treatment is adopted by conventional medical practitioners, it does not necessarily follow that either it or its practitioners would no longer be considered alternative.[n 13]

Some definitions seek to specify alternative medicine in terms of its social and political marginality to mainstream healthcare.[99] This can refer to the lack of support that alternative therapies receive from the medical establishment and related bodies regarding access to research funding, sympathetic coverage in the medical press, or inclusion in the standard medical curriculum.[99] In 1993, the British Medical Association (BMA), one among many professional organizations that have attempted to define alternative medicine, stated that it[n 14] referred to "...those forms of treatment which are not widely used by the conventional healthcare professions, and the skills of which are not taught as part of the undergraduate curriculum of conventional medical and paramedical healthcare courses."[74] In a US context, an influential definition coined in 1993 by the Harvard-based physician[100] David M. Eisenberg[101] characterized alternative medicine "as interventions neither taught widely in medical schools nor generally available in US hospitals".[102] These descriptive definitions are inadequate in the present day, when some conventional doctors offer alternative medical treatments and CAM introductory courses or modules can be offered as part of standard undergraduate medical training;[103] alternative medicine is taught in more than 50 per cent of US medical schools, and increasingly US health insurers are willing to provide reimbursement for CAM therapies. In 1999, 7.7% of US hospitals reported using some form of CAM therapy; this proportion had risen to 37.7% by 2008.[105]

An expert panel at a conference hosted in 1995 by the US Office for Alternative Medicine (OAM),[106][n 15] devised a theoretical definition[106] of alternative medicine as "a broad domain of healing resources... other than those intrinsic to the politically dominant health system of a particular society or culture in a given historical period."[107] This definition has been widely adopted by CAM researchers,[106] cited by official government bodies such as the UK Department of Health,[108] attributed as the definition used by the Cochrane Collaboration,[109] and, with some modification, was preferred in the 2005 consensus report of the US Institute of Medicine, Complementary and Alternative Medicine in the United States.[n 4]

The 1995 OAM conference definition, an expansion of Eisenberg's 1993 formulation, is silent regarding questions of the medical effectiveness of alternative therapies.[110] Its proponents hold that it thus avoids relativism about differing forms of medical knowledge and, while it is an essentially political definition, this should not imply that the dominance of mainstream biomedicine is solely due to political forces.[110] According to this definition, alternative and mainstream medicine can only be differentiated with reference to what is "intrinsic to the politically dominant health system of a particular society or culture".[111] However, there is neither a reliable method to distinguish between cultures and subcultures, nor to attribute them as dominant or subordinate, nor any accepted criteria to determine the dominance of a cultural entity.[111] If the culture of a politically dominant healthcare system is held to be equivalent to the perspectives of those charged with the medical management of leading healthcare institutions and programs, the definition fails to recognize the potential for division either within such an elite or between a healthcare elite and the wider population.[111]

Normative definitions distinguish alternative medicine from the biomedical mainstream in its provision of therapies that are unproven, unvalidated, or ineffective and support of theories with no recognized scientific basis. These definitions characterize practices as constituting alternative medicine when, used independently or in place of evidence-based medicine, they are put forward as having the healing effects of medicine, but are not based on evidence gathered with the scientific method.[1][13][24][79][80][113] Exemplifying this perspective, a 1998 editorial co-authored by Marcia Angell, a former editor of the New England Journal of Medicine, argued that:

This line of division has been subject to criticism, however, as not all forms of standard medical practice have adequately demonstrated evidence of benefit, [n 1][84] and it is also unlikely in most instances that conventional therapies, if proven to be ineffective, would ever be classified as CAM.[106]

Public information websites maintained by the governments of the US and of the UK make a distinction between "alternative medicine" and "complementary medicine", but mention that these two overlap. The National Center for Complementary and Integrative Health (NCCIH) of the National Institutes of Health (NIH) (a part of the US Department of Health and Human Services) states that "alternative medicine" refers to using a non-mainstream approach in place of conventional medicine and that "complementary medicine" generally refers to using a non-mainstream approach together with conventional medicine, and comments that the boundaries between complementary and conventional medicine overlap and change with time.[1]

The National Health Service (NHS) website NHS Choices (owned by the UK Department of Health), adopting the terminology of NCCIH, states that when a treatment is used alongside conventional treatments, to help a patient cope with a health condition, and not as an alternative to conventional treatment, this use of treatments can be called "complementary medicine"; but when a treatment is used instead of conventional medicine, with the intention of treating or curing a health condition, the use can be called "alternative medicine".[115]

Similarly, the public information website maintained by the National Health and Medical Research Council (NHMRC) of the Commonwealth of Australia uses the acronym "CAM" for a wide range of health care practices, therapies, procedures and devices not within the domain of conventional medicine. In the Australian context this is stated to include acupuncture; aromatherapy; chiropractic; homeopathy; massage; meditation and relaxation therapies; naturopathy; osteopathy; reflexology; traditional Chinese medicine; and the use of vitamin supplements.[116]

The Danish National Board of Health's "Council for Alternative Medicine" (Sundhedsstyrelsens Råd for Alternativ Behandling (SRAB)), an independent institution under the National Board of Health (Danish: Sundhedsstyrelsen), uses the term "alternative medicine" for:

In General Guidelines for Methodologies on Research and Evaluation of Traditional Medicine, published in 2000 by the World Health Organization (WHO), complementary and alternative medicine were defined as a broad set of health care practices that are not part of that country's own tradition and are not integrated into the dominant health care system.[118] Some herbal therapies are mainstream in Europe but are alternative in the US.[120]

The history of alternative medicine may refer to the history of a group of diverse medical practices that were collectively promoted as "alternative medicine" beginning in the 1970s, to the collection of individual histories of members of that group, or to the history of western medical practices that were labeled "irregular practices" by the western medical establishment.[5][121][122][123][124] It includes the histories of complementary medicine and of integrative medicine. Before the 1970s, western practitioners that were not part of the increasingly science-based medical establishment were referred to as "irregular practitioners", and were dismissed by the medical establishment as unscientific and as practicing quackery.[121][122] Until the 1970s, irregular practice became increasingly marginalized as quackery and fraud, as western medicine increasingly incorporated scientific methods and discoveries and had a corresponding increase in the success of its treatments.[123] In the 1970s, irregular practices were grouped with traditional practices of nonwestern cultures and with other unproven or disproven practices that were not part of biomedicine, with the entire group collectively marketed and promoted under the single expression "alternative medicine".[5][121][122][123][125]

Use of alternative medicine in the west began to rise following the counterculture movement of the 1960s, as part of the rising new age movement of the 1970s.[5][126][127] This was due to misleading mass marketing of "alternative medicine" as an effective "alternative" to biomedicine, changing social attitudes about not using chemicals and challenging the establishment and authority of any kind, sensitivity to giving equal measure to the beliefs and practices of other cultures (cultural relativism), and growing frustration and desperation by patients about limitations and side effects of science-based medicine.[5][122][123][124][125][127][128] At the same time, in 1975, the American Medical Association, which played the central role in fighting quackery in the United States, abolished its quackery committee and closed down its Department of Investigation.[121]:xxi[128] By the early to mid 1970s the expression "alternative medicine" came into widespread use, and the expression became mass marketed as a collection of "natural" and effective treatment "alternatives" to science-based biomedicine.[5][128][129][130] By 1983, mass marketing of "alternative medicine" was so pervasive that the British Medical Journal (BMJ) observed that "an apparently endless stream of books, articles, and radio and television programmes urge on the public the virtues of (alternative medicine) treatments ranging from meditation to drilling a hole in the skull to let in more oxygen".[128] In this 1983 article, the BMJ wrote, "one of the few growth industries in contemporary Britain is alternative medicine", noting that by 1983, "33% of patients with rheumatoid arthritis and 39% of those with backache admitted to having consulted an alternative practitioner".[128]

By about 1990, the American alternative medicine industry had grown to $27 billion per year, with polls showing 30% of Americans were using it.[127][131] Moreover, polls showed that Americans made more visits for alternative therapies than the total number of visits to primary care doctors, and American out-of-pocket spending (non-insurance spending) on alternative medicine was about equal to spending on biomedical doctors.[121]:172 In 1991, Time magazine ran a cover story, "The New Age of Alternative Medicine: Why New Age Medicine Is Catching On".[127][131] In 1993, the New England Journal of Medicine reported one in three Americans as using alternative medicine.[127] In 1993, the Public Broadcasting System ran a Bill Moyers special, Healing and the Mind, with Moyers commenting that "...people by the tens of millions are using alternative medicine. If established medicine does not understand that, they are going to lose their clients."[127]

Another explosive growth began in the 1990s, when senior level political figures began promoting alternative medicine, investing large sums of government medical research funds into testing alternative medicine, including testing of scientifically implausible treatments, and relaxing government regulation of alternative medicine products as compared to biomedical products.[5][121]:xxi[122][123][124][125][132][133] Beginning with a 1991 appropriation of $2 million for funding alternative medicine research, federal spending grew to a cumulative total of about $2.5 billion by 2009, with 50% of Americans using alternative medicine by 2013.[10][134]

In 1991, pointing to a need for testing because of the widespread use of alternative medicine without authoritative information on its efficacy, United States Senator Tom Harkin used $2 million of his discretionary funds to create the Office for the Study of Unconventional Medical Practices (OSUMP), later renamed the Office of Alternative Medicine (OAM).[121]:170[135][136] The OAM was created within the National Institutes of Health (NIH), the scientifically prestigious primary agency of the United States government responsible for biomedical and health-related research.[121]:170[135][136] The mandate was to investigate, evaluate, and validate effective alternative medicine treatments, and to alert the public to the results of efficacy testing.[131][135][136][137]

Sen. Harkin had become convinced his allergies were cured by taking bee pollen pills, and was urged to make the spending by two of his influential constituents, Berkley Bedell and Frank Wiewel.[131][135][136] Bedell, a longtime friend of Sen. Harkin, was a former member of the United States House of Representatives who believed that alternative medicine had twice cured him of diseases after mainstream medicine had failed, claiming that cow's milk colostrum cured his Lyme disease, and that an herbal derivative from camphor had prevented post-surgical recurrence of his prostate cancer.[121][131] Wiewel was a promoter of unproven cancer treatments involving a mixture of blood sera that the Food and Drug Administration had banned from being imported.[131] Both Bedell and Wiewel became members of the advisory panel for the OAM. The company that sold the bee pollen was later fined by the Federal Trade Commission for making false health claims about their bee-pollen products reversing the aging process, curing allergies, and helping with weight loss.[138]

In 1993, Britain's Prince Charles, who claimed that homeopathy and other alternative medicine was an effective alternative to biomedicine, established the Foundation for Integrated Health (FIH), as a charity to explore "how safe, proven complementary therapies can work in conjunction with mainstream medicine".[139] The FIH received government funding through grants from Britain's Department of Health.[139]

In 1994, Sen. Harkin (D) and Senator Orrin Hatch (R) introduced the Dietary Supplement Health and Education Act (DSHEA).[140][141] The act reduced the authority of the FDA to monitor products sold as "natural" treatments.[140] Labeling standards were reduced to allow health claims for supplements based only on unconfirmed preliminary studies that were not subjected to scientific peer review, and the act made it more difficult for the FDA to promptly seize products or demand proof of safety where there was evidence of a product being dangerous.[141] The Act became known as "The 1993 Snake Oil Protection Act" following a New York Times editorial under that name.[140]

Senator Harkin complained about the "unbendable rules of randomized clinical trials", citing his use of bee pollen to treat his allergies, which he claimed to be effective even though it was biologically implausible and efficacy was not established using scientific methods.[135][142] Sen. Harkin asserted that claims for alternative medicine efficacy should be allowed without conventional scientific testing, even when they are biologically implausible: "It is not necessary for the scientific community to understand the process before the American public can benefit from these therapies."[140] Following passage of the act, sales rose from about $4 billion in 1994, to $20 billion by the end of 2000, at the same time as evidence of their lack of efficacy or harmful effects grew.[140] Senator Harkin came into open public conflict with the first OAM Director Joseph M. Jacobs and OAM board members from the scientific and biomedical community.[136] Jacobs' insistence on rigorous scientific methodology caused friction with Senator Harkin.[135][142][143] Increasing political resistance to the use of scientific methodology was publicly criticized by Dr. Jacobs, and another OAM board member complained that "nonsense has trickled down to every aspect of this office".[135][142] In 1994, Senator Harkin appeared on television with cancer patients who blamed Dr. Jacobs for blocking their access to untested cancer treatment, leading Jacobs to resign in frustration.[135][142]

In 1995, Wayne Jonas, a promoter of homeopathy and political ally of Senator Harkin, became the director of the OAM, and continued in that role until 1999.[144] In 1997, the NCCAM budget was increased from $12 million to $20 million annually.[145] From 1990 to 1997, use of alternative medicine in the US increased by 25%, with a corresponding 50% increase in expenditures.[146] The OAM drew increasing criticism from eminent members of the scientific community, with letters to the Senate Appropriations Committee when renewal of OAM funding came up for discussion.[121]:175 Nobel laureate Paul Berg wrote that the prestigious NIH should not be degraded to act as a cover for quackery, calling the OAM "an embarrassment to serious scientists."[121]:175[145] The president of the American Physical Society wrote complaining that the government was spending money on testing products and practices that "violate basic laws of physics and more clearly resemble witchcraft".[121]:175[145] In 1998, the President of the North Carolina Medical Association publicly called for shutting down the OAM.[147]

In 1998, NIH director and Nobel laureate Harold Varmus came into conflict with Senator Harkin by pushing for more NIH control of alternative medicine research.[148] The NIH Director placed the OAM under more strict scientific NIH control.[145][148] Senator Harkin responded by elevating the OAM into an independent NIH "center", just short of being its own "institute", renamed the National Center for Complementary and Alternative Medicine (NCCAM). NCCAM had a mandate to promote a more rigorous and scientific approach to the study of alternative medicine, research training and career development, outreach, and "integration". In 1999, the NCCAM budget was increased from $20 million to $50 million.[147][148] The United States Congress approved the appropriations without dissent. In 2000, the budget was increased to about $68 million, in 2001 to $90 million, in 2002 to $104 million, and in 2003 to $113 million.[147]

In 2004, modifications of the European Parliament's 2001 Directive 2001/83/EC, regulating all medicine products, were made with the expectation of influencing development of the European market for alternative medicine products.[149] Regulation of alternative medicine in Europe was loosened with "a simplified registration procedure" for traditional herbal medicinal products.[149][150] Plausible "efficacy" for traditional medicine was redefined to be based on long term popularity and testimonials ("the pharmacological effects or efficacy of the medicinal product are plausible on the basis of long-standing use and experience."), without scientific testing.[149][150] The Committee on Herbal Medicinal Products (HMPC) was created within the European Medicines Agency in London (EMEA). A special working group was established for homeopathic remedies under the Heads of Medicines Agencies.[149]

Through 2004, alternative medicine that was traditional to Germany continued to be a regular part of the health care system, including homeopathy and anthroposophic medicine.[149] The German Medicines Act mandated that science-based medical authorities consider the "particular characteristics" of complementary and alternative medicines.[149] By 2004, homeopathy had grown to be the most used alternative therapy in France, growing from 16% of the population using homeopathic medicine in 1982, to 29% by 1987, 36% by 1992, and 62% of French mothers using homeopathic medicines by 2004, with 94.5% of French pharmacists advising pregnant women to use homeopathic remedies.[151] As of 2004[update], 100 million people in India depended solely on traditional German homeopathic remedies for their medical care.[152] As of 2010[update], homeopathic remedies continued to be the leading alternative treatment used by European physicians.[151] By 2005, sales of homeopathic remedies and anthroposophical medicine had grown to 930 million euros, a 60% increase from 1995.[151][153]

In 2008, London's The Times published a letter from Edzard Ernst that asked the FIH to recall two guides promoting alternative medicine, saying: "the majority of alternative therapies appear to be clinically ineffective, and many are downright dangerous." In 2010, Britain's FIH closed after allegations of fraud and money laundering led to arrests of its officials.[139]

In 2009, after a history of 17 years of government testing and spending of nearly $2.5 billion on research had produced almost no clearly proven efficacy of alternative therapies, Senator Harkin complained, "One of the purposes of this center was to investigate and validate alternative approaches. Quite frankly, I must say publicly that it has fallen short. I think quite frankly that in this center and in the office previously before it, most of its focus has been on disproving things rather than seeking out and approving."[148][154][155] Members of the scientific community criticized this comment as showing Senator Harkin did not understand the basics of scientific inquiry, which tests hypotheses, but never intentionally attempts to "validate approaches".[148] Members of the scientific and biomedical communities complained that after a history of 17 years of being tested, at a cost of over $2.5 billion spent on testing scientifically and biologically implausible practices, almost no alternative therapy had shown clear efficacy.[10] In 2009, the NCCAM's budget was increased to about $122 million.[148] Overall NIH funding for CAM research increased to $300 million by 2009.[148] By 2009, Americans were spending $34 billion annually on CAM.[156]

Since 2009, according to Art. 118a of the Swiss Federal Constitution, the Swiss Confederation and the Cantons of Switzerland shall within the scope of their powers ensure that consideration is given to complementary medicine.[157]

In 2012, the Journal of the American Medical Association (JAMA) published a criticism that study after study had been funded by NCCAM, but "failed to prove that complementary or alternative therapies are anything more than placebos".[158] The JAMA criticism pointed to large wasting of research money on testing scientifically implausible treatments, citing "NCCAM officials spending $374,000 to find that inhaling lemon and lavender scents does not promote wound healing; $750,000 to find that prayer does not cure AIDS or hasten recovery from breast-reconstruction surgery; $390,000 to find that ancient Indian remedies do not control type 2 diabetes; $700,000 to find that magnets do not treat arthritis, carpal tunnel syndrome, or migraine headaches; and $406,000 to find that coffee enemas do not cure pancreatic cancer."[158] It was pointed out that negative results from testing were generally ignored by the public, and that people continue to "believe what they want to believe, arguing that it does not matter what the data show: They know what works for them".[158] Continued increasing use of CAM products was also blamed on the lack of FDA ability to regulate alternative products, where negative studies do not result in FDA warnings or FDA-mandated changes on labeling, so few consumers are aware that many claims made for supplements have been found to be unsupported.[158]

By 2013, 50% of Americans were using CAM.[134] As of 2013[update], CAM medicinal products in Europe continued to be exempted from documented efficacy standards required of other medicinal products.[159]

In 2014 the NCCAM was renamed the National Center for Complementary and Integrative Health (NCCIH), with a new charter requiring that 12 of the 18 council members be selected with preference given to leading representatives of complementary and alternative medicine, that 9 of the members be licensed practitioners of alternative medicine, that 6 members be general public leaders in the fields of public policy, law, health policy, economics, and management, and that 3 members represent the interests of individual consumers of complementary and alternative medicine.[160]

Much of what is now categorized as alternative medicine was developed as independent, complete medical systems. These were developed long before biomedicine and use of scientific methods. Each system was developed in relatively isolated regions of the world where there was little or no medical contact with pre-scientific western medicine, or with each other's systems. Examples are traditional Chinese medicine and the Ayurvedic medicine of India.

Other alternative medicine practices, such as homeopathy, were developed in western Europe and in opposition to western medicine, at a time when western medicine was based on unscientific theories that were dogmatically imposed by western religious authorities. Homeopathy was developed prior to discovery of the basic principles of chemistry, which proved homeopathic remedies contained nothing but water. But homeopathy, with its remedies made of water, was harmless compared to the unscientific and dangerous orthodox western medicine practiced at that time, which included use of toxins and draining of blood, often resulting in permanent disfigurement or death.[122]

Other alternative practices such as chiropractic and osteopathic manipulative medicine were developed in the United States at a time when western medicine was beginning to incorporate scientific methods and theories but the biomedical model was not yet totally dominant. Practices such as chiropractic and osteopathy, each considered irregular by the western medical establishment, also opposed each other, both rhetorically and politically, over licensing legislation. Osteopathic practitioners added the courses and training of biomedicine to their licensing, and licensed Doctors of Osteopathic Medicine gradually abandoned the unscientific origins of the field. Stripped of its original nonscientific practices and theories, osteopathic medicine is now considered the same as biomedicine.

Further information: Rise of modern medicine

Until the 1970s, western practitioners who were not part of the medical establishment were referred to as "irregular practitioners" and were dismissed by the medical establishment as unscientific and as practicing quackery.[122] Irregular practice became increasingly marginalized as quackery and fraud as western medicine increasingly incorporated scientific methods and discoveries and saw a corresponding increase in the success of its treatments.

Dating from the 1970s, medical professionals, sociologists, anthropologists and other commentators noted the increasing visibility of a wide variety of health practices that had neither derived directly from nor been verified by biomedical science.[161] Since that time, those who have analyzed this trend have deliberated over the most apt language with which to describe this emergent health field.[161] A variety of terms have been used, including heterodox, irregular, fringe and alternative medicine while others, particularly medical commentators, have been satisfied to label them as instances of quackery.[161] The most persistent term has been alternative medicine but its use is problematic as it assumes a value-laden dichotomy between a medical fringe, implicitly of borderline acceptability at best, and a privileged medical orthodoxy, associated with validated medico-scientific norms.[162] The use of the category of alternative medicine has also been criticized as it cannot be studied as an independent entity but must be understood in terms of a regionally and temporally specific medical orthodoxy.[163] Its use can also be misleading as it may erroneously imply that a real medical alternative exists.[164] As with near-synonymous expressions, such as unorthodox, complementary, marginal, or quackery, these linguistic devices have served, in the context of processes of professionalisation and market competition, to establish the authority of official medicine and police the boundary between it and its unconventional rivals.[162]

An early instance of the influence of this modern, or western, scientific medicine outside Europe and North America is Peking Union Medical College.[165][n 16][n 17]

From a historical perspective, the emergence of alternative medicine, if not the term itself, is typically dated to the 19th century.[166] This is despite the fact that there are variants of Western non-conventional medicine that arose in the late-eighteenth century or earlier and some non-Western medical traditions, currently considered alternative in the West and elsewhere, which boast extended historical pedigrees.[162] Alternative medical systems, however, can only be said to exist when there is an identifiable, regularized and authoritative standard medical practice, such as arose in the West during the nineteenth century, to which they can function as an alternative.

During the late eighteenth and nineteenth centuries regular and irregular medical practitioners became more clearly differentiated throughout much of Europe and,[168] as the nineteenth century progressed, most Western states converged in the creation of legally delimited and semi-protected medical markets.[169] It is at this point that an "official" medicine, created in cooperation with the state and employing a scientific rhetoric of legitimacy, emerges as a recognizable entity and that the concept of alternative medicine as a historical category becomes tenable.[170]

As part of this process, professional adherents of mainstream medicine in countries such as Germany, France, and Britain increasingly invoked the scientific basis of their discipline as a means of engendering internal professional unity and of external differentiation in the face of sustained market competition from homeopaths, naturopaths, mesmerists and other nonconventional medical practitioners, finally achieving a degree of imperfect dominance through alliance with the state and the passage of regulatory legislation.[162][164] In the US the Johns Hopkins University School of Medicine, based in Baltimore, Maryland, opened in 1893, with William H. Welch and William Osler among the founding physicians, and was the first medical school devoted to teaching "German scientific medicine".[171]

Buttressed by increased authority arising from significant advances in the medical sciences of the late 19th century onwards, including development and application of the germ theory of disease by the chemist Louis Pasteur and the surgeon Joseph Lister, of microbiology co-founded by Robert Koch (in 1885 appointed professor of hygiene at the University of Berlin), and of the use of X-rays (Röntgen rays), the 1910 Flexner Report called upon American medical schools to follow the model of the Johns Hopkins School of Medicine, and adhere to mainstream science in their teaching and research. This was in a belief, mentioned in the Report's introduction, that the preliminary and professional training then prevailing in medical schools should be reformed, in view of the new means for diagnosing and combating disease made available by the sciences on which medicine depended.[n 18][173]

Putative medical practices at the time that later became known as "alternative medicine" included homeopathy (founded in Germany in the early 19th century) and chiropractic (founded in North America in the late 19th century). These conflicted in principle with the developments in medical science upon which the Flexner reforms were based, and they have not become compatible with further advances of medical science such as listed in Timeline of medicine and medical technology, 1900-1999 and 2000-present, nor have Ayurveda, acupuncture or other kinds of alternative medicine.[citation needed]

At the same time "Tropical medicine" was being developed as a specialist branch of western medicine in research establishments such as Liverpool School of Tropical Medicine founded in 1898 by Alfred Lewis Jones, London School of Hygiene & Tropical Medicine, founded in 1899 by Patrick Manson and Tulane University School of Public Health and Tropical Medicine, instituted in 1912. A distinction was being made between western scientific medicine and indigenous systems. An example is given by an official report about indigenous systems of medicine in India, including Ayurveda, submitted by Mohammad Usman of Madras and others in 1923. This stated that the first question the Committee considered was "to decide whether the indigenous systems of medicine were scientific or not".[174][175]

By the later twentieth century the term 'alternative medicine' entered public discourse,[n 19][178] but it was not always being used with the same meaning by all parties. Arnold S. Relman remarked in 1998 that in the best kind of medical practice, all proposed treatments must be tested objectively, and that in the end there will only be treatments that pass and those that do not, those that are proven worthwhile and those that are not. He asked 'Can there be any reasonable "alternative"?'[179] But also in 1998 the then Surgeon General of the United States, David Satcher,[180] issued public information about eight common alternative treatments (including acupuncture, holistic medicine and massage), together with information about common diseases and conditions, on nutrition, diet, and lifestyle changes, and about helping consumers to decipher fraud and quackery, and to find healthcare centers and doctors who practiced alternative medicine.[181]

By 1990, approximately 60 million Americans had used one or more complementary or alternative therapies to address health issues, according to a nationwide survey in the US published in 1993 by David Eisenberg.[182] A study published in the November 11, 1998 issue of the Journal of the American Medical Association reported that 42% of Americans had used complementary and alternative therapies, up from 34% in 1990.[146] However, despite the growth in patient demand for complementary medicine, most of the early alternative/complementary medical centers failed.[183]

Mainly as a result of reforms following the Flexner Report of 1910,[184] medical education in established medical schools in the US has generally not included alternative medicine as a teaching topic.[n 20] Typically, their teaching is based on current practice and scientific knowledge about: anatomy, physiology, histology, embryology, neuroanatomy, pathology, pharmacology, microbiology and immunology.[186] Medical schools' teaching includes such topics as doctor-patient communication, ethics, the art of medicine,[187] and engaging in complex clinical reasoning (medical decision-making).[188] Writing in 2002, Snyderman and Weil remarked that by the early twentieth century the Flexner model had helped to create the 20th-century academic health center, in which education, research, and practice were inseparable. While this had much improved medical practice by defining with increasing certainty the pathophysiological basis of disease, a single-minded focus on the pathophysiological had diverted much of mainstream American medicine from clinical conditions that were not well understood in mechanistic terms, and were not effectively treated by conventional therapies.[189]

By 2001 some form of CAM training was being offered by at least 75 out of 125 medical schools in the US.[190] Exceptionally, the School of Medicine of the University of Maryland, Baltimore includes a research institute for integrative medicine (a member entity of the Cochrane Collaboration).[191][192] Medical schools are responsible for conferring medical degrees, but a physician typically may not legally practice medicine until licensed by the local government authority. Licensed physicians in the US who have attended one of the established medical schools there have usually graduated Doctor of Medicine (MD).[193] All states require that applicants for MD licensure be graduates of an approved medical school and complete the United States Medical Licensing Exam (USMLE).[193]

The British Medical Association, in its publication Complementary Medicine, New Approach to Good Practice (1993), gave as a working definition of non-conventional therapies (including acupuncture, chiropractic and homeopathy): "...those forms of treatment which are not widely used by the orthodox health-care professions, and the skills of which are not part of the undergraduate curriculum of orthodox medical and paramedical health-care courses." By 2000 some medical schools in the UK were offering CAM familiarisation courses to undergraduate medical students while some were also offering modules specifically on CAM.[195]

The Cochrane Collaboration Complementary Medicine Field explains its "Scope and Topics" by giving a broad and general definition for complementary medicine as including practices and ideas outside the domain of conventional medicine in several countries, and defined by its users as preventing or treating illness, or promoting health and well-being, and which complement mainstream medicine in three ways: by contributing to a common whole, by satisfying a demand not met by conventional practices, and by diversifying the conceptual framework of medicine.[196]

Proponents of an evidence-base for medicine[n 21][198][199][200][201] such as the Cochrane Collaboration (founded in 1993 and from 2011 providing input for WHO resolutions) take a position that all systematic reviews of treatments, whether "mainstream" or "alternative", ought to be held to the current standards of scientific method.[192] In a study titled Development and classification of an operational definition of complementary and alternative medicine for the Cochrane Collaboration (2011) it was proposed that indicators that a therapy is accepted include government licensing of practitioners, coverage by health insurance, statements of approval by government agencies, and recommendation as part of a practice guideline; and that if something is currently a standard, accepted therapy, then it is not likely to be widely considered as CAM.[106]

That alternative medicine has been on the rise "in countries where Western science and scientific method generally are accepted as the major foundations for healthcare, and 'evidence-based' practice is the dominant paradigm" was described as an "enigma" in the Medical Journal of Australia.[202]

Critics in the US say the expression is deceptive because it implies there is an effective alternative to science-based medicine, and that complementary is deceptive because it implies that the treatment increases the effectiveness of (complements) science-based medicine, while alternative medicines that have been tested nearly always have no measurable positive effect compared to a placebo.[5][203][204][205]

Some opponents, focused upon health fraud, misinformation, and quackery as public health problems in the US, are highly critical of alternative medicine, notably Wallace Sampson and Paul Kurtz founders of Scientific Review of Alternative Medicine and Stephen Barrett, co-founder of The National Council Against Health Fraud and webmaster of Quackwatch.[206] Grounds for opposing alternative medicine stated in the US and elsewhere include that:

Paul Offit proposed that "alternative medicine becomes quackery" in four ways, by:[85]

A United States government agency, the National Center for Complementary and Integrative Health (NCCIH), created its own classification system for branches of complementary and alternative medicine that divides them into five major groups. These groups have some overlap, and distinguish two types of energy medicine: veritable, which involves scientifically observable energy (including magnet therapy, colorpuncture and light therapy), and putative, which invokes physically undetectable or unverifiable energy.[215]

Alternative medicine practices and beliefs are diverse in their foundations and methodologies. The wide range of treatments and practices referred to as alternative medicine includes some stemming from nineteenth century North America, such as chiropractic and naturopathy, others, mentioned by Jütte, that originated in eighteenth- and nineteenth-century Germany, such as homeopathy and hydropathy,[164] and some that have originated in China or India, while African, Caribbean, Pacific Island, Native American, and other regional cultures have traditional medical systems as diverse as their diversity of cultures.[1]

Examples of CAM as a broader term for unorthodox treatment and diagnosis of illnesses, disease, infections, etc.,[216] include yoga, acupuncture, aromatherapy, chiropractic, herbalism, homeopathy, hypnotherapy, massage, osteopathy, reflexology, relaxation therapies, spiritual healing and tai chi.[216] CAM differs from conventional medicine: it is normally private medicine, not covered by health insurance, and is paid for out of pocket by the patient, which can make it expensive.[216] CAM use tends to be higher among wealthier and more educated people.[146]

The NCCIH classification system is:

Alternative therapies based on electricity or magnetism use verifiable electromagnetic fields, such as pulsed fields, alternating-current, or direct-current fields in an unconventional manner rather than claiming the existence of imponderable or supernatural energies.[1]

Substance-based practices use substances found in nature, such as herbs, foods, non-vitamin supplements and megavitamins, and minerals, and include traditional herbal remedies with herbs specific to the regions in which the cultural practices arose.[1] Nonvitamin supplements include fish oil, omega-3 fatty acids, glucosamine, echinacea, flaxseed oil or pills, and ginseng, when used under a claim to have healing effects.[66]

Mind-body interventions, working under the premise that the mind can affect "bodily functions and symptoms",[1] include healing claims made in hypnotherapy,[217] and in guided imagery, meditation, progressive relaxation, qi gong, tai chi and yoga.[1] Meditation practices including mantra meditation, mindfulness meditation, yoga, tai chi, and qi gong have many uncertainties. According to an AHRQ review, the available evidence on meditation practices through September 2005 is of poor methodological quality and definite conclusions on the effects of meditation in healthcare cannot be made using existing research.[218][219]

Naturopathy is based on a belief in vitalism, which posits that a special energy called vital energy or vital force guides bodily processes such as metabolism, reproduction, growth, and adaptation.[41] The term was coined in 1895[220] by John Scheel and popularized by Benedict Lust, the "father of U.S. naturopathy".[221] Today, naturopathy is primarily practiced in the United States and Canada.[222] Naturopaths in unregulated jurisdictions may use the Naturopathic Doctor designation or other titles regardless of level of education.[223]


Posted in Integrative Medicine

AAIM | Welcome to the American Association of Integrative …

Posted: October 20, 2016 at 1:44 am

Dr. Behlen studied at the University of Wisconsin-Oshkosh and the University of Cambridge for her undergraduate studies in Biological Sciences. She completed her medical training at Texas Health & Science University, specializing in Pain Management and Sports Medicine. At Concordia University Dr. Behlen treated athletes and completed her Sports Medicine training. Her boards were completed in San Antonio, Texas - Medical Branch. While in San Antonio she spent time treating athletes in the NBA. With over a decade and a half of experience in the healthcare field, Dr. Behlen found a calling in Integrative Pain Management. She also has a large fertility patient population.

Dr. Behlen is a Fellow of the American Association of Integrative Medicine (FAAIM). She has appeared on the nationally syndicated NBC show Dr. Oz as a physician commentator, and she serves on the National Pain Advisory Board for Chronic Migraines. As of 2014 she is one of the newest elected Board Members of the American Association of Integrative Medicine. She is a Diplomate in the College of Physicians (DCP), the College of Pain Management (DCPM), the College of Pharmaceutical & Apothecary Sciences (DCPAS), and the College of Acupuncture & Neuromuscular Therapy (DCANT), and she holds the highest national board certification for Acupuncture & Oriental Medicine (NCCAOM). Dr. Behlen is a member of the American Association of Physicians & Surgeons, the American Association of Integrative Medicine, the American Chronic Pain Association, and the American Pregnancy Association. Currently Dr. Behlen is the only doctor in the state of Oklahoma who has achieved these board certifications, and she is ranked as the #1 Provider for Acupuncture in the state.

Orthomolecular medicine, as conceptualized by double-Nobel laureate Linus Pauling, aims to restore the optimum environment of the body by correcting imbalances or deficiencies based on individual biochemistry, using substances natural to the body such as vitamins, minerals, amino acids, trace elements and fatty acids. The key concept in orthomolecular medicine is that genetic factors affect not only the physical characteristics of individuals, but also their biochemical makeup. Biochemical pathways of the body have significant genetic variability and diseases such as atherosclerosis, cancer, schizophrenia or depression are associated with specific biochemical abnormalities which are causal or contributing factors of the illness.



Posted in Integrative Medicine

Recent Articles | Genetic Engineering | The Scientist …

Posted: October 20, 2016 at 1:44 am


Tweaks to a transformation protocol in 1986 cemented the little plant's mighty role in plant genetics research.


Other government authorities have yet to evaluate a proposal aimed at reducing populations of Zika-carrying insects in Florida.


Researchers engineer bacteria that deliver an anti-tumor toxin in mice before self-destructing.


A National Academies-led analysis evaluates the impacts of genetically engineered crops and calls for updated regulations.


Researchers use a gene editor to introduce an allele that eliminates the horned trait (and thus the need for an expensive and painful process of dehorning) in dairy cows.


Monkeys genetically engineered with multiple copies of an autism-linked human gene display some autism-like behaviors, scientists show.


Genetically modified bacteria that don't survive unless given an unnatural amino acid could serve as a new control measure to protect wild organisms and ecosystems against accidental release.


By Kerry Grens | December 24, 2015

The Scientist's choice of major improvements in imaging, optogenetics, single-cell analyses, and CRISPR


By Kerry Grens | December 21, 2015

The first two bulls genetically engineered to lack horns arrived at the University of California, Davis, for breeding.


By Kate Yandell | December 7, 2015

Kill switches ensure that genetically engineered bacteria survive only in certain environmental conditions.



Posted in Genetic Engineering

Genetics – Wikipedia

Posted: October 20, 2016 at 1:44 am

This article is about the general scientific term. For the scientific journal, see Genetics (journal).

Genetics is the study of genes, genetic variation, and heredity in living organisms.[1][2] It is generally considered a field of biology, but it intersects frequently with many of the life sciences and is strongly linked with the study of information systems.

The father of genetics is Gregor Mendel, a late 19th-century scientist and Augustinian friar. Mendel studied 'trait inheritance', patterns in the way traits were handed down from parents to offspring. He observed that organisms (pea plants) inherit traits by way of discrete "units of inheritance". This term, still used today, is a somewhat ambiguous definition of what is referred to as a gene.

Trait inheritance and molecular inheritance mechanisms of genes are still primary principles of genetics in the 21st century, but modern genetics has expanded beyond inheritance to studying the function and behavior of genes. Gene structure and function, variation, and distribution are studied within the context of the cell, the organism (e.g. dominance) and within the context of a population. Genetics has given rise to a number of sub-fields including epigenetics and population genetics. Organisms studied within the broad field span the domain of life, including bacteria, plants, animals, and humans.

Genetic processes work in combination with an organism's environment and experiences to influence development and behavior, often referred to as nature versus nurture. The intra- or extra-cellular environment of a cell or organism may switch gene transcription on or off. A classic example is two seeds of genetically identical corn, one placed in a temperate climate and one in an arid climate. While the average height of the two corn stalks may be genetically determined to be equal, the one in the arid climate only grows to half the height of the one in the temperate climate due to lack of water and nutrients in its environment.

The word genetics stems from the Ancient Greek genetikos meaning "genitive"/"generative", which in turn derives from genesis meaning "origin".[3][4][5]

The observation that living things inherit traits from their parents has been used since prehistoric times to improve crop plants and animals through selective breeding.[6] The modern science of genetics, seeking to understand this process, began with the work of Gregor Mendel in the mid-19th century.[7]

Before Mendel, Imre Festetics, a Hungarian noble who lived in Kőszeg, was the first to use the word "genetics". He described several rules of genetic inheritance in his work The genetic laws of nature (Die genetische Gesätze der Natur, 1819). His second law is the same as the one Mendel later published. In his third law, he developed the basic principles of mutation (he can be considered a forerunner of Hugo de Vries).[8]

Other theories of inheritance preceded his work. A popular theory during Mendel's time was the concept of blending inheritance: the idea that individuals inherit a smooth blend of traits from their parents.[9] Mendel's work provided examples where traits were definitely not blended after hybridization, showing that traits are produced by combinations of distinct genes rather than a continuous blend. Blending of traits in the progeny is now explained by the action of multiple genes with quantitative effects. Another theory that had some support at that time was the inheritance of acquired characteristics: the belief that individuals inherit traits strengthened by their parents. This theory (commonly associated with Jean-Baptiste Lamarck) is now known to be wrongthe experiences of individuals do not affect the genes they pass to their children,[10] although evidence in the field of epigenetics has revived some aspects of Lamarck's theory.[11] Other theories included the pangenesis of Charles Darwin (which had both acquired and inherited aspects) and Francis Galton's reformulation of pangenesis as both particulate and inherited.[12]

Modern genetics started with Gregor Johann Mendel, a scientist and Augustinian friar who studied the nature of inheritance in plants. In his paper "Versuche über Pflanzenhybriden" ("Experiments on Plant Hybridization"), presented in 1865 to the Naturforschender Verein (Society for Research in Nature) in Brünn, Mendel traced the inheritance patterns of certain traits in pea plants and described them mathematically.[13] Although this pattern of inheritance could only be observed for a few traits, Mendel's work suggested that heredity was particulate, not acquired, and that the inheritance patterns of many traits could be explained through simple rules and ratios.

The importance of Mendel's work did not gain wide understanding until the 1890s, after his death, when other scientists working on similar problems re-discovered his research. William Bateson, a proponent of Mendel's work, coined the word genetics in 1905.[14][15] (The adjective genetic, derived from the Greek word genesis, "origin", predates the noun and was first used in a biological sense in 1860.)[16] Bateson both acted as a mentor and was aided significantly by the work of women scientists from Newnham College at Cambridge, specifically the work of Becky Saunders, Nora Darwin Barlow, and Muriel Wheldale Onslow.[17] Bateson popularized the usage of the word genetics to describe the study of inheritance in his inaugural address to the Third International Conference on Plant Hybridization in London, England, in 1906.[18]

After the rediscovery of Mendel's work, scientists tried to determine which molecules in the cell were responsible for inheritance. In 1911, Thomas Hunt Morgan argued that genes are on chromosomes, based on observations of a sex-linked white eye mutation in fruit flies.[19] In 1913, his student Alfred Sturtevant used the phenomenon of genetic linkage to show that genes are arranged linearly on the chromosome.[20]

Although genes were known to exist on chromosomes, chromosomes are composed of both protein and DNA, and scientists did not know which of the two was responsible for inheritance. In 1928, Frederick Griffith discovered the phenomenon of transformation (see Griffith's experiment): dead bacteria could transfer genetic material to "transform" other still-living bacteria. Sixteen years later, in 1944, the Avery-MacLeod-McCarty experiment identified DNA as the molecule responsible for transformation.[21] The role of the nucleus as the repository of genetic information in eukaryotes had been established by Hämmerling in 1943 in his work on the single-celled alga Acetabularia.[22] The Hershey-Chase experiment in 1952 confirmed that DNA (rather than protein) is the genetic material of the viruses that infect bacteria, providing further evidence that DNA is the molecule responsible for inheritance.[23]

James Watson and Francis Crick determined the structure of DNA in 1953, using the X-ray crystallography work of Rosalind Franklin and Maurice Wilkins that indicated DNA had a helical structure (i.e., shaped like a corkscrew).[24][25] Their double-helix model had two strands of DNA with the nucleotides pointing inward, each matching a complementary nucleotide on the other strand to form what looks like rungs on a twisted ladder.[26] This structure showed that genetic information exists in the sequence of nucleotides on each strand of DNA. The structure also suggested a simple method for replication: if the strands are separated, new partner strands can be reconstructed for each based on the sequence of the old strand. This property is what gives DNA its semi-conservative nature where one strand of new DNA is from an original parent strand.[27]
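The base-pairing and replication scheme just described can be sketched in a few lines of Python (a toy illustration; the function name and example strand are ours, not from any genetics library):

```python
# Sketch of Watson-Crick base pairing: given one DNA strand,
# reconstruct the partner strand that replication would produce.
# A pairs with T, and C pairs with G; the partner strand runs
# antiparallel, so the result is read in reverse.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the complementary strand, read 5' to 3'."""
    return "".join(PAIR[base] for base in reversed(strand))

print(reverse_complement("ATGC"))  # -> GCAT
```

Applying the function to each separated strand of a double helix reconstructs the other, which is exactly the semi-conservative replication described above: each daughter helix keeps one original parent strand.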

Although the structure of DNA showed how inheritance works, it was still not known how DNA influences the behavior of cells. In the following years, scientists tried to understand how DNA controls the process of protein production.[28] It was discovered that the cell uses DNA as a template to create matching messenger RNA, molecules with nucleotides very similar to DNA. The nucleotide sequence of a messenger RNA is used to create an amino acid sequence in protein; this translation between nucleotide sequences and amino acid sequences is known as the genetic code.[29]
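The translation step of the genetic code can also be sketched in Python. The codon table below is deliberately partial (the real code maps all 64 codons), kept small for illustration:

```python
# Minimal sketch of mRNA -> protein translation using a
# deliberately partial codon table; the real genetic code
# assigns all 64 three-letter codons.
CODON_TABLE = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list:
    """Read codons three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCUAA"))  # -> ['Met', 'Phe', 'Gly']
```

The triplet reading frame is why a single inserted or deleted nucleotide shifts every downstream codon, while a three-base insertion adds one amino acid and leaves the rest of the frame intact.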

With the newfound molecular understanding of inheritance came an explosion of research.[30] A notable theory arose from Tomoko Ohta in 1973 with her amendment to the neutral theory of molecular evolution through publishing the nearly neutral theory of molecular evolution. In this theory, Ohta stressed the importance of natural selection and the environment to the rate at which genetic evolution occurs.[31] One important development was chain-termination DNA sequencing in 1977 by Frederick Sanger. This technology allows scientists to read the nucleotide sequence of a DNA molecule.[32] In 1983, Kary Banks Mullis developed the polymerase chain reaction, providing a quick way to isolate and amplify a specific section of DNA from a mixture.[33] The efforts of the Human Genome Project, Department of Energy, NIH, and parallel private efforts by Celera Genomics led to the sequencing of the human genome in 2003.[34]

At its most fundamental level, inheritance in organisms occurs by passing discrete heritable units, called genes, from parents to progeny.[35] This property was first observed by Gregor Mendel, who studied the segregation of heritable traits in pea plants.[13][36] In his experiments studying the trait for flower color, Mendel observed that the flowers of each pea plant were either purple or white, but never an intermediate between the two colors. These different, discrete versions of the same gene are called alleles.

In the case of the pea, which is a diploid species, each individual plant has two copies of each gene, one copy inherited from each parent.[37] Many species, including humans, have this pattern of inheritance. Diploid organisms with two copies of the same allele of a given gene are called homozygous at that gene locus, while organisms with two different alleles of a given gene are called heterozygous.

The set of alleles for a given organism is called its genotype, while the observable traits of the organism are called its phenotype. When organisms are heterozygous at a gene, often one allele is called dominant as its qualities dominate the phenotype of the organism, while the other allele is called recessive as its qualities recede and are not observed. Some alleles do not have complete dominance and instead have incomplete dominance by expressing an intermediate phenotype, or codominance by expressing both alleles at once.[38]

When a pair of organisms reproduce sexually, their offspring randomly inherit one of the two alleles from each parent. These observations of discrete inheritance and the segregation of alleles are collectively known as Mendel's first law or the Law of Segregation.

Geneticists use diagrams and symbols to describe inheritance. A gene is represented by one or a few letters. Often a "+" symbol is used to mark the usual, non-mutant allele for a gene.[39]

In fertilization and breeding experiments (and especially when discussing Mendel's laws) the parents are referred to as the "P" generation and the offspring as the "F1" (first filial) generation. When the F1 offspring mate with each other, the offspring are called the "F2" (second filial) generation. One of the common diagrams used to predict the result of cross-breeding is the Punnett square.
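
The allele combinations that a Punnett square enumerates are easy to sketch in code. The following is a minimal illustration (the `punnett_square` helper and the "P"/"p" allele symbols are ours, chosen for this example):

```python
from itertools import product

def punnett_square(parent1, parent2):
    """Enumerate offspring genotype ratios for a single-gene cross.

    Each parent genotype is a 2-character string of alleles,
    e.g. "Pp" (uppercase = dominant allele, lowercase = recessive).
    """
    counts = {}
    for a, b in product(parent1, parent2):
        genotype = "".join(sorted(a + b))  # "Pp" and "pP" are the same genotype
        counts[genotype] = counts.get(genotype, 0) + 1
    return counts

# F1 x F1 cross of two heterozygotes: the classic 1:2:1 genotype ratio
print(punnett_square("Pp", "Pp"))  # {'PP': 1, 'Pp': 2, 'pp': 1}
```

With complete dominance, the 1:2:1 genotype ratio above yields the familiar 3:1 phenotype ratio, since "PP" and "Pp" look the same.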

When studying human genetic diseases, geneticists often use pedigree charts to represent the inheritance of traits.[40] These charts map the inheritance of a trait in a family tree.

Organisms have thousands of genes, and in sexually reproducing organisms these genes generally assort independently of each other. This means that the inheritance of an allele for yellow or green pea color is unrelated to the inheritance of alleles for white or purple flowers. This phenomenon, known as "Mendel's second law" or the "Law of independent assortment", means that the alleles of different genes get shuffled between parents to form offspring with many different combinations. (Some genes do not assort independently, demonstrating genetic linkage, a topic discussed later in this article.)

Often different genes can interact in a way that influences the same trait. In the Blue-eyed Mary (Omphalodes verna), for example, there exists a gene with alleles that determine the color of flowers: blue or magenta. Another gene, however, controls whether the flowers have color at all or are white. When a plant has two copies of this white allele, its flowers are white, regardless of whether the first gene has blue or magenta alleles. This interaction between genes is called epistasis, with the second gene epistatic to the first.[41]

Many traits are not discrete features (e.g. purple or white flowers) but are instead continuous features (e.g. human height and skin color). These complex traits are products of many genes.[42] The influence of these genes is mediated, to varying degrees, by the environment an organism has experienced. The degree to which an organism's genes contribute to a complex trait is called heritability.[43] Measurement of the heritability of a trait is relative: in a more variable environment, the environment has a bigger influence on the total variation of the trait. For example, human height is a trait with complex causes. It has a heritability of 89% in the United States. In Nigeria, however, where people experience more variable access to good nutrition and health care, height has a heritability of only 62%.[44]
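
The variance-partition idea behind heritability can be written out directly. This is a toy sketch (the function name and the variance numbers are illustrative, not the actual estimates behind the US/Nigeria figures):

```python
def broad_sense_heritability(genetic_variance, environmental_variance):
    """H^2 = V_G / (V_G + V_E): the fraction of phenotypic variance
    attributable to genetic differences, valid only for the particular
    population and environment in which the variances were measured."""
    total_variance = genetic_variance + environmental_variance
    return genetic_variance / total_variance

# Same genetic variance, more variable environment -> lower heritability,
# mirroring why the same trait can have different heritabilities in
# different populations.
print(broad_sense_heritability(8.0, 1.0))  # ~0.89
print(broad_sense_heritability(8.0, 5.0))  # ~0.62
```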

The molecular basis for genes is deoxyribonucleic acid (DNA). DNA is composed of a chain of nucleotides, of which there are four types: adenine (A), cytosine (C), guanine (G), and thymine (T). Genetic information exists in the sequence of these nucleotides, and genes exist as stretches of sequence along the DNA chain.[45] Viruses are the only exception to this rule; sometimes viruses use the very similar molecule RNA, instead of DNA, as their genetic material.[46] Viruses cannot reproduce without a host and are unaffected by many genetic processes, so they tend not to be considered living organisms.

DNA normally exists as a double-stranded molecule, coiled into the shape of a double helix. Each nucleotide in DNA preferentially pairs with its partner nucleotide on the opposite strand: A pairs with T, and C pairs with G. Thus, in its two-stranded form, each strand effectively contains all necessary information, redundant with its partner strand. This structure of DNA is the physical basis for inheritance: DNA replication duplicates the genetic information by splitting the strands and using each strand as a template for synthesis of a new partner strand.[47]
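
The base-pairing rule is simple enough to state as code. A minimal sketch (the function and table names are ours): given one strand, it reconstructs the partner strand, reflecting both the A-T/C-G pairing and the antiparallel orientation of the two strands.

```python
# Watson-Crick pairing: A with T, C with G
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement_strand(strand):
    """Return the base-paired partner strand, read 5'->3'.

    The reversal reflects that the two strands of the double
    helix run in opposite directions.
    """
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(complement_strand("ATGCTTCAG"))  # CTGAAGCAT
```

Applying the function twice recovers the original strand, which is the redundancy that DNA replication exploits: each separated strand templates an exact copy of its former partner.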

Genes are arranged linearly along long chains of DNA base-pair sequences. In bacteria, each cell usually contains a single circular genophore, while eukaryotic organisms (such as plants and animals) have their DNA arranged in multiple linear chromosomes. These DNA strands are often extremely long; the largest human chromosome, for example, is about 247 million base pairs in length.[48] The DNA of a chromosome is associated with structural proteins that organize, compact and control access to the DNA, forming a material called chromatin; in eukaryotes, chromatin is usually composed of nucleosomes, segments of DNA wound around cores of histone proteins.[49] The full set of hereditary material in an organism (usually the combined DNA sequences of all chromosomes) is called the genome.

While haploid organisms have only one copy of each chromosome, most animals and many plants are diploid, containing two of each chromosome and thus two copies of every gene.[37] The two alleles for a gene are located on identical loci of the two homologous chromosomes, each allele inherited from a different parent.

Many species have so-called sex chromosomes that determine the sex of each organism.[50] In humans and many other animals, the Y chromosome contains the gene that triggers the development of specifically male characteristics. Over the course of evolution, this chromosome has lost most of its content and most of its genes, while the X chromosome is similar to the other chromosomes and contains many genes. The X and Y chromosomes form a strongly heterogeneous pair.

When cells divide, their full genome is copied and each daughter cell inherits one copy. This process, called mitosis, is the simplest form of reproduction and is the basis for asexual reproduction. Asexual reproduction can also occur in multicellular organisms, producing offspring that inherit their genome from a single parent. Offspring that are genetically identical to their parents are called clones.

Eukaryotic organisms often use sexual reproduction to generate offspring that contain a mixture of genetic material inherited from two different parents. The process of sexual reproduction alternates between forms that contain single copies of the genome (haploid) and double copies (diploid).[37] Haploid cells fuse and combine genetic material to create a diploid cell with paired chromosomes. Diploid organisms form haploids by dividing, without replicating their DNA, to create daughter cells that randomly inherit one of each pair of chromosomes. Most animals and many plants are diploid for most of their lifespan, with the haploid form reduced to single cell gametes such as sperm or eggs.

Although they do not use the haploid/diploid method of sexual reproduction, bacteria have many methods of acquiring new genetic information. Some bacteria can undergo conjugation, transferring a small circular piece of DNA to another bacterium.[51] Bacteria can also take up raw DNA fragments found in the environment and integrate them into their genomes, a phenomenon known as transformation.[52] These processes result in horizontal gene transfer, transmitting fragments of genetic information between organisms that would be otherwise unrelated.

The diploid nature of chromosomes allows for genes on different chromosomes to assort independently or be separated from their homologous pair during sexual reproduction wherein haploid gametes are formed. In this way new combinations of genes can occur in the offspring of a mating pair. Genes on the same chromosome would theoretically never recombine. However, they do via the cellular process of chromosomal crossover. During crossover, chromosomes exchange stretches of DNA, effectively shuffling the gene alleles between the chromosomes.[53] This process of chromosomal crossover generally occurs during meiosis, a series of cell divisions that creates haploid cells.

The first cytological demonstration of crossing over was performed by Harriet Creighton and Barbara McClintock in 1931. Their research and experiments on corn provided cytological evidence for the genetic theory that linked genes on paired chromosomes do in fact exchange places from one homolog to the other.

The probability of chromosomal crossover occurring between two given points on the chromosome is related to the distance between the points. For an arbitrarily long distance, the probability of crossover is high enough that the inheritance of the genes is effectively uncorrelated.[54] For genes that are closer together, however, the lower probability of crossover means that the genes demonstrate genetic linkage; alleles for the two genes tend to be inherited together. The amounts of linkage between a series of genes can be combined to form a linear linkage map that roughly describes the arrangement of the genes along the chromosome.[55]
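
The linkage-map arithmetic in this paragraph is a ratio of offspring counts: the fraction of recombinant offspring in a test cross estimates the genetic distance. A sketch with made-up counts (the numbers and the function name are hypothetical):

```python
def recombination_frequency(parental, recombinant):
    """Fraction of recombinant offspring in a test cross.

    By convention, 1% recombination is 1 map unit (centimorgan).
    Tightly linked genes give a frequency near 0; unlinked genes
    (far apart or on different chromosomes) approach 50%.
    """
    return recombinant / (parental + recombinant)

# hypothetical test-cross counts for two linked genes
freq = recombination_frequency(parental=830, recombinant=170)
print(f"{freq * 100:.0f} cM")  # 17 cM
```

Chaining such pairwise distances between several genes is what produces the linear linkage map the paragraph describes.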

Genes generally express their functional effect through the production of proteins, which are complex molecules responsible for most functions in the cell. Proteins are made up of one or more polypeptide chains, each of which is composed of a sequence of amino acids, and the DNA sequence of a gene (through an RNA intermediate) is used to produce a specific amino acid sequence. This process begins with the production of an RNA molecule with a sequence matching the gene's DNA sequence, a process called transcription.

This messenger RNA molecule is then used to produce a corresponding amino acid sequence through a process called translation. Each group of three nucleotides in the sequence, called a codon, corresponds either to one of the twenty possible amino acids in a protein or an instruction to end the amino acid sequence; this correspondence is called the genetic code.[56] The flow of information is unidirectional: information is transferred from nucleotide sequences into the amino acid sequence of proteins, but it never transfers from protein back into the sequence of DNAa phenomenon Francis Crick called the central dogma of molecular biology.[57]
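
The codon-by-codon reading described above can be sketched with a lookup table. The table below is a small subset of the standard genetic code, just enough for the demo (the real table covers all 64 codons), and the example mRNA string is invented:

```python
# A small subset of the standard genetic code; the full table maps
# all 64 codons to 20 amino acids plus stop signals.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Read an mRNA 5'->3' in triplets, beginning at the first AUG
    (the start codon) and ending at the first stop codon."""
    start = mrna.find("AUG")
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        peptide.append(amino_acid)
    return "-".join(peptide)

print(translate("GGAUGUUUGGCAAAUAACC"))  # Met-Phe-Gly-Lys
```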

The specific sequence of amino acids results in a unique three-dimensional structure for that protein, and the three-dimensional structures of proteins are related to their functions.[58][59] Some are simple structural molecules, like the fibers formed by the protein collagen. Proteins can bind to other proteins and simple molecules, sometimes acting as enzymes by facilitating chemical reactions within the bound molecules (without changing the structure of the protein itself). Protein structure is dynamic; the protein hemoglobin bends into slightly different forms as it facilitates the capture, transport, and release of oxygen molecules within mammalian blood.

A single nucleotide difference within DNA can cause a change in the amino acid sequence of a protein. Because protein structures are the result of their amino acid sequences, some changes can dramatically change the properties of a protein by destabilizing the structure or changing the surface of the protein in a way that changes its interaction with other proteins and molecules. For example, sickle-cell anemia is a human genetic disease that results from a single base difference within the coding region for the β-globin section of hemoglobin, causing a single amino acid change that changes hemoglobin's physical properties.[60] Sickle-cell versions of hemoglobin stick to themselves, stacking to form fibers that distort the shape of red blood cells carrying the protein. These sickle-shaped cells no longer flow smoothly through blood vessels, having a tendency to clog or degrade, causing the medical problems associated with this disease.

Some DNA sequences are transcribed into RNA but are not translated into protein productssuch RNA molecules are called non-coding RNA. In some cases, these products fold into structures which are involved in critical cell functions (e.g. ribosomal RNA and transfer RNA). RNA can also have regulatory effects through hybridization interactions with other RNA molecules (e.g. microRNA).

Although genes contain all the information an organism uses to function, the environment plays an important role in determining the ultimate phenotypes an organism displays. This is the complementary relationship often referred to as "nature and nurture". The phenotype of an organism depends on the interaction of genes and the environment. An interesting example is the coat coloration of the Siamese cat. In this case, the body temperature of the cat plays the role of the environment. The cat's genes code for dark hair; thus the hair-producing cells in the cat make cellular proteins resulting in dark hair. But these dark hair-producing proteins are sensitive to temperature (i.e. have a mutation causing temperature-sensitivity) and denature in higher-temperature environments, failing to produce dark-hair pigment in areas where the cat has a higher body temperature. In a low-temperature environment, however, the protein's structure is stable and produces dark-hair pigment normally. The protein remains functional in areas of skin that are colder, such as the legs, ears, tail and face, so the cat has dark hair at its extremities.[61]

Environment plays a major role in effects of the human genetic disease phenylketonuria.[62] The mutation that causes phenylketonuria disrupts the ability of the body to break down the amino acid phenylalanine, causing a toxic build-up of an intermediate molecule that, in turn, causes severe symptoms of progressive mental retardation and seizures. However, if someone with the phenylketonuria mutation follows a strict diet that avoids this amino acid, they remain normal and healthy.

A popular method for determining how genes and environment ("nature and nurture") contribute to a phenotype involves studying identical and fraternal twins, or other siblings of multiple births.[63] Because identical siblings come from the same zygote, they are genetically the same. Fraternal twins are as genetically different from one another as normal siblings. By comparing how often a certain disorder occurs in a pair of identical twins to how often it occurs in a pair of fraternal twins, scientists can determine whether that disorder is caused by genetic or postnatal environmental factors, that is, whether it has "nature" or "nurture" causes. One famous example is the multiple birth study of the Genain quadruplets, who were identical quadruplets all diagnosed with schizophrenia.[64] However, such tests cannot separate genetic factors from environmental factors affecting fetal development.

The genome of a given organism contains thousands of genes, but not all these genes need to be active at any given moment. A gene is expressed when it is being transcribed into mRNA, and there exist many cellular methods of controlling the expression of genes such that proteins are produced only when needed by the cell. Transcription factors are regulatory proteins that bind to DNA, either promoting or inhibiting the transcription of a gene.[65] Within the genome of Escherichia coli bacteria, for example, there exists a series of genes necessary for the synthesis of the amino acid tryptophan. However, when tryptophan is already available to the cell, these genes for tryptophan synthesis are no longer needed. The presence of tryptophan directly affects the activity of the genes: tryptophan molecules bind to the tryptophan repressor (a transcription factor), changing the repressor's structure such that the repressor binds to the genes. The tryptophan repressor blocks the transcription and expression of the genes, thereby creating negative feedback regulation of the tryptophan synthesis process.[66]

Differences in gene expression are especially clear within multicellular organisms, where cells all contain the same genome but have very different structures and behaviors due to the expression of different sets of genes. All the cells in a multicellular organism derive from a single cell, differentiating into variant cell types in response to external and intercellular signals and gradually establishing different patterns of gene expression to create different behaviors. As no single gene is responsible for the development of structures within multicellular organisms, these patterns arise from the complex interactions between many cells.

Within eukaryotes, there exist structural features of chromatin that influence the transcription of genes, often in the form of modifications to DNA and chromatin that are stably inherited by daughter cells.[67] These features are called "epigenetic" because they exist "on top" of the DNA sequence and retain inheritance from one cell generation to the next. Because of epigenetic features, different cell types grown within the same medium can retain very different properties. Although epigenetic features are generally dynamic over the course of development, some, like the phenomenon of paramutation, have multigenerational inheritance and exist as rare exceptions to the general rule of DNA as the basis for inheritance.[68]

During the process of DNA replication, errors occasionally occur in the polymerization of the second strand. These errors, called mutations, can affect the phenotype of an organism, especially if they occur within the protein coding sequence of a gene. Error rates are usually very low (1 error in every 10–100 million bases) due to the "proofreading" ability of DNA polymerases.[69][70] Processes that increase the rate of changes in DNA are called mutagenic: mutagenic chemicals promote errors in DNA replication, often by interfering with the structure of base-pairing, while UV radiation induces mutations by causing damage to the DNA structure.[71] Chemical damage to DNA occurs naturally as well, and cells use DNA repair mechanisms to repair mismatches and breaks. The repair does not, however, always restore the original sequence.
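
As a back-of-the-envelope illustration of those error rates: for a genome of roughly 3.2 billion base pairs (about the size of the human genome; rates and genome sizes vary by organism), even the low end implies dozens of new errors per replication before repair acts.

```python
genome_size = 3.2e9  # base pairs, roughly human-sized

# expected replication errors at the quoted per-base error rates
# (before DNA repair mechanisms correct most of them)
for rate_label, error_rate in [("1 in 10 million", 1e-7),
                               ("1 in 100 million", 1e-8)]:
    expected = genome_size * error_rate
    print(f"{rate_label}: ~{expected:.0f} errors per replication")
```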

In organisms that use chromosomal crossover to exchange DNA and recombine genes, errors in alignment during meiosis can also cause mutations.[72] Errors in crossover are especially likely when similar sequences cause partner chromosomes to adopt a mistaken alignment; this makes some regions in genomes more prone to mutating in this way. These errors create large structural changes in DNA sequence: duplications, inversions, or deletions of entire regions, or the accidental exchange of whole parts of sequences between different chromosomes (chromosomal translocation).

Mutations alter an organism's genotype and occasionally this causes different phenotypes to appear. Most mutations have little effect on an organism's phenotype, health, or reproductive fitness.[73] Mutations that do have an effect are usually deleterious, but occasionally some can be beneficial.[74] Studies in the fly Drosophila melanogaster suggest that if a mutation changes a protein produced by a gene, about 70 percent of these mutations will be harmful with the remainder being either neutral or weakly beneficial.[75]

Population genetics studies the distribution of genetic differences within populations and how these distributions change over time.[76] Changes in the frequency of an allele in a population are mainly influenced by natural selection, where a given allele provides a selective or reproductive advantage to the organism,[77] as well as other factors such as mutation, genetic drift, genetic draft,[78] artificial selection and migration.[79]
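
How selection shifts an allele frequency generation by generation can be sketched with the standard one-locus selection recursion; the fitness values and starting frequency below are made up for illustration:

```python
def select_one_generation(p, w_AA, w_Aa, w_aa):
    """One generation of viability selection on a diploid population
    in Hardy-Weinberg proportions; p is the frequency of allele A and
    w_* are the relative fitnesses of the three genotypes."""
    q = 1 - p
    mean_fitness = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa
    return (p * p * w_AA + p * q * w_Aa) / mean_fitness

# a mildly deleterious recessive allele is purged only slowly, because
# selection cannot "see" it in heterozygotes
p = 0.5
for _ in range(50):
    p = select_one_generation(p, w_AA=1.0, w_Aa=1.0, w_aa=0.9)
print(round(p, 3))
```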

Over many generations, the genomes of organisms can change significantly, resulting in evolution. In the process called adaptation, selection for beneficial mutations can cause a species to evolve into forms better able to survive in their environment.[80] New species are formed through the process of speciation, often caused by geographical separations that prevent populations from exchanging genes with each other.[81] The application of genetic principles to the study of population biology and evolution is known as the "modern synthesis".

By comparing the homology between different species' genomes, it is possible to calculate the evolutionary distance between them and when they may have diverged. Genetic comparisons are generally considered a more accurate method of characterizing the relatedness between species than the comparison of phenotypic characteristics. The evolutionary distances between species can be used to form evolutionary trees; these trees represent the common descent and divergence of species over time, although they do not show the transfer of genetic material between unrelated species (known as horizontal gene transfer and most common in bacteria).[82]

Although geneticists originally studied inheritance in a wide range of organisms, researchers began to specialize in studying the genetics of a particular subset of organisms. The fact that significant research already existed for a given organism would encourage new researchers to choose it for further study, and so eventually a few model organisms became the basis for most genetics research.[83] Common research topics in model organism genetics include the study of gene regulation and the involvement of genes in development and cancer.

Organisms were chosen, in part, for convenienceshort generation times and easy genetic manipulation made some organisms popular genetics research tools. Widely used model organisms include the gut bacterium Escherichia coli, the plant Arabidopsis thaliana, baker's yeast (Saccharomyces cerevisiae), the nematode Caenorhabditis elegans, the common fruit fly (Drosophila melanogaster), and the common house mouse (Mus musculus).

Medical genetics seeks to understand how genetic variation relates to human health and disease.[84] When searching for an unknown gene that may be involved in a disease, researchers commonly use genetic linkage and genetic pedigree charts to find the location on the genome associated with the disease. At the population level, researchers take advantage of Mendelian randomization to look for locations in the genome that are associated with diseases, a method especially useful for multigenic traits not clearly defined by a single gene.[85] Once a candidate gene is found, further research is often done on the corresponding (orthologous) gene in model organisms. In addition to studying genetic diseases, the increased availability of genotyping methods has led to the field of pharmacogenetics: the study of how genotype can affect drug responses.[86]

Individuals differ in their inherited tendency to develop cancer,[87] and cancer is a genetic disease.[88] The process of cancer development in the body is a combination of events. Mutations occasionally occur within cells in the body as they divide. Although these mutations will not be inherited by any offspring, they can affect the behavior of cells, sometimes causing them to grow and divide more frequently. There are biological mechanisms that attempt to stop this process; signals are given to inappropriately dividing cells that should trigger cell death, but sometimes additional mutations occur that cause cells to ignore these messages. An internal process of natural selection occurs within the body and eventually mutations accumulate within cells to promote their own growth, creating a cancerous tumor that grows and invades various tissues of the body.

Normally, a cell divides only in response to signals called growth factors and stops growing once in contact with surrounding cells and in response to growth-inhibitory signals. It usually then divides a limited number of times and dies, staying within the epithelium where it is unable to migrate to other organs. To become a cancer cell, a cell has to accumulate mutations in a number of genes (3–7) that allow it to bypass this regulation: it no longer needs growth factors to divide, it continues growing when in contact with neighboring cells and ignores inhibitory signals, it keeps growing indefinitely and is immortal, and it escapes from the epithelium. Ultimately it may be able to escape from the primary tumor, cross the endothelium of a blood vessel, be transported by the bloodstream and colonize a new organ, forming deadly metastases. Although there are some genetic predispositions in a small fraction of cancers, the major fraction is due to a set of new genetic mutations that originally appear and accumulate in one or a small number of cells that will divide to form the tumor and are not transmitted to the progeny (somatic mutations). The most frequent mutations are a loss of function of p53 protein, a tumor suppressor, or in the p53 pathway, and gain of function mutations in the ras proteins, or in other oncogenes.

DNA can be manipulated in the laboratory. Restriction enzymes are commonly used enzymes that cut DNA at specific sequences, producing predictable fragments of DNA.[89] DNA fragments can be visualized through use of gel electrophoresis, which separates fragments according to their length.
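
The cut-at-a-recognition-site behavior can be mimicked on strings. A simplified sketch (real enzymes such as EcoRI cut at a defined offset within the site and often leave staggered "sticky" ends, which this ignores):

```python
def digest(dna, recognition_site):
    """Cut a DNA string at every occurrence of a recognition site.

    Simplification: each cut is placed immediately before the site,
    so every fragment after the first begins with the site.
    """
    fragments = []
    start = 0
    while True:
        cut = dna.find(recognition_site, start + 1)
        if cut == -1:
            break
        fragments.append(dna[start:cut])
        start = cut
    fragments.append(dna[start:])
    return fragments

# EcoRI recognizes the sequence GAATTC
pieces = digest("AAGAATTCGGGGAATTCTT", "GAATTC")
print(pieces)                          # ['AA', 'GAATTCGGG', 'GAATTCTT']
print(sorted(len(f) for f in pieces))  # fragment lengths, as read off a gel
```

The list of fragment lengths is what gel electrophoresis makes visible: fragments separate by size, giving the predictable banding pattern the paragraph describes.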

The use of ligation enzymes allows DNA fragments to be connected. By binding ("ligating") fragments of DNA together from different sources, researchers can create recombinant DNA, the DNA often associated with genetically modified organisms. Recombinant DNA is commonly used in the context of plasmids: short circular DNA molecules with a few genes on them. In the process known as molecular cloning, researchers can amplify the DNA fragments by inserting plasmids into bacteria and then culturing them on plates of agar (to isolate clones of bacteria cells). ("Cloning" can also refer to the various means of creating cloned ("clonal") organisms.)

DNA can also be amplified using a procedure called the polymerase chain reaction (PCR).[90] By using specific short sequences of DNA, PCR can isolate and exponentially amplify a targeted region of DNA. Because it can amplify from extremely small amounts of DNA, PCR is also often used to detect the presence of specific DNA sequences.
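
The exponential amplification that makes PCR so sensitive is simple arithmetic: each cycle at most doubles the target. A sketch (the function name and efficiency model are ours; real reactions plateau and run below perfect per-cycle efficiency):

```python
def pcr_copies(template_molecules, cycles, efficiency=1.0):
    """Expected number of target molecules after a PCR run.

    With per-cycle efficiency e (fraction of molecules actually
    duplicated each cycle), copies grow as n * (1 + e)^cycles.
    """
    return template_molecules * (1 + efficiency) ** cycles

# even 10 starting molecules exceed ten billion copies in 30 perfect cycles
print(pcr_copies(10, 30))       # 10737418240.0
print(pcr_copies(10, 30, 0.9))  # lower yield at 90% per-cycle efficiency
```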

DNA sequencing, one of the most fundamental technologies developed to study genetics, allows researchers to determine the sequence of nucleotides in DNA fragments. The technique of chain-termination sequencing, developed in 1977 by a team led by Frederick Sanger, is still routinely used to sequence DNA fragments.[91] Using this technology, researchers have been able to study the molecular sequences associated with many human diseases.

As sequencing has become less expensive, researchers have sequenced the genomes of many organisms, using a process called genome assembly, which utilizes computational tools to stitch together sequences from many different fragments.[92] These technologies were used to sequence the human genome in the Human Genome Project completed in 2003.[34] New high-throughput sequencing technologies are dramatically lowering the cost of DNA sequencing, with many researchers hoping to bring the cost of resequencing a human genome down to a thousand dollars.[93]
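
The "stitching together" step of genome assembly rests on finding overlaps between sequenced fragments. A toy merge of two overlapping reads (the function name and read strings are invented; real assemblers use far more robust graph-based algorithms that also handle sequencing errors):

```python
def merge_reads(a, b, min_overlap=3):
    """Merge two reads if a suffix of `a` matches a prefix of `b`.

    Tries the longest possible overlap first; returns the merged
    sequence, or None if no overlap of at least min_overlap exists.
    """
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a.endswith(b[:k]):
            return a + b[k:]
    return None

print(merge_reads("ATGCCGTA", "CGTATTGC"))  # ATGCCGTATTGC
```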

Next generation sequencing (or high-throughput sequencing) came about due to the ever-increasing demand for low-cost sequencing. These sequencing technologies allow the production of potentially millions of sequences concurrently.[94][95] The large amount of sequence data available has created the field of genomics, research that uses computational tools to search for and analyze patterns in the full genomes of organisms. Genomics can also be considered a subfield of bioinformatics, which uses computational approaches to analyze large sets of biological data. A problem common to these fields of research is how to manage and share data that concern human subjects and personally identifiable information.

On 19 March 2015, a leading group of biologists urged a worldwide ban on clinical use of methods, particularly CRISPR and zinc finger nucleases, to edit the human genome in a way that can be inherited.[96][97][98][99] In April 2015, Chinese researchers reported results of basic research to edit the DNA of non-viable human embryos using CRISPR.[100][101]

Genetics - Wikipedia

genetics facts, information, pictures | Encyclopedia.com …

I. Genetics and Behavior (P. L. Broadhurst)
II. Demography and Population Genetics (Jean Sutter)
III. Race and Genetics (J. N. Spuhler)

Behavior genetics is a relatively new cross-disciplinary specialization between genetics and psychology. It is so new that it hardly knows what to call itself. The term behavior genetics is gaining currency in the United States; but in some quarters there, and certainly elsewhere, the term psycho-genetics is favored. Logically, the best name would be genetical psychology, since the emphasis is on the use of the techniques of genetics in the analysis of behavior rather than vice versa; but the inevitable ambiguity of that term is apparent. Psychologists generally use the terms genetic or genetical in two senses: in the first and older sense of developmental, or ontogenetic; and in the second, more recent usage relating to the analysis of inheritance. The psychologist G. Stanley Hall coined the term genetic before the turn of the century to denote developmental studies (witness the Journal of Genetic Psychology), and Alfred Binet even used the term psychogenetic in this sense. But with the rapid rise of the discipline now known as genetics after the rediscovery of the Mendelian laws in 1900, William Bateson, one of the founders of this new science, pre-empted the term genetic in naming it, thereby investing genetic with the double meaning that causes the current confusion. Psychological genetics, with its obvious abbreviation, psychogenetics, is probably the best escape from the dilemma.

Importance of genetics in behavior. The importance of psychogenetics lies in the fundamental nature of the biological processes in our understanding of human social behavior. The social sciences, and psychology in particular, have long concentrated on environmental determinants of behavior and neglected hereditary ones. But it is clear that in many psychological functions a substantial portion of the observed variation, roughly of the order of 50 per cent for many traits, can be ascribed to hereditary causation. To ignore this hereditary contribution is to impede both action and thought in this area.

This manifold contribution to behavioral variation is not a static affair. Heredity and environment interact, and behavior is the product, rather than the sum, of their respective contributions. The number of sources of variability in both heredity and environment is large, and the consequent number of such possible products even larger. Nevertheless, these outcomes are not incalculable, and experimental and other analyses of their limits are of immense potential interest to the behavioral scientist. The chief theoretical interest lies in the analysis of the evolution of behavior; and the chief practical significance, so far as can be envisaged at present, lies in the possibilities psychogenetics has for the optimization of genetic potential by manipulation of the environmental expression of it.

Major current approaches. The major approaches to the study of psychogenetics can be characterized as the direct, or experimental, and the indirect, or observational. The former derive principally from the genetical parent of this hybrid discipline and involve the manipulation of the heredity of experimental subjects, usually by restricting the choice of mates in some specially defined way. Since such techniques are not possible with human subjects, a second major approach exists, the indirect or observational, with its techniques derived largely from psychology and sociology. The two approaches are largely complementary in the case of natural genetic experiments in human populations, such as twinning or cousin marriages. Thus, the distinction between the two is based on the practicability of controlling in some way the essentially immutable genetic endowment (in a word, the genotype) of the individuals subject to investigation. With typical experimental animals (rats, mice, etc.) and other organisms used by the geneticist, such as the fruit fly and many microorganisms, the genotype can often be specified in advance and populations constructed by the hybridization of suitable strains to meet this specification with a high degree of accuracy. Not so with humans, where the genotype must remain as given, and indeed where its details can rarely be specified with any degree of accuracy except for certain physical characteristics, such as blood groups. Observational, demographic, and similar techniques are therefore all that are available here. The human field has another disadvantage in rigorous psychogenetic work: the impossibility of radically manipulating the environment, for example by rearing humans in experimental environments from birth in the way that can easily be done with animals in the laboratory.
Since in psychogenetics, as in all branches of genetics, one deals with a phenotype (in this case, behavior), and since the phenotype is the end product of the action, or better still, the interaction, of genotype and environment, human psychogenetics is fraught with double difficulty. Analytical techniques to be mentioned later can assist in resolving some of these difficulties.

Definition. To define psychogenetics as the study of the inheritance of behavior is to adopt a misleadingly narrow definition of the area of study, and one which is unduly restrictive in its emphasis on the hereditarian point of view. Just as the parent discipline of genetics is the analysis not only of the similarities between individuals but also of the differences between them, so psychogenetics seeks to understand the basis of individual differences in behavior. Any psychogenetic analysis must therefore be concerned with the environmental determinants of behavior (conventionally implicated in the genesis of differences) in addition to the hereditary ones (the classic source of resemblances). But manifestly this dichotomy does not always operate, so that for this reason alone the analysis of environmental effects must go hand in hand with the search for genetic causation. This is true even if the intention is merely to exclude the influence of the one the better to study the other; but the approach advocated here is to study the two in tandem, as it were, and to determine the extent to which the one interacts with the other. Psychogenetics is best viewed as that specialization which concerns itself with the interaction of heredity and environment, insofar as they affect behavior. To attempt greater precision is to become involved in subtle semantic problems about the meanings of terms.

At first sight many would tend to restrict environmental effects to those operating after the birth of the organism, but to do so would be to exclude prenatal environmental effects that have been shown to be influential in later behavior. On the other hand, to broaden the concept of environment to include all influences after fertilization (the point in time at which the genotype is fixed) permits consideration of the reciprocal influence of parts of the genotype upon each other. Can environment include the rest of the genotype, other than that part which is more or less directly concerned with the phenotype under consideration? This point assumes some importance, since there are characteristics (not behavioral; at least, none that are behavioral have so far been reported) whose expression depends on the nature of the other genes present in the organism. In the absence of some of them, or rather certain alleles of the gene pairs, the value phenotypically observed would be different from what it would be if they were present. That is, different components of the genotype, in interplay with one another, modify phenotypic expression of the characteristic they influence. Can such indirect action, which recalls that of a chemical catalyst, best be considered as environmental or innate? It would be preferable to many to regard this mechanism as a genetic effect rather than an environmental one in the usually accepted sense. Hence, the definition of the area of study as one involving the interaction of heredity and environment, while apparently adding complexity, in fact serves to reduce confusion.

It must be conceded that this view has not as yet gained general acceptance. In some of the work reviewed in the necessarily brief survey of the major findings in this area, attempts have been made to retain a rather rigid dichotomy between heredity and environment (nature versus nurture, in fact), an either/or proposition that the facts do not warrant. The excesses of both sides in the controversies of the 1920s (for example, the famous debate between Watson and McDougall over the relative importance of learned, or environmental, and instinctive, or genetic, determinants of behavior) show the fallacies that extreme protagonists on either side can entertain if the importance of the interaction effect is ignored.

Gene action. The nature of gene action as such is essentially conducive to interaction with the environment, since the behavioral phenotype we observe is the end product of a long chain of action, principally biochemical, originating in the chromosome within the individual cell. A chromosome has a complex structure, involving DNA (deoxyribonucleic acid) and the connections of DNA with various proteins, and may be influenced in turn by another nucleic acid, RNA (ribonucleic acid), also within the cell but external to the nucleus. There are complex structures and sequences of processes, anatomical, physiological, and hormonal, which underlie normal development and differentiation of structure and function in the growth, development, and maturation of the organism. Much of this influence is determined genetically in the sense that the genotype of the organism, fixed at conception, determines how it proceeds under normal environmental circumstances. But it would be a mistake to regard any such sequence as rigid or immutable, as we shall see.

The state of affairs that arises when a number of genetically determined biochemical abnormalities affect behavior is illustrative of the argument. Many of these biochemical deficiencies, or inborn errors of metabolism, in humans are the outcome of a chain of causation starting with genic structures, some of them having known chromosomal locations. Their effects on the total personality (that is, the sum total of behavioral variation that makes the individual unique) can range from the trivial to the intense. The facility with which people can taste a solution of phenylthiocarbamide (PTC), a synthetic substance not found in nature, varies in a relatively simple genetical way: people are either tasters or nontasters in certain rather well-defined proportions, with a pattern of inheritance determined probably by one gene of major effect. But being taste blind or not is a relatively unimportant piece of behavior, since one is never likely to encounter it outside a genetical experiment. (It should perhaps be added that there is some evidence that the ability to taste PTC may be linked with other characteristics of some importance, such as susceptibility to thyroid disease.) Nevertheless, this example is insignificant compared with the psychological effect of the absence of a biochemical link in patients suffering from phenylketonuria. They are unable to metabolize phenylalanine to tyrosine in the liver, with the result that the phenylalanine accumulates and the patient suffers multiple defects, among which is usually gross intellectual defect, with an IQ typically on the order of 30. This gross biochemical failure is mediated by a single recessive gene that may be passed on in a family unnoticed in heterozygous (single dose) form but becomes painfully apparent in the unfortunate individual who happens to receive a double dose and consequently is homozygous for the defect.
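The arithmetic of such a recessive defect follows directly from Mendelian segregation: a cross of two unnoticed carriers leaves each child a one-in-four chance of receiving the double dose. A minimal sketch (the allele symbols are illustrative, not standard nomenclature for the phenylketonuria locus):

```python
from itertools import product

def offspring_genotypes(parent1, parent2):
    """Enumerate the four equally likely offspring genotypes from two
    parents, each given as a two-character allele string, e.g. 'Aa'."""
    return [''.join(sorted(a + b)) for a, b in product(parent1, parent2)]

def fraction_affected(parent1, parent2, recessive='a'):
    """Fraction of offspring homozygous for the recessive allele."""
    kids = offspring_genotypes(parent1, parent2)
    return sum(1 for k in kids if k == recessive * 2) / len(kids)
```

A carrier-by-carrier cross (`fraction_affected('Aa', 'Aa')`) gives 0.25, while a carrier crossed with a homozygous normal parent gives 0, which is how the gene travels unnoticed through a pedigree.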

Alternatively, a normal dominant gene may mutate to the recessive form and so give rise to the trouble. While mutation is a relatively rare event individually, the number of genes in each individual (probably on the order of ten thousand) and the number of individuals in a population make it statistically a factor to be reckoned with. One of the best documented cases of a deleterious mutation of this kind giving rise to a major defect relates to the hemophilia transmitted, with certain important political consequences, to some of the descendants of Queen Victoria of England. The dependence of the last tsarina of Russia on the monk Rasputin was said to be based in part on the beneficial therapeutic effect of his hypnotic techniques on the uncontrollable bleeding of the Tsarevitch Alexis. Victoria was almost certainly heterozygous for hemophilia and, in view of the absence of any previous record of the defect in the Hanoverian dynasty, it seems likely that the origin of the trouble was a mutation in one of the germ cells in a testicle of Victoria's father, the duke of Kent, before Victoria was conceived in August 1818.

But however it comes about, a defect such as phenylketonuria can be crippling. Fortunately, its presence can be diagnosed in very early life by a simple urine test for phenyl derivatives. The dependence of the expression of the genetic defect on the environmental circumstances is such that its effect can be mitigated by feeding the afflicted infant with a specially composed diet low in the phenylalanine with which the patient's biochemical make-up cannot cope. Here again, therefore, one sees the interaction of genotype and environment, in this case the type of food eaten. Many of the human biochemical defects that have been brought to light in recent years are rather simply determined genetically, in contrast with the prevailing beliefs about the bases of many behavioral characteristics, including intelligence, personality, and most psychotic and neurotic disorders. This is also true of several chromosomal aberrations that have been much studied recently and that are now known to be implicated in various conditions of profound behavioral importance. Prominent among these is Down's syndrome (mongolism) with, again, effects including impairment of cognitive power. [See Intelligence and Intelligence Testing; Mental Disorders, articles on Biological Aspects and Genetic Aspects.]

Sex as a genetic characteristic. The sex difference is perhaps the most striking genetically determined difference in behavior and the one that is most often ignored in this connection. Primary sex is completely determined genetically at the moment of fertilization of the ovum; in mammals sex depends on whether the spermatozoon effecting fertilization bears an X or a Y chromosome to combine with the X chromosome inevitably contributed by the ovum. The resulting zygote then develops as an XX (female) or an XY (male) individual. This difference penetrates every cell of every tissue of the resulting individual and in turn is responsible for the observable gross differences in morphology. These, in turn, subserve differences of physiological function, metabolism, and endocrine function which profoundly influence not only those aspects of behavior relating to sexual behavior and reproductive function in the two sexes but many other aspects as well. But behavior is also influenced by social and cultural pressures, so that the resulting sex differences in behavior as observed by the psychologist are especially good examples of a phenotype that must be the end product of both genetic and environmental forces. There is a large literature on sex differences in human behavior and a sizable one on such differences in animal behavior, but there has been little attempt to assess this pervasive variation in terms of the relative contribution of genetic and environmental determinants. This is partly because of the technical difficulties of the problem, in the sense that all subjects must be of either one sex or the other (crossing males with females will always result in the same groups as those one started with, either males or females), there being, in general, no genetically intermediate sex against which to evaluate either, and identical twins being inevitably of like sex. It is also partly because the potential of genetic analyses that do not involve direct experimentation has not been realized. This is especially so since the causal routes whereby genetic determinants of sex influence many of the behavioral phenotypes observed are often better understood than in other cases, where the genetic determinants underlying individual differences manifest in a population are not so clear-cut. [See Individual Differences, article on Sex Differences.]

Sex linkage. There is one exception to the general lack of interest in the biometrical analysis of sex differences having behavioral connotations: sex-linked conditions. That is to say, it is demonstrated or postulated that the gene or genes responsible for the behavior (often a defect, as in the case of color blindness, which has a significantly greater incidence in males than in females) are linked with the sex difference by virtue of their location on the sex chromosome determining genetic sex. Thus it is that sex can be thought of as a chromosomal difference of regular occurrence, as opposed to aberrations of the sort which give rise to pathological conditions, such as Down's syndrome. Indeed, there are also various anomalies of genetic sex that give rise to problems of sexual identity, in which the psychological and overt behavioral consequences can be of major importance for the individual. While the evidence in such cases of environmental modification of the causative genetic conditions is less dramatic than in phenylketonuria, interaction undoubtedly exists, since these chromosomal defects of sex differentiation can in some cases be alleviated by appropriate surgical and hormonal treatment. [See Sexual Behavior, article on Sexual Deviation: Psychological Aspects; and Vision, article on Color Vision and Color Blindness.]

Human psychogenetics. It is abundantly clear that most of the phenotypes the behavioral scientist is interested in are multidetermined, both environmentally and genetically. The previous examples, however, are the exception rather than the rule, and their prominence bears witness that our understanding of genetics and behavior is as yet so little advanced that the simpler modes of genetic expression have been the first to be explored. In genetics itself, the striking differences in seed configuration used by Mendel in his classic crosses of garden peas are determined by major genes with full dominance acting simply. But such clear-cut expression, especially of dominance, is unusual in human psychogenetics, and more complex statistical techniques are necessary to evaluate multiple genetic and environmental effects acting to produce the observed phenotype.

Whatever the analysis applied to the data gathered in other fields, in human psychogenetics the method employed cannot be the straightforward Mendelian one of crossbreeding, which, in various elaborations, remains the basic tool of the geneticist today. Neither can it be the method of selection (artificial, as opposed to natural) that is otherwise known as selective breeding. Indeed, none of the experimental techniques that can be applied to any other organism, whatever the phenotype being measured, is applicable to man, since experimental mating is effectively ruled out as a permissible technique in current cultures. It may be remarked in passing that such has not always been the case. The experiment of the Mogul emperor Akbar, who reared children in isolation to determine their natural religion (and merely produced mutes), and the eugenics program of J. H. Noyes at the Oneida Community in New York State in the nineteenth century are cases in point. The apparent inbreeding of brother with sister among the rulers of ancient Egypt in the eighteenth dynasty (sixteenth to fourteenth century B.C.), which is often quoted as an example of the absence in humans of the deleterious effects of inbreeding (inbreeding depression), may not be all it seems. It is likely that the definition of sister and brother in this context did not necessarily have the same biological relevance that it has today but was rather a cultural role that could be defined, at least in this case, at will.

Twin study. In the absence of the possibility of an experimental approach, contemporary research in human psychogenetics must rely on natural genetic experiments. Of these, the one most widely used and most industriously studied is the phenomenon of human twinning. Credit for the recognition of the value of observations on twins can be given to the nineteenth-century English scientist Francis Galton, who pioneered many fields of inquiry. He may be justly regarded as the father of psychogenetics for the practical methods he introduced into this field, such as the method of twin study, as well as for his influence, which extended, although indirectly, even to the American experimenters in psychogenetics during the early decades of the present century.

Twin births are relatively rare in humans and vary in frequency with the ethnic group. However, the extent to which such ethnic groups differ among themselves behaviorally as a result of the undoubted genetic differences, of which incidence of multiple births is but one example, is controversial. As is well known, there are two types of twins: the monozygotic or so-called identical twins, derived from a single fertilized ovum that has split into two at an early stage in development, and the dizygotic or so-called fraternal twins, developed from two separate ova fertilized by different spermatozoa. These two physical types are not always easy to differentiate, although this difficulty is relatively minor in twin study. Nonetheless, they have led to two kinds of investigation. The first relates to differences in monozygotic twins who have identical hereditary make-up but who have been reared apart and thus subjected to different environmental influences during childhood; and the second relates to the comparison of the two types of twins, usually restricted to like-sex pairs, since fraternal twins can differ in sex. The latter method supposes all differences between monozygotic pairs to be of environmental origin, whereas the (greater) difference between dizygotic pairs is of environmental plus genetic origin. Thus, the relative contribution of the two sources of variation can be evaluated.

Findings obtained from either method have not been especially clear-cut, both because of intractable problems regarding the relative weight to be placed upon differences in the environment in which the twins have been reared and because of the sampling difficulties, which are likely to be formidable in any twin study. Nevertheless, interesting inferences can be drawn from twin study. The investigation of separated monozygotic twins has shown that while even with their identical heredity they can differ quite widely, there exists a significant resemblance in basic aspects of personality, including intelligence, introversion, and neurotic tendencies, and that these resemblances can persist despite widely different environments in which the members of a pair are reared. Such findings emphasize the need to consider the contribution of genotype and environment in an interactive sense: clearly, some genotypes represented in the personality of monozygotic twin pairs are sensitive to environmentally induced variation, whereas others are resistant to it.

Comparisons between monozygotic and dizygotic twins reared together suggest that monozygotic twins more closely resemble each other in many aspects of personality, especially those defining psychological factors such as neuroticism and introversion-extroversion. The increase in the differences between the two types of twins when factor measures are used (as opposed to simple test scores) suggests that a more basic biological stratum is tapped by factor techniques, since the genetic determination seems greater than where individual tests are employed. Here again, the degree to which any phenotype is shown to be hereditary in origin is valid only for the environment in which it developed and is measured; different environments may well yield different results. The problems of environmental control in human samples are so intractable that some students of the subject have questioned whether the effort and undoubted skill devoted to twin study have been well invested, in view of the inherent and persisting equivocality of the outcome.
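The logic of the twin comparison can be made concrete with the classical approximation now usually associated with Falconer (the formula is a standard textbook illustration, not a method named in this article): since monozygotic pairs share all their genes and like-sex dizygotic pairs on average half, doubling the difference between the two intraclass correlations estimates the genetic share of the trait variance. A hedged sketch:

```python
def twin_variance_components(r_mz, r_dz):
    """Partition trait variance from twin correlations.

    r_mz, r_dz: intraclass correlations for monozygotic and dizygotic
    pairs reared together. Returns (h2, c2, e2): heritable,
    shared-environment, and nonshared-environment fractions.
    Assumes the simple additive model h2 = 2 * (r_mz - r_dz)."""
    h2 = 2.0 * (r_mz - r_dz)          # genetic fraction
    c2 = r_mz - h2                    # environment shared by the pair
    e2 = 1.0 - r_mz                   # unshared environment and error
    return h2, c2, e2
```

For example, correlations of 0.8 (MZ) and 0.5 (DZ) would partition the variance as 0.6 genetic, 0.2 shared environment, and 0.2 unshared, in line with the rough one-half figure quoted earlier for many traits.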

Multivariate methods. Methods of twin study, introduced largely to improve upon the earlier methods of familial correlation (parents with offspring, sib with sib, etc.), have been combined with them. Familial correlation methods themselves have not been dealt with here, since within-family environments are bound to be even greater contaminants in determining the observed behavior than environments in twin study methods. Nevertheless, used on a large scale and in conjunction with twin study and with control subjects selected at random from a population, multivariate methods show promise for defining the limits of environmental and genotypic interaction. So far, the solutions to the problems of biometrical analysis posed by this type of investigation have been only partial, and the sheer weight of effort involved in locating and testing the requisite numbers of subjects standing in the required relationships has deterred all but a few pioneers. Despite the undoubtedly useful part such investigations have played in defining the problems involved, the absence of the possibility of experimental breeding has proved a drawback in the provision of socially useful data.

Animal psychogenetics. Recourse has often been had to nonhuman subjects. The additional problem thereby incurred of the relevance of comparative data to human behavior is probably balanced by the double refinements of the control of both the heredity and the environment of the experimental subjects. Two major methods of genetics have been employed, both intended to produce subjects of predetermined genotype: the crossbreeding of strains of animals of known genotype; and phenotypic selection, the mating of like with like to increase a given characteristic in a population.

Selection. Behavioral phenotypes of interest have been studied by the above methods, often using laboratory rodents. For example, attributes such as intelligence, activity, speed of conditioning, and emotionality have been selectively bred in rats.

Selection for emotional behavior in the rat will serve as an example of the techniques used and the results achieved. Rats, in common with many other mammalian species, defecate when afraid. A technique of measuring individual differences in emotional arousal is based on this propensity. The animal under test is exposed to mildly stressful noise and light stimulation in an open field or arena. The number of fecal pellets deposited serves as an index of disturbance, and in this way the extremes among a large group of rats can be characterized as high or low in emotional responsiveness. Continued selection from within the high and low groups will in time produce two distinct strains. Control of environmental variables is achieved by a rigid standardization of the conditions under which the animals are reared before being subjected to the test as adults. Careful checks on maternal effects, both prenatal and postnatal, reveal these effects to be minimal.
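The selection procedure just described can be mimicked in a toy simulation: each animal's score is the sum of a heritable value and environmental noise, and only the extremes are bred each generation. Everything below (the trait scale, the heritability of one-half, the segregation noise) is an illustrative assumption, not data from the rat experiments:

```python
import random

def simulate_selection(generations=10, pop=100, keep=20,
                       h2=0.5, high=True, seed=1):
    """Truncation selection on a quantitative trait (e.g. an open-field
    defecation score). Phenotype = genetic value + environmental noise;
    only the genetic value is transmitted (as a midparent value plus
    segregation noise). Returns the mean genetic value of the final
    generation."""
    rng = random.Random(seed)
    genos = [rng.gauss(0.0, 1.0) for _ in range(pop)]
    noise_sd = ((1.0 - h2) / h2) ** 0.5   # sets initial heritability to h2
    for _ in range(generations):
        scored = [(g, g + rng.gauss(0.0, noise_sd)) for g in genos]
        scored.sort(key=lambda gp: gp[1], reverse=high)
        selected = [g for g, _ in scored[:keep]]    # breed the extremes
        genos = [rng.gauss(sum(rng.sample(selected, 2)) / 2.0, 0.5)
                 for _ in range(pop)]
    return sum(genos) / pop
```

Running the same procedure with `high=True` and `high=False` produces two diverging lines, the analogue of the high- and low-emotionality strains in the text.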

Such an experiment does little beyond establishing the importance of the genetic effect on the given strains in the given environment. While there are techniques for assessing the relative importance of the genetic and environmental contributions to the variation observed under selection, they are better suited to the analysis of the outcome of experiments using the alternative major genetical method, that of crossbreeding of inbred strains.

Crossbreeding. Strains used in crossbreeding experiments have usually been inbred for a phenotypic character of interest, although not usually a behavioral one. However, this does not preclude the use of these inbred strains for behavioral studies, since linkage relationships among genes ensure that selection for factors multidetermined genetically often involves multiple changes in characteristics other than those selected for, and behavior is no exception to this rule. Moreover, the existence of such inbred strains constitutes perhaps the most important single advantage of animals as subjects, since it enables simplifying assumptions regarding the homozygosity or genetic uniformity of such strains to be made in analysis of the outcome of crosses. Members of inbred strains are theoretically as alike as monozygotic twin pairs, so that genetic relationships (which in human populations can be investigated only after widespread efforts to find them) can be multiplied at will in laboratory animals.

This approach allows a more sensitive analysis of the determinants, both environmental and genetic, of the behavioral phenotype under observation. In addition, the nature of the genetic forces can be further differentiated into considerations of the average dominance effects of the genes involved, the extent to which they tend to increase or decrease the metrical expression of the behavioral phenotype, and the extent to which the different strains involved possess such increasers or decreasers. Finally, rough estimates of the number of these genes can be given. But the analysis depends upon meeting requirements regarding the scaling of the metric upon which the behavior is measured and is essentially a statistical one. That is, only average effects of cumulative action of the relatively large number of genes postulated as involved can be detected. Gone are the elegantly simple statistics derived from the classical Mendelian analyses of genes of major effect, often displaying dominance, like those encountered in certain human inborn errors of metabolism. There is little evidence of the existence of comparable genes of major effect mediating behavior in laboratory animals, although some have been studied in insects, especially the fruit fly.

A typical investigation of a behavioral phenotype might take the form of identifying two inbred strains known to differ in a behavioral trait, measuring individuals from these strains, and then systematically crossing them and measuring all offspring. When this was done for the runway performance of mice, an attribute related to their temperamental wildness, the results, analyzed by the techniques of biometrical genetics, showed that the behavior was controlled by at least three groups of genes (a probable underestimate). The contributions of these groups were additive to each other and independent of the environment when measured on a logarithmic scale but interacted with each other and with the environment on a linear scale. These genes showed a significant average dominance effect, and there was a preponderance of dominant genes in the direction of greater wildness. The heritability ratio of the contributions of nature and nurture was around seven to three.
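A heritability ratio of the kind just quoted can be illustrated with the simplest biometrical estimate from an inbred-strain cross, in the spirit of Mather's approach (the numbers below are invented for illustration). The genetically uniform generations (the two parent strains and their F1) vary only environmentally, while the segregating F2 adds genetic variance on top of that:

```python
def broad_heritability(v_p1, v_p2, v_f1, v_f2):
    """Broad-sense heritability from an inbred-strain cross.

    P1, P2 and F1 are genetically uniform, so their phenotypic variances
    estimate the environmental component; the F2, where the genes
    segregate, adds the genetic component."""
    v_e = (v_p1 + v_p2 + 2.0 * v_f1) / 4.0   # pooled environmental variance
    v_g = v_f2 - v_e                          # genetic variance in the F2
    return v_g / v_f2
```

With an environmental variance of 3 in each uniform generation and an F2 variance of 10, the estimate is 0.7, i.e. the seven-to-three nature-to-nurture ratio of the sort reported for runway performance.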

The use of inbred lines may be restricted to first filial crosses if a number of such crosses are made from several different lines. This increases precision of analysis in addition to allowing a proportionate decrease in the amount of laboratory work. One investigation examined the exploratory behavior of six different strains of rats in an open field of the kind used for the selection mentioned above. On a linear scale there were no untoward environmental effects, including specifically prenatal maternal ones. The heritability ratio was high, around nine to one; and while there was a significant average dominance component among the genes determining exploration, there was no preponderance of dominantly or recessively acting genes among increasers or decreasers. The relative standing in this respect of the parental strains could be established with some precision.

Limitations. While the methods described above have allowed the emergence of results that ultimately may assist our understanding of the mechanisms of behavioral inheritance, it cannot be said that much substantial progress has yet been made. Until experiments explore the effect of a range of different genotypes interacting with a range of environments of psychological interest and consequence, little more can be expected. Manipulating heredity in a single standard environment or manipulating the environment of a single standard genotype can only provide conclusions so limited to both the genotypes and conditions employed that they have little usefulness in a wider context. When better experiments are performed, as seems likely in the next few decades, then problems of some sociological importance and interest will arise in the application of these experiments to the tasks of maximizing genetic potential and perfecting environmental control for the purpose of so doing. A new eugenics may well develop, but grappling with the problems of its impact on contemporary society had best be left to future generations.

P. L. Broadhurst

[Directly related are the entries Eugenics; Evolution; Mental Disorders, article on Genetic Aspects. Other relevant material may be found in Individual Differences, article on Sex Differences; Instinct; Intelligence and Intelligence Testing; Mental Retardation; Psychology, article on Constitutional Psychology.]

Broadhurst, P. L. 1960 Experiments in Psychogenetics: Applications of Biometrical Genetics to the Inheritance of Behavior. Pages 1-102 in Hans J. Eysenck (editor), Experiments in Personality. Volume 1: Psychogenetics and Psychopharmacology. London: Routledge. Selection and crossbreeding methods applied to laboratory rats.

Cattell, Raymond B.; Stice, Glen F.; and Kristy, Norton F. 1957 A First Approximation to Nature-Nurture Ratios for Eleven Primary Personality Factors in Objective Tests. Journal of Abnormal and Social Psychology 54:143-159. Pioneer multivariate analysis combining twin study and familial correlations.

Fuller, John L.; and Thompson, W. Robert 1960 Behavior Genetics. New York: Wiley. A comprehensive review of the field.

Mather, Kenneth 1949 Biometrical Genetics: The Study of Continuous Variation. New York: Dover. The classic work on the analysis of quantitative characteristics.

Shields, James 1962 Monozygotic Twins Brought Up Apart and Brought Up Together: An Investigation Into the Genetic and Environmental Causes of Variation in Personality. Oxford Univ. Press.

The best available definition of population genetics is doubtless that of Malécot: it is "the totality of mathematical models that can be constructed to represent the evolution of the structure of a population classified according to the distribution of its Mendelian genes" (1955, p. 240). This definition, by a probabilist mathematician, gives a correct idea of the constructed and abstract side of this branch of genetics; it also makes intelligible the rapid development of population genetics since the advent of Mendelism.

In its formal aspect this branch of genetics might even seem to be a science that is almost played out. Indeed, it is not unthinkable that mathematicians have exhausted all the structural possibilities for building models, both within the context of general genetics and within that of the hypotheses, more or less complex and abstract, that enable us to characterize the state of a population.

Two major categories of models can be distinguished: determinist models are those in which variations in population composition over time are rigorously determined by (a) a known initial state of the population; (b) a known number of forces or pressures operating, in the course of generations, in an unambiguously defined fashion (Malécot 1955, p. 240). These pressures involve mutation, selection, and preferential marriages (by consanguinity, for instance). Determinist models, based on ratios that have been exactly ascertained from preceding phenomena, can be expressed only in terms of populations that are infinite in the mathematical sense. In fact, it is only in this type of population that statistical regularities can emerge (Malécot 1955). In these models the composition of each generation is perfectly defined by the composition of the preceding generation.

Stochastic models, in contrast to determinist ones, involve only finite populations, in which the gametes that, beginning with the first generation, are actually going to give birth to the new generation represent only a finite number among all possible gametes. The result is that among these active, or useful, gametes (Malécot 1959), male or female, the actual frequency of a gene will differ from the probability that each gamete had of carrying it at the outset.

The effect of chance will play a prime role, and the frequencies of the genes will be able to drift from one generation to the other. The effects of random drift and of genetic drift become, under these conditions, the focal points for research.

The body of research completed on these assumptions does indeed form a coherent whole, but these results, in spite of their brilliance, are marked by a very noticeable formalism. In reality, the models, although of great importance at the conceptual level, are often too far removed from the facts. In the study of man, particularly, the problems posed are often too complex for the solutions taken directly from the models to describe concrete reality.

Not all these models, however, are the result of purely abstract speculation; construction of some of them has been facilitated by experimental data. To illustrate this definition of population genetics and the problems that it raises, this article will limit itself to explaining one determinist model, both because it is one of the oldest and simplest to understand and because it is one of those most often verified by observation.

A determinist model. Let us take the case of a particular human population: the inhabitants of an island cut off from outside contacts. It is obvious that great variability exists among the genes carried by the different inhabitants of this island. The genotypes differ materially from one another; in other words, there is a certain polymorphism in the population, a polymorphism that we can define in genetic terms with the help of a simple example.

Let us take the case of an autosomal (not sex-linked) gene a, transmitting itself in a monohybrid diallely. In relation to it, individuals can be classified in three categories: homozygotes whose two alleles are a (a/a); heterozygotes, carriers of a and its allele a′ (a/a′); and the homozygotes who are noncarriers of a (a′/a′). At any given moment or during any given generation, these three categories of individuals exist within the population in certain proportions relative to each other.

Now, according to Mendel's second law (the law of segregation), the population descended from a cross between an individual homozygous for a (a/a) and an individual homozygous for a′ (a′/a′) will, once the uniformly heterozygous first generation has interbred, include individuals a/a, a/a′, and a′/a′ in the following proportions: one-fourth a/a, one-half a/a′, and one-fourth a′/a′. In this population the alleles a and a′ have the same frequency, one-half, and each sex produces half a and half a′. If these individuals are mated randomly, a simple algebraic calculation quickly demonstrates that individuals of the generation following will be quantitatively distributed in the same fashion: one-fourth a/a, one-half a/a′, and one-fourth a′/a′. It will be the same for succeeding generations.

It can therefore be stated that the genetic structure of such a population does not vary from one generation to the other. If we designate by p the frequency of allele a and by q that of allele a′, we get p + q = 1, the totality. Applying this system of symbols to the preceding facts, it can be easily shown that the proportions of individuals of the three categories in the first generation born from a/a and a′/a′ are p², 2pq, and q². In the second and third generations the frequencies of individuals will always be the same: p², 2pq, q².

Until this point, we have remained at the individual level. If we proceed to the level of the gametes carrying a or a′, and to that of the genes a and a′ themselves, we observe that their frequencies correspond: in the type of population discussed above, the formula p², 2pq, q² therefore applies to the gametes and genes as well. This model, which can be regarded as a formalization of the Hardy-Weinberg law, has other properties, but our study of it will stop here. (For a discussion of the study of isolated populations, see Sutter & Tabah 1951.)
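The stability of this equilibrium is easy to check numerically. The following sketch (Python is used here purely as a neutral notation; the function name is ours, not the article's) applies one round of random mating to the 1/4 : 1/2 : 1/4 genotype proportions and shows that they are reproduced unchanged:

```python
# Verify the Hardy-Weinberg equilibrium described above: starting from
# genotype proportions p^2, 2pq, q^2, one round of random mating
# returns exactly the same proportions.

def next_generation(aa, a_het, a2a2):
    """Genotype frequencies (a/a, a/a', a'/a') after random mating."""
    p = aa + a_het / 2       # frequency of allele a
    q = a2a2 + a_het / 2     # frequency of allele a'
    return p * p, 2 * p * q, q * q

# The second generation of the cross a/a x a'/a' is 1/4 : 1/2 : 1/4.
gen = (0.25, 0.5, 0.25)
for _ in range(3):
    gen = next_generation(*gen)

print(gen)  # -> (0.25, 0.5, 0.25), generation after generation
```

The same function, started from any proportions at all, reaches the p², 2pq, q² distribution after a single generation, which is the content of the Hardy-Weinberg law.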

Model construction and demographic reality. The Hardy-Weinberg law has been verified by numerous studies, involving both vegetable and animal species. The findings in the field of human blood groups have also been studied for a long time from a viewpoint derived implicitly from this law, especially in connection with their geographic distribution. Under the system of reproduction by sexes, a generation renews itself as a result of the encounter of the sexual cells (gametes) produced by individuals of both sexes belonging to the living generation. In the human species it can be said that this encounter takes place at random. One can imagine the advantage that formal population genetics can take of this circumstance, which can be compared to drawing marked balls by lot from two different urns. Model construction, already favored by these circumstances, is favored even further if the characteristics of the population utilized are artificially defined with the help of a certain number of hypotheses, of which the following is a summary description:

(1) Fertility is identical for all couples; there is no differential fertility.

(2) The population is closed; it cannot, therefore, be the locus of migrations (whether immigration or emigration).

(3) Marriages take place at random; there is no assortative mating.

(4) There are no systematic preferential marriages (for instance, because of consanguinity).

(5) Possible mutations are not taken into consideration.

(6) The size of the population is clearly defined.

On the basis of these working hypotheses, the whole of which constitutes panmixia, it was possible, not long after the rediscovery of Mendel's laws, to construct the first mathematical models. Thus, population genetics took its first steps forward, one of which was undoubtedly the Hardy-Weinberg law.

Mere inspection of the preceding hypotheses will enable the reader to judge how, taken one by one, they conflict with reality. In fact, no human population can be panmictic in the way the models are.

The following evidence can be cited in favor of this conclusion:

(1) Fertility is never the same with all couples. In fact, differential fertility is the rule in human populations. There is always a far from negligible sterility rate of about 18 per cent among the large populations of Western civilization. On the other hand, the part played by large families in keeping up the numbers of these populations is extremely important; we can therefore generalize by emphasizing that for one or another reason individuals carrying a certain assortment of genes reproduce themselves more or less than the average number of couples. That is what makes for the fact that in each population there is always a certain degree of selection. Hypothesis (1) above, essential to the construction of models, is therefore very far removed from reality.

(2) Closed populations are extremely rare. Even among the most primitive peoples there is always a minimum of emigration or immigration. The only cases where one could hope to see this condition fulfilled at the present time would be those of island populations that have remained extremely primitive.

(3) With assortative mating we touch on a point that is still obscure; but even if these phenomena remain poorly understood, it can nevertheless be said that they appear to be crucial in determining the genetic composition of populations. This choice can be positive: the carriers of a given characteristic marry among each other more often than chance would warrant. The fact was demonstrated in England by Pearson and Lee (1903): very tall individuals have a tendency to marry each other, and so do very short ones. Willoughby (1933) has reported on this question with respect to a great number of somatic characteristics other than height: for example, coloring of hair, eyes and skin, intelligence quotient, and so forth. Inversely, negative choice makes individuals with the same characteristics avoid marrying one another. This mechanism is much less well known than the above. The example of persons of violent nature (Dahlberg 1943) and of red-headed individuals has been cited many times, although it has not been possible to establish valid statistics to support it.

(4) The case of preferential marriages is not at all negligible. There are still numerous areas where marriages between relatives (consanguineous marriages) occur much more frequently than they would as the result of simple random encounters. In addition, recent studies on the structures of kinship have shown that numerous populations that do not do so today used to practice preferential marriagemost often in a matrilinear sense. These social phenomena have a wide repercussion on the genetic structure of populations and are capable of modifying them considerably from one generation to the other.

(5) Although we do not know exactly what the real rates of mutation are, it can be admitted that their frequency is not negligible. If one or several genes mutate at a given moment in one or several individuals, the nature of the gene or genes is in this way modified; its stability in the population undergoes a disturbance that can considerably transform the composition of that population.

(6) The size of the population and its limits have to be taken into account. We have seen that this is one of the essential characteristics important in differentiating two large categories of models.

The above examination brings us into contact with the realities of population: fertility, fecundity, nuptiality, mortality, migration, and size are the elements that are the concern of demography and are studied not only by this science but also very often as part of administrative routine. Leaving aside the influence of size, which by definition is of prime importance in the technique of the models, there remain five factors to be examined from the demographic point of view. Mutation can be ruled out of consideration, because, although its importance is great, it is felt only after the passage of a certain number of generations. It can therefore be admitted that it is not of immediate interest.

We can also set aside choice of a mate, because the importance of this factor in practice is still unknown. Accordingly, there remain three factors of prime importance: fertility, migration, and preferential marriage. Over the last decade the progressive disappearance of consanguineous marriage has been noted everywhere but in Asia. In many civilized countries marriage between cousins has practically disappeared. It can be stated, therefore, that this factor has in recent years become considerably less important.

Migrations remain very important on the genetic level, but, unfortunately, precise demographic data about them are rare, and most of the data are of doubtful validity. For instance, it is hard to judge how their influence on a population of Western culture could be estimated.

The only remaining factor, fertility (which to geneticists seems essential), has fortunately been studied in satisfactory fashion by demographers. To show the importance of differential fertility in human populations, let us recall a well-known calculation made by Karl Pearson in connection with Denmark. In 1830, 50 per cent of the children in that country were born of 25 per cent of the parents. If that fertility had been maintained at the same rate, 73 per cent of the second-generation Danes and 97 per cent of the third generation would have been descended from the first 25 per cent. Similarly, before World War I, Charles B. Davenport calculated, on the basis of differential fertility, that 1,000 Harvard graduates would have only 50 descendants after two centuries, while 1,000 Rumanian emigrants living in Boston would have become 100,000.
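The compounding that Pearson's calculation depends on can be sketched with a toy model. The sketch below assumes that fertility is perfectly inherited within lineages and that the high-fertility quarter reproduces at three times the rate of the rest (the ratio implied by 25 per cent of parents producing 50 per cent of children); it illustrates the mechanism only and does not attempt to reproduce Pearson's exact figures, whose assumptions are not stated in the article:

```python
# Two-group model of differential fertility. 'share' is the fraction
# of the current generation descended from the original high-fertility
# quarter; descendants are assumed to retain the 3:1 fertility
# advantage (an illustrative assumption, not data).

def descendants_share(share, ratio=3.0):
    """Fraction of the next generation born to the high-fertility lineage."""
    return share * ratio / (share * ratio + (1.0 - share))

share = 0.25
for gen in range(3):
    share = descendants_share(share)
    print(f"generation {gen + 1}: {share:.0%}")
# -> 50%, 75%, 90%: the lineage's share grows without any change
#    in its fertility, purely by compounding.
```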

Human reproduction involves both fecundity (capacity for reproduction) and fertility (actual reproductive performance). These can be estimated for males, females, and married couples treated as a reproductive unit. Let us rapidly review the measurements that demography provides for geneticists in this domain.

Crude birth rate. The number of living births in a calendar year per thousand of the average population in the same year is known as the crude birth rate. The rate does not seem a very useful one for geneticists: there are too many different groups of childbearing age; marriage rates are too variable from one population to another; birth control is not uniformly diffused, and so forth.

General fertility rate. The ratio of the number of live births in one year to the average number of women capable of bearing children (usually defined as those women aged 15 to 49) is known as the general fertility rate. Its genetic usefulness is no greater than that of the preceding figure. Moreover, experience shows that this figure is not very different from the crude birth rate.

Age-specific fertility rates. Fertility rates according to the age reached by the mother during the year under consideration are known as age-specific fertility rates. Demographic experience shows that great differences are observed here, depending on whether or not the populations are Malthusian; in other words, whether they practice birth control or not. In the case of a population where the fertility is natural, knowledge of the mother's age is sufficient. In cases where the population is Malthusian, the figure becomes interesting when it is calculated both by age and by age group of the mothers at time of marriage, thus combining the mother's age at the birth of her child and her age at marriage. This is generally known as the age-specific marital fertility rate. If we are dealing with a Malthusian population, it is preferable, in choosing the sample to be studied, to take into consideration the age at marriage rather than the age at the child's birth. Thus, while the age at birth is sufficient for natural populations, these techniques cannot be applied indiscriminately to all populations.

Family histories. Fertility rates can also be calculated on the basis of family histories, which can be reconstructed from such sources as parish registries (Fleury & Henry 1965) or, in some countries, from systematic family registrations (for instance, the Japanese koseki or honseki). The method for computing the fertility rate for, say, the 25-29-year-old age group from this kind of data is first to determine the number of legitimate births in the group. It is then necessary to make a rigorous count of the number of years lived in wedlock between their 25th and 30th birthdays by all the women in the group; this quantity is known as the group's total woman-years. The number of births is then divided by the number of woman-years to obtain the group's fertility rate. This method is very useful in the study of historical problems in genetics, since it is often the only one that can be applied to the available data.
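The woman-years computation just described can be sketched as follows. The records and helper name below are invented for illustration (Python used as a neutral notation), but the arithmetic is exactly the method in the text: legitimate births in the age group divided by years lived in wedlock between the 25th and 30th birthdays:

```python
# Age-specific fertility rate for the 25-29 group, computed from
# family histories as described above. All records are invented.

def woman_years_25_29(age_at_marriage, age_at_record_end):
    """Years lived in wedlock between exact ages 25 and 30."""
    start = max(age_at_marriage, 25.0)
    end = min(age_at_record_end, 30.0)
    return max(end - start, 0.0)

# (age at marriage, age when the record ends, births while aged 25-29)
records = [(22.0, 40.0, 2), (26.5, 40.0, 1), (24.0, 27.0, 1)]

births = sum(b for _, _, b in records)                    # 4 births
years = sum(woman_years_25_29(m, e) for m, e, _ in records)  # 10.5 woman-years
rate = births / years
print(round(rate, 3))  # -> 0.381 births per woman-year in the group
```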

Let us leave fertility rates in order to examine rates of reproduction. Here we return to more purely genetic considerations, since we are looking for the mechanism whereby one generation is replaced by the one that follows it. Starting with a series of fertility rates by age groups, a gross reproduction rate can be calculated that gives the average number of female progeny that would be born to an age cohort of women, all of whom live through their entire reproductive period and continue to give birth at the rates prevalent when they themselves were born. The gross reproduction rate obtaining for a population at any one time can be derived by combining the rates for the different age cohorts.

A gross reproduction rate for a real generation can also be determined by calculating the average number of live female children ever born to women of fifty or over. As explained above, this rate is higher for non-Malthusian than for Malthusian populations and can be refined by taking into consideration the length of marriage.
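The combination of age-specific rates into a gross reproduction rate can be sketched as below. Every number here is invented for illustration (the rates, and the conventional proportion of births that are female); only the shape of the calculation is taken from the text:

```python
# Gross reproduction rate from age-specific fertility rates (ASFRs):
# sum the annual rates, weight by the five-year width of each age
# group, and keep only female births. Illustrative numbers throughout.

# annual births per woman for the age groups 15-19, 20-24, ..., 45-49
asfr = [0.02, 0.12, 0.15, 0.10, 0.05, 0.015, 0.002]
proportion_female = 0.488  # a rough conventional figure, not data

total_fertility = 5 * sum(asfr)          # each group spans five years
gross_reproduction = total_fertility * proportion_female
print(round(gross_reproduction, 2))  # -> 1.12 daughters per woman
```

A value above 1 means each cohort of women more than replaces itself with daughters, assuming all survive the reproductive period, which is precisely what "gross" signals.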

We have seen that in order to be correct, it is necessary for the description of fertility in Malthusian populations to be closely related to the date of marriage. Actually, when a family reaches the size that the parents prefer, fertility tends to approach zero. The preferred size is evidently related to length of marriage in such a manner that fertility is more closely linked with length of marriage than with age at marriage. In recent years great progress has been made in the demographic analysis of fertility, based on this kind of data. This should enable geneticists to be more circumspect in their choice of sections of the population to be studied.

Americans talk of cohort analysis, the French of analysis by promotion (a term meaning year or class, as we might speak of the class of 1955). A cohort, or promotion, includes all women born within a 12-month period; to estimate fertility or mortality, it is supposed that these women are all born at the same moment on the first of January of that year. Thus, women born between January 1, 1900, and January 1, 1901, are considered to be exactly 15 years old on January 1, 1915; exactly 47 years old on January 1, 1947; and so forth.

The research done along these lines has issued in the construction of tables that are extremely useful in estimating fertility in a human population. As we have seen, it is more useful to draw up cohorts based on age at marriage than on age at birth. A fertility table set up in this way gives for each cohort the cumulative birth rate, by order of birth and single age of mother, for every woman surviving at each age, from 15 to 49. The progress that population genetics could make in knowing real gene frequencies can be imagined, if it could concentrate its research on any particular cohort and its descendants.

This rapid examination of the facts that demography can now provide in connection with fertility clearly reveals the variables that population genetics can use to make its models coincide with reality. The models retain their validity for genetics because they are still derived from basic genetic concepts; their application to actual problems, however, should be based on the kind of data mentioned above. We have voluntarily limited ourselves to the problem of fertility, since it is the most important factor in genetics research.

The close relationship between demography and population genetics that now appears can be illustrated by the field of research into blood groups. Although researchers concede that blood groups are independent of both age and sex, they do not explore the full consequences of this, since their measures are applied to samples of the population that are representative only in a demographic sense. We must deplore the fact that this method has spread to the other branches of genetics, since it is open to criticism not only from the demographic but from the genetic point of view. By proceeding in this way, a most important factor is overlooked: that of gene frequencies.

Let us admit that the choice of a blood group to be studied is of little importance when the characteristic is widely distributed throughout the population, for instance, if each individual is the carrier of a gene taken into account in the system being studied (e.g., a system made up of groups A, B, and O). But this is no longer the case if the gene is carried only by a few individuals, in other words, if its frequency attains 0.1 per cent or less. In this case (and cases like this are common in human genetics) the structure of the sample examined begins to take on prime importance.

A brief example must serve to illustrate this cardinal point. We have seen that in the case of rare recessive genes the importance of consanguineous marriages is considerable. The scarcer that carriers of recessive genes become in the population as a whole, the greater the proportion of such carriers produced by consanguineous marriages. Thus if as many as 25 per cent of all individuals in a population are carriers of recessive genes, and if one per cent of all marriages in that population are marriages between first cousins, then this one per cent of consanguineous unions will produce 1.12 times as many carriers of recessive genes as will be produced by all the unions of persons not so related. But if recessive genes are carried by only one per cent of the total population, then the same proportion of marriages between first cousins will produce 2.13 times as many carriers as will be produced by all other marriages. This production ratio increases to 4.9 if the total frequency of carriers is .01 per cent, to 20.2 if it is .005 per cent, and to 226 if it is .0001 per cent. Under these conditions, one can see the importance of the sampling method used to estimate the frequency within a population, not only of the individuals who are carriers but of the gametes and genes themselves.
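The direction of this effect follows from the standard inbreeding calculation: with inbreeding coefficient F = 1/16 for the offspring of first cousins, the incidence of recessive homozygotes among those offspring is q² + Fq(1 − q), against q² under random mating. The sketch below computes that relative incidence; it is a textbook calculation offered for orientation only, and makes no attempt to reproduce the article's exact ratios, whose underlying assumptions are not fully stated:

```python
# Relative incidence of recessive homozygotes among the offspring of
# first cousins (F = 1/16) versus random mating, as a function of the
# recessive allele frequency q. Standard inbreeding arithmetic.

F_FIRST_COUSINS = 1.0 / 16.0

def incidence_ratio(q, F=F_FIRST_COUSINS):
    """(q^2 + F*q*(1-q)) / q^2 = 1 + F*(1-q)/q."""
    return (q * q + F * q * (1.0 - q)) / (q * q)

for q in (0.1, 0.01, 0.001):
    print(f"q = {q}: ratio = {incidence_ratio(q):.1f}")
```

The ratio grows roughly as F/q as q falls, which is the qualitative point of the paragraph: the rarer the gene, the more heavily consanguineous unions dominate the production of affected individuals.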

Genealogical method. It should be emphasized that genetic studies based on genealogies remain the least controversial. Studying a population where the degrees of relationship connecting individuals are known presents an obvious interest. Knowing one or several characteristics of certain parents, we can follow what becomes of these in the descendants. Their evolution can also be considered from the point of view of such properties of genes as dominance, recessiveness, expressivity, and penetrance. But above all, we can follow the evolution of these characteristics in the population over time and thus observe the effects of differential fertility. Until now the genealogical method was applicable only to a numerically sparse population, but progress in electronic methods of data processing permits us to anticipate its application to much larger populations (Sutter & Tabah 1956).

Dynamic studies. In very large modern populations it would appear that internal analysis of cohorts and their descendants will bring in the future a large measure of certainty to research in population genetics. In any case, it is a sure way to a dynamic genetics based on demographic reality. For instance, it has been recommended that blood groups should be studied according to age groups; but if we proceed to do so without regard for demographic factors, we cannot make our observations dynamic. Thus, a study that limits itself to, let us say, the fifty- to sixty-year-old age group will have to deal with a universe that includes certain genetically dead elements, such as unmarried and sterile persons, which have no meaning from the dynamic point of view. But if a study is made of this same fifty- to sixty-year-old age group and then of the twenty- to thirty-year-old age group, and if in the older group only those individuals are considered who have descendants in the younger group, the dynamic potential of the data is maximized. It is quite possible to subject demographic cohorts to this sort of interpretation, because in many countries demographic statistics supply series of individuals classified according to the mother's age at their birth.

This discussion would not be complete if we did not stress another aspect of the genetic importance of certain demographic factors, revealed by modern techniques, which have truly created a demographic biology. Particularly worthy of note are the mothers age, order of birth, spacing between births, and size of family.

The mother's age has a great influence on fecundity. A certain number of couples become incapable of having a second child after the birth of the first child; a third child after the second; a fourth after the third; and so forth. This sterility increases with the length of a marriage and especially after the age of 35. It is very important to realize this when, for instance, natural selection and its effects are being studied.

The mother's age also strongly influences the frequency of twin births (monozygotic or dizygotic), spontaneous abortions, stillborn or abnormal births, and so on. Many examples can also be given of the influence of the order of birth, the interval between births, and the size of the family to illustrate their effect on such things as fertility, mortality, morbidity, and malformations.

It has been demonstrated above how seriously demographic factors must be taken into consideration when we wish to study the influence of the genetic structure of populations. We will leave aside the possible environmental influences, such as social class and marital status, since they have previously been codified by Osborn (1956-1957) and Larsson (1956-1957), among others. At the practical level, however, the continuing efforts to utilize vital statistics for genetic purposes should be pointed out. In this connection, the research of H. B. Newcombe and his colleagues (1965), who are attempting to organize Canadian national statistics for use in genetics, cannot be too highly praised. The United Nations itself posed the problem on the world level at a seminar organized in Geneva in 1960. The question of the relation between demography and genetics is therefore being posed in an acute form.

These problems also impinge in an important way on more general philosophical issues, as has been demonstrated by Haldane (1932), Fisher (1930), and Wright (1951). It must be recognized, however, that their form of Neo-Darwinism, although it is based on Mendelian genetics, too often neglects demographic considerations. In the future these seminal developments should be renewed in full confrontation with demographic reality.

Jean Sutter

[Directly related are the entries Cohort Analysis; Fertility; Fertility Control. Other relevant material may be found in Nuptiality; Race; Social Behavior, Animal, article on The Regulation of Animal Populations.]

Barclay, George W. 1958 Techniques of Population Analysis. New York: Wiley.

Dahlberg, Gunnar (1943) 1948 Mathematical Methods for Population Genetics. New York and London: Interscience. First published in German.

Dunn, Leslie C. (editor) 1951 Genetics in the Twentieth Century: Essays on the Progress of Genetics During Its First Fifty Years. New York: Macmillan.

Fisher, R. A. (1930) 1958 The Genetical Theory of Natural Selection. 2d ed., rev. New York: Dover.

Read more:
genetics facts, information, pictures | Encyclopedia.com ...


Epigenetics – Wikipedia

Posted: October 20, 2016 at 1:43 am

Epigenetics studies genetic effects not encoded in the DNA sequence of an organism, hence the prefix epi- (Greek: ἐπι- "over, outside of, around").[1][2] Such effects on cellular and physiological phenotypic traits may result from external or environmental factors that switch genes on and off and affect how cells express genes.[3][4] These alterations may or may not be heritable, although the use of the term epigenetic to describe processes that are heritable is controversial.[5]

The term also refers to the changes themselves: functionally relevant changes to the genome that do not involve a change in the nucleotide sequence. Examples of mechanisms that produce such changes are DNA methylation and histone modification, each of which alters how genes are expressed without altering the underlying DNA sequence. Gene expression can be controlled through the action of repressor proteins that attach to silencer regions of the DNA. These epigenetic changes may last through cell divisions for the duration of the cell's life, and may also last for multiple generations even though they do not involve changes in the underlying DNA sequence of the organism;[6] instead, non-genetic factors cause the organism's genes to behave (or "express themselves") differently.[7]

One example of an epigenetic change in eukaryotic biology is the process of cellular differentiation. During morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. In other words, as a single fertilized egg cell, the zygote, continues to divide, the resulting daughter cells change into all the different cell types in an organism, including neurons, muscle cells, epithelium, endothelium of blood vessels, etc., by activating some genes while inhibiting the expression of others.[8]

The term epigenetics in its contemporary usage emerged in the 1990s, but for some years has been used in somewhat variable meanings.[3] A consensus definition of the concept of epigenetic trait as "stably heritable phenotype resulting from changes in a chromosome without alterations in the DNA sequence" was formulated at a Cold Spring Harbor meeting in 2008.

The term epigenesis has a generic meaning "extra growth", taken directly from Koine Greek ἐπιγένεσις, used in English since the 17th century.[9]

From this, and the associated adjective epigenetic, the term epigenetics was coined by C. H. Waddington in 1942 as pertaining to epigenesis, in parallel to Valentin Haecker's 'phenogenetics' (Phänogenetik).[10] Epigenesis in the context of biology refers to the differentiation of cells from their initial totipotent state in embryonic development.[11]

When Waddington coined the term, the physical nature of genes and their role in heredity were not known; he used it as a conceptual model of how genes might interact with their surroundings to produce a phenotype; he used the phrase "epigenetic landscape" as a metaphor for biological development. Waddington held that cell fates were established in development much like a marble rolls down to the point of lowest local elevation.[12]

Waddington suggested visualising increasing irreversibility of cell type differentiation as ridges rising between the valleys where the marbles (cells) are travelling.[13] In recent times Waddington's notion of the epigenetic landscape has been rigorously formalized in the context of the systems dynamics state approach to the study of cell-fate.[14][15] Cell-fate determination is predicted to exhibit certain dynamics, such as attractor-convergence (the attractor can be an equilibrium point, limit cycle or strange attractor) or oscillatory.[15]

The term "epigenetic" has also been used in developmental psychology to describe psychological development as the result of an ongoing, bi-directional interchange between heredity and the environment.[16] Interactivist ideas of development have been discussed in various forms and under various names throughout the 19th and 20th centuries. An early version was proposed, among the founding statements in embryology, by Karl Ernst von Baer and popularized by Ernst Haeckel. A radical epigenetic view (physiological epigenesis) was developed by Paul Wintrebert. Another variation, probabilistic epigenesis, was presented by Gilbert Gottlieb in 2003.[17] This view encompasses all of the possible developing factors on an organism and how they not only influence the organism and each other, but how the organism also influences its own development.

The developmental psychologist Erik Erikson used the term epigenetic principle in his book Identity: Youth and Crisis (1968), and used it to encompass the notion that we develop through an unfolding of our personality in predetermined stages, and that our environment and surrounding culture influence how we progress through these stages. This biological unfolding in relation to our socio-cultural settings is done in stages of psychosocial development, where "progress through each stage is in part determined by our success, or lack of success, in all the previous stages."[18][19][20]

Robin Holliday defined epigenetics as "the study of the mechanisms of temporal and spatial control of gene activity during the development of complex organisms."[21] In this sense, "epigenetic" can be used to describe anything other than the DNA sequence that influences the development of an organism.

The more recent usage of the word in science has a stricter definition. It is, as defined by Arthur Riggs and colleagues, "the study of mitotically and/or meiotically heritable changes in gene function that cannot be explained by changes in DNA sequence."[22] The Greek prefix epi- in epigenetics implies features that are "on top of" or "in addition to" genetics; thus epigenetic traits exist on top of or in addition to the traditional molecular basis for inheritance.[23]

The term "epigenetics", however, has also been used to describe processes which have not been demonstrated to be heritable, such as histone modification; there are therefore attempts to redefine it in broader terms that would avoid the constraints of requiring heritability. For example, Sir Adrian Bird defined epigenetics as "the structural adaptation of chromosomal regions so as to register, signal or perpetuate altered activity states."[6] This definition would include transient modifications associated with DNA repair or cell-cycle phases as well as stable changes maintained across multiple cell generations, but would exclude others, such as templating of membrane architecture and prions, unless they impinge on chromosome function. Such redefinitions, however, are not universally accepted and are still subject to dispute.[5] The NIH "Roadmap Epigenomics Project", ongoing as of 2016, uses the following definition: "...For purposes of this program, epigenetics refers to both heritable changes in gene activity and expression (in the progeny of cells or of individuals) and also stable, long-term alterations in the transcriptional potential of a cell that are not necessarily heritable."[24]

In 2008, a consensus definition of the epigenetic trait, "stably heritable phenotype resulting from changes in a chromosome without alterations in the DNA sequence", was made at a Cold Spring Harbor meeting.[25]

The similarity of the word to "genetics" has generated many parallel usages. The "epigenome" is a parallel to the word "genome", referring to the overall epigenetic state of a cell, and epigenomics refers to more global analyses of epigenetic changes across the entire genome.[24] The phrase "genetic code" has also been adapted: the "epigenetic code" has been used to describe the set of epigenetic features that create different phenotypes in different cells. Taken to its extreme, the "epigenetic code" could represent the total state of the cell, with the position of each molecule accounted for in an epigenomic map, a diagrammatic representation of the gene expression, DNA methylation and histone modification status of a particular genomic region. More typically, the term is used in reference to systematic efforts to measure specific, relevant forms of epigenetic information such as the histone code or DNA methylation patterns.

Epigenetic changes modify the activation of certain genes, but not the genetic code sequence of DNA. The microstructure (not code) of DNA itself or the associated chromatin proteins may be modified, causing activation or silencing. This mechanism enables differentiated cells in a multicellular organism to express only the genes that are necessary for their own activity. Epigenetic changes are preserved when cells divide. Most epigenetic changes only occur within the course of one individual organism's lifetime; however, if gene inactivation occurs in a sperm or egg cell that results in fertilization, then some epigenetic changes can be transferred to the next generation.[26] This raises the question of whether or not epigenetic changes in an organism can alter the basic structure of its DNA (see Evolution, below), a form of Lamarckism.

Specific epigenetic processes include paramutation, bookmarking, imprinting, gene silencing, X chromosome inactivation, position effect, reprogramming, transvection, maternal effects, the progress of carcinogenesis, many effects of teratogens, regulation of histone modifications and heterochromatin, and technical limitations affecting parthenogenesis and cloning.

DNA damage can also cause epigenetic changes.[27][28][29] DNA damage is very frequent, occurring on average about 60,000 times a day per cell of the human body (see DNA damage (naturally occurring)). These damages are largely repaired, but at the site of a DNA repair, epigenetic changes can remain.[30] In particular, a double strand break in DNA can initiate unprogrammed epigenetic gene silencing both by causing DNA methylation as well as by promoting silencing types of histone modifications (chromatin remodeling - see next section).[31] In addition, the enzyme Parp1 (poly(ADP)-ribose polymerase) and its product poly(ADP)-ribose (PAR) accumulate at sites of DNA damage as part of a repair process.[32] This accumulation, in turn, directs recruitment and activation of the chromatin remodeling protein ALC1 that can cause nucleosome remodeling.[33] Nucleosome remodeling has been found to cause, for instance, epigenetic silencing of DNA repair gene MLH1.[22][34] DNA damaging chemicals, such as benzene, hydroquinone, styrene, carbon tetrachloride and trichloroethylene, cause considerable hypomethylation of DNA, some through the activation of oxidative stress pathways.[35]

Foods are known to alter the epigenetics of rats on different diets.[36] Some food components epigenetically increase the levels of DNA repair enzymes such as MGMT and MLH1[37] and p53.[38][39] Other food components can reduce DNA damage, such as soy isoflavones[40][41] and bilberry anthocyanins.[42]

Epigenetic research uses a wide range of molecular biologic techniques to further our understanding of epigenetic phenomena, including chromatin immunoprecipitation (together with its large-scale variants ChIP-on-chip and ChIP-Seq), fluorescent in situ hybridization, methylation-sensitive restriction enzymes, DNA adenine methyltransferase identification (DamID) and bisulfite sequencing. Furthermore, the use of bioinformatic methods is playing an increasing role (computational epigenetics).
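
The logic behind bisulfite sequencing, one of the techniques listed above, can be sketched in a few lines: bisulfite treatment converts unmethylated cytosine to uracil (sequenced as thymine), while 5-methylcytosine resists conversion, so comparing a converted read against the reference reveals methylation. The function and sequences below are invented toy data, not part of any real analysis pipeline.

```python
# Toy sketch of methylation calling from bisulfite sequencing, assuming an
# already-aligned read and its reference strand (illustrative only).

def call_methylation(reference: str, bisulfite_read: str):
    """Compare a bisulfite-converted read against the reference strand.

    Unmethylated cytosine is converted to uracil (read as T);
    5-methylcytosine is protected and still reads as C.
    """
    calls = []
    for i, (ref_base, read_base) in enumerate(zip(reference, bisulfite_read)):
        if ref_base == "C":
            if read_base == "C":
                calls.append((i, "methylated"))    # protected from conversion
            elif read_base == "T":
                calls.append((i, "unmethylated"))  # converted C -> U -> T
    return calls

# Toy example: reference with three cytosines; only the middle one is methylated.
reference = "ACGTCGATCA"
read      = "ATGTCGATTA"
print(call_methylation(reference, read))
```

Real pipelines must additionally handle strand orientation, incomplete conversion and sequencing errors; this sketch shows only the core comparison.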

Computer simulations and molecular dynamics approaches revealed the atomistic motions associated with the molecular recognition of the histone tail through an allosteric mechanism.[43]

Several types of epigenetic inheritance systems may play a role in what has become known as cell memory;[44] note, however, that not all of these are universally accepted as examples of epigenetics.

Covalent modifications of either DNA (e.g. cytosine methylation and hydroxymethylation) or of histone proteins (e.g. lysine acetylation, lysine and arginine methylation, serine and threonine phosphorylation, and lysine ubiquitination and sumoylation) play central roles in many types of epigenetic inheritance. Therefore, the word "epigenetics" is sometimes used as a synonym for these processes. However, this can be misleading. Chromatin remodeling is not always inherited, and not all epigenetic inheritance involves chromatin remodeling.[45]

Because the phenotype of a cell or individual is affected by which of its genes are transcribed, heritable transcription states can give rise to epigenetic effects. There are several layers of regulation of gene expression. One way that genes are regulated is through the remodeling of chromatin. Chromatin is the complex of DNA and the histone proteins with which it associates. If the way that DNA is wrapped around the histones changes, gene expression can change as well. Chromatin remodeling is accomplished through two main mechanisms: post-translational modification of the amino acids that make up histone proteins, and the addition of methyl groups to the DNA, mostly at CpG sites, converting cytosine to 5-methylcytosine.

Mechanisms of heritability of histone state are not well understood; however, much is known about the mechanism of heritability of DNA methylation state during cell division and differentiation. Heritability of methylation state depends on certain enzymes (such as DNMT1) that have a higher affinity for 5-methylcytosine than for cytosine. If this enzyme reaches a "hemimethylated" portion of DNA (where 5-methylcytosine is in only one of the two DNA strands) the enzyme will methylate the other half.
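
The hemimethylation-copying behaviour described above can be illustrated with a minimal sketch, in which a DNA molecule is reduced to a list of CpG sites and a DNMT1-like step restores the parental pattern after each replication. The function names and the fidelity parameter are illustrative assumptions, not a model of the real enzyme kinetics.

```python
# Minimal sketch of maintenance methylation across cell divisions, assuming
# the DNA is reduced to a list of CpG sites (True = methylated).

import random

def replicate(parent_sites):
    """Semi-conservative replication: the new strand starts unmethylated,
    so every methylated CpG becomes hemimethylated (old, new) = (True, False)."""
    return [(m, False) for m in parent_sites]

def dnmt1(hemimethylated, fidelity=1.0):
    """A DNMT1-like step: because the enzyme prefers hemimethylated CpGs,
    it copies methylation from the old strand onto the new one. A fidelity
    below 1.0 models occasional failures (epimutations)."""
    return [old and (random.random() < fidelity) for old, _ in hemimethylated]

pattern = [True, False, True, True, False]
for _ in range(3):                       # three cell divisions
    pattern = dnmt1(replicate(pattern))  # perfect maintenance
print(pattern)                           # prints [True, False, True, True, False]
```

With perfect fidelity the original pattern is propagated indefinitely; lowering the fidelity makes marks decay over divisions, one simple way to picture epigenetic drift.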

Although histone modifications occur throughout the entire sequence, the unstructured N-termini of histones (called histone tails) are particularly highly modified. These modifications include acetylation, methylation, ubiquitylation, phosphorylation, sumoylation, ribosylation and citrullination. Acetylation is the most highly studied of these modifications. For example, acetylation of the K14 and K9 lysines of the tail of histone H3 by histone acetyltransferase enzymes (HATs) is generally related to transcriptional competence.

One mode of thinking is that this tendency of acetylation to be associated with "active" transcription is biophysical in nature. Because it normally has a positively charged nitrogen at its end, lysine can bind the negatively charged phosphates of the DNA backbone. The acetylation event converts the positively charged amine group on the side chain into a neutral amide linkage. This removes the positive charge, thus loosening the DNA from the histone. When this occurs, complexes like SWI/SNF and other transcriptional factors can bind to the DNA and allow transcription to occur. This is the "cis" model of epigenetic function. In other words, changes to the histone tails have a direct effect on the DNA itself.

Another model of epigenetic function is the "trans" model. In this model, changes to the histone tails act indirectly on the DNA. For example, lysine acetylation may create a binding site for chromatin-modifying enzymes (or transcription machinery as well). This chromatin remodeler can then cause changes to the state of the chromatin. Indeed, a bromodomain (a protein domain that specifically binds acetyl-lysine) is found in many enzymes that help activate transcription, including the SWI/SNF complex. It may be that acetylation acts in this and the previous way to aid in transcriptional activation.

The idea that modifications act as docking modules for related factors is borne out by histone methylation as well. Methylation of lysine 9 of histone H3 has long been associated with constitutively transcriptionally silent chromatin (constitutive heterochromatin). It has been determined that a chromodomain (a domain that specifically binds methyl-lysine) in the transcriptionally repressive protein HP1 recruits HP1 to K9 methylated regions. One example that seems to refute this biophysical model for methylation is that tri-methylation of histone H3 at lysine 4 is strongly associated with (and required for full) transcriptional activation. Tri-methylation in this case would introduce a fixed positive charge on the tail.

It has been shown that the histone lysine methyltransferase (KMT) is responsible for this methylation activity in the pattern of histones H3 & H4. This enzyme utilizes a catalytically active site called the SET domain (Suppressor of variegation, Enhancer of zeste, Trithorax). The SET domain is a 130-amino acid sequence involved in modulating gene activities. This domain has been demonstrated to bind to the histone tail and causes the methylation of the histone.[46]

Differing histone modifications are likely to function in differing ways; acetylation at one position is likely to function differently from acetylation at another position. Also, multiple modifications may occur at the same time, and these modifications may work together to change the behavior of the nucleosome. The idea that multiple dynamic modifications regulate gene transcription in a systematic and reproducible way is called the histone code, although the idea that histone state can be read linearly as a digital information carrier has been largely debunked. One of the best-understood systems that orchestrates chromatin-based silencing is the SIR protein based silencing of the yeast hidden mating type loci HML and HMR.

DNA methylation frequently occurs in repeated sequences, and helps to suppress the expression and mobility of 'transposable elements':[47] Because 5-methylcytosine can be spontaneously deaminated (replacing nitrogen by oxygen) to thymine, CpG sites are frequently mutated and become rare in the genome, except at CpG islands where they remain unmethylated. Epigenetic changes of this type thus have the potential to direct increased frequencies of permanent genetic mutation. DNA methylation patterns are known to be established and modified in response to environmental factors by a complex interplay of at least three independent DNA methyltransferases, DNMT1, DNMT3A, and DNMT3B, the loss of any of which is lethal in mice.[48] DNMT1 is the most abundant methyltransferase in somatic cells,[49] localizes to replication foci,[50] has a 10- to 40-fold preference for hemimethylated DNA and interacts with the proliferating cell nuclear antigen (PCNA).[51]
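
The CpG depletion and CpG-island exception described above underlie the classic computational test for CpG islands: comparing the observed count of CpG dinucleotides with the count expected from base composition alone. The sketch below uses thresholds in the spirit of the commonly cited Gardiner-Garden and Frommer criteria; the exact cutoffs and windowing vary between tools, and the sequences are invented.

```python
# Sketch of a CpG-island test: in most of the genome CpG is depleted
# (observed/expected well below 1), while islands retain a high ratio.

def cpg_stats(seq: str):
    """Return (GC content, observed/expected CpG ratio) for a sequence."""
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    observed = seq.count("CG")               # observed CpG dinucleotides
    expected = (c * g) / n if n else 0       # expected if C and G were independent
    gc_content = (c + g) / n if n else 0
    ratio = observed / expected if expected else 0
    return gc_content, ratio

def looks_like_cpg_island(seq, min_gc=0.5, min_ratio=0.6):
    """Illustrative thresholds in the spirit of Gardiner-Garden & Frommer."""
    gc, ratio = cpg_stats(seq)
    return gc > min_gc and ratio > min_ratio

# CpG-rich stretch vs a CpG-depleted one of the same length:
print(looks_like_cpg_island("CGCGGCGCCGCGGGCGCGCC"))  # prints True
print(looks_like_cpg_island("ATGCTATCATTGCAATAGCT"))  # prints False
```

Real island callers also require a minimum window length (commonly 200 bp or more); that detail is omitted here for brevity.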

By preferentially modifying hemimethylated DNA, DNMT1 transfers patterns of methylation to a newly synthesized strand after DNA replication, and therefore is often referred to as the 'maintenance' methyltransferase.[52] DNMT1 is essential for proper embryonic development, imprinting and X-inactivation.[48][53] To emphasize the difference of this molecular mechanism of inheritance from the canonical Watson-Crick base-pairing mechanism of transmission of genetic information, the term 'epigenetic templating' was introduced.[54] Furthermore, in addition to the maintenance and transmission of methylated DNA states, the same principle could work in the maintenance and transmission of histone modifications and even cytoplasmic (structural) heritable states.[55]

Histones H3 and H4 can also be manipulated through demethylation using histone lysine demethylase (KDM). This recently identified enzyme has a catalytically active site called the Jumonji domain (JmjC). The demethylation occurs when JmjC utilizes multiple cofactors to hydroxylate the methyl group, thereby removing it. JmjC is capable of demethylating mono-, di-, and tri-methylated substrates.[56]

Chromosomal regions can adopt stable and heritable alternative states resulting in bistable gene expression without changes to the DNA sequence. Epigenetic control is often associated with alternative covalent modifications of histones.[57] The stability and heritability of states of larger chromosomal regions are suggested to involve positive feedback where modified nucleosomes recruit enzymes that similarly modify nearby nucleosomes.[58] A simplified stochastic model for this type of epigenetics is found here.[59][60]
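
The positive-feedback idea, in which modified nucleosomes recruit enzymes that similarly modify their neighbours, can be caricatured in a few lines of stochastic simulation. This is only a toy in the spirit of the cited models; the update rule and parameter values are invented for illustration.

```python
# Toy stochastic model of bistable chromatin: an array of nucleosomes, each
# marked ("M") or unmarked ("U"). Recruited conversions (a nucleosome copying
# the state of another) compete with unrecruited random flips.

import random

def step(states, recruit=0.9):
    """One update: with probability `recruit`, a random nucleosome adopts the
    state of another random nucleosome (positive feedback); otherwise it
    flips to a random state (noise)."""
    n = len(states)
    i = random.randrange(n)
    if random.random() < recruit:
        j = random.randrange(n)
        states[i] = states[j]      # recruited conversion
    else:
        states[i] = random.choice("MU")  # spontaneous gain/loss
    return states

random.seed(0)
states = list("MUMUMUMUMU")        # mixed starting array
for _ in range(5000):
    step(states)
print("".join(states))             # typically settles near all-M or all-U
```

With strong recruitment the array tends to spend long stretches near the uniform "all modified" or "all unmodified" states, which is the sense in which such feedback can make a chromosomal state both bistable and heritable through division.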

It has been suggested that chromatin-based transcriptional regulation could be mediated by the effect of small RNAs. Small interfering RNAs can modulate transcriptional gene expression via epigenetic modulation of targeted promoters.[61]

Sometimes a gene, after being turned on, transcribes a product that (directly or indirectly) maintains the activity of that gene. For example, Hnf4 and MyoD enhance the transcription of many liver- and muscle-specific genes, respectively, including their own, through the transcription factor activity of the proteins they encode. RNA signalling includes differential recruitment of a hierarchy of generic chromatin modifying complexes and DNA methyltransferases to specific loci by RNAs during differentiation and development.[62] Other epigenetic changes are mediated by the production of different splice forms of RNA, or by formation of double-stranded RNA (RNAi). Descendants of the cell in which the gene was turned on will inherit this activity, even if the original stimulus for gene-activation is no longer present. These genes are often turned on or off by signal transduction, although in some systems where syncytia or gap junctions are important, RNA may spread directly to other cells or nuclei by diffusion. A large amount of RNA and protein is contributed to the zygote by the mother during oogenesis or via nurse cells, resulting in maternal effect phenotypes. A smaller quantity of sperm RNA is transmitted from the father, but there is recent evidence that this epigenetic information can lead to visible changes in several generations of offspring.[63]

MicroRNAs (miRNAs) are a class of non-coding RNAs that range in size from 17 to 25 nucleotides. miRNAs regulate a large variety of biological functions in plants and animals.[64] As of 2013, about 2,000 miRNAs had been discovered in humans, and these can be found online in a miRNA database.[65] Each miRNA expressed in a cell may target about 100 to 200 messenger RNAs that it downregulates.[66] Most of the downregulation of mRNAs occurs by causing the decay of the targeted mRNA, while some downregulation occurs at the level of translation into protein.[67]
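
A common first-pass way to see why a single miRNA can target on the order of a hundred mRNAs is seed matching: the short "seed" (miRNA nucleotides 2-8) need only find a complementary stretch in an mRNA's 3' UTR. The sketch below is a simplified illustration (real target prediction also weighs site conservation, context and pairing energy), and the sequences are invented, let-7-like examples.

```python
# Sketch of miRNA target prediction by seed matching: scan a 3' UTR for
# sites complementary to the miRNA seed (nucleotides 2-8).

COMPLEMENT = str.maketrans("AUCG", "UAGC")

def seed_sites(mirna: str, utr: str):
    """Return 0-based UTR positions matching the reverse complement
    of the miRNA seed (nucleotides 2-8)."""
    seed = mirna[1:8]                          # nucleotides 2-8 of the miRNA
    site = seed.translate(COMPLEMENT)[::-1]    # reverse complement = target site
    hits, start = [], 0
    while (pos := utr.find(site, start)) != -1:
        hits.append(pos)
        start = pos + 1
    return hits

mirna = "UGAGGUAGUAGGUUGUAUAGUU"     # let-7-like sequence, for illustration
utr   = "AAACUACCUCAGGGAACUACCUCA"   # invented UTR with two seed-matched sites
print(seed_sites(mirna, utr))        # prints [3, 16]
```

Because a seed is only seven nucleotides, complementary sites occur by chance in many UTRs, which is one intuition for the broad target repertoires quoted above.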

It appears that about 60% of human protein-coding genes are regulated by miRNAs.[68] Many miRNAs are themselves epigenetically regulated. About 50% of miRNA genes are associated with CpG islands,[64] which may be repressed by epigenetic methylation. Transcription from methylated CpG islands is strongly and heritably repressed.[69] Other miRNAs are epigenetically regulated by either histone modifications or by combined DNA methylation and histone modification.[64]

In 2011, it was demonstrated that the methylation of mRNA plays a critical role in human energy homeostasis. The obesity-associated FTO gene is shown to be able to demethylate N6-methyladenosine in RNA.[70][71]

sRNAs are small (50-250 nucleotides), highly structured, non-coding RNA fragments found in bacteria. They control gene expression including virulence genes in pathogens and are viewed as new targets in the fight against drug-resistant bacteria.[72] They play an important role in many biological processes, binding to mRNA and protein targets in prokaryotes. Their phylogenetic analyses, for example through sRNA-mRNA target interactions or protein binding properties, are used to build comprehensive databases.[73] sRNA-gene maps based on their targets in microbial genomes are also constructed.[74]

Prions are infectious forms of proteins. In general, proteins fold into discrete units that perform distinct cellular functions, but some proteins are also capable of forming an infectious conformational state known as a prion. Although often viewed in the context of infectious disease, prions are more loosely defined by their ability to catalytically convert other native state versions of the same protein to an infectious conformational state. It is in this latter sense that they can be viewed as epigenetic agents capable of inducing a phenotypic change without a modification of the genome.[75]

Fungal prions are considered by some to be epigenetic because the infectious phenotype caused by the prion can be inherited without modification of the genome. PSI+ and URE3, discovered in yeast in 1965 and 1971, are the two best studied of this type of prion.[76][77] Prions can have a phenotypic effect through the sequestration of protein in aggregates, thereby reducing that protein's activity. In PSI+ cells, the loss of the Sup35 protein (which is involved in termination of translation) causes ribosomes to have a higher rate of read-through of stop codons, an effect that results in suppression of nonsense mutations in other genes.[78] The ability of Sup35 to form prions may be a conserved trait. It could confer an adaptive advantage by giving cells the ability to switch into a PSI+ state and express dormant genetic features normally terminated by stop codon mutations.[79][80][81][82]

In ciliates such as Tetrahymena and Paramecium, genetically identical cells show heritable differences in the patterns of ciliary rows on their cell surface. Experimentally altered patterns can be transmitted to daughter cells. It seems existing structures act as templates for new structures. The mechanisms of such inheritance are unclear, but reasons exist to assume that multicellular organisms also use existing cell structures to assemble new ones.[83][84][85]

Eukaryotic genomes have numerous nucleosomes. Nucleosome position is not random, and determines the accessibility of DNA to regulatory proteins. This determines differences in gene expression and cell differentiation. It has been shown that at least some nucleosomes are retained in sperm cells (where most but not all histones are replaced by protamines). Thus nucleosome positioning is to some degree inheritable. Recent studies have uncovered connections between nucleosome positioning and other epigenetic factors, such as DNA methylation and hydroxymethylation.[86]

Developmental epigenetics can be divided into predetermined and probabilistic epigenesis. Predetermined epigenesis is a unidirectional movement from structural development in DNA to the functional maturation of the protein. "Predetermined" here means that development is scripted and predictable. Probabilistic epigenesis, on the other hand, is a bidirectional structure-function development, with experience and external factors molding development.[87]

Somatic epigenetic inheritance, particularly through DNA and histone covalent modifications and nucleosome repositioning, is very important in the development of multicellular eukaryotic organisms.[86] The genome sequence is static (with some notable exceptions), but cells differentiate into many different types, which perform different functions, and respond differently to the environment and intercellular signalling. Thus, as individuals develop, morphogens activate or silence genes in an epigenetically heritable fashion, giving cells a memory. In mammals, most cells terminally differentiate, with only stem cells retaining the ability to differentiate into several cell types ("totipotency" and "multipotency"). In mammals, some stem cells continue producing new differentiated cells throughout life, such as in neurogenesis, but mammals are unable to regenerate some tissues, for example limbs, which some other animals can. Epigenetic modifications regulate the transition from neural stem cells to glial progenitor cells (for example, differentiation into oligodendrocytes is regulated by the deacetylation and methylation of histones).[88] Unlike animals, plant cells do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. While plants do utilise many of the same epigenetic mechanisms as animals, such as chromatin remodeling, it has been hypothesised that some kinds of plant cells do not use or require "cellular memories", resetting their gene expression patterns using positional information from the environment and surrounding cells to determine their fate.[89]

Epigenetic changes can occur in response to environmental exposure: for example, mice given some dietary supplements have epigenetic changes affecting expression of the agouti gene, which affects their fur color, weight, and propensity to develop cancer.[90][91]

Controversial results from one study suggested that traumatic experiences might produce an epigenetic signal that is capable of being passed to future generations. Mice were trained, using foot shocks, to fear a cherry blossom odor. The investigators reported that the mouse offspring had an increased aversion to this specific odor.[92][93] They suggested epigenetic changes that increase gene expression, rather than changes in the DNA itself, in a gene, M71, that governs the functioning of an odor receptor in the nose that responds specifically to this cherry blossom smell. There were physical changes that correlated with olfactory (smell) function in the brains of the trained mice and their descendants. Several criticisms were reported, citing the study's low statistical power as evidence of some irregularity, such as bias in reporting results.[94] Because of the limited sample size, there is a probability that an effect will not be demonstrated at statistical significance even if it exists. The criticism suggested that the probability that all the experiments reported would show positive results if an identical protocol were followed, assuming the claimed effects exist, is merely 0.4%. The authors also did not indicate which mice were siblings, and treated all of the mice as statistically independent.[95] The original researchers pointed out negative results in the paper's appendix that the criticism omitted in its calculations, and undertook to track which mice were siblings in the future.[96]
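
The power argument behind the 0.4% figure rests on a simple principle: for independent experiments, the probability that all of them reach significance is the product of their individual powers. The per-experiment powers below are invented for illustration and are not those of the cited analysis.

```python
# Illustration of the multiple-experiment power argument: even when every
# experiment individually has around 50% power, the chance that ALL of them
# come out positive is small. Powers here are hypothetical.

from math import prod

powers = [0.6, 0.5, 0.55, 0.5, 0.45, 0.5, 0.4, 0.5]  # hypothetical per-experiment power
p_all_positive = prod(powers)
print(f"{p_all_positive:.4f}")   # prints 0.0037
```

An unbroken run of positive results across many underpowered experiments is therefore itself improbable under the claimed effects, which is the basis of the reporting-bias criticism.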

Epigenetics can affect evolution when epigenetic changes are heritable.[3] A sequestered germ line or Weismann barrier is specific to animals, and epigenetic inheritance is more common in plants and microbes. Eva Jablonka, Marion J. Lamb and Étienne Danchin have argued that these effects may require enhancements to the standard conceptual framework of the modern synthesis and have called for an extended evolutionary synthesis.[97][98][99] Other evolutionary biologists have incorporated epigenetic inheritance into population genetics models and are openly skeptical, stating that epigenetic mechanisms such as DNA methylation and histone modification are genetically inherited under the control of natural selection.[100][101][102]

Two important ways in which epigenetic inheritance can differ from traditional genetic inheritance, with important consequences for evolution, are that rates of epimutation can be much faster than rates of mutation[103] and that epimutations are more easily reversible.[104] In plants, heritable DNA methylation mutations are 100,000 times more likely to occur than DNA mutations.[105] An epigenetically inherited element such as the PSI+ system can act as a "stop-gap", good enough for short-term adaptation that allows the lineage to survive for long enough for mutation and/or recombination to genetically assimilate the adaptive phenotypic change.[106] The existence of this possibility increases the evolvability of a species.

More than 100 cases of transgenerational epigenetic inheritance phenomena have been reported in a wide range of organisms, including prokaryotes, plants, and animals.[107] For instance, Mourning Cloak butterflies will change color through hormone changes in response to experimentation with varying temperatures.[108]

The filamentous fungus Neurospora crassa is a prominent model system for understanding the control and function of cytosine methylation. In this organism, DNA methylation is associated with relics of a genome defense system called RIP (repeat-induced point mutation) and silences gene expression by inhibiting transcription elongation.[109]

The yeast prion PSI is generated by a conformational change of a translation termination factor, which is then inherited by daughter cells. This can provide a survival advantage under adverse conditions. This is an example of epigenetic regulation enabling unicellular organisms to respond rapidly to environmental stress. Prions can be viewed as epigenetic agents capable of inducing a phenotypic change without modification of the genome.[110]

Direct detection of epigenetic marks in microorganisms is possible with single molecule real time sequencing, in which polymerase sensitivity allows for measuring methylation and other modifications as a DNA molecule is being sequenced.[111] Several projects have demonstrated the ability to collect genome-wide epigenetic data in bacteria.[112][113][114][115]

While epigenetics is of fundamental importance in eukaryotes, especially metazoans, it plays a different role in bacteria. Most importantly, eukaryotes use epigenetic mechanisms primarily to regulate gene expression, which bacteria rarely do. However, bacteria make widespread use of postreplicative DNA methylation for the epigenetic control of DNA-protein interactions. Bacteria also use DNA adenine methylation (rather than DNA cytosine methylation) as an epigenetic signal. DNA adenine methylation is important in bacterial virulence in organisms such as Escherichia coli, Salmonella, Vibrio, Yersinia, Haemophilus, and Brucella. In Alphaproteobacteria, methylation of adenine regulates the cell cycle and couples gene transcription to DNA replication. In Gammaproteobacteria, adenine methylation provides signals for DNA replication, chromosome segregation, mismatch repair, packaging of bacteriophage, transposase activity and regulation of gene expression.[110][116] There exists a genetic switch controlling Streptococcus pneumoniae (the pneumococcus) that allows the bacterium to randomly change its characteristics into six alternative states, which could pave the way to improved vaccines. Each form is randomly generated by a phase-variable methylation system. The ability of the pneumococcus to cause deadly infections is different in each of these six states. Similar systems exist in other bacterial genera.[117]

Epigenetics has many and varied potential medical applications.[118] In 2008, the National Institutes of Health announced that $190 million had been earmarked for epigenetics research over the next five years. In announcing the funding, government officials noted that epigenetics has the potential to explain mechanisms of aging, human development, and the origins of cancer, heart disease, mental illness, as well as several other conditions. Some investigators, like Randy Jirtle, PhD, of Duke University Medical Center, think epigenetics may ultimately turn out to have a greater role in disease than genetics.[119]

Direct comparisons of identical twins constitute an optimal model for interrogating environmental epigenetics. In the case of humans with different environmental exposures, monozygotic (identical) twins were epigenetically indistinguishable during their early years, while older twins had remarkable differences in the overall content and genomic distribution of 5-methylcytosine DNA and histone acetylation.[3] The twin pairs who had spent less of their lifetime together and/or had greater differences in their medical histories were those who showed the largest differences in their levels of 5-methylcytosine DNA and acetylation of histones H3 and H4.[120]

Dizygotic (fraternal) and monozygotic (identical) twins show evidence of epigenetic influence in humans.[120][121][122] DNA sequence differences that would be abundant in a singleton-based study do not interfere with the analysis. Environmental differences can produce long-term epigenetic effects, and different developmental monozygotic twin subtypes may be different with respect to their susceptibility to be discordant from an epigenetic point of view.[123]

A high-throughput study (one using technology that surveys extensive genetic markers) focused on epigenetic differences between monozygotic twins, comparing global and locus-specific changes in DNA methylation and histone modifications in a sample of 40 monozygotic twin pairs.[120] In this case, only healthy twin pairs were studied, but a wide range of ages was represented, between 3 and 74 years. One of the major conclusions from this study was that there is an age-dependent accumulation of epigenetic differences between the two siblings of twin pairs. This accumulation suggests the existence of epigenetic drift.

A more recent study, in which 114 monozygotic twins and 80 dizygotic twins were analyzed for the DNA methylation status of around 6000 unique genomic regions, concluded that epigenetic similarity at the time of blastocyst splitting may also contribute to phenotypic similarities in monozygotic co-twins. This supports the notion that the microenvironment at early stages of embryonic development can be quite important for the establishment of epigenetic marks.[124] Congenital genetic disease is well understood, and it is clear that epigenetics can play a role, for example, in the case of Angelman syndrome and Prader-Willi syndrome. These are normal genetic diseases caused by gene deletions or inactivation of the genes, but they are unusually common because individuals are essentially hemizygous because of genomic imprinting, and therefore a single gene knockout is sufficient to cause the disease, whereas most cases would require both copies to be knocked out.[125]

Some human disorders are associated with genomic imprinting, a phenomenon in mammals where the father and mother contribute different epigenetic patterns for specific genomic loci in their germ cells.[126] The best-known case of imprinting in human disorders is that of Angelman syndrome and Prader-Willi syndrome: both can be produced by the same genetic mutation, a partial deletion of chromosome 15q, and the particular syndrome that will develop depends on whether the mutation is inherited from the child's mother or from their father.[127] This is due to the presence of genomic imprinting in the region. Beckwith-Wiedemann syndrome is also associated with genomic imprinting, often caused by abnormalities in maternal genomic imprinting of a region on chromosome 11.

Rett syndrome is underlain by mutations in the MECP2 gene, despite microarray analyses finding no large-scale changes in MeCP2 expression. BDNF is downregulated in the MECP2 mutant, contributing to the Rett syndrome phenotype.

In the Överkalix study, paternal (but not maternal) grandsons[128] of Swedish men who were exposed to famine during preadolescence in the 19th century were less likely to die of cardiovascular disease. If food was plentiful, then diabetes mortality in the grandchildren increased, suggesting that this was a transgenerational epigenetic inheritance.[129] The opposite effect was observed for females: the paternal (but not maternal) granddaughters of women who experienced famine while in the womb (and therefore while their eggs were being formed) lived shorter lives on average.[130]

A variety of epigenetic mechanisms can be perturbed in different types of cancer. Epigenetic alterations of DNA repair genes or cell cycle control genes are very frequent in sporadic (non-germ line) cancers, being significantly more common than germ line (familial) mutations in these sporadic cancers.[131][132] Epigenetic alterations are important in cellular transformation to cancer, and their manipulation holds great promise for cancer prevention, detection, and therapy.[133][134] Several medications which have epigenetic impact are used in several of these diseases. These aspects of epigenetics are addressed in cancer epigenetics.

Addiction is a disorder of the brain's reward system which arises through transcriptional and neuroepigenetic mechanisms and occurs over time from chronically high levels of exposure to an addictive stimulus (e.g., morphine, cocaine, sexual intercourse, gambling, etc.).[135][136][137][138] Transgenerational epigenetic inheritance of addictive phenotypes has been noted to occur in preclinical studies.[139][140]

Transgenerational epigenetic inheritance of anxiety-related phenotypes has been reported in a preclinical study using mice.[141] In this investigation, transmission of paternal stress-induced traits across generations involved small non-coding RNA signals transmitted via the male germline.

Epigenetic inheritance of depression-related phenotypes has also been reported in a preclinical study.[141] Inheritance of paternal stress-induced traits across generations involved small non-coding RNA signals transmitted via the paternal germline.

The two forms of heritable information, namely genetic and epigenetic, are collectively denoted as dual inheritance. Members of the APOBEC/AID family of cytosine deaminases may concurrently influence genetic and epigenetic inheritance using similar molecular mechanisms, and may be a point of crosstalk between these conceptually compartmentalized processes.[142]

Fluoroquinolone antibiotics induce epigenetic changes in mammalian cells through iron chelation. This leads to epigenetic effects through inhibition of α-ketoglutarate-dependent dioxygenases that require iron as a co-factor.[143]

Various pharmacological agents are applied for the production of induced pluripotent stem cells (iPSC) or to maintain the embryonic stem cell (ESC) phenotype via epigenetic approaches. Adult stem cells, such as bone marrow stem cells, have also shown the potential to differentiate into cardiac-competent cells when treated with the G9a histone methyltransferase inhibitor BIX01294.[144][145]

Due to the early stage of epigenetics as a science and the sensationalism surrounding it, surgical oncologist David Gorski and geneticist Adam Rutherford caution against the drawing and proliferation of false and pseudoscientific conclusions by New Age authors such as Deepak Chopra and Bruce Lipton.[146][147]

In Neal Stephenson's 2015 novel Seveneves, survivors of a worldwide holocaust are tasked with seeding new life on a dormant Earth. Rather than create specific breeds of animals to be hunters, scavengers, or prey, species like canids are developed with mutable epigenetic traits, with the intention that the animals would quickly transform into the necessary roles that would be required for an ecosystem to rapidly evolve. Additionally, a race of humans, Moirans, are created to survive in space, with the hope that this subspecies of human would be able to adapt to unforeseeable dangers and circumstances, via an epigenetic process called "going epi".

Read the original here:
Epigenetics - Wikipedia


Regenerative Medicine In Pain Management – Boost Medical

Posted: October 20, 2016 at 1:43 am

TORY MCJUNKIN, MD Co-founder Arizona Pain Specialists Scottsdale, Arizona

PAUL LYNCH, MD Co-founder Arizona Pain Specialists Scottsdale, Arizona

TIMOTHY R. DEER, MD President and CEO The Center for Pain Relief

Clinical Professor of Anesthesiology, West Virginia School of Medicine, West Virginia University, Charleston, West Virginia

JACK ANDERSON, MD Fellow Arizona Pain Specialists Scottsdale, Arizona

RAHUL DESAI, MD Director Epic Imaging Sports Medicine and Interventional Pain Clinic Portland, Oregon

Drs. McJunkin, Lynch, and Deer have all received research funding from Mesoblast Limited. Dr. Desai has received consulting and research funding from Harvest Technologies and MiMedx Group.

Regenerative medicine, where the body regenerates or rebuilds itself, is a relatively new and rapidly evolving front in the field of interventional pain management. Although stem cell therapy has garnered much of the attention over the past several decades, multiple other regenerative medicine modalities have also caught the public's attention. As experts in our field, we should ascertain if and when to offer these treatments to our patients.

Stem Cells

Stem cells are characterized by the ability to renew themselves through cell division and differentiate into a diverse range of specialized cell types. There are multiple sources of stem cells, including human embryos, which contain pluripotent stem cells that can differentiate into any cell line. Human embryonic stem cell use has been and currently remains an ethically controversial topic. Induced pluripotent stem cells are generated by taking cells, such as skin cells, from a person and then injecting a small number of specific genes or molecules into the cells, which converts the cells into stem cells. A concern with this source of stem cells is introducing an oncogene, which can result in cancer. Adult stem cells are another category. Example sources of adult stem cells include bone marrow, peripheral blood, placental blood, placental tissue, and adipose tissue. Most adult stem cells are multipotent, meaning they can differentiate into some, but not all, cell types.

Stem cell technology has been used clinically since the 1960s, in the form of bone marrow transplants to treat conditions like leukemia. Since then, much research has focused on stem cell therapy and its application to a variety of medical conditions. For interventional pain applications, ongoing research is examining stem cell therapy for the treatment of multiple chronic pain conditions, such as osteoarthritis and degenerative disk disease. Crevensten et al studied the effects of injecting mesenchymal stem cells into degenerative disks in rats and found a trend of increased disk height, suggesting an increase in matrix synthesis in the study group compared with the control subjects.1 Mesoblast is conducting the second phase of research on mesenchymal stem cells from bone marrow for degenerative disk disease in human subjects. Many interventional pain physicians are hopeful that stem cells will prove to be an effective treatment for conditions such as diskogenic pain, which currently has few treatment choices.

Future research should assess the relative effectiveness of the different stem cell sources to treat different types of pain conditions. This will help guide the choice of the source of stem cells to use and the types of conditions to treat.

Amniotic Membrane

Because of its unique properties and availability, the human amniotic membrane has recently been studied and is currently being used in regenerative medicine. The human amniotic membrane is composed of 2 cell types: human amnion epithelial cells and human amnion mesenchymal stromal cells. Both types display low immunogenicity and exhibit characteristic properties of stem cells, and both are able to differentiate in vitro into the major mesodermal lineages.2 Human amniotic membranes have been used extensively in ophthalmology and plastic surgery for the treatment of corneal and cutaneous wounds, respectively.3 Recent research has focused on applying human amniotic membranes in other disciplines. Intraoperative placement of amniotic tissue at the site of laminectomy in dogs was effective in reducing epidural fibrosis and scar adhesion.4 Using a human amnion tissue patch after a right L4-L5 decompression procedure significantly reduced both scar tissue formation and adherence to the underlying dura in the patient.5

The application of human amniotic membranes within the field of interventional pain management is currently a topic of great interest. Much of the current research is investigating its role in the treatment of tissue damage and inflammation, such as tendonosis and tendonitis. After an intralesional injection of ovine amniotic epithelial cells into equine superficial digital flexor tendon defects, the amniotic epithelial cells participated in the deposition of new collagen fibers in the repairing area.6 Amniotic epithelial cells injected into calcaneal tendon defects in sheep resulted in a high number of reparative cells in active proliferation that were accumulating collagen within the extracellular matrix.7 Injecting amniotic epithelial cells into Achilles tendon defects in sheep resulted in much better structural and mechanical recoveries than control tendon defects during the early phase of healing.

Additional research in the field of human amniotic membrane applications in interventional pain management is needed, but animal model research studies and anecdotal reports of its use in human subjects are promising.

Platelet Rich Plasma

Platelet-rich plasma (PRP) therapy was first introduced in the 1970s and has been used in many medical specialties, including orthopedic surgery, plastic surgery, sports medicine, wound care, and pain management, since the 1990s. PRP therapy involves the injection of concentrated platelets, autologous growth factors, and secretory proteins into the region of interest. PRP has been used for numerous conditions. In interventional pain management, it is commonly used for acute and chronic conditions such as tendinopathy, tendonosis, muscle strain, muscle fibrosis, ligamentous injury, arthritis, arthrofibrosis, articular cartilage defects, meniscal injury, and chronic synovitis or joint inflammation (Figure).

The PRP concentrate is made from the patient's own blood. After the blood is centrifuged, it separates into the serum (top layer), the platelets and white blood cells (buffy coat, or middle layer), and the red blood cells (bottom layer). The middle layer contains a platelet concentration of at least 1 million platelets/uL (normal range: 150,000 to 350,000 platelets/uL) and a 3- to 5-fold increase in growth factor concentrations.10 There is significant variability between PRP centrifuge systems, each yielding varying products, and there is no clear comparative evidence to date indicating a superior product. Some PRP protocols include white blood cells, whereas others involve activation with thrombin or calcium, and the platelet concentrations vary as well. The optimal concentration of platelets for PRP is debated. Giusti et al examined the optimal concentration of platelets for promoting angiogenesis in human endothelial cells and found 1.5 million platelets/uL to be optimal.11 With the system used in our practice, 20 cc of blood will yield approximately 3 cc of concentrate, adequate for small target areas like an epicondyle or acromioclavicular joint, and 60 cc of blood will yield 7 to 10 cc of PRP for larger applications, such as a hip or shoulder injection.
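The quoted volumes imply a substantial fold-concentration of platelets. A back-of-the-envelope sketch (the baseline platelet count and capture fraction below are assumed values for illustration, not figures from the article):

```python
# Back-of-the-envelope PRP concentration estimate.
# Assumptions (not from the article): a mid-range baseline platelet count
# and that roughly 80% of the drawn platelets end up in the concentrate.

BASELINE_PLT_PER_UL = 250_000  # assumed; normal range is 150,000-350,000/uL
UL_PER_CC = 1_000              # 1 cc = 1 mL = 1,000 uL

def prp_platelets_per_ul(blood_cc, prp_cc, capture_fraction=0.8):
    """Estimated platelet concentration (platelets/uL) in the PRP layer."""
    total_platelets = blood_cc * UL_PER_CC * BASELINE_PLT_PER_UL
    return total_platelets * capture_fraction / (prp_cc * UL_PER_CC)

# 20 cc of whole blood concentrated into ~3 cc of PRP.
small_draw = prp_platelets_per_ul(20, 3)
# Under these assumptions the concentrate comfortably exceeds the
# 1 million platelets/uL threshold cited in the text.
```

Real centrifuge systems vary widely, as the text notes, so this is only a plausibility check on the volumes and concentrations quoted.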

Platelets synthesize and release more than 1,100 biologically active proteins, including those that promote tissue regeneration.12 PRP is thought to enhance the recruitment, proliferation, and differentiation of cells involved in tissue regeneration to promote healing.10 Studies have demonstrated that PRP positively affects gene expression and matrix synthesis in tendons. Cell proliferation and total collagen production are increased in human tenocytes cultured in PRP. In vivo, a platelet concentrate injected into the hematoma 6 hours after creation of a defect in a rat Achilles tendon increased tendon callus strength and stiffness. Muscles treated with insulin-like growth factor 1 and basic fibroblast growth factor showed improved healing and significantly increased fast-twitch and tetanus strength.

Over the past decade, numerous published studies involving human subjects have emerged investigating the use of PRP for conditions such as lateral epicondylitis, patellar tendinopathy, Achilles tendinopathy, rotator cuff tendinopathy, rotator cuff tears, medial collateral ligament and anterior cruciate ligament tears, and osteoarthritis of the knee.13 Although most of the studies examined small populations, the results have been very promising, with many demonstrating significant pain relief and functional improvement. Future studies are needed in this emerging field to further delineate the optimal constituents and concentrations of the PRP solution and more clearly define the role of PRP in interventional pain management.

Conclusions

Osteoarthritis and other degenerative conditions, which are largely a function of aging, are a major area of concern for pain physicians. Regenerative medicine is an exciting and rapidly evolving branch of medicine with the potential to let us turn back the clock and regenerate worn-out tissues. Based on current data, it is reasonable to integrate these regenerative techniques into treatment algorithms, usually after other traditional treatments have failed. As research progresses, if more conclusive evidence demonstrates superior efficacy over other modalities, the use of regenerative medicine techniques would be justified earlier in the treatment algorithm.

References

1. Crevensten G, Walsh AJ, Ananthakrishnan D, et al. Intervertebral disc cell therapy for regeneration: mesenchymal stem cell implantation in rat intervertebral discs. Ann Biomed Eng. 2004;32(3):430-434.
2. Díaz Prado S, Muiños López E, Hermida Gómez T, et al. Human amniotic membrane as an alternative source of stem cells for regenerative medicine. Differentiation. 2011;81(3):162-171.
3. Gruss JS, Jirsch DW. Human amniotic membrane: a versatile wound dressing. Can Med Assoc J. 1978;118(10):1237-1246.
4. Tao H, Fan H. Implantation of amniotic membrane to reduce postlaminectomy epidural adhesions. Eur Spine J. 2009;18(8):1202-1212.
5. Ploska P. Summary of clinical outcome related to the use of human amnion tissue allograft in right L4-L5 decompression procedure. Jan 27, 2010. Applied Biologics. http://appliedbiologics.com/images/pub/ploska.pdf. Accessed October 25, 2012.
6. Muttini A, Valbonetti L, Abate M, et al. Ovine amniotic epithelial cells: in vitro characterization and transplantation into equine superficial digital flexor tendon spontaneous defects. Res Vet Sci. 2012 Sep 3.

Read the rest here:
Regenerative Medicine In Pain Management - Boost Medical


Biotechnology – Wikipedia

Posted: October 20, 2016 at 1:40 am


Biotechnology is the use of living systems and organisms to develop or make products, or "any technological application that uses biological systems, living organisms or derivatives thereof, to make or modify products or processes for specific use" (UN Convention on Biological Diversity, Art. 2).[1] Depending on the tools and applications, it often overlaps with the (related) fields of bioengineering, biomedical engineering, biomanufacturing, molecular engineering, etc.

For thousands of years, humankind has used biotechnology in agriculture, food production, and medicine.[2] The term is largely believed to have been coined in 1919 by Hungarian engineer Károly Ereky. In the late 20th and early 21st century, biotechnology has expanded to include new and diverse sciences such as genomics, recombinant gene techniques, applied immunology, and development of pharmaceutical therapies and diagnostic tests.[2]

The wide concept of "biotech" or "biotechnology" encompasses a wide range of procedures for modifying living organisms according to human purposes, going back to the domestication of animals, the cultivation of plants, and "improvements" to these through breeding programs that employ artificial selection and hybridization. Modern usage also includes genetic engineering as well as cell and tissue culture technologies. The American Chemical Society defines biotechnology as the application of biological organisms, systems, or processes by various industries to learning about the science of life and the improvement of the value of materials and organisms such as pharmaceuticals, crops, and livestock.[3] According to the European Federation of Biotechnology, biotechnology is the integration of natural science and organisms, cells, parts thereof, and molecular analogues for products and services.[4] Biotechnology also draws on the pure biological sciences (animal cell culture, biochemistry, cell biology, embryology, genetics, microbiology, and molecular biology). In many instances, it is also dependent on knowledge and methods from outside the sphere of biology.

Conversely, modern biological sciences (including even concepts such as molecular ecology) are intimately entwined with, and heavily dependent on, the methods developed through biotechnology and what is commonly thought of as the life sciences industry. In this view, biotechnology is laboratory research and development that uses bioinformatics and biochemical engineering to explore, extract, and produce high value-added products from living organisms and other sources of biomass. Such products may be planned (for example, reproduced by biosynthesis), forecasted, formulated, developed, manufactured, and marketed with the aims of sustainable operations, recouping the large initial investment in R&D, and securing durable patent rights for exclusive sales. In the pharmaceutical branch in particular, this also entails obtaining national and international approval based on animal and human trials, to guard against undetected side effects or safety concerns.[5][6][7]

By contrast, bioengineering is generally thought of as a related field that more heavily emphasizes higher systems approaches (not necessarily the altering or using of biological materials directly) for interfacing with and utilizing living things. Bioengineering is the application of the principles of engineering and natural sciences to tissues, cells and molecules. This can be considered as the use of knowledge from working with and manipulating biology to achieve a result that can improve functions in plants and animals.[8] Relatedly, biomedical engineering is an overlapping field that often draws upon and applies biotechnology (by various definitions), especially in certain sub-fields of biomedical and/or chemical engineering such as tissue engineering, biopharmaceutical engineering, and genetic engineering.

Although not normally what first comes to mind, many forms of human-derived agriculture clearly fit the broad definition of "utilizing a biotechnological system to make products." Indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise.

Agriculture has been theorized to have become the dominant way of producing food since the Neolithic Revolution. Through early biotechnology, the earliest farmers selected and bred the best suited crops, having the highest yields, to produce enough food to support a growing population. As crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by-products could effectively fertilize, restore nitrogen, and control pests. Throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants one of the first forms of biotechnology.

A classic example of these early processes is the fermentation of beer.[9] Introduced in early Mesopotamia, Egypt, China, and India, brewing still uses the same basic biological methods: enzymes in malted grains convert starch into sugar, and specific yeasts are then added to ferment those sugars, breaking down the carbohydrates into alcohols such as ethanol. Later, other cultures developed lactic acid fermentation, which allowed the fermentation and preservation of other forms of food, such as soy sauce. Fermentation was also used in this period to produce leavened bread. Although the process of fermentation was not fully understood until Louis Pasteur's work in 1857, it is still the first use of biotechnology to convert one food source into another.

Before the time of Charles Darwin's work and life, animal and plant scientists had already used selective breeding. Darwin added to that body of work with his scientific observations about the ability of selection to change species. These accounts contributed to Darwin's theory of natural selection.[10]

For thousands of years, humans have used selective breeding to improve production of crops and livestock to use them for food. In selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. For example, this technique was used with corn to produce the largest and sweetest crops.[11]

In the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological culture in an industrial process: the fermentation of corn starch using Clostridium acetobutylicum to produce acetone, which the United Kingdom desperately needed to manufacture explosives during World War I.[12]

Biotechnology has also led to the development of antibiotics. In 1928, Alexander Fleming discovered the antibacterial effects of the mold Penicillium. His work led Howard Florey, Ernst Boris Chain, and Norman Heatley to purify the antibiotic compound formed by the mold, giving us what we today know as penicillin. In 1940, penicillin became available for medicinal use to treat bacterial infections in humans.[11]

The field of modern biotechnology is generally thought of as having been born in 1971 when Paul Berg's (Stanford) experiments in gene splicing had early success. Herbert W. Boyer (Univ. Calif. at San Francisco) and Stanley N. Cohen (Stanford) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. The commercial viability of a biotechnology industry was significantly expanded on June 16, 1980, when the United States Supreme Court ruled that a genetically modified microorganism could be patented in the case of Diamond v. Chakrabarty.[13] Indian-born Ananda Chakrabarty, working for General Electric, had modified a bacterium (of the Pseudomonas genus) capable of breaking down crude oil, which he proposed to use in treating oil spills. (Chakrabarty's work did not involve gene manipulation but rather the transfer of entire organelles between strains of the Pseudomonas bacterium.)

Revenue in the industry is expected to grow by 12.9% in 2008. Another factor influencing the biotechnology sector's success is improved intellectual property rights legislation, and enforcement, worldwide, as well as strengthened demand for medical and pharmaceutical products to cope with an ageing, and ailing, U.S. population.[14]

Rising demand for biofuels is expected to be good news for the biotechnology sector, with the Department of Energy estimating ethanol usage could reduce U.S. petroleum-derived fuel consumption by up to 30% by 2030. The biotechnology sector has allowed the U.S. farming industry to rapidly increase its supply of corn and soybeansthe main inputs into biofuelsby developing genetically modified seeds which are resistant to pests and drought. By boosting farm productivity, biotechnology plays a crucial role in ensuring that biofuel production targets are met.[15]

Biotechnology has applications in four major industrial areas, including health care (medical), crop production and agriculture, non food (industrial) uses of crops and other products (e.g. biodegradable plastics, vegetable oil, biofuels), and environmental uses.

For example, one application of biotechnology is the directed use of organisms for the manufacture of organic products (examples include beer and milk products). Another example is using naturally present bacteria by the mining industry in bioleaching. Biotechnology is also used to recycle, treat waste, clean up sites contaminated by industrial activities (bioremediation), and also to produce biological weapons.

A series of derived terms has been coined to identify the several branches of biotechnology.

The investment and economic output of all of these types of applied biotechnologies is collectively termed the "bioeconomy".

In medicine, modern biotechnology finds applications in areas such as pharmaceutical drug discovery and production, pharmacogenomics, and genetic testing (or genetic screening).

Pharmacogenomics (a combination of pharmacology and genomics) is the technology that analyses how genetic makeup affects an individual's response to drugs.[17] It deals with the influence of genetic variation on drug response in patients by correlating gene expression or single-nucleotide polymorphisms with a drug's efficacy or toxicity.[18] By doing so, pharmacogenomics aims to develop rational means to optimize drug therapy, with respect to the patients' genotype, to ensure maximum efficacy with minimal adverse effects.[19] Such approaches promise the advent of "personalized medicine"; in which drugs and drug combinations are optimized for each individual's unique genetic makeup.[20][21]
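The core correlation idea can be sketched in a few lines: group patients by genotype at a single SNP and compare the average drug response between groups. Everything in this sketch (the SNP genotypes, the response scores, the cohort itself) is hypothetical; real pharmacogenomic studies use large cohorts and formal statistical tests.

```python
# Minimal sketch of genotype-stratified drug response, the idea behind
# correlating SNPs with a drug's efficacy. All data are hypothetical.
from statistics import mean

# (genotype at one SNP, measured drug response on a 0-1 scale)
cohort = [
    ("CC", 0.82), ("CC", 0.75), ("CC", 0.90),
    ("CT", 0.55), ("CT", 0.60),
    ("TT", 0.20), ("TT", 0.30), ("TT", 0.25),
]

def mean_response_by_genotype(patients):
    """Group responses by genotype and average each group."""
    groups = {}
    for genotype, response in patients:
        groups.setdefault(genotype, []).append(response)
    return {g: mean(rs) for g, rs in groups.items()}

by_genotype = mean_response_by_genotype(cohort)
# A consistent gap between genotype groups would suggest the SNP
# predicts efficacy and could inform dosing or drug choice.
```

In this invented cohort the CC carriers respond far better than TT carriers, which is the kind of pattern pharmacogenomics seeks, and then validates, before tailoring therapy to genotype.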

Biotechnology has contributed to the discovery and manufacturing of traditional small molecule pharmaceutical drugs as well as drugs that are the product of biotechnology itself: biopharmaceutics. Modern biotechnology can be used to manufacture existing medicines relatively easily and cheaply. The first genetically engineered products were medicines designed to treat human diseases. To cite one example, in 1978 Genentech developed synthetic humanized insulin by joining its gene with a plasmid vector inserted into the bacterium Escherichia coli. Insulin, widely used for the treatment of diabetes, was previously extracted from the pancreas of abattoir animals (cattle and/or pigs). The resulting genetically engineered bacterium enabled the production of vast quantities of synthetic human insulin at relatively low cost.[22][23] Biotechnology has also enabled emerging therapeutics like gene therapy. The application of biotechnology to basic science (for example through the Human Genome Project) has also dramatically improved our understanding of biology and, as our scientific knowledge of normal and disease biology has increased, our ability to develop new medicines to treat previously untreatable diseases has increased as well.[23]

Genetic testing allows the genetic diagnosis of vulnerabilities to inherited diseases, and can also be used to determine a child's parentage (genetic mother and father) or in general a person's ancestry. In addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk of developing genetic disorders. Genetic testing identifies changes in chromosomes, genes, or proteins.[24] Most of the time, testing is used to find changes that are associated with inherited disorders. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder. As of 2011 several hundred genetic tests were in use.[25][26] Since genetic testing may open up ethical or psychological problems, genetic testing is often accompanied by genetic counseling.

Genetically modified crops ("GM crops", or "biotech crops") are plants used in agriculture, the DNA of which has been modified with genetic engineering techniques. In most cases the aim is to introduce a new trait to the plant which does not occur naturally in the species.

Examples in food crops include resistance to certain pests,[27] diseases,[28] stressful environmental conditions,[29] resistance to chemical treatments (e.g. resistance to a herbicide[30]), reduction of spoilage,[31] or improving the nutrient profile of the crop.[32] Examples in non-food crops include production of pharmaceutical agents,[33]biofuels,[34] and other industrially useful goods,[35] as well as for bioremediation.[36][37]

Farmers have widely adopted GM technology. Between 1996 and 2011, the total surface area of land cultivated with GM crops had increased by a factor of 94, from 17,000 square kilometers (4,200,000 acres) to 1,600,000 km2 (395 million acres).[38] 10% of the world's crop lands were planted with GM crops in 2010.[38] As of 2011, 11 different transgenic crops were grown commercially on 395 million acres (160 million hectares) in 29 countries such as the USA, Brazil, Argentina, India, Canada, China, Paraguay, Pakistan, South Africa, Uruguay, Bolivia, Australia, Philippines, Myanmar, Burkina Faso, Mexico and Spain.[38]
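The area figures quoted above are internally consistent, as a quick unit check shows (the km²-to-acre factor is the standard conversion, not a value from the source):

```python
# Sanity check on the GM-crop area figures quoted in the text.
KM2_TO_ACRES = 247.105  # standard conversion: 1 km^2 is about 247.105 acres

area_1996_km2 = 17_000
area_2011_km2 = 1_600_000

growth_factor = area_2011_km2 / area_1996_km2  # ~94, as stated
acres_1996 = area_1996_km2 * KM2_TO_ACRES      # ~4.2 million acres
acres_2011 = area_2011_km2 * KM2_TO_ACRES      # ~395 million acres
```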

Genetically modified foods are foods produced from organisms that have had specific changes introduced into their DNA with the methods of genetic engineering. These techniques have allowed for the introduction of new crop traits as well as a far greater control over a food's genetic structure than previously afforded by methods such as selective breeding and mutation breeding.[39] Commercial sale of genetically modified foods began in 1994, when Calgene first marketed its Flavr Savr delayed-ripening tomato.[40] To date, most genetic modification of foods has primarily focused on cash crops in high demand by farmers, such as soybean, corn, canola, and cotton seed oil. These have been engineered for resistance to pathogens and herbicides and for better nutrient profiles. GM livestock have also been developed experimentally, although as of November 2013 none were on the market.[41]

There is a scientific consensus[42][43][44][45] that currently available food derived from GM crops poses no greater risk to human health than conventional food,[46][47][48][49][50] but that each GM food needs to be tested on a case-by-case basis before introduction.[51][52][53] Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe.[54][55][56][57] The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation.[58][59][60][61]

GM crops also provide a number of ecological benefits, if not used in excess.[62] However, opponents have objected to GM crops per se on several grounds, including environmental concerns, whether food produced from GM crops is safe, whether GM crops are needed to address the world's food needs, and economic concerns raised by the fact that these organisms are subject to intellectual property law.

Industrial biotechnology (known mainly in Europe as white biotechnology) is the application of biotechnology for industrial purposes, including industrial fermentation. It includes the practice of using cells such as micro-organisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper and pulp, textiles and biofuels.[63] In doing so, biotechnology uses renewable raw materials and may contribute to lowering greenhouse gas emissions and moving away from a petrochemical-based economy.[64]

The environment can be affected by biotechnologies, both positively and adversely. Vallero and others have argued that the difference between beneficial biotechnology (e.g. bioremediation to clean up an oil spill or hazardous chemical leak) and the adverse effects stemming from biotechnological enterprises (e.g. flow of genetic material from transgenic organisms into wild strains) can be seen as the difference between applications and implications, respectively.[65] Cleaning up environmental wastes is an example of an application of environmental biotechnology, whereas loss of biodiversity or loss of containment of a harmful microbe are examples of environmental implications of biotechnology.

The regulation of genetic engineering concerns the approaches taken by governments to assess and manage the risks associated with the use of genetic engineering technology and the development and release of genetically modified organisms (GMOs), including genetically modified crops and genetically modified fish. There are differences in the regulation of GMOs between countries, with some of the most marked differences occurring between the USA and Europe.[66] Regulation varies within a given country depending on the intended use of the products of the genetic engineering. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety.[67] The European Union differentiates between approval for cultivation within the EU and approval for import and processing. While only a few GMOs have been approved for cultivation in the EU, a number of GMOs have been approved for import and processing.[68] The cultivation of GMOs has triggered a debate about the coexistence of GM and non-GM crops. Depending on the coexistence regulations, the incentives for cultivating GM crops differ.[69]

In 1988, after prompting from the United States Congress, the National Institute of General Medical Sciences (NIGMS), part of the National Institutes of Health, instituted a funding mechanism for biotechnology training. Universities nationwide compete for these funds to establish Biotechnology Training Programs (BTPs). Each successful application is generally funded for five years and must then be competitively renewed. Graduate students in turn compete for acceptance into a BTP; if accepted, stipend, tuition and health insurance support are provided for two or three years during the course of their Ph.D. thesis work. Nineteen institutions offer NIGMS-supported BTPs.[70] Biotechnology training is also offered at the undergraduate level and in community colleges.

The literature on biodiversity and GE food/feed consumption has sometimes resulted in animated debate regarding the suitability of the experimental designs, the choice of statistical methods, or the public accessibility of data. Such debate, even when constructive and part of the natural process of review by the scientific community, has frequently been distorted by the media and often used politically and inappropriately in anti-GE crop campaigns.

Domingo, José L.; Bordonaba, Jordi Giné (2011). "A literature review on the safety assessment of genetically modified plants" (PDF). Environment International. 37: 734–742. doi:10.1016/j.envint.2011.01.003. In spite of this, the number of studies specifically focused on safety assessment of GM plants is still limited. However, it is important to remark that for the first time, a certain equilibrium in the number of research groups suggesting, on the basis of their studies, that a number of varieties of GM products (mainly maize and soybeans) are as safe and nutritious as the respective conventional non-GM plant, and those raising still serious concerns, was observed. Moreover, it is worth mentioning that most of the studies demonstrating that GM foods are as nutritional and safe as those obtained by conventional breeding, have been performed by biotechnology companies or associates, which are also responsible of commercializing these GM plants. Anyhow, this represents a notable advance in comparison with the lack of studies published in recent years in scientific journals by those companies.

Krimsky, Sheldon (2015). "An Illusory Consensus behind GMO Health Assessment" (PDF). Science, Technology, & Human Values: 1–32. doi:10.1177/0162243915598381. I began this article with the testimonials from respected scientists that there is literally no scientific controversy over the health effects of GMOs. My investigation into the scientific literature tells another story.

And contrast:

Panchin, Alexander Y.; Tuzhikov, Alexander I. (January 14, 2016). "Published GMO studies find no evidence of harm when corrected for multiple comparisons". Critical Reviews in Biotechnology. doi:10.3109/07388551.2015.1130684. ISSN 0738-8551. Here, we show that a number of articles some of which have strongly and negatively influenced the public opinion on GM crops and even provoked political actions, such as GMO embargo, share common flaws in the statistical evaluation of the data. Having accounted for these flaws, we conclude that the data presented in these articles does not provide any substantial evidence of GMO harm.

The presented articles suggesting possible harm of GMOs received high public attention. However, despite their claims, they actually weaken the evidence for the harm and lack of substantial equivalency of studied GMOs. We emphasize that with over 1783 published articles on GMOs over the last 10 years it is expected that some of them should have reported undesired differences between GMOs and conventional crops even if no such differences exist in reality.
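The multiple-comparisons argument quoted above can be illustrated with a back-of-envelope calculation. This is only a hypothetical sketch, not the authors' actual analysis: it assumes each of the 1,783 studies performs an independent test at the conventional alpha = 0.05 significance level under a true null hypothesis of no difference.

```python
# Expected number of spurious "significant" findings among many studies,
# assuming (hypothetically) each runs one independent test at alpha = 0.05
# when no real difference between GM and conventional crops exists.
n_studies = 1783   # figure quoted from Panchin & Tuzhikov
alpha = 0.05       # conventional significance threshold (an assumption here)

expected_false_positives = n_studies * alpha
print(round(expected_false_positives))   # 89
```

Under these assumptions, roughly 89 studies would be expected to report a difference purely by chance, which is why isolated positive findings need correction for multiple comparisons before they count as evidence of harm.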

and

Yang, Y.T.; Chen, B. (2016). "Governing GMOs in the USA: science, law and public health". Journal of the Science of Food and Agriculture. 96: 1851–1855. doi:10.1002/jsfa.7523. It is therefore not surprising that efforts to require labeling and to ban GMOs have been a growing political issue in the USA (citing Domingo and Bordonaba, 2011).

Overall, a broad scientific consensus holds that currently marketed GM food poses no greater risk than conventional food... Major national and international science and medical associations have stated that no adverse human health effects related to GMO food have been reported or substantiated in peer-reviewed literature to date.

Despite various concerns, today, the American Association for the Advancement of Science, the World Health Organization, and many independent international science organizations agree that GMOs are just as safe as other foods. Compared with conventional breeding techniques, genetic engineering is far more precise and, in most cases, less likely to create an unexpected outcome.

Pinholster, Ginger (October 25, 2012). "AAAS Board of Directors: Legally Mandating GM Food Labels Could "Mislead and Falsely Alarm Consumers"". American Association for the Advancement of Science. Retrieved February 8, 2016.

"REPORT 2 OF THE COUNCIL ON SCIENCE AND PUBLIC HEALTH (A-12): Labeling of Bioengineered Foods" (PDF). American Medical Association. 2012. Retrieved March 19, 2016. Bioengineered foods have been consumed for close to 20 years, and during that time, no overt consequences on human health have been reported and/or substantiated in the peer-reviewed literature.

GM foods currently available on the international market have passed safety assessments and are not likely to present risks for human health. In addition, no effects on human health have been shown as a result of the consumption of such foods by the general population in the countries where they have been approved. Continuous application of safety assessments based on the Codex Alimentarius principles and, where appropriate, adequate post market monitoring, should form the basis for ensuring the safety of GM foods.

"Genetically modified foods and health: a second interim statement" (PDF). British Medical Association. March 2004. Retrieved March 21, 2016. In our view, the potential for GM foods to cause harmful health effects is very small and many of the concerns expressed apply with equal vigour to conventionally derived foods. However, safety concerns cannot, as yet, be dismissed completely on the basis of information currently available.

When seeking to optimise the balance between benefits and risks, it is prudent to err on the side of caution and, above all, learn from accumulating knowledge and experience. Any new technology such as genetic modification must be examined for possible benefits and risks to human health and the environment. As with all novel foods, safety assessments in relation to GM foods must be made on a case-by-case basis.

Members of the GM jury project were briefed on various aspects of genetic modification by a diverse group of acknowledged experts in the relevant subjects. The GM jury reached the conclusion that the sale of GM foods currently available should be halted and the moratorium on commercial growth of GM crops should be continued. These conclusions were based on the precautionary principle and lack of evidence of any benefit. The Jury expressed concern over the impact of GM crops on farming, the environment, food safety and other potential health effects.

The Royal Society review (2002) concluded that the risks to human health associated with the use of specific viral DNA sequences in GM plants are negligible, and while calling for caution in the introduction of potential allergens into food crops, stressed the absence of evidence that commercially available GM foods cause clinical allergic manifestations. The BMA shares the view that there is no robust evidence to prove that GM foods are unsafe but we endorse the call for further research and surveillance to provide convincing evidence of safety and benefit.

Link:
Biotechnology - Wikipedia
