
Lung Institute | Stem Cell Treatment Center in TN for Lung …

Posted: November 9, 2016 at 5:46 am

If you have chronic obstructive pulmonary disease (COPD), emphysema, chronic bronchitis, interstitial lung disease or pulmonary fibrosis, you may need to find a treatment option beyond supplemental oxygen. Many healthcare providers will recommend daily medications delivered through an inhaler or nebulizer, which can help mask lung disease symptoms. Some patients with more advanced conditions may even consider a lung transplant as an extreme answer to their lung disease. However, alternative treatment options are available, like stem cell therapy from the Lung Institute.

The stem cells used by the Lung Institute are autologous, meaning they come from the patient's own body, and can be found in venous blood, bone marrow and/or the patient's peripheral blood cells. Stem cells have the ability to self-renew and replicate, and are capable of forming into any type of tissue or organ in the body. This ability to morph into the cells that surround them is why stem cells are often referred to as the body's system to promote healing: they do just that. For example, if a person's lung tissue is damaged, the body will send stem cells to that location, and the stem cells will promote tissue healing. This process, however, can be slow-moving.

Stem cell treatment involves extracting adult stem cells from the blood or bone marrow, separating and treating the cells, then reintroducing them into the area of the body that needs them. This speeds up the body's natural regeneration process. These procedures should be performed in a clinical setting under the supervision of a trained professional.

The Lung Institute is a leading global provider of innovative regenerative medicine technologies for the treatment of debilitating lung and pulmonary conditions. We are committed to an individualized patient-centric approach, which consistently provides the highest quality of care and produces positive outcomes. By applying modern-day best practices to the growing field of regenerative medicine, the Lung Institute is improving lives.

Our office in Tennessee is located just outside of Nashville in Franklin. The treatment is minimally invasive and usually takes three days of outpatient therapy. Most of our patients travel to Tennessee to receive the treatment and enjoy their free time taking in the sights and culture that Nashville has to offer.

If you or a loved one is looking for stem cell treatments for lung disease in Tennessee, the Lung Institute can help. Contact one of our Personal Care Coordinators for a free consultation by calling (800) 729-3065.


Des Moines, Iowa Stem Cell Transplants, Clive, Grimes …

Posted: November 8, 2016 at 4:48 am

Each year, it seems that patients from Des Moines and all over the world are more at risk than ever for life-threatening and chronic illnesses like COPD, arthritis, diabetes and cardiovascular disease. Traditional medicine seems to have a difficult time keeping up.

Des Moines is definitely part of the stem cell trend. Researchers from the University of Iowa explained that they took skin stem cells from adult humans and retrained them to act as if they were pancreas cells. The pancreas is the organ whose failure causes diabetes, a dangerous condition that leaves the body unable to process sugar. The American Diabetes Association's top national expert called the Iowa research a cutting-edge approach.

In spite of the billions invested in stem cell research in Des Moines and the many potential benefits of stem cell therapies, the United States is still in the clinical trial phase; these therapies have not yet been legally approved.

Whether you live in Clive, Grimes, Waukee, Johnston, Sioux Center, North Liberty, Westwood, Linden Heights, Waterbury, Salisbury Oaks, Greenwood or Ingersoll Park, you can now access stem cell treatment in Costa Rica.

The Stem Cells Transplant Institute of Costa Rica specializes in the legal treatment of critical limb ischemia, erectile dysfunction, cardiovascular disease, knee injury, chronic obstructive pulmonary disease, multiple sclerosis, lupus, osteoarthritis, rheumatoid arthritis, Alzheimer's disease, Parkinson's disease, myocardial infarction, diabetes and neuropathy.

The Stem Cells Transplant Institute in Costa Rica provides cutting-edge stem cell procedures that stimulate and support the body's natural recovery mechanisms. State-of-the-art injection and intravenous therapies can eliminate the need for surgery and provide alternative therapies to our patients from Des Moines who struggle with degenerative diseases.


Delaware (Stem Cell) – what-when-how

Posted: November 3, 2016 at 5:42 am

WITH THE FOUNDING of the Delaware Biotechnology Institute in 1999, the state and its academic and industrial leaders made the development of biotechnology within the state, including stem cell research, a priority. Delaware has no legislation in place to regulate or fund stem cell research, though the Delaware Biotechnology Institute at the University of Delaware is a statewide collaborative network that encourages research in biotechnology, including stem cell research.

At present, no federal legislation in the United States is in place to regulate stem cell research (apart from an executive order disallowing federal funding for the creation of new embryonic stem cell lines and limiting federally funded research to specific existing lines); this leaves each state responsible for determining its own policy and funding for stem cell research.

Although the Delaware Senate passed it in March, in June 2007 the Delaware House of Representatives defeated a bill (State Bill 5) that would have provided oversight and regulation of research in regenerative medicine and human cloning, and established regulation of stem cell research on adult, embryonic, and umbilical cord blood cells. The defeat of this bill left Delaware with no laws governing stem cell research; therefore, research already being done could continue.

The University of Delaware, located in Newark, was founded in 1743. The university offers a variety of academic programs in science and medicine, as well as other academic majors. One of the research groups in the chemical engineering department is focused on stem cell differentiation and understanding the cellular processes of regulation. Current research includes cancer biology and genetically linked illness. In cancer biology, researchers are studying embryonic development and cancer tumor growth processes in both mouse and human models; the role of bone matrix in the progression of cancer following metastasis from primary sites, with the possibility of molecular drug development for prevention or control of metastasis; the role of cell adhesion molecules in metastasis; fast-growing versus slow-growing cell types for drug development for cancer inhibition; tissue engineering with polymeric and organic-inorganic hybrid materials; and the synthesis of model peptides for the activation of pharmaceuticals at the target organ.

The university also participates in research with industry partners through the Delaware Biotechnology Institute to work on gene editing and repair that may lead to a cure for a number of devastating hereditary diseases.

There is also clinical collaboration with Christiana Care Health Services through a National Institutes of Health National Center for Research Resources IDeA Network of Biomedical Research Excellence grant to Delaware. The core of the program is focused on innovative research in biomedical imaging and on infrastructure to support expanded cancer research in Delaware. The network brings together state, academic, and industrial stakeholders to perform research and improve educational opportunities as a means of enhancing the biotechnology industry and promoting jobs within the state.

The Delaware Biotechnology Institute was established at the University of Delaware in 1999 as a center of excellence in biotechnology and life sciences. The institute was created through funding from the state of Delaware, the National Institutes of Health, the National Science Foundation, and other government and private sources. The institute's research facility occupies land adjacent to the Delaware Technology Park, with laboratory space dedicated to plants, animals, human health, biomaterials, and bioinformatics, as well as office space and instrumentation. Though the institute is an academic division of the University of Delaware, it brings together professionals from other institutions statewide, including Delaware State University, Delaware Technical and Community College, Wesley College, Christiana Care Health System, Helen F. Graham Cancer Center, Alfred I. DuPont Hospital for Children, and Nemours Bio-medical Research for collaboration.

The institute brings together all the academic disciplines for the development of new technology. The field of biomaterials is an emerging technology area creating clinical therapies, medications, and bioelectronic devices through the networking of scientists in physical sciences and those in materials science and engineering. The institute's current research includes biosurface modifications to promote or prevent protein absorption, rapid separation and sensing of proteins, and cell and tissue engineering. An example of the type of integrated research occurring is the creation of nanofibers by controlling polymer shaping by the university's department of Materials Science and Engineering and then the biology department's investigation of cell response, growth, and proliferation within the polymers.

The Delaware Technology Park, located in Newark, Delaware, is built on 40 acres adjacent to the University of Delaware and is dedicated to the creation of jobs and the growth of biotechnology and other high-tech industries in an environment with proximity (within 35 miles) to 30 educational institutions, as well as providing networking opportunities with other businesses in the park.


Molecular Genetics (Stanford Encyclopedia of Philosophy)

Posted: October 30, 2016 at 9:51 pm

The term molecular genetics is now redundant because contemporary genetics is thoroughly molecular. Genetics is not made up of two sciences, one molecular and one non-molecular. Nevertheless, practicing biologists still use the term. When they do, they are typically referring to a set of laboratory techniques aimed at identifying and/or manipulating DNA segments involved in the synthesis of important biological molecules. Scientists often talk and write about the application of these techniques across a broad swath of biomedical sciences. For them, molecular genetics is an investigative approach that involves the application of laboratory methods and research strategies. This approach presupposes basic knowledge about the expression and regulation of genes at the molecular level.

Philosophical interest in molecular genetics, however, has centered, not on investigative approaches or laboratory methods, but on theory. Early philosophical research concerned the basic theory about the make-up, expression, and regulation of genes. Most attention centered on the issue of theoretical reductionism. The motivating question concerned whether classical genetics, the science of T. H. Morgan and his collaborators, was being reduced to molecular genetics. With the rise of developmental genetics and developmental biology, philosophical attention has subsequently shifted towards critiquing a fundamental theory associated with contemporary genetics. The fundamental theory concerns not just the make-up, expression, and regulation of genes, but also the overall role of genes within the organism. According to the fundamental theory, genes and DNA direct all life processes by providing the information that specifies the development and functioning of organisms.

This article begins by providing a quick review of the basic theory associated with molecular genetics. Since this theory incorporates ideas from the Morgan school of classical genetics, it is useful to sketch its development from Morgan's genetics. After reviewing the basic theory, I examine four questions driving philosophical investigations of molecular genetics. The first question asks whether classical genetics has been or will be reduced to molecular genetics. The second question concerns the gene concept and whether it has outlived its usefulness. The third question regards the tenability of the fundamental theory. The fourth question, which hasn't yet attracted much philosophical attention, asks why so much biological research is centered on genes and DNA.

The basic theory associated with classical genetics provided explanations of the transmission of traits from parents to offspring. Morgan and his collaborators drew upon a conceptual division between the genetic makeup of an organism, termed its genotype, and its observed manifestation called its phenotype (see the entry on the genotype/phenotype distinction). The relation between the two was treated as causal: genotype in conjunction with environment produces phenotype. The theory explained the transmission of phenotypic differences from parents to offspring by following the transmission of gene differences from generation to generation and attributing the presence of alternative traits to the presence of alternative forms of genes.

I will illustrate the classical mode of explanatory reasoning with a simple historical example involving the fruit fly Drosophila melanogaster. It is worth emphasizing that the mode of reasoning illustrated by this historical example is still an important mode of reasoning in genetics today, including what is sometimes called molecular genetics.

Genes of Drosophila come in pairs, located in corresponding positions on the four pairs of chromosomes contained within each cell of the fly. The eye-color mutant known as purple is associated with a gene located on chromosome II. Two copies of this gene, existing either in mutated or normal wild-type form, are located at the same locus (corresponding position) in the two second-chromosomes. Alternative forms of a gene occurring at a locus are called alleles. The transmission of genes from parent to offspring is carried out in a special process of cellular division called meiosis, which produces gamete cells containing one chromosome from each paired set. The half set of chromosomes from an egg and the half set from a sperm combine during fertilization, which gives each offspring a copy of one gene from each gene pair of its female parent and a copy of one gene from each gene pair of its male parent.

Explanations of the transmission of traits relate the presence of alternative genes (genotype) to the presence of alternative observable traits (phenotype). Sometimes this is done in terms of dominant/recessive relations. Purple eye-color, for example, is recessive to the wild-type character (red eye-color). This means that flies with two copies of the purple allele (the mutant form of the gene, which is designated pr) have purple eyes, but heterozygotes, flies with one copy of the purple allele and one copy of the wild-type allele (designated +), have normal wild-type eyes (as do flies with two copies of the wild-type allele). See Table 1.

To see how the classical theory explains trait transmission, consider a cross of red-eyed females with purple-eyed males that was carried out by Morgan's collaborators. The offspring all had red eyes. So the trait of red eyes was passed from the females to all their offspring even though the offspring's male parents had purple eyes. The classical explanation of this inheritance pattern proceeds, as do all classical explanations of inheritance patterns, in two stages.

The first stage accounts for the transmission of genes and goes as follows (Figure 1): each offspring received one copy of chromosome II from each parent. The maternally derived chromosomes must have contained the wild-type allele (since both second-chromosomes of every female parent used in the experiment contained the wild-type allele -- this was known on the basis of previous experiments). The paternally derived chromosomes must have contained the purple allele (since both second-chromosomes of every male parent contained the purple allele -- this was inferred from the knowledge that purple is recessive to red eye-color). Hence, all offspring were heterozygous (pr / +). Having explained the genetic makeup of the progeny by tracing the transmission of genes from parents to offspring, we can proceed to the second stage of the explanation: drawing an inference about phenotypic appearances. Since all offspring were heterozygous (pr / +), and since purple is recessive to wild-type, all offspring had red eye-color (the wild-type character). See Figure 1.
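To make this two-stage pattern of reasoning concrete, here is a minimal sketch in Python. It is my illustration, not anything from the genetics literature: the allele symbols pr and + follow the text above, while the function names and the encoding of dominance are invented.

```python
from itertools import product

# Stage 1: transmission. Each parent contributes one allele from its
# chromosome-II pair to every offspring.
def offspring_genotypes(mother, father):
    """All genotypes that can result from one cross, for a single gene pair."""
    return {tuple(sorted(pair)) for pair in product(mother, father)}

# Stage 2: phenotypic inference. Purple (pr) is recessive to wild type (+),
# so a single + allele suffices for red eyes.
def eye_color(genotype):
    return "red (wild type)" if "+" in genotype else "purple"

# Morgan's cross: red-eyed females (+/+) crossed with purple-eyed males (pr/pr).
for genotype in offspring_genotypes(("+", "+"), ("pr", "pr")):
    print(genotype, "->", eye_color(genotype))
# Only ('+', 'pr') is possible: every offspring is heterozygous and red-eyed.
```

Note that, exactly as in the classical explanation, nothing in the sketch depends on what the gene is made of or how it acts; only the distribution of allele differences and the dominance relation do any work.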

Notice that the reasoning here does not depend on identifying the material make-up, mode of action, or general function of the underlying gene. It depends only on the ideas that copies of the gene are distributed from generation to generation and that the difference in the gene (i.e., the difference between pr and +), whatever this difference is, causes the phenotypic difference. The idea that the gene is the difference maker needs to be qualified: differences in the gene cause phenotypic differences in particular genetic and environmental contexts. This idea is so crucial and so often overlooked that it merits articulation as a principle (Waters 1994):

Difference principle: differences in a classical gene cause uniform phenotypic differences in particular genetic and environmental contexts.

It is also worth noting that the difference principle provides a means to explain the transmission of phenotypic characteristics from one generation to the next without explaining how these characteristics are produced in the process of an organism's development. This effectively enabled classical geneticists to develop a science of heredity without answering questions about development.

The practice of classical genetics included the theoretical analysis of complicated transmission patterns involving the recombination of phenotypic traits. Analyzing these patterns yielded information about basic biological processes such as chromosomal mechanics, as well as information about the linear arrangement of genes in linkage groups. These theoretical explanations did not depend on ideas about what genes are, how genes are replicated, what genes do, or how differences in genes bring about differences in phenotypic traits.

Research in molecular biology and genetics has yielded answers to the basic questions left unanswered by classical genetics about the make-up of genes, the mechanism of gene replication, what genes do, and the way that gene differences bring about phenotypic differences. These answers are couched in terms of molecular level phenomena and they provide much of the basic theory associated with molecular genetics.

What is a gene? This question is dealt with at further length in section 4 of this article, but a quick answer suffices for present purposes: genes are linear sequences of nucleotides in DNA molecules. Each DNA molecule consists of a double chain of nucleotides. There are four kinds of nucleotides in DNA: guanine, cytosine, thymine, and adenine. The pair of nucleotide chains in a DNA molecule twist around one another in the form of a double helix. The two chains in the helix are bound by hydrogen bonds between nucleotides from adjacent chains. The hydrogen bonding is specific so that a guanine in one chain is always located next to cytosine in the adjacent chain (and vice-versa) and thymine in one chain is always located next to adenine (and vice-versa). Hence, the linear sequence of nucleotides in one chain of nucleotides in a DNA molecule is complementary to the linear sequence of nucleotides in the other chain of the DNA molecule. A gene is a segment of nucleotides in one of the chains of a DNA molecule. Of course, not every string of nucleotides in DNA is a gene; segments of DNA are identified as genes according to what they do (see below).

How are genes replicated? The idea that genes are segments in a DNA double helix provides a straightforward answer to this question. Genes are faithfully replicated when the paired chains of a DNA molecule unwind and new chains are formed alongside the separating strands by the pairing of complementary nucleotides. When the process is complete, two copies of the original double helix have been formed and hence the genes in the original DNA molecule have been effectively replicated.
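The pairing rule and the replication story can both be captured in a few lines. This is a toy sketch under obvious simplifications (real replication is an enzymatic process on antiparallel strands); the sequence and function name are invented.

```python
# Watson-Crick pairing: G <-> C and A <-> T, as described above.
PAIR = {"G": "C", "C": "G", "A": "T", "T": "A"}

def complement(strand):
    """The nucleotide chain that pairs with the given chain."""
    return "".join(PAIR[base] for base in strand)

gene = "ATGGCATTC"                 # an invented nucleotide sequence
print(complement(gene))            # TACCGTAAG

# Replication in miniature: unwinding the duplex and building a new
# complement along each separated chain yields two identical duplexes,
# because complementing twice returns the original sequence.
assert complement(complement(gene)) == gene
```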

What do genes do? Roughly speaking, genes serve as templates in the synthesis of RNA molecules. The result is that the linear sequence of nucleotides in a newly synthesized RNA molecule corresponds to the linear sequence of nucleotides in the DNA segment used as the template. Different RNA molecules play different functional roles in the cell, and many RNA molecules play the role of template in the synthesis of polypeptide molecules. Newly synthesized polypeptides are linear sequences of amino acids that constitute proteins and proteins play a wide variety of functional roles in the cell and organism (and environment). The ability of a polypeptide to function in specific ways depends on the linear sequence of amino acids of which it is formed. And this linear sequence corresponds to the linear sequence of triplets of nucleotides in RNA (codons), which in turn corresponds to the linear sequence of nucleotides in segments of DNA, and this latter segment is the gene for that polypeptide.
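The template relationships just described can be sketched in the same toy style. The codon table below is a three-entry excerpt of the real 64-entry genetic code, and the T-to-U substitution is the usual textbook shortcut for transcription from the complementary template strand; everything else is invented for illustration.

```python
# A tiny excerpt of the genetic code: mRNA codons -> amino acids.
CODON_TABLE = {"AUG": "Met", "GCA": "Ala", "UUC": "Phe"}

def transcribe(coding_strand):
    """RNA built on the template strand matches the coding strand,
    with uracil (U) in place of thymine (T)."""
    return coding_strand.replace("T", "U")

def translate(mrna):
    """Read the RNA three nucleotides (one codon) at a time."""
    return [CODON_TABLE[mrna[i:i + 3]] for i in range(0, len(mrna), 3)]

mrna = transcribe("ATGGCATTC")     # 'AUGGCAUUC'
print(translate(mrna))             # ['Met', 'Ala', 'Phe']
```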

How do differences in genes bring about differences in phenotypic traits? The modest answer given above to the question What do genes do? provides the basis for explaining how differences in genes can bring about differences in phenotypic traits. A difference in the nucleotide sequence of a gene will result in a difference in the nucleotide sequence of RNA molecules, which in turn can result in a difference in the amino acid sequence of a polypeptide. Differences in the linear sequences of amino acids in polypeptides (and in the linear sequences of nucleotides in functional RNA molecules) can affect the roles they play in the cell and organism, sometimes having an effect that is observable as a phenotypic difference. The mutations (differences in genes) identified by the Morgan group (e.g., the purple mutation) have been routinely identified as differences in nucleotide sequences in DNA.
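Continuing the sketch, a single-nucleotide difference between two alleles propagates through transcription and translation to a difference in the polypeptide. The snippet restates the toy genetic code so that it runs on its own; whether the changed amino acid surfaces as an observable phenotypic difference depends, as the text says, on the roles the molecules play in the cell.

```python
CODON_TABLE = {"AUG": "Met", "GCA": "Ala", "GUA": "Val", "UUC": "Phe"}

def protein(coding_strand):
    """Coding strand -> mRNA -> amino acid sequence (toy genetic code)."""
    mrna = coding_strand.replace("T", "U")
    return [CODON_TABLE[mrna[i:i + 3]] for i in range(0, len(mrna), 3)]

wild_type = "ATGGCATTC"   # -> Met-Ala-Phe
mutant    = "ATGGTATTC"   # a single C -> T substitution in the second codon

print(protein(wild_type))  # ['Met', 'Ala', 'Phe']
print(protein(mutant))     # ['Met', 'Val', 'Phe']: one amino acid differs
```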

The modest answer to the question What do genes do? is that they code for or determine the linear sequences in RNA molecules and polypeptides synthesized in the cell. (Even this modest answer needs to be qualified because RNA molecules are often spliced and edited in ways that affect the linear sequence of amino acids in the eventual polypeptide product.) But biologists have offered a far less modest answer as well. The bolder answer is part of a sweeping, fundamental theory. According to this theory, genes are fundamental entities that direct the development and functioning of organisms by producing proteins that in turn regulate all the important cellular processes. It is often claimed that genes provide the information, the blueprint, or the program for an organism. It is useful to distinguish this sweeping, fundamental theory about the allegedly fundamental role of genes from the modest, basic theory about what genes do with respect to the synthesis of RNA and polypeptides.

Philosophers of science have been intrigued by ideals of reductionism and the grand scheme that all science will one day be reduced to a universal science of fundamental physics (see the entry on inter-theory relations in physics for philosophical and scientific concepts of reductionism in the context of physical science). Philosophical reductionists believe that scientific knowledge progresses when higher-level sciences (e.g., chemistry) are reduced to lower-level sciences (e.g., physics). The so-called received view of scientific knowledge, codified in Nagel (1961) and Hempel (1966), promoted reductionism as a central ideal for science, and confidently asserted that much progress had been made in the reduction of chemistry to physics. Nagel constructed a formal model of reduction and applied it to illuminate how the science of thermodynamics, which was couched in terms of higher-level concepts such as pressure and temperature, was allegedly reduced to statistical mechanics, couched in terms of the lower-level concepts of Newtonian dynamics such as force and mean kinetic energy. In 1969, Schaffner claimed that the same kind of advance was now taking place in genetics, and that the science of classical genetics was being reduced to an emerging science of molecular genetics. Schaffner's claim, however, was quickly challenged by Hull. Other philosophers of biology developed Hull's anti-reductionist arguments and a near consensus developed that classical genetics was not and would not be reduced to molecular genetics. Although the philosophical case for anti-reductionism has been challenged, many philosophers still assume that the anti-reductionist account of genetics provides an exemplar for anti-reductionist analyses of other sciences.

Reductionism has many meanings. For example, the phrase genetic reductionism concerns the idea that all biological phenomena are caused by genes, and hence presupposes an ontological sense of reductionism according to which one kind of micro-entity (in this case, gene) exclusively causes a variety of higher-level phenomena (in this case, biological features, cultural phenomena, and so forth). But this is not the meaning of reductionism at issue in the philosophical literature about the reduction of classical genetics. This literature is more concerned with epistemology than metaphysics. The concept of reductionism at issue is Nagel's concept of theoretical reduction. (See Sarkar 1992 and Schaffner 1993 for discussions of alternative conceptions of reduction.) According to Nagel's concept, the reduction of one science to another science entails the reduction of the central theory of one science to the central theory of the other. Nagel believed that this kind of theoretical reduction led to progressive changes in scientific knowledge. He formulated two formal requirements for theoretical reductions.

Nagel's first formal requirement was that the laws of the reduced theory must be derivable from the laws and associated coordinating definitions of the reducing theory. This derivability requirement was intended to capture the idea that the explanatory principles (or laws) of the reducing theory ought to explain the explanatory principles (or laws) of the reduced theory. Nagel's second formal requirement, the connectability requirement, was that all essential terms of the reduced theory must either be contained within or be appropriately connected to the terms of the reducing theory by way of additional assumptions. The connectability requirement is presupposed by the derivability requirement, but making it explicit helps emphasize an important task and potential stumbling block for carrying out theoretical reduction.

Schaffner (1969) modified Nagel's model by incorporating the idea that what the reducing theory actually derives (and hence explains) is a corrected version of the reduced theory, not the original theory. He argued that this revised model better captured reductions in the physical sciences. He claimed his revised model could also be used to show how a corrected version of classical genetics was being reduced to a new theory of physicochemical science called molecular genetics. Hull (1974) countered that classical genetics was not being reduced, at least not according to the model of reduction being applied by Schaffner. Hull argued that genetics did not exemplify Nagelian reduction because the fundamental terms of classical genetics could not be suitably connected to expressions couched in terms of DNA.

Most philosophers writing on genetics and reductionism have argued that molecular genetics has not and will not reduce classical genetics (e.g., see Wimsatt 1976a, Darden and Maull 1977, Kitcher 1984, Rosenberg 1985 and 1994, Dupré 1993, and Burian 1996). Two objections to Schaffner's reductionist thesis have been most persuasive: the unconnectability objection and the gory details objection. The unconnectability objection claims that the terminology of classical genetics cannot be redefined at the molecular level in terms of DNA. This objection effectively claims that Nagel's second formal requirement, the connectability requirement, cannot be satisfied. The gory details objection alleges that molecular genetics cannot and will not explain classical genetics or better explain the phenomena that are already explained by the classical theory. This objection relates to Nagel's first formal requirement, the derivability requirement. But the gory details objection goes philosophically deeper because it implies that even if the explanatory principles of classical genetics could be derived from the explanatory principles of molecular genetics, the derivations would not be explanatory.

The most rigorous formulation of the unconnectability objection can be found in the early writings of Rosenberg who once contended that there is an unbridgeable conceptual gap between the classical and molecular theories of genetics (1985, 1994). In support of this contention, Rosenberg argued that relations between the gene concept of classical genetics and the concepts of molecular genetics are hopelessly complicated many-many relations that will forever frustrate any attempt to systematically connect the two theories. Rosenberg began his analysis by pointing out that in classical genetics, genes are identified by way of their phenotypic effects. Classical geneticists identified the gene for purple eye-color, for example, by carrying out carefully orchestrated breeding experiments and following the distribution of eye-color phenotypes in successive generations of a laboratory population. The reason classical genetics will never be reduced to a molecular-level science, according to Rosenberg (1985), is that there is no manageable connection between the concept of a Mendelian phenotype and that of a molecular gene:

The pathway to red eye pigment production begins at many distinct molecular genes and proceeds through several alternative branched pathways. The pathway from the [molecular] genes also contains redundant, ambiguous, and interdependent paths. If we give a biochemical characterization of the gene for red eye color either by appeal to the parts of its pathway of synthesis, or by appeal to the segments of DNA that it begins with, our molecular description of this gene will be too intricate to be of any practical explanatory upshot. (Rosenberg 1985, p. 101)

Rosenberg concluded that since the relation between molecular genes and Mendelian phenotypes is exceedingly complex, the connection between any molecular concept and the Mendelian gene concept must also be exceedingly complex, thereby blocking any systematic, reductive explanation of classical genetics in terms of molecular-level theory.

The gory details objection can be traced back to the writings of Putnam (1965) and Fodor (1968) who argued against reductionism of the mental on the basis that psychological functions are multiply-realized. This objection against reductionism was further developed in the context of genetics, most thoroughly by Kitcher (e.g., see Kitcher 1984, 1999, 2001). Following Hull, Kitcher assumes that classical genetics is transmission genetics. The classical theory, according to Kitcher, explains the transmission of phenotypic traits, not the development of phenotypic traits in individual organisms. And transmission phenomena, on Kitcher's account, are best explained at the level of cytology: The distribution of genes to gametes is to be explained, not by rehearsing the gory details of the reshuffling of the molecules, but through the observation that chromosomes are aligned in pairs just prior to the meiotic division, and that one chromosome from each matched pair is transmitted to each gamete. (Kitcher 1984, p. 370). Kitcher suggests that the pairing and separation of chromosomes belong to a natural kind of pair separation processes which are heterogeneous from the molecular perspective because different kinds of forces are responsible for bringing together and pulling apart different paired entities. The separation of paired entities, he claims, may occur because of the action of electromagnetic forces or even nuclear forces; but it is easy to think of examples in which the separation is effected by the action of gravity. (Kitcher 1984, p. 350)

The image of genetics that emerges from the anti-reductionist literature is of a two-tiered science composed of two discrete theoretical discourses, one grounded in principles about entities at the cytological level (such as chromosomes) and the other grounded in principles about entities at the molecular level (such as nucleotide sequences in DNA). Anti-reductionists believe some phenomena, including transmission of genes, are best explained by a theory grounded at the cytological level and other phenomena, including the expression of genes, are best explained by a theory grounded at the molecular level. Although Kitcher argues that classical genetics provides the best explanation in an objective sense, some anti-reductionists (e.g., Rosenberg 1985, 1994) believe that the obstacles to reduction are merely practical. Rosenberg (1985, 1994) appealed to the concept of supervenience to argue that in principle, molecular genetics would provide the best explanations. But he argued that in practice, classical genetics provides the best explanation of transmission phenomena, in the sense that this is the best explanation available to creatures with our cognitive limitations. Subsequently, however, Rosenberg changed his position on this issue, largely on the grounds that technological advances in information storage and processing "may substantially enhance our capacity to understand macromolecular processes and their combinations" (Rosenberg 2006, p. 14).

Despite philosophically significant differences in their views about the ultimate basis of the irreducibility of classical genetics, the image of biological knowledge that emerges from the antireductionists' writings is similar. The biological world consists of different domains of phenomena and each domain is best explained at a particular level of theoretical discourse. Hence, the ideal structure for biology is akin to a layer-cake, with tiers of theories, each of which provides the best possible account of its domain of phenomena. Biological sciences such as classical genetics that are couched in terms of higher levels of organization should persist, secure from the reductive grasp of molecular science, because their central theories (or patterns of reasoning) explain domains of phenomena that are best explained at levels higher than the molecular level.

The anti-reductionist consensus has not gone unchallenged (see Sarkar 1989, 1992 and 1998, Schaffner 1993, and Waters 1990 and 2000). According to critics, the chief objections supporting the consensus are mistaken. The unconnectability objection rests on the assumption that classical genetics took the relationships between genes and phenotypic traits to be simple one-to-one relationships. But classical geneticists knew better. Consider what Sturtevant, one of Morgan's star students and collaborators, had to say about genes and eye color:

The difference between normal red eyes and colorless (white) ones in Drosophila is due to a difference in a single gene. Yet red is a very complex color, requiring the interaction of at least five (and probably of very many more) different genes for its production. And these genes are quite independent, each chromosome bearing some of them. Moreover, eye-color is indirectly dependent upon a large number of other genes such as those on which the life of the fly depends. We can then, in no sense identify a given gene with the red color of the eye, even though there is a single gene differentiating it from the colorless eye. So it is for all characters (my emphasis, quoted from Carlson 1988, p. 69)

This quotation suggests that the relationship between gene and eye-color in classical genetics exhibited the same complexity that Rosenberg discussed at the molecular level (compare this quotation to the passage from Rosenberg 1985 quoted in section 3.2). According to this critique of the unconnectability objection, it is not the case that genotype-phenotype relationships appear simple and uniform at the level of classical genetics and complicated and dis-unified at the molecular level. The situation appears similarly complex at both levels of analysis (Waters 1990).

Classical genetics nevertheless finds a simple way to explain transmission phenomena by appealing to the difference principle, according to which particular differences in particular genes cause particular differences in phenotypic traits in particular contexts (see section 2.1). Sturtevant alludes to this principle in the first sentence of the quotation above and again in the emphasized clause. So the question arises, can this relationship be captured at the molecular level? And the answer is yes. The differences used by classical geneticists to explain inheritance patterns have been routinely identified at the molecular level by contemporary geneticists.

According to this critique, the gory details objection also fails. This objection claims that biologists cannot improve upon the classical explanations of transmission phenomena by citing molecular details. The cytological level allegedly provides the best level of explanation because explanations at this level uniformly account for a wide range of cases that would look heterogeneous from a molecular perspective. Consider Kitcher's formulation of this objection. Kitcher believes that to explain is to unify (1989). It follows that the best explanation of a class of phenomena is the explanation that accounts for the class in a uniform way. Kitcher claims meiosis exemplifies this kind of situation. The uniformity of pair-separation processes is evident at the cytological level, but is lost in the gory details at the molecular level where the process may occur because of the action of electromagnetic forces or even of nuclear forces (Kitcher 1984, p. 350). But it is unclear what Kitcher could have in mind. The molecular mechanisms underlying the pairing and separation of chromosomes are remarkably uniform in creatures ranging from yeast to human beings; it is not the case that some involve electromagnetic forces and others involve nuclear forces. Kitcher's claim that it is easy to think of examples in which the separation is effected by the action of gravity has no basis in what molecular biologists have learned about the pairing and separation of chromosomes.

Meiosis is an unpromising candidate to illustrate the idea that what appears uniform at the level of classical genetics turns out to be heterogeneous at the molecular level. But this idea is illustrated by other genetic phenomena. Consider the phenomenon of genetic dominance. In classical genetics, all examples of complete dominance are treated alike for the purposes of explaining transmission phenomena. But contemporary genetics reveals that there are several very different mechanisms underlying different instances of dominance. According to Kitcher's unificationist theory of scientific explanation, the classical account of dominance provides an objectively better basis for explaining transmission phenomena because it provides a more unified organization of the phenomena. But this would imply that the shallow explanations of classical genetics are objectively preferable to the deeper explanations provided by the molecular theory (Waters 1990).

Although Nagel's concept of theoretical reduction marks a common starting point for discussions about the apparent reduction of classical genetics, much of the literature on reduction is aimed at seeking a better understanding of the nature of reduction by seeking to replace Nagel's concept with a more illuminating one. This is true of the anti-reductionists, who seek to clarify why molecular genetics cannot reduce classical genetics, as well as those who have been more sympathetic to reductionism. Hence, there are two levels of discourse in the literature examining the question of whether molecular genetics is reducing classical genetics. One level concerns what is happening in the science of genetics. The other concerns more abstract issues about the nature of (epistemological) reduction.

The abstract level of discourse began with Schaffner's idea that what is reduced is not the original theory, but rather a corrected version of the original theory. Wimsatt (1976a) offers a more ambitious modification. He rejects the assumption that scientific theories are sets of law-like statements and that explanations are arguments in which the phenomena to-be-explained are derived from laws. Instead of relying on these assumptions, Wimsatt uses Salmon's account of explanation (Salmon 1971) to examine claims that molecular genetics offered reductive explanations. Kitcher (1984) also rejects the account of theorizing underlying Nagel's concept of reduction. He constructs a new concept of reductive explanation based on his own idea of what effectively constitutes a scientific theory and his unificationist account of scientific explanation (1989). Likewise, Sarkar (1998) rejects the account of theories and explanation presupposed in Nagel's concept of reduction. In fact, he explicitly avoids relying on any particular account of scientific theories or theoretical explanation. Instead, he assumes that reductive explanations are explanations without specifying what an explanation is, and then seeks to identify the features that set reductive explanations apart from other explanations.

Wimsatt, Kitcher, and Sarkar seek to replace Nagel's conception of reduction with a conception that does not assume that scientific explanation involves subsumption under universal laws. Weber (2005), however, seeks to replace Nagel's conception with one that retains the idea that reductive explanation involves subsumption under laws of the reducing science. What Weber rejects is the idea that reductionism in biology involves explaining higher-level biological laws. He argues that, with some rare exceptions, biological sciences don't have laws. He contends that reductionism in biology involves explaining biological phenomena directly in terms of physical laws. Hence, he rejects the "layer-cake" conception of reduction implicit in Nagel's account.

The literature about reduction and molecular genetics has influenced philosophers' thinking about reduction in other sciences. For example, Kitcher's concept of reduction, which he uses to explain why molecular genetics cannot reduce classical genetics, has subsequently been employed by Hardcastle (1992) in her examination of the relationship between psychology and neuroscience. On the other side, Sober develops and extends the criticism of Kitcher's gory details objection (section 3.3) by re-examining the arguments of Putnam (1967, 1975) and Fodor (1968, 1975) on multiple-realizability.

Sober (1999) argues that higher-level sciences can describe patterns invisible at lower levels, and hence might offer more general explanations. But he insists that description should not be confused with explanation. He maintains that although physics might not be able to describe all the patterns, it can nevertheless explain any singular occurrence that a higher-level science can explain. Higher-level sciences might provide more "general" explanations, but physics provides "deeper" ones. He suggests that which explanation is better is in the eye of the beholder.

The discussion has gone full circle. The multiple-realizability argument being criticized by Sober was based on abstract considerations in the context of philosophy of mind. Philosophers of biology drew on this literature to construct the gory details objection against the idea that molecular genetics is reducing classical genetics. Other philosophers argued that this objection did not stand up to a careful analysis of the concrete situation in genetics. Sober has developed lessons from the discussion about genetics to critique the original multiple-realizability argument and draw general conclusions about reductionism.

Wimsatt's writings on reduction (1976a, 1976b, and 1979) emphasize the fruitfulness of attempting to achieve a reduction, even when a reduction is not achieved. He argues, for instance, that efforts to discover the molecular make-ups of entities identified at higher levels are often fruitful, even when identities between levels cannot be found. In addition, Wimsatt points out that the costs of working out reductive explanations of the many particulars already explained at a higher level are relevant to the question of why there is not a full-scale replacement of higher level explanations with lower level ones. Perhaps the fact that molecular genetics has not replaced classical genetics can be explained on the basis of high costs rather than lack of epistemic merit.

While Schaffner still maintains that molecular genetics can in principle reduce classical genetics, he has conceded that attempts to carry out the reduction would be peripheral to the advance of molecular genetics. One might respond, along the lines of Hull (1977), that the success of molecular genetics seems to be reductive in some important sense. Hence, the failure to illuminate this success in terms of reduction reveals a conceptual deficiency. That is, one might argue that Schaffner's peripherality thesis indicates that his conception of reduction is not the epistemically relevant one because it cannot illuminate the fruitfulness of reductive inquiry in molecular genetics.

In fact, a general shortcoming in the debate about the reduction of classical genetics is that it concerns only a fragment of scientific reasoning. It is based almost exclusively on an analysis of explanatory or theoretical reasoning and largely ignores investigative reasoning. The philosophical literature on the alleged reduction of classical genetics focuses on how geneticists explain or try to explain phenomena, not how they manipulate or investigate phenomena. This is even true of Wimsatt's (1976a) account of heuristics, which stresses heuristics for explanation.

Vance (1996) offers a more thorough shift in attention from theory to investigative practice. He asserts that there is only one contemporary science of genetics and describes how investigative methods of classical genetics are an essential part of the methodology of what is called molecular genetics. He concludes that reductionism fails because contemporary genetics still depends on methods of classical genetics involving breeding experiments. Vance's picture of genetics is compelling. The laboratory methods of classical genetics do indeed persist, even as they are greatly extended, augmented, and often replaced by techniques involving direct intervention on DNA. But Vance's picture does not match the anti-reductionist image of a two-tiered science and the contention that classical genetics will remain aloof from the reductive grasp of molecular biology.

A different image emerges from viewing genetics as an investigative science involving an interplay of methodological and explanatory reasoning (Waters 2004a). This image is not of a two-tiered science, one (classical genetics) aimed at investigating and explaining transmission phenomena and another (molecular genetics) aimed at investigating and explaining developmental phenomena. Instead, there is one science that retains much of the investigative and explanatory reasoning of classical genetics by re-conceptualizing its theoretical basis in molecular terms and by retooling its basic investigative approach by integrating methodologies of classical genetics with physically-based methods of biochemistry and new methods based on recombinant DNA and RNA interference technologies.

A common claim in the philosophical literature about molecular genetics is that genes cannot be conceived at the molecular level. Of course, philosophers do not deny that biologists use the term gene, but many philosophers believe gene is a dummy term, a placeholder for many different concepts. Different responses to gene skepticism illustrate a variety of philosophical aims and approaches. One kind of response is to analyze explanations closely tied to experimental practice (rather than sweeping generalizations of a fundamental theory) in order to determine whether there are uniform patterns of reasoning about genes that could (a) be codified into clear concepts, and/or (b) used to establish the reference of the term. Another kind of response is to propose new gene concepts that will better serve the expressed aims of practicing biologists. A third kind of response is to implement survey analysis, rather than conduct traditional methods of philosophical analysis. A fourth kind of response is to embrace the (allegedly) necessary vagueness of the gene concept(s) and to examine why use of the term gene is so useful.

Gene skeptics claim that there is no coherence to the way gene is used at the molecular level and that this term does not designate a natural kind; rather, gene is allegedly used to pick out many different kinds of units in DNA. DNA consists of coding regions that are transcribed into RNA, different kinds of regulatory regions, and in higher organisms, a number of regions whose functions are less clear and perhaps in some cases non-existent. Skepticism about genes is based in part on the idea that the term is sometimes applied to only parts of a coding region, sometimes to an entire coding region, sometimes to parts of a coding region and to regions that regulate that coding region, and sometimes to an entire coding region and regulatory regions affecting or potentially affecting the transcription of the coding region. Skeptics (e.g., Burian 1986, Portin 1993, and Kitcher 1992) have concluded, as Kitcher succinctly puts it: a gene is whatever a competent biologist chooses to call a gene (Kitcher 1992, p. 131).

Biological textbooks contain definitions of gene and it is instructive to consider one in order to show that the conceptual situation is indeed unsettling. The most prevalent contemporary definition is that a gene is the fundamental unit that codes for a polypeptide. One problem with this definition is that it excludes many segments that are typically referred to as genes. Some DNA segments code for functional RNA molecules that are never translated into polypeptides. Such RNA molecules include transfer RNA, ribosomal RNA, and RNA molecules that play regulatory and catalytic roles. Hence, this definition is too narrow.

Another problem with this common definition is that it is based on an overly simplistic account of DNA expression. According to this simple account, a gene is a sequence of nucleotides in DNA that is transcribed into a sequence of nucleotides making up a messenger RNA molecule that is in turn translated into a sequence of amino acids that forms a polypeptide. (Biologists talk as if genes produce the polypeptide molecules or provide the information for the polypeptide.) The real situation of DNA expression, however, is often far more complex. For example, in plants and animals, many mRNA molecules are processed before they are translated into polypeptides. In these cases, portions of the RNA molecule, called introns, are snipped out and the remaining segments, called exons, are spliced together before the RNA molecule leaves the cellular nucleus. Sometimes biologists call the entire DNA region, that is, the region that corresponds to both introns and exons, the gene. Other times, they call only the portions of the DNA segment corresponding to the exons the gene. (This means that some DNA segments that geneticists call genes are not continuous segments of DNA; they are collections of discontinuous exons. Geneticists call these split genes.) Further complications arise because the splicing of exons in some cases is executed differentially in different tissue types and at different developmental stages. (This means that there are overlapping genes.) The problem with the common definition that genes are DNA segments that code for polypeptides is that the notion of coding for a polypeptide is ambiguous when it comes to actual complications of DNA expression. Gene skeptics argue that it is hopelessly ambiguous (Burian 1986, Fogle 1990 and 2000, Kitcher 1992, and Portin 1993).
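A short, self-contained illustration of the splicing complication may help; the sequence and the exon coordinates are invented. The point is only that "the gene" can be drawn around the whole transcribed region or around the exons alone:

```python
# An invented pre-mRNA: two exons flanking one intron.
pre_mrna = "AUGGCAGUAAGUCAGUUC"
exons = [(0, 6), (15, 18)]        # half-open intervals; 6..15 is the intron

def splice(rna, exon_coords):
    """Mature mRNA: concatenate the exons, discarding the intron(s)."""
    return "".join(rna[a:b] for a, b in exon_coords)

print(splice(pre_mrna, exons))    # AUGGCAUUC
# "The gene" for the primary transcript spans positions 0..18 (introns
# included); "the gene" for the mature mRNA or polypeptide corresponds
# only to the exons. Differential splicing across tissues would simply
# mean different exon coordinate lists for the same DNA region.
```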

Clearly, this definition, which is the most common and prominent textbook definition, is too narrow to be applied to the range of segments that geneticists commonly call genes and too ambiguous to provide a single, precise partition of DNA into separate genes. Textbooks include many definitions of the gene. In fact, philosophers have often been frustrated by the tendency of biologists to define and use the term gene in a number of contradictory ways in one and the same textbook. After subjecting the alternative definitions to philosophical scrutiny, gene skeptics have concluded that the problem isn't simply a lack of analytical rigor. The problem is that there simply is no such thing as a gene at the molecular level. That is, there is no single, uniform, and unambiguous way to divide a DNA molecule into different genes. Gene skeptics have often argued that biologists should couch their science in terms of DNA segments such as exons, introns, promoter regions, and so on, and dispense with the term gene altogether (most forcefully argued by Fogle 2000).

It has been argued, against gene skepticism, that biologists have a coherent, precise, and uniform way to conceive of genes at the molecular level. The analysis underlying this argument begins by distinguishing between two different ways contemporary geneticists think about genes. Classical geneticists often conceived of genes as the functional units in chromosomes, differences in which cause differences in phenotypes. Today, in contexts where genes are identified by way of observed phenotypic differences, geneticists still conceive of genes in this classical way, as the functional units in DNA whose differences are causing the observed differences in phenotypes. This way of conceiving of genes is called the classical gene concept (Waters 1994). But contemporary geneticists also think about genes in a different way by invoking a molecular-level concept. The molecular gene concept stems from the idea that genes are units in DNA that function to determine linear sequences in molecules synthesized via DNA expression. According to this analysis, both concepts are at work in contemporary genetics. Moss 2003 also distinguishes between two contemporary gene concepts, which he calls genes-P (preformationist) and genes-D (developmental). He argues that conflation of these concepts leads to erroneous thinking in genetics.

Much confusion concerning the classical way to think about genes is due to the fact that geneticists have sometimes talked as if classically conceived genes are for gross phenotypic characters (phenotypes) or as if individual genes produce phenotypes. This talk was very misleading on the part of classical geneticists and continues to be misleading in the context of contemporary genetics. The production of a gross phenotypic character, such as purple eye-color, involves all sorts of genetic and extra-genetic factors including various cellular enzymes and structures, tissue arrangements, and environmental factors. In addition, it is not clear what, if any, gross phenotypic level functions can be attributed to individual genes. For example, it is no clearer today than it was in Morgan's day that the function of the purple gene discussed in section 2.1 is to contribute to the production of eye color. Mutations in this gene affect a number of gross phenotypic level traits. Legitimate explanatory reasoning invoking the classical gene concept does not depend on any baggage concerning what genes are for or what function a gene might have in development. What the explanatory reasoning depends on is the difference principle, that is, the principle that some difference in the gene causes certain phenotypic differences in particular genetic and environmental contexts (section 2.1). Many gene-based explanations in contemporary biology are best understood in terms of the classical gene concept and the difference principle.

Perhaps the reason gene skeptics overlooked the molecular gene concept is that they were searching for the wrong kind of concept. The concept is not a purely physicochemical concept, and it does not provide a single partition of DNA into separate genes. Instead, it is a functional concept that provides a uniform way to think about genes that can be applied to pick out different DNA segments in different investigative or explanatory contexts. The basic molecular concept, according to this analysis, is the concept of a gene for a linear sequence in a product of DNA expression:

A gene g for linear sequence l in product p synthesized in cellular context c is a potentially replicating nucleotide sequence, n, usually contained in DNA, that determines the linear sequence l in product p at some stage of DNA expression (Waters 2000).

The concept of the molecular gene can be presented as a 4-tuple ⟨n, l, p, c⟩ of the variables appearing in this definition: the nucleotide sequence n, the linear sequence l, the product p, and the cellular context c. This analysis shows how geneticists can consistently include introns as part of a gene in one epistemic context and not in another. If the context involves identifying a gene for a primary, preprocessed RNA molecule, then the gene includes the introns as well as the exons. If the context involves identifying the gene for the resulting polypeptide, then the gene includes only the exons. Hence, in the case of DNA expression that eventually leads to the synthesis of a given polypeptide, geneticists might talk as if the gene included the introns (in which case they would be referring to the gene for the primary, preprocessed RNA) and yet also talk as if the gene excluded the introns (in which case they would be referring to the gene for the mature RNA or polypeptide). Application of the molecular gene concept is not ambiguous; in fact, it is remarkably precise provided one specifies the values for the variables in the expression "gene for linear sequence l in product p synthesized in cellular context c."
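
The context-sensitivity of this concept is easy to mis-read as ambiguity, so a small illustration may help. The Python sketch below treats a molecular gene as the 4-tuple just described; the toy sequence, the exon coordinates, and the gene_for helper are my own inventions rather than anything from Waters's account, and determination of the linear sequence l is reduced to identity purely for brevity.

    from dataclasses import dataclass

    # A molecular gene as the 4-tuple <n, l, p, c>. All sequences and
    # coordinates below are invented for illustration.

    @dataclass(frozen=True)
    class MolecularGene:
        nucleotide_seq: str  # n: the DNA segment picked out as "the gene"
        linear_seq: str      # l: the linear sequence it determines in the product
        product: str         # p: e.g., "pre-mRNA" or "polypeptide"
        context: str         # c: the cellular context of expression

    def gene_for(dna, exons, product, context):
        """Pick out the gene for a given product in a given cellular context."""
        if product == "pre-mRNA":
            # The gene for the primary transcript includes the introns.
            n = dna[exons[0][0]:exons[-1][1]]
        else:
            # The gene for the mature RNA or polypeptide includes only the exons.
            n = "".join(dna[s:e] for s, e in exons)
        return MolecularGene(n, n, product, context)

    dna = "ATGGTC" + "GTAAGT" + "AAATTC"  # exon, intron, exon (toy data)
    exons = [(0, 6), (12, 18)]
    print(gene_for(dna, exons, "pre-mRNA", "any cell").nucleotide_seq)     # intron included
    print(gene_for(dna, exons, "polypeptide", "any cell").nucleotide_seq)  # exons only

Nothing in the sketch is ambiguous once p and c are fixed; the same stretch of DNA simply supports different, precisely specified genes.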

Gene skeptics have suggested that there is a lack of coherence in gene talk because biologists often talk as if genes code for polypeptides, but then turn around and talk about genes for RNA molecules that are not translated into polypeptides (including genes for transfer RNA [tRNA], ribosomal RNA [rRNA], and interference RNA [iRNA]). This account shows that conceiving of genes for rRNA involves the same idea as conceiving of genes for polypeptides. In both cases, the gene is the segment of DNA, split or not, that determines the linear sequence in the molecule of interest.

An advantage of this analysis is that it emphasizes the limitations of gene-centered explanations while clarifying the distinctive causal role genes play in the syntheses of RNA and polypeptides: genes determine the linear sequences of primary RNA transcripts and often play a distinctive, though not exclusive, role in determining the sequence of amino acids in polypeptides.

Weber (2005) examines the evolution of the gene concept by tracing changes in the reference of the term gene through the history of genetics. The reference or extension of a term is the set of objects to which it refers. Weber adopts a mixed theory of reference. According to mixed theories, the reference of a term is determined by how the relevant linguistic community causally interacts with potential referents as well as by how they describe potential referents. This theory leads Weber to pay close attention, not just to how geneticists theorized about genes or used the concept to explain phenomena, but also to how they conducted their laboratory investigations. Following Kitcher (1978, 1982), he examines ways in which modes of reference changed over time.

Weber identifies six different gene concepts, beginning with Darwin's pangene concept (1868) and ending with the contemporary concept of molecular genetics. He distinguishes the contemporary molecular concept from the classical (or neoclassical) one on the basis of how geneticists described their functional role (RNA/protein coding versus general functional unit), their material basis (RNA/DNA versus chromosome), and their structure (discontinuous linear, with introns and exons, versus continuous linear), as well as on the basis of the criteria experimentalists used to identify genes (by gene product versus complementation test).

Weber examines how the investigation of several particular Drosophila genes changed as the science of genetics developed. His study shows that the methods of molecular genetics provided new ways to identify genes that were first identified by classical techniques. The reference of the term changed, not simply as a result of theoretical developments, but also as a result of the implementation of new methods to identify genes. He concludes that unlike concepts of physical science that have been analyzed by philosophers, the gene concept has a nonessentialistic character that allows biologists to lay down different natural classifications, depending on the investigative methods available as well as on theoretical interests (Weber 2005, p. 228). Weber calls this feature floating references.

Neumann-Held (2001) proposes a new way to think about genes in the context of developmental genetics. She says that in this context, interest in genes is largely focused on the regulated expression of polypeptides. She notes that textbook definitions of gene often acknowledge this interest and quotes the following definition from a scientific textbook:

A combination of DNA segments that together constitute an expressible unit, expression leading to the formation of one or more specific functional gene products that may be either RNA molecules or polypeptides. The segments of a gene include (1) the transcribed unit and any regulatory segments included in the transcription unit, and (2) the regulatory sequences that flank the transcription unit and are required for specific expression. (Singer and Berg 1991, p. 41)

This definition emphasizes that regulatory sequences as well as coding regions are required for specific expression. Only a small proportion of coding sequences are transcribed in a given cell at a particular time, and whether a particular sequence is transcribed depends in part on regulatory regions external to the coding region.

Neumann-Held points out that if the aim is to specify what is necessary for the regulated synthesis of polypeptides, then one must include even more than what is located in the DNA. This follows from the fact that processes such as differential splicing (and RNA editing, which I have not discussed in this article) involve entities outside of DNA, such as splicing agents. She suggests that it is appropriate, at least in the context of developmental genetics, to reconceive genes as processes. She proposes a process molecular gene concept, according to which a gene is the recurring process that leads to the regulated expression of a particular polypeptide product.

Neumann-Held argues that this conception provides the clearest basis for understanding how DNA sequences are used in the processes of polypeptide production. She points out that the process molecular gene concept allows for the inclusion of coding sequences in DNA, regulatory sequences in DNA and also entities not located in DNA, all of which are causally involved in the production of polypeptides. Neumann-Held's concept excludes transcription processes and coding regions of DNA that lead to functional RNA molecules that are not translated into polypeptides. Hence, according to her account, there are not process molecular genes for tRNA (transfer RNA), rRNA (ribosomal RNA) or snRNA (small nuclear RNA). This feature of Neumann-Held's definition does not match the textbook definition that she quotes to motivate her account (presented above). Furthermore, the exclusion of these coding regions does not track with recent discoveries about the important functions played by non-coding RNA molecules such as snRNAs. Her definition could easily be revised to accommodate these regions and processes. In any case, Neumann-Held believes using this concept in developmental genetics, rather than DNA-centered gene concepts, will help avoid the view that genes are the most important explanatory factors in biology because of their unique causal powers (Neumann-Held 2001, p. 80).

Stotz and Griffiths (2004) believe that the variety of gene concepts used throughout the biological sciences calls for a more systematic and explicitly empirical approach. They point out that individual philosophers cannot grasp all the intricacies of the different contexts across the broad range of biological sciences in which gene concepts are employed. They have embarked upon an ambitious project to survey practicing scientists in an attempt to help identify how scientists actually conceive of genes. Their interest extends far beyond understanding molecular genetics. They hope to learn about the concepts employed in many different areas and contexts of biology by spotting differences in the way biologists from different areas (and biologists in different age groups, sexes, etc.) answer sophisticated questionnaires.

An initial motivation behind Stotz and Griffiths' project was to test philosophical accounts of the gene concept. As Griffiths asked, if their survey-based study revealed that scientists don't actually think of genes in the way set out by a philosophical account, then what value could the account possibly have? There are, however, a number of daunting, practical difficulties with using a questionnaire to learn how a person is thinking, especially if the person's thinking involves the use of multiple concepts and/or is sometimes or somewhat muddled (Waters 2004b). It is also difficult to survey appropriate and representative samples of scientists. Griffiths and Stotz are aware of these difficulties and have refined their project through successive surveys.

Even if Stotz and Griffiths' survey succeeds in identifying how scientists in different areas of biology actually think about genes in different contexts, it does not follow that their findings would provide an appropriate test of the classical, molecular, or process molecular gene concepts. The aim of the proponents of these concepts is to re-interpret the knowledge of contemporary genetics by replacing sloppy thinking based on unclear concepts with more rigorous thinking in terms of precise concepts. Showing that scientists' actual thinking does not align with the precise application of these concepts would not refute the analysis supporting the classical gene or molecular gene concepts and it would not undermine the argument motivating the proposal for the new process molecular gene concept.

Although it appears that survey-based findings would not provide an appropriate test of philosophical analyses of gene concepts, they might provide, as Stotz and Griffiths claim, important information relevant to those conducting philosophical research on gene concepts. For example, if such surveys find significant differences in the way evolutionary biologists and developmental geneticists answer questions about what counts as a gene, philosophers might examine whether the contexts in which these biologists practice call for different gene concepts. Survey results could provide a useful heuristic for conducting concept analyses.

Gene skeptics such as Burian, Portin, and Fogle claim that the term gene has outlived its usefulness. They argue that the term is both too vague and too restrictive. It is too vague, they believe, because it does not provide a unique parsing of the genome. Borders between genes are overlapping and allegedly ambiguous. It is not clear, they argue, whether genes include or exclude introns, regulatory regions, and so forth. The term is allegedly too restrictive because it obscures the diversity of molecular elements playing different roles in the expression and regulation of DNA. In addition, any attempt to resolve the ambiguities, these skeptics argue, will make the term even more restrictive.

Keller's account of the history of twentieth century genetics seems to reinforce gene skepticism. For example, she argues that the question about what genes are for has become increasingly difficult to answer (Keller 2000). By the end of the twentieth century, she says, biological findings had revealed a complexity of developmental dynamics that makes it impossible to conceive of genes as distinct causal agents in development. Keller emphasizes that words have power and devotes a good deal of attention to the way loose gene talk has affected biological research by reinforcing the assumption that the gene is the core explanatory concept of biological structure and function (Keller 2000, p. 9), an assumption with which she strongly disagrees. Yet Keller does not endorse the view of gene skeptics who argue that biology would be improved if biologists stopped talking about genes and restricted themselves to terms designating molecular units such as nucleotide, codon, coding region, promoter region, and so on. Keller maintains that the term gene continues to have obvious and undeniable uses.

One reason gene talk is useful, according to Keller, is its vagueness: the very feature that troubles philosophers makes it possible for biologists to be flexible, to communicate across disciplinary boundaries, and to think in new ways:

The meaning of an experimental effect depends on its relation to other effects, and the use of language too closely tied to particular experimental practices would, by its very specificity, render communication across different experimental contexts effectively impossible. (Keller 2000, p. 140)

Keller identifies a second reason that gene talk is useful. The term gene applies to entities that can be experimentally manipulated to produce definite and reproducible effects (though given Keller's criticism of gene concepts, it is unclear to what entities she thinks the term refers). She suggests that genes are short-term causes. She points out, however, that this does not mean genes are long-term causes or that genes are the fundamental causal agents of development. Rather, what it means (and Keller thinks this is an important reason why gene talk will continue) is that genes can be used as handles to manipulate biological processes (also see Waters 2000). And for these two reasons, Keller concludes, gene talk will and should continue to play an important role in biological discourse.

The science called molecular genetics is associated with a fundamental theory according to which genes and DNA direct all basic life processes by providing the information specifying the development and functioning of organisms. The genome is said to specify the developmental program, master plan, or blueprint for development, while other elements provide the materials (e.g., Bonner 1965, Jacob and Monod 1961, Mayr 1961, Maynard Smith 2000, Rosenberg 2006). Although the idea that the chromosomes contain a code-script for the development and functioning of an organism was famously expressed by Schrödinger (1944) before the era of molecular genetics, today it is often expressed in explicitly molecular terms. The information of development and function, which is passed down from one generation to the next, is allegedly encoded in the nucleotide sequences comprising genes and DNA. This so-called genetic information is first transcribed into RNA, then translated into proteins, and finally expressed in the development and functioning of organisms.

The concept of genetic information has a prominent place in the history of molecular genetics, beginning with Watson and Crick's observation that, since any sequence of nucleotide base pairs could fit into the structure of any DNA molecule, "in a long molecule many different permutations are possible, and it therefore seems likely that the precise sequence of the bases is the code which carries the genetic information" (Watson and Crick 1953). As Downes (2005) recounts, the geneticists Jacob and Monod reinforced the use of information language, as did those who sought to crack the genetic code. By the early 1960s, the language of information was well-entrenched in the field of molecular genetics.

Philosophers have generally criticized the theory that genes and DNA provide all the information and have challenged the use of sweeping metaphors such as "master plan" and "program," which suggest that genes and DNA contain all developmental information. Critics have taken a number of different positions. Most seem to accept the notion that biological systems or processes contain information, but they deny the idea that DNA has an exceptional role in providing information. Some are content to argue that under various existing theories of information, such as causal theories or standard teleosemantic theories, information is not restricted to DNA. But others contend that understanding what genes do requires a new conception of biological information. One approach is to retreat to a narrow conception of coding specifically aimed at clarifying the sense in which DNA provides information for the synthesis of polypeptides, but not for higher-level traits (e.g. Godfrey-Smith 2000). Another approach is to construct a new, broad conception of biological information and use this conception to show that the informational role of genes is not exclusive (Jablonka 2002). A different approach is to abandon information talk altogether and explain the investigative and explanatory reasoning associated with genetics and molecular biology in purely causal terms.

The fundamental theory that says the role of DNA is to provide the information for development has been criticized on many grounds. Keller (2000) points out that the idea founders on an ambiguity: does DNA provide the program or the data? Others have argued that information for development flows from a vast number of resources, not just genetic resources. Oyama (1985) suggests that it is a mistake to think information is contained within static entities such as DNA. She believes that information exists in life-cycles. Other criticisms challenge applications of particular conceptions or theories of information, including applications of the causal and teleosemantic conceptions.

Griffiths (2001) distinguishes between two ways to conceive of information, causal and intentional, and then argues that under either conception, information is not restricted to DNA. Causal theories of information, based on Dretske (1981), are related to Shannon's mathematical theory of information (1948). Dretske distinguishes between a source variable and background or channel conditions. On Griffiths' (2001) reading of Dretske's theory, a source variable, X, carries information about variable Y if the value of X is correlated with the value of Y. Griffiths describes the causal interpretation of this idea as follows:

There is a channel between two systems when the state of one is systematically causally related to the other, so that the state of the sender can be discovered by observing the state of the receiver. The causal information carried by a signal is simply the state of affairs with which it reliably correlates at the other end of the channel. Thus, smoke carries information about fire and disease phenotypes carry information about disease genes. (Griffiths 2001, p. 397)

To capture the conventional ideas about genetic information under this theory, genes are treated as source variables and environments are treated as channel conditions. It follows that genes carry information about phenotypes because phenotypic values reliably correlate with genotypic values. But as Griffiths points out, nothing stops one from treating environmental conditions as source variables and genes as channel conditions. Under this application of the causal theory, environmental conditions carry information about phenotypes. Griffiths and others have concluded that the idea that genes provide the information while other causal factors merely provide material cannot be sustained under causal theories of information.
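
Griffiths' parity point admits a simple numerical illustration. In the Python sketch below, the joint distributions are invented toy data; the point is only that a purely correlational measure (here, mutual information in bits) is symmetric between genetic and environmental source variables.

    import math
    from collections import defaultdict

    def mutual_information(joint):
        """I(X;Y) in bits from a joint probability table {(x, y): p}."""
        px, py = defaultdict(float), defaultdict(float)
        for (x, y), p in joint.items():
            px[x] += p
            py[y] += p
        return sum(p * math.log2(p / (px[x] * py[y]))
                   for (x, y), p in joint.items() if p > 0)

    # Holding the environment fixed, phenotype covaries with genotype...
    gene_pheno = {("allele A", "tall"): 0.5, ("allele a", "short"): 0.5}
    # ...and holding the genotype fixed, it covaries with the environment.
    env_pheno = {("rich soil", "tall"): 0.5, ("poor soil", "short"): 0.5}

    print(mutual_information(gene_pheno))  # 1.0 bit
    print(mutual_information(env_pheno))   # 1.0 bit: the same amount

On this measure there is no formal asymmetry that would privilege the gene as "the" bearer of information.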

Griffiths argues that the idea that genes and DNA provide all the information fares no better under intentional theories of information. Intentional theories are aimed at capturing the sense of semantic information that human thoughts and utterances allegedly contain (Godfrey-Smith 1999). The version of intentional theory favored by philosophers of biology is teleosemantic. According to teleosemantic theories, a signal represents whatever it was selected to represent (in the process of evolution). Under this idea, one might say that DNA contains information about development because DNA's effects on development were selected for in the process of evolution. But as Griffiths and Gray (1997) point out, this idea applies to a wide range of entities involved in development, not just DNA.

Weber (2005) challenges Maynard Smith's (2000) teleosemantic account. Maynard Smith draws an analogy between information in a programmed computer and information in DNA. Computers execute algorithms programmed by human beings and organisms express DNA that has been programmed by natural selection. The information programmed in a computer is intentional in that one could determine the intentions of the human programmer by analyzing the algorithm. Maynard Smith argues that the information programmed in DNA by natural selection is intentional in the same sense. Weber offers two arguments against this view. First, he points out that DNA might contain nucleotide sequences that have arisen from chance mutations that happen to be beneficial. If natural selection has not yet operated on them, then Maynard Smith's teleosemantic theory implies they do not contain information. Yet, causally, such a nucleotide sequence would influence development in the same way as sequences that have been selected for. Weber's second criticism of Maynard Smith's account stems from a closer examination of the intentionality associated with computer programs. Weber claims that intentional states associated with computers are actually states of the human engineers who write the programs, not states of the computers themselves: "A computer program is a string of symbols that acquires a meaning only in the context of a community of engineers who understand what the program does and what it can be used for" (Weber 2005, p. 252). The analogue to human programmers in Maynard Smith's account is natural selection. But natural selection does not have intentional states. Hence, Weber concludes, the teleosemantic approach fails to save the idea that DNA contains information in the intentional sense.

It is tempting to think that information talk is impotent in this context, and indeed some philosophers have argued that such talk is misleading and should be abandoned (e.g., Sarkar 1996, Weber 2005, and possibly Rosenberg 2006). But others have taken the view that more careful thinking about concepts of information could lead to important insights (see next section).

Jablonka's aim is to construct a general definition of information that recognizes different types of information associated with different ways of acquiring, replicating, and transmitting information through space and time (Jablonka 2002). One of her concerns is that discussions about the meaning (or non-meaning) of information talk in biology are biased by the assumption that the genetic system should serve as the prototype for thinking about biological information. She believes that a general definition of information, one designed to capture the senses of information exemplified in environmental cues, man-made instructions, and evolved biological signals, as well as the sense of information in hereditary material, will lead to more useful generalizations and perspectives.

Jablonka says that the sense of information in all these situations involves a source, a receiver system (an organism or an organism-designed system), and a special type of reaction of the receiver to the source. She conceives the receiver's reaction as a complex, regulated chain of events leading to a response. Variations in the form of the source lead to variations in response. That is, the nature of the reaction depends on the way the source is organized. In addition, she points out, reactions in these situations are beneficial for the receiver over an appropriate period of time (in the case of organisms, over evolutionary time). Jablonka stresses that the benefit, or function, in the case of organisms should be understood in terms of evolution, with the focus on the evolution of the reaction system, not on the evolution of the source or the evolution of the final outcome of the reaction.

Jablonka's concept of information is intentional, and is related to the teleosemantic conceptions discussed above. According to standard teleosemantic conceptions, signals have information because the production of the signal was selected for in evolutionary history. According to Jablonka's view, however, an entity has information, not because it was selected for, but because the receiver's response to it was selected for. Whether something counts as information depends on whether entities respond to it in a (proper) functional way.

Jablonka summarizes her general account in the following definition:

A source (an entity or process) can be said to have information when a receiver system reacts to this source in a special way. The reaction of the receiver to the source has to be such that the reaction can actually or potentially change the state of the receiver in a (usually) functional manner. Moreover, there must be a consistent relation between variations in the form of the source and the corresponding changes in the receiver. (Jablonka 2002, p. 582)

Jablonka points out that according to this definition, genes do not have a theoretically privileged status; they are one among many sources of information. In addition, she insists the focus should be on the interpretive system of the receiver of the information, not on the source.
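
Read as a checklist, the definition has three working parts, which can be rendered as a toy predicate. The sketch below is my own construction, not Jablonka's: the function, both examples, and the idealized reading of the consistency condition (distinct source forms yielding distinct responses) are all assumptions made for illustration.

    def has_information(reactions, beneficial):
        """A source has information for a receiver when (i) the receiver
        reacts to it, (ii) the reaction is (usually) functional/beneficial
        for the receiver, and (iii) variations in the form of the source
        map consistently onto corresponding changes in the receiver."""
        reacts = bool(reactions)                                   # condition (i)
        covaries = len(set(reactions.values())) == len(reactions)  # condition (iii), idealized
        return reacts and beneficial and covaries                  # (ii) is 'beneficial'

    # An alarm call: call variants track predator type, and responding is
    # adaptive for the listener.
    alarm = {"aerial-predator call": "dive for cover",
             "ground-predator call": "climb a tree"}
    print(has_information(alarm, beneficial=True))   # True

    # Undifferentiated noise: the response does not covary with source form.
    noise = {"rustle-1": "ignore", "rustle-2": "ignore"}
    print(has_information(noise, beneficial=False))  # False

On this rendering, whether something counts as information is fixed entirely by the receiver's reaction system, which is exactly where Jablonka directs attention.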

Jablonka argues that the information in DNA has little in common with the information in an alarm call, a cloudy sky, or a chemical signal in a bacterial colony. In the latter cases, the receivers' reactions (or responses) to the source are adaptive for the receiver: an alarm warns the bird there are predators around; the cloudy sky alerts the ape to the coming storm; the chemical alerts the bacteria to imminent starvation (p. 585). But in the case of DNA, the receiver does not seem to react in a way that adapts the cell to anything in particular. Rather, DNA is simply read by the cell, so it is not information in the same sense: DNA is information about the cell or the organism, rather than for the cell or the organism (Jablonka 2002, p. 585). Nevertheless, Jablonka claims that her concept applies to genes even if it doesn't apply to DNA in general:

However, if instead of thinking about DNA in general we think about a particular locus with a particular allele, it is not difficult to think about the functional role of this particular allele in a particular set of environmental circumstances. Hence we can say for all types of information, including alarm calls and pieces of DNA, a source S (allele, alarm call, cloudy sky, etc.) carries information about a state E for a receiver R (an organism or an organism-designed product), if the receiver has an interpretation system that reacts to S in a way that usually ends up adapting R (or its designer, if R is humanly designed) to E. (Jablonka 2002, p. 585, my stress)

Given that Jablonka says that DNA in general is not information in the same sense as the alarm call and cloudy sky (and that this is the sense specified in the statement above), it is puzzling why she claims that the statement quoted above applies to all types of information. Furthermore, her claim that the statement above applies to particular alleles (and apparently not to DNA in general) is not straightforward. Jablonka's original account provides an illuminating way to think about information in biological processes such as cellular signaling. But her account does not substantiate the idea that genes and DNA contain information or help elucidate the role of genes and DNA.

Another approach to elucidating the role of genes and DNA is to replace loose information talk with concrete causal descriptions grounded in an explicit understanding of causation (Waters 2000, and forthcoming). This approach is premised on the idea that the basic theory and laboratory methods associated with molecular genetics can be understood in purely causal terms. The basic theory and methodology concerns the syntheses of DNA, RNA, and polypeptide molecules, not the alleged role of DNA in "programming" or "directing" development (section 2.3). The causal role of molecular genes in the syntheses of these molecules can be understood in terms of causally specific actual difference making. This involves two causal concepts, actual difference making and causal specificity. These concepts can be explicated in terms of the manipulability account of causation.

The concept of actual difference making applies in the context of an actual population containing entities that actually differ with respect to some property. In such a population, there might be many potential difference makers. That is, there may be many factors that could be manipulated to alter the relevant property of the entities in the population. But the actual difference makers are (roughly speaking) the potential difference makers that actually differ, and whose actual differences bring about the actual differences in the property in the population.

The concept of actual difference making can be illustrated with the difference principle of classical genetics (section 2.1). According to this principle, genes can be difference makers with respect to phenotypic differences in particular genetic and environmental contexts. So, it identifies potential difference makers. When this principle is used to explain an actual hereditary pattern, it is applied to genes that actually differed in the population exhibiting the pattern (often an experimental population). In such cases, an actual difference in the gene among the organisms in the population caused the actual phenotypic differences in that population (see Gifford 1990). That is, the gene was the actual difference maker, not just a potential difference maker (in that population).
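  
The distinction can be made concrete with a toy population. In the sketch below, the phenotype function, the genotypes, and the environments are invented for illustration and are not drawn from Gifford or Waters:

    def eye_color(gene, temperature):
        """Both factors are potential difference makers: varying either
        could change the phenotype."""
        if gene == "mutant":
            return "purple"
        return "red" if temperature == "warm" else "dark red"

    # In this hypothetical experimental population the rearing temperature
    # is uniform, so only the gene actually differs.
    population = [("wildtype", "warm"), ("mutant", "warm"), ("wildtype", "warm")]
    print([eye_color(g, t) for g, t in population])  # ['red', 'purple', 'red']
    # The actual phenotypic differences trace back to the actual genetic
    # differences: here the gene is the actual difference maker, while
    # temperature remains a merely potential one.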

The concept of actual difference making can be applied to molecular genetics as follows. In an actual cell, where a population of unprocessed RNA molecules differs with respect to linear sequence, the question arises: what causes these differences? The answer is that differences in genes in the cell cause the actual differences in the linear sequences of the unprocessed RNA molecules, and likewise in populations of processed RNA molecules and polypeptides. Genes are not, however, the only actual difference makers with respect to the actual differences in the linear sequences of these molecules. And this brings us to the second causal concept, causal specificity.

Causal specificity has been analyzed by Lewis (2000). The basic idea is that a causal relationship between two variables is specific when many different values of the causal variable bring about many specifically different values of the resultant variable (the causal relationship instantiates something like a mathematical function). An on/off switch is not specific in this technical sense because the causal variable has only two values (on and off). A dimmer switch is causally specific in this sense. Genes can be specific difference makers because many specific differences in the sequences of nucleotides in DNA result in specific differences in RNA molecules. This is not the case with many other actual difference makers, such as polymerases, which are more like on/off switches (with respect to differences in linear sequences). Biologists have discovered, however, the existence of other actual difference makers, besides genes and DNA, that are causally specific with respect to the linear sequences of processed RNA and polypeptides, to some degree at least. For example, in some cells splicing complexes called spliceosomes actually differ in multiple ways that result in multiple, specific differences in the linear sequences of processed RNA molecules.
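
The contrast between specific and switch-like causes can also be sketched in code. The transcribe function below and its simplifications (perfect templating, no processing) are my own illustration, not a claim about actual transcription machinery:

    # A toy contrast between a causally specific cause (the DNA template)
    # and a switch-like cause (presence of a polymerase).

    COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

    def transcribe(dna, polymerase_present):
        """The DNA sequence is a specific cause of the RNA sequence; the
        polymerase is a binary, on/off cause of whether any RNA is made."""
        if not polymerase_present:
            return None  # switch "off": no transcript at all
        return "".join(COMPLEMENT[base] for base in dna)

    # Many different DNA values yield many specifically different RNA values:
    print(transcribe("ATGC", True))   # 'UACG'
    print(transcribe("GGTA", True))   # 'CCAU'
    # Varying the polymerase only toggles between some transcript and none:
    print(transcribe("ATGC", False))  # None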

Originally posted here:
Molecular Genetics (Stanford Encyclopedia of Philosophy)

Posted in Molecular Genetics | Comments Off on Molecular Genetics (Stanford Encyclopedia of Philosophy)

Behavioral epigenetics – Wikipedia

Posted: October 30, 2016 at 9:50 pm

Behavioral epigenetics is the field of study examining the role of epigenetics in shaping animal (including human) behaviour.[1] It is an experimental science that seeks to explain how nurture shapes nature,[2] where nature refers to biological heredity[3] and nurture refers to virtually everything that occurs during the life-span (e.g., social-experience, diet and nutrition, and exposure to toxins).[4] Behavioral epigenetics attempts to provide a framework for understanding how the expression of genes is influenced by experiences and the environment[5] to produce individual differences in behaviour,[6] cognition,[2] personality,[7] and mental health.[8][9]

Epigenetic gene regulation involves changes other than to the sequence of DNA and includes changes to histones (proteins around which DNA is wrapped) and DNA methylation.[4][10] These epigenetic changes can influence the growth of neurons in the developing brain[11] as well as modify activity of the neurons in the adult brain.[12][13] Together, these epigenetic changes to neuron structure and function can have a marked influence on an organism's behavior.[1]

In biology, and specifically genetics, epigenetics is the study of heritable changes in gene activity which are not caused by changes in the DNA sequence; the term can also be used to describe the study of stable, long-term alterations in the transcriptional potential of a cell that are not necessarily heritable.[14][15]

Examples of mechanisms that produce such changes are DNA methylation[16] and histone modification,[17] each of which alters how genes are expressed without altering the underlying DNA sequence. Gene expression can be controlled through the action of repressor proteins that attach to silencer regions of the DNA.

DNA methylation turns a gene "off": it results in the inability of genetic information to be read from DNA; removing the methyl tag can turn the gene back "on".[18][19]
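
As a crude illustration of the on/off picture just described, one can model a gene's readability as a function of its regulatory state. This toy model is my own, not from the article, and real regulation is graded and context-dependent rather than a clean Boolean:

    def can_transcribe(promoter_methylated, repressor_bound):
        """A gene is readable only if its promoter is unmethylated and no
        repressor protein occupies its silencer region."""
        return not promoter_methylated and not repressor_bound

    print(can_transcribe(promoter_methylated=True, repressor_bound=False))   # False: "off"
    print(can_transcribe(promoter_methylated=False, repressor_bound=False))  # True: "on"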

Epigenetics has a strong influence on the development of an organism and can alter the expression of individual traits.[10] Epigenetic changes occur not only in the developing fetus, but also in individuals throughout the human life-span.[4][20] Because some epigenetic modifications can be passed from one generation to the next,[21] subsequent generations may be affected by the epigenetic changes that took place in the parents.[21]

The first documented example of epigenetics affecting behavior was provided by Michael Meaney and Moshe Szyf. While working at McGill University in Montréal in 2004, they discovered that the type and amount of nurturing a mother rat provides in the early weeks of the rat's infancy determines how that rat responds to stress later in life.[4] This stress sensitivity was linked to a down-regulation in the expression of the glucocorticoid receptor in the brain. In turn, this down-regulation was found to be a consequence of the extent of methylation in the promoter region of the glucocorticoid receptor gene.[1] Immediately after birth, Meaney and Szyf found that methyl groups repress the glucocorticoid receptor gene in all rat pups, making the gene unable to unwind from the histone in order to be transcribed. Nurturing behaviours from the mother rat were found to stimulate activation of stress signalling pathways that remove methyl groups from DNA. This releases the tightly wound gene, exposing it for transcription: the glucocorticoid gene is activated, resulting in a lowered stress response. Rat pups that receive a less nurturing upbringing are more sensitive to stress throughout their life-span.

This pioneering work in rodents has been difficult to replicate in humans because of a general lack of availability of human brain tissue for the measurement of epigenetic changes.[1]

In a small clinical study in humans published in 2008, epigenetic differences were linked to differences in risk-taking and reactions to stress in monozygotic twins.[22] The study identified twins with different life paths, wherein one twin displayed risk-taking behaviours, and the other displayed risk-averse behaviours. Epigenetic differences in DNA methylation of the CpG islands proximal to the DLX1 gene correlated with the differing behavior.[22] The authors of the twin study noted that despite the associations between epigenetic markers and differences in personality traits, epigenetics cannot predict complex decision-making processes like career selection.[22]

Animal and human studies have found correlations between poor care during infancy and epigenetic changes that correlate with long-term impairments that result from neglect.[23][24][25]

Studies in rats have shown correlations between maternal care in terms of the parental licking of offspring and epigenetic changes.[23] A high level of licking results in a long-term reduction in stress response as measured behaviorally and biochemically in elements of the hypothalamic-pituitary-adrenal axis (HPA). Further, decreased DNA methylation of the glucocorticoid receptor gene was found in offspring that experienced a high level of licking; the glucocorticoid receptor plays a key role in regulating the HPA.[23] The opposite is found in offspring that experienced low levels of licking, and when pups are switched, the epigenetic changes are reversed. This research provides evidence for an underlying epigenetic mechanism.[23] Further support comes from experiments with the same setup, using drugs that can increase or decrease methylation.[24] Finally, epigenetic variations in parental care can be passed down from one generation to the next, from mother to female offspring. Female offspring who received increased parental care (i.e., high licking) became mothers who engaged in high licking and offspring who received less licking became mothers who engaged in less licking.[23]

In humans, a small clinical research study showed a relationship between prenatal exposure to maternal mood and gene expression, resulting in increased reactivity to stress in offspring.[4] Three groups of infants were examined: those born to mothers medicated for depression with serotonin reuptake inhibitors; those born to depressed mothers not being treated for depression; and those born to non-depressed mothers. Prenatal exposure to depressed/anxious mood was associated with increased DNA methylation at the glucocorticoid receptor gene and to increased HPA axis stress reactivity.[23] The findings were independent of whether the mothers were being pharmaceutically treated for depression.[23]

Recent research has also shown a relationship between methylation of the maternal glucocorticoid receptor gene and maternal neural activity in response to video recordings of mother-infant interactions.[26] Longitudinal follow-up of those infants will be important to understand the impact of early caregiving in this high-risk population on child epigenetics and behavior.

A 2010 review discusses the role of DNA methylation in memory formation and storage, but the precise mechanisms involving neuronal function, memory, and methylation reversal remain unclear.[27]

Studies in rodents have found that the environment exerts an influence on epigenetic changes related to cognition, in terms of learning and memory:[4] environmental enrichment correlated with increased histone acetylation, and, as verification, administering histone deacetylase inhibitors induced sprouting of dendrites, an increased number of synapses, and reinstated learning behaviour and access to long-term memories.[1][28] Research has also linked learning and long-term memory formation to reversible epigenetic changes in the hippocampus and cortex in animals with normal-functioning, non-damaged brains.[1][29] In human studies, post-mortem brains from Alzheimer's patients show increased histone de-acetylase levels.[30][31]

Environmental and epigenetic influences seem to work together to increase the risk of addiction.[40] For example, environmental stress has been shown to increase the risk of substance abuse.[41] In an attempt to cope with stress, alcohol and drugs can be used as an escape.[42] Once substance abuse commences, however, epigenetic alterations may further exacerbate the biological and behavioural changes associated with addiction.[40]

Even short-term substance abuse can produce long-lasting epigenetic changes in the brain of rodents,[40] via DNA methylation and histone modification.[17] Epigenetic modifications have been observed in studies on rodents involving ethanol, nicotine, cocaine, amphetamine, methamphetamine and opiates.[4] Specifically, these epigenetic changes modify gene expression, which in turn increases the vulnerability of an individual to engage in repeated substance abuse in the future. In turn, increased substance abuse results in even greater epigenetic changes in various components of a rodent's reward system[40] (e.g., in the nucleus accumbens[43]). Hence, a cycle emerges whereby changes in the pleasure-reward areas contribute to the long-lasting neural and behavioural changes associated with the increased likelihood of addiction, the maintenance of addiction and relapse.[40] In humans, alcohol consumption has been shown to produce epigenetic changes that contribute to the increased craving of alcohol. As such, epigenetic modifications may play a part in the progression from controlled intake to loss of control over alcohol consumption.[44] These alterations may be long-term, as is evidenced in smokers who still possess nicotine-related epigenetic changes ten years after cessation.[45] Therefore, epigenetic modifications[40] may account for some of the behavioural changes generally associated with addiction. These include: repetitive habits that increase the risk of disease, and personal and social problems; need for immediate gratification; high rates of relapse following treatment; and the feeling of loss of control.[46]

Evidence for related epigenetic changes has come from human studies involving alcohol,[47] nicotine, and opiate abuse. Evidence for epigenetic changes stemming from amphetamine and cocaine abuse derives from animal studies. In animals, drug-related epigenetic changes in fathers have also been shown to negatively affect offspring in terms of poorer spatial working memory, decreased attention and decreased cerebral volume.[48]

Epigenetic changes may help to facilitate the development and maintenance of eating disorders via influences in the early environment and throughout the life-span.[20] Pre-natal epigenetic changes due to maternal stress, behaviour and diet may later predispose offspring to persistent, increased anxiety and anxiety disorders. These anxiety issues can precipitate the onset of eating disorders and obesity, and persist even after recovery from the eating disorders.[49]

Epigenetic differences accumulating over the life-span may account for the discordance in eating disorders observed between monozygotic twins. At puberty, sex hormones may exert epigenetic changes (via DNA methylation) on gene expression, thus accounting for higher rates of eating disorders in women as compared to men. Overall, epigenetics contributes to persistent, unregulated self-control behaviours related to the urge to binge.[20]

Epigenetic changes including hypomethylation of glutamatergic genes (i.e., NMDA-receptor-subunit gene NR3B and the promoter of the AMPA-receptor-subunit gene GRIA2) in the post-mortem human brains of schizophrenics are associated with increased levels of the neurotransmitter glutamate.[50] Since glutamate is the most prevalent, fast, excitatory neurotransmitter, increased levels may result in the psychotic episodes related to schizophrenia. Interestingly, epigenetic changes affecting a greater number of genes have been detected in men with schizophrenia as compared to women with the illness.[51]

Population studies have established a strong association between schizophrenia and advanced paternal age.[52][53] Specifically, children born to fathers over the age of 35 years are up to three times more likely to develop schizophrenia.[53] Epigenetic dysfunction in human male sperm cells, affecting numerous genes, has been shown to increase with age. This provides a possible explanation for increased rates of the disease in men.[51][53] To this end, toxins[51][53] (e.g., air pollutants) have been shown to increase epigenetic differentiation. Animals exposed to ambient air from steel mills and highways show drastic epigenetic changes that persist after removal from the exposure.[54] Therefore, similar epigenetic changes in older human fathers are likely.[53] Schizophrenia studies provide evidence that the nature versus nurture debate in the field of psychopathology should be re-evaluated to accommodate the concept that genes and the environment work in tandem. As such, many other environmental factors (e.g., nutritional deficiencies and cannabis use) have been proposed to increase the susceptibility to psychotic disorders like schizophrenia via epigenetics.[53]

Evidence for epigenetic modifications for bipolar disorder is unclear.[55] One study found hypomethylation of a gene promoter of a prefrontal lobe enzyme (i.e., membrane-bound catechol-O-methyl transferase, or COMT) in post-mortem brain samples from individuals with bipolar disorder. COMT is an enzyme that metabolizes dopamine in the synapse. These findings suggest that the hypomethylation of the promoter results in over-expression of the enzyme. In turn, this results in increased degradation of dopamine in the brain. These findings provide evidence that epigenetic modification in the prefrontal lobe is a risk factor for bipolar disorder.[56] However, a second study found no epigenetic differences in post-mortem brains from bipolar individuals.[57]

The causes of major depressive disorder (MDD) are poorly understood from a neuroscience perspective.[58] The epigenetic changes leading to changes in glucocorticoid receptor expression, and their effect on the HPA stress system discussed above, have also been applied to attempts to understand MDD.[59]

Much of the work in animal models has focused on the indirect downregulation of brain-derived neurotrophic factor (BDNF) by over-activation of the stress axis.[60][61] Studies in various rodent models of depression, often involving induction of stress, have found direct epigenetic modulation of BDNF as well.[62]

Epigenetics may be relevant to aspects of psychopathic behaviour through methylation and histone modification.[63] These processes are heritable but can also be influenced by environmental factors such as smoking and abuse.[64] Epigenetics may be one of the mechanisms through which the environment can impact the expression of the genome.[65] Studies have also linked methylation of genes associated with nicotine and alcohol dependence in women, ADHD, and drug abuse.[66][67][68] It is probable that epigenetic regulation, as well as methylation profiling, will play an increasingly important role in the study of the interplay between the environment and the genetics of psychopathy.[69]

A study of the brains of 24 suicide completers, 12 of whom had a history of child abuse and 12 who did not, found decreased levels of glucocorticoid receptor in victims of child abuse and associated epigenetic changes.[70]

Several studies have indicated that DNA cytosine methylation is linked to the social behavior of insects such as honeybees and ants. In honeybees, when a nurse bee switches from her in-hive tasks to foraging outside, cytosine methylation marks change; when a forager bee is reversed to nurse duties, the cytosine methylation marks are also reversed.[71] Knocking down DNMT3 in larvae changed workers to a queen-like phenotype.[72] Queens and workers are two distinct castes with different morphology, behavior, and physiology. Studies of DNMT3 silencing also indicated that DNA methylation may regulate alternative splicing and pre-mRNA maturation.[73]

Many researchers contribute information to the Human Epigenome Consortium.[74] The aim of future research is to reprogram epigenetic changes to help with addiction, mental illness, age related changes,[2] memory decline, and other issues.[1] However, the sheer volume of consortium-based data makes analysis difficult.[2] Most studies also focus on one gene.[75] In actuality, many genes and interactions between them likely contribute to individual differences in personality, behaviour and health.[76] As social scientists often work with many variables, determining the number of affected genes also poses methodological challenges. More collaboration between medical researchers, geneticists and social scientists has been advocated to increase knowledge in this field of study.[77]

Limited access to human brain tissue poses a challenge to conducting human research.[2] Not yet knowing whether epigenetic changes in the blood and other (non-brain) tissues parallel modifications in the brain places even greater reliance on brain research.[74] Although some epigenetic studies have translated findings from animals to humans,[70] some researchers caution about the extrapolation of animal studies to humans.[1] One view notes that when animal studies do not consider how the subcellular and cellular components, organs and the entire individual interact with the influences of the environment, results are too reductive to explain behaviour.[76]

Some researchers note that epigenetic perspectives will likely be incorporated into pharmacological treatments.[8] Others caution that more research is necessary as drugs are known to modify the activity of multiple genes and may, therefore, cause serious side effects.[1] However, the ultimate goal is to find patterns of epigenetic changes that can be targeted to treat mental illness, and reverse the effects of childhood stressors, for example. If such treatable patterns eventually become well-established, the inability to access brains in living humans to identify them poses an obstacle to pharmacological treatment.[74] Future research may also focus on epigenetic changes that mediate the impact of psychotherapy on personality and behaviour.[23]

Most epigenetic research is correlational; it merely establishes associations. More experimental research is necessary to help establish causation.[78] Lack of resources has also limited the number of intergenerational studies.[2] Therefore, advancing longitudinal[77] and multigenerational, experience-dependent studies will be critical to further understanding the role of epigenetics in psychology.[5]

Follow this link:
Behavioral epigenetics - Wikipedia

Posted in Epigenetics | Comments Off on Behavioral epigenetics – Wikipedia

Stem Cell Collection – University of Kansas Hospital

Posted: October 30, 2016 at 9:48 pm

You or a donor provide stem cells for transplant

Stem cell collection is a relatively painless procedure. It takes three to five days to collect your stem cells for an autologous transplant. It takes one to two days to collect donor stem cells for an allogeneic transplant. For either type of transplant, it takes four or five hours for each day's collection.

If your donor is related, we will evaluate him or her before the stem cell collection. This will include an electrocardiogram, chest X-ray and blood tests. The evaluation and collection process will take seven to 10 days.

We match patients with unrelated donors who are registered with the National Marrow Donor Program. These donors are evaluated in their home cities before they donate.

Peripheral blood stem cell transplants have become more common than bone marrow stem cell transplants. Peripheral stem cells are easier to collect because donors don't require general anesthesia during collection.

During the collection procedure, called apheresis, we draw the donor's blood and pass it through a machine that separates out the stem cells. The remaining blood is returned to the donor.

Before and during collection, the donor will have injections of protein growth factors. Called mobilization, this stimulates the bone marrow to increase the number of stem cells in the blood.

We collect bone marrow stem cells from the pelvic bone and freeze them until the time of transplant. Because this requires general anesthesia, donors may stay in the hospital overnight.

The rest is here:
Stem Cell Collection - University of Kansas Hospital

Posted in Kansas Stem Cells | Comments Off on Stem Cell Collection – University of Kansas Hospital

Human Genetic Enhancements: A Transhumanist Perspective

Posted: October 30, 2016 at 5:47 am

1. What is Transhumanism?

Transhumanism is a loosely defined movement that has developed gradually over the past two decades. It promotes an interdisciplinary approach to understanding and evaluating the opportunities for enhancing the human condition and the human organism opened up by the advancement of technology. Attention is given to both present technologies, like genetic engineering and information technology, and anticipated future ones, such as molecular nanotechnology and artificial intelligence.1

The enhancement options being discussed include radical extension of human health-span, eradication of disease, elimination of unnecessary suffering, and augmentation of human intellectual, physical, and emotional capacities.2 Other transhumanist themes include space colonization and the possibility of creating superintelligent machines, along with other potential developments that could profoundly alter the human condition. The ambit is not limited to gadgets and medicine, but also encompasses economic, social, and institutional designs, cultural development, and psychological skills and techniques.

Transhumanists view human nature as a work-in-progress, a half-baked beginning that we can learn to remold in desirable ways. Current humanity need not be the endpoint of evolution. Transhumanists hope that by responsible use of science, technology, and other rational means we shall eventually manage to become post-human, beings with vastly greater capacities than present human beings have.

Some transhumanists take active steps to increase the probability that they personally will survive long enough to become post-human, for example by choosing a healthy lifestyle or by making provisions for having themselves cryonically suspended in case of de-animation.3 In contrast to many other ethical outlooks, which in practice often reflect a reactionary attitude to new technologies, the transhumanist view is guided by an evolving vision to take a more active approach to technology policy. This vision, in broad strokes, is to create the opportunity to live much longer and healthier lives, to enhance our memory and other intellectual faculties, to refine our emotional experiences and increase our subjective sense of well-being, and generally to achieve a greater degree of control over our own lives. This affirmation of human potential is offered as an alternative to customary injunctions against playing God, messing with nature, tampering with our human essence, or displaying punishable hubris.

Transhumanism does not entail technological optimism. While future technological capabilities carry immense potential for beneficial deployments, they also could be misused to cause enormous harm, ranging all the way to the extreme possibility of intelligent life becoming extinct. Other potential negative outcomes include widening social inequalities or a gradual erosion of the hard-to-quantify assets that we care deeply about but tend to neglect in our daily struggle for material gain, such as meaningful human relationships and ecological diversity. Such risks must be taken very seriously, as thoughtful transhumanists fully acknowledge.4

Transhumanism has roots in secular humanist thinking, yet is more radical in that it promotes not only traditional means of improving human nature, such as education and cultural refinement, but also direct application of medicine and technology to overcome some of our basic biological limits.

2. A Core Transhumanist Value: Exploring the Post-human Realm

The range of thoughts, feelings, experiences, and activities that are accessible to human organisms presumably constitute only a tiny part of what is possible. There is no reason to think that the human mode of being is any more free of limitations imposed by our biological nature than are the modes of being of other animals. Just as chimpanzees lack the brainpower to understand what it is like to be human, so too do we lack the practical ability to form a realistic intuitive understanding of what it would be like to be post-human.

This point is distinct from any principled claims about impossibility. We need not assert that post-humans would not be Turing computable or that their concepts could not be expressed by any finite sentences in human language. The impossibility is more like the impossibility for us to visualize a twenty-dimensional hypersphere or to read, with perfect recollection and understanding, every book in the Library of Congress. Our own current mode of being, therefore, spans but a minute subspace of what is possible or permitted by the physical constraints of the universe. It is not farfetched to suppose that there are parts of this larger space that represent extremely valuable ways of living, feeling, and thinking.

We can conceive of aesthetic and contemplative pleasures whose blissfulness vastly exceeds what any human being has yet experienced. We can imagine beings that reach a much greater level of personal development and maturity than current human beings do, because they have the opportunity to live for hundreds or thousands of years with full bodily and psychic vigor. We can conceive of beings that are much smarter than us, that can read books in seconds, that are much more brilliant philosophers than we are, and that can create artworks which, even if we could understand them only on the most superficial level, would strike us as wonderful masterpieces. We can imagine love that is stronger, purer, and more secure than any human being has yet harbored. Our everyday intuitions about values are constrained by the narrowness of our experience and the limitations of our powers of imagination. We should leave room in our thinking for the possibility that as we develop greater capacities, we shall come to discover values that will strike us as being of a far higher order than those we can realize as un-enhanced biological human beings.

The conjecture that there are greater values than we can currently fathom does not imply that values are not defined in terms of our current dispositions. Take, for example, a dispositional theory of value such as the one described by David Lewis.5 According to Lewis's theory, something is a value for you if and only if you would want to want it if you were perfectly acquainted with it and you were thinking and deliberating as clearly as possible about it. On this view, there may be values that we do not currently want, and that we do not even currently want to want, because we may not be perfectly acquainted with them or because we are not ideal deliberators. Some values pertaining to certain forms of post-human existence may well be of this sort; they may be values for us now, and they may be so in virtue of our current dispositions, and yet we may not be able to fully appreciate them with our current limited deliberative capacities and our lack of the receptive faculties required for full acquaintance with them. This point is important because it shows that the transhumanist view that we ought to explore the realm of post-human values does not entail that we should forego our current values. The post-human values can be our current values, albeit ones that we have not yet clearly comprehended. Transhumanism does not require us to say that we should favor post-human beings over human beings, but that the right way of favoring human beings is by enabling us to realize our ideals better and that some of our ideals may well be located outside the space of modes of being that are accessible to us with our current biological constitution.
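
The structure of Lewis's biconditional can be displayed compactly. What follows is a schematic rendering of our own, introduced purely for illustration (the predicate letters are not Lewis's); it uses a counterfactual conditional, since the analysis concerns what one would want under idealized conditions that may never actually obtain:

% Dispositional theory of value (after Lewis 1989), schematically.
% Requires amssymb for \square. Predicate letters are illustrative only:
%   V(x,A): x is a value for agent A
%   I(A,x): A is perfectly acquainted with x and deliberating about it
%           as clearly as possible
%   W(A,x): A wants to want x
% The square-arrow connective is the counterfactual
% "if it were the case that ... it would be the case that ...".
\[
  V(x,A) \;\longleftrightarrow\;
  \bigl( I(A,x) \;\square\!\!\rightarrow\; W(A,x) \bigr)
\]

Read this way, the right-hand side can be true of us now, in virtue of our current dispositions, even if I(A,x) never actually obtains, which is exactly the sense in which unfathomed post-human values can already be values for us.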

We can overcome many of our biological limitations, but it is possible that there are some limitations that are impossible for us to transcend, not only because of technological difficulties but on metaphysical grounds. Depending on our views about what constitutes personal identity, it could be that certain modes of being, while possible, are not possible for us, because any being of such a kind would be so different from us that it could not be us. Concerns of this kind are familiar from theological discussions of the afterlife. In Christian theology, some souls will be allowed by God to go to heaven after their time as corporeal creatures is over. Before being admitted to heaven, the souls would undergo a purification process in which they would lose many of their previous bodily attributes. Skeptics may doubt that the resulting minds would be sufficiently similar to our current minds for them to be the same persons. A similar predicament arises within transhumanism: if the mode of being of a post-human being is radically different from that of a human being, then we may doubt whether a post-human being could be the same person as a human being, even if the post-human being originated from a human being.

We can, however, envision many enhancements that would not make it impossible for the post-transformation someone to be the same person as the pre-transformation person. A person could obtain considerably increased life expectancy, intelligence, health, memory, and emotional sensitivity without ceasing to exist in the process. A person's intellectual life can be transformed radically by getting an education. A person's life expectancy can be extended substantially by being unexpectedly cured of a lethal disease. Yet these developments are not viewed as spelling the end of the original person. In particular, it seems that modifications that add to a person's capacities can be more substantial than modifications that subtract, such as brain damage. If most of what someone currently is, including her most important memories, activities, and feelings, is preserved, then adding extra capacities on top of that would not easily cause the person to cease to exist.

Preservation of personal identity, especially if this notion is given a narrow construal, is not everything. We can value other things than ourselves, or we might regard it as satisfactory if some parts or aspects of ourselves survive and flourish, even if that entails giving up some parts of ourselves such that we no longer count as being the same person. Which parts of ourselves we might be willing to sacrifice may not become clear until we are more fully acquainted with the full meaning of the options. A careful, incremental exploration of the post-human realm may be indispensable for acquiring such an understanding, although we may also be able to learn from each other's experiences and from works of the imagination. Additionally, we may favor future people being post-human rather than human, if the post-humans would lead lives more worthwhile than the alternative humans would. Any reasons stemming from such considerations would not depend on the assumption that we ourselves could become post-human beings.

Transhumanism promotes the quest to develop further so that we can explore hitherto inaccessible realms of value. Technological enhancement of human organisms is a means that we ought to pursue to this end. There are limits to how much can be achieved by low-tech means such as education, philosophical contemplation, moral self-scrutiny and other such methods proposed by classical philosophers with perfectionist leanings, including Plato, Aristotle, and Nietzsche, or by means of creating a fairer and better society, as envisioned by social reformists such as Marx or Martin Luther King. This is not to denigrate what we can do with the tools we have today. Yet ultimately, transhumanists hope to go further.

3. The Morality of Human Germ-Line Genetic Engineering

Most potential human enhancement technologies have so far received scant attention in the ethics literature. One exception is genetic engineering, the morality of which has been extensively debated in recent years. To illustrate how the transhumanist approach can be applied to particular technologies, we shall therefore now turn to consider the case of human germ-line genetic enhancements.

Certain types of objection against germ-line modifications are not accorded much weight by a transhumanist interlocutor. For instance, objections that are based on the idea that there is something inherently wrong or morally suspect in using science to manipulate human nature are regarded by transhumanists as wrongheaded. Moreover, transhumanists emphasize that particular concerns about negative aspects of genetic enhancements, even when such concerns are legitimate, must be judged against the potentially enormous benefits that could come from genetic technology successfully employed.6 For example, many commentators worry about the psychological effects of the use of germ-line engineering. The ability to select the genes of our children and to create so-called "designer babies" will, it is claimed, corrupt parents, who will come to view their children as mere products.7 We will then begin to evaluate our offspring according to standards of quality control, and this will undermine the ethical ideal of unconditional acceptance of children, no matter what their abilities and traits. Are we really prepared to sacrifice on the altar of consumerism even those deep values that are embodied in traditional relationships between child and parents? Is the quest for perfection worth this cultural and moral cost? A transhumanist should not dismiss such concerns as irrelevant. Transhumanists recognize that the depicted outcome would be bad. We do not want parents to love and respect their children less. We do not want social prejudice against people with disabilities to get worse. The psychological and cultural effects of commodifying human nature are potentially important.

But such dystopian scenarios are speculations. There is no firm ground for believing that the alleged consequences would actually happen. What relevant evidence we have, for instance regarding the treatment of children who have been conceived through the use of in vitro fertilization or embryo screening, suggests that the pessimistic prognosis is alarmist. Parents will in fact love and respect their children even when artificial means and conscious choice play a part in procreation.

We might speculate, instead, that germ-line enhancements will lead to more love and parental dedication. Some mothers and fathers might find it easier to love a child who, thanks to enhancements, is bright, beautiful, healthy, and happy. The practice of germ-line enhancement might lead to better treatment of people with disabilities, because a general demystification of the genetic contributions to human traits could make it clearer that people with disabilities are not to blame for their disabilities, and because a decreased incidence of some disabilities could lead to more assistance being available for the remaining affected people, enabling them to live full, unrestricted lives through various technological and social supports. Speculating about possible psychological or cultural effects of germ-line engineering can therefore cut both ways. Good consequences no less than bad ones are possible. In the absence of sound arguments for the view that the negative consequences would predominate, such speculations provide no reason against moving forward with the technology.

Ruminations over hypothetical side-effects may serve to make us aware of things that could go wrong so that we can be on the lookout for untoward developments. By being aware of the perils in advance, we will be in a better position to take preventive countermeasures. For instance, if we think that some people would fail to realize that a human clone would be a unique person deserving just as much respect and dignity as any other human being, we could work harder to educate the public on the inadequacy of genetic determinism. The theoretical contributions of well-informed and reasonable critics of germ-line enhancement could indirectly add to our justification for proceeding with germ-line engineering. To the extent that the critics have done their job, they can alert us to many of the potential untoward consequences of germ-line engineering and contribute to our ability to take precautions, thus improving the odds that the balance of effects will be positive. There may well be some negative consequences of human germ-line engineering that we will not forestall, though of course the mere existence of negative effects is not a decisive reason not to proceed. Every major technology has some negative consequences. Only after a fair comparison of the risks with the likely positive consequences can any conclusion based on a cost-benefit analysis be reached.

In the case of germ-line enhancements, the potential gains are enormous. Only rarely, however, are the potential gains discussed, perhaps because they are too obvious to be of much theoretical interest. By contrast, uncovering subtle and non-trivial ways in which manipulating our genome could undermine deep values is philosophically a lot more challenging. But if we think about it, we recognize that the promise of genetic enhancements is anything but insignificant. Being free from severe genetic diseases would be good, as would having a mind that can learn more quickly, or having a more robust immune system. Healthier, wittier, happier people may be able to reach new levels culturally. To achieve a significant enhancement of human capacities would be to embark on the transhuman journey of exploration of some of the modes of being that are not accessible to us as we are currently constituted, possibly to discover and to instantiate important new values. On an even more basic level, genetic engineering holds great potential for alleviating unnecessary human suffering. Every day that the introduction of effective human genetic enhancement is delayed is a day of lost individual and cultural potential, and a day of torment for many unfortunate sufferers of diseases that could have been prevented. Seen in this light, proponents of a ban or a moratorium on human genetic modification must take on a heavy burden of proof in order to have the balance of reason tilt in their favor. Transhumanists conclude that the challenge has not been met.

4. Should Human Reproduction be Regulated?

One way of going forward with genetic engineering is to permit everything, leaving all choices to parents. While this attitude may be consistent with transhumanism, it is not the best transhumanist approach. One thing that can be said for adopting a libertarian stance in regard to human reproduction is the sorry track record of socially planned attempts to improve the human gene pool. The list of historical examples of state intervention in this domain ranges from the genocidal horrors of the Nazi regime, to the incomparably milder but still disgraceful semi-coercive sterilization programs of mentally impaired individuals favored by many well-meaning socialists in the past century, to the controversial but perhaps understandable program of the current Chinese government to limit population growth. In each case, state policies interfered with the reproductive choices of individuals. If parents had been left to make the choices for themselves, the worst transgressions of the eugenics movement would not have occurred. Bearing this in mind, we ought to think twice before giving our support to any proposal that would have the state regulate what sort of children people are allowed to have and the methods that may be used to conceive them.8

We currently permit governments to have a role in reproduction and child-rearing, and we may reason by extension that the state would likewise have a role in regulating the application of genetic reproductive technology. State agencies and regulators play a supportive and supervisory role, attempting to promote the interests of the child. Courts intervene in cases of child abuse or neglect. Some social policies are in place to support children from disadvantaged backgrounds and to ameliorate some of the worst inequities suffered by children from poor homes, such as through the provision of free schooling. These measures have analogues that apply to genetic enhancement technologies. For example, we ought to outlaw genetic modifications that are intended to damage the child or limit its opportunities in life, or that are judged to be too risky. If there are basic enhancements that would be beneficial for a child but that some parents cannot afford, then we should consider subsidizing those enhancements, just as we do with basic education. There are also grounds for thinking that the libertarian approach is less appropriate in the realm of reproduction than it is in other areas. In reproduction, the most important interests at stake are those of the child-to-be, who cannot give his or her advance consent or freely enter into any form of contract. As it is, we already approve of many measures that limit parental freedoms. We have laws against child abuse and child neglect. We have obligatory schooling. In some cases, we can force needed medical treatment on a child, even against the wishes of its parents.

There is a difference between these social interventions with regard to children and interventions aimed at genetic enhancements. While there is a consensus that nobody should be subjected to child abuse and that all children should have at least a basic education and should receive necessary medical care, it is unlikely that we will reach an agreement on proposals for genetic enhancements any time soon. Many parents will resist such proposals on principled grounds, including deep-seated religious or moral convictions. The best policy for the foreseeable future may therefore be to not legally require any genetic enhancements, except perhaps in extreme cases for which there is no alternative treatment. Even in such cases, it is dubious that the social climate in many countries is ready for mandatory genetic interventions.

The scope for ethics and public policy, however, extends far beyond the passing of laws requiring or banning specific interventions. Even if a given enhancement option is neither outlawed nor legally required, we may still seek to discourage or encourage its use in a variety of ways. Through subsidies and taxes, research-funding policies, genetic counseling practices and guidelines, laws regulating genetic information and genetic discrimination, provision of health care services, regulation of the insurance industry, patent law, education, and through the allocation of social approbation and disapproval, we may influence the direction in which particular technologies are applied. We may appropriately ask, with regard to genetic enhancement technologies, which types of applications we ought to promote or discourage.

5. Which Modifications Should Be Promoted and Which Discouraged?

An externality, as understood by economists, is a cost or a benefit of an action that is not carried by the decision-maker. An example of a negative externality might be found in a firm that lowers its production costs by polluting the environment. The firm enjoys most of the benefits while escaping the costs, such as environmental degradation, which may instead be paid by people living nearby. Externalities can also be positive, as when people put time and effort into creating a beautiful garden outside their house. The effects are enjoyed not exclusively by the gardeners but spill over to passersby. As a rule of thumb, sound social policy and social norms would have us internalize many externalities so that the incentives of producers more closely match the social value of production. We may levy a pollution tax on the polluting firm, for instance, and give our praise to the home gardeners who beautify the neighborhood.
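
The internalization rule of thumb admits a compact statement. As a minimal sketch (the symbols b, e, and t are ours, introduced only for illustration): if an action yields a private benefit b to the decision-maker and an external effect e to everyone else, then

% Internalizing an externality, schematically (requires amsmath for \text).
%   b — private benefit to the decision-maker
%   e — external effect on others (e < 0 for the polluter,
%       e > 0 for the gardener)
%   t — corrective transfer (a tax if t < 0, a subsidy if t > 0)
\[
  \text{social value} \;=\; b + e ,
  \qquad
  \text{private payoff} \;=\; b + t .
\]
% Setting t = e makes the private payoff coincide with the social value,
% so a self-interested decision-maker acts exactly when b + e > 0:
\[
  t = e \;\Longrightarrow\; b + t \;=\; b + e .
\]

On this accounting, the pollution tax and the praise for the gardeners are simply the two signs of the same corrective transfer t.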

Genetic enhancements aimed at the obtainment of goods that are desirable only insofar as they provide a competitive advantage tend to have negative externalities. An example of such a positional good, as economists call them, is stature. There is evidence that being tall is statistically advantageous, at least for men in Western societies. Taller men earn more money, wield greater social influence, and are viewed as more sexually attractive. Parents wanting to give their child the best possible start in life may rationally choose a genetic enhancement that adds an inch or two to the expected height of their offspring. Yet for society as a whole, there seems to be no advantage whatsoever in people being taller. If everybody grew two inches, nobody would be better off than they were before. Money spent on a positional good like height has little or no net effect on social welfare and is therefore, from society's point of view, wasted.
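
The two-inch observation can be made precise in a stylized model (the notation is ours, chosen only for illustration): suppose each person's payoff from stature depends solely on how his or her height compares with the average.

% A purely positional good: payoff depends only on relative standing.
%   h_i — person i's height;  f — any increasing function
\[
  u_i \;=\; f\!\bigl(h_i - \bar{h}\bigr),
  \qquad
  \bar{h} \;=\; \frac{1}{n}\sum_{j=1}^{n} h_j .
\]
% A uniform enhancement h_i -> h_i + \delta raises the average by the
% same \delta, so every difference, and hence every payoff, is unchanged:
\[
  (h_i + \delta) - \bigl(\bar{h} + \delta\bigr) \;=\; h_i - \bar{h} .
\]

Whatever was spent purchasing the uniform increment bought no change in anyone's payoff; this is the precise sense in which expenditure on purely positional goods is socially wasted.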

Health is a very different type of good. It has intrinsic benefits. If we become healthier, we are personally better off and others are not any worse off. There may even be a positive externality of enhancing our own health. If we are less likely to contract a contagious disease, others benefit by being less likely to get infected by us. Healthier people may also contribute more to society and consume less publicly funded healthcare.

If we were living in a simple world where people were perfectly rational self-interested economic agents and where social policies had no costs or unintended effects, then the basic policy prescription regarding genetic enhancements would be relatively straightforward. We should internalize the externalities of genetic enhancements by taxing enhancements that have negative externalities and subsidizing enhancements that have positive externalities. Unfortunately, crafting policies that work well in practice is considerably more difficult. Even determining the net size of the externalities of a particular genetic enhancement can be difficult. There is clearly an intrinsic value to enhancing memory or intelligence, inasmuch as most of us would like to be a bit smarter, even if that did not have the slightest effect on our standing in relation to others. But there would also be important externalities, both positive and negative. On the negative side, others would suffer some disadvantage from our increased brainpower in that their own competitive situation would be worsened. Being more intelligent, we would be more likely to attain high-status positions in society, positions that would otherwise have been enjoyed by a competitor. On the positive side, others might benefit from enjoying witty conversations with us and from our increased taxes.

If in the case of intelligence enhancement the positive externalities outweigh the negative ones, then a prima facie case exists not only for permitting genetic enhancements aimed at increasing intellectual ability, but for encouraging and subsidizing them too. Whether such policies remain a good idea when all practicalities of implementation and political realities are taken into account is another matter. But at least we can conclude that an enhancement that has both significant intrinsic benefits for an enhanced individual and net positive externalities for the rest of society should be encouraged. By contrast, enhancements that confer only positional advantages, such as augmentation of stature or physical attractiveness, should not be socially encouraged, and we might even attempt to make a case for social policies aimed at reducing expenditure on such goods, for instance through a progressive tax on consumption.9

6. The Issue of Equality

One important kind of externality in germ-line enhancements is their effect on social equality. This has been a focus for many opponents of germ-line genetic engineering, who worry that it will widen the gap between haves and have-nots. Today, children from wealthy homes enjoy many environmental privileges, including access to better schools and social networks. Arguably, this constitutes an inequity against children from poor homes. We can imagine scenarios where such inequities grow much larger thanks to genetic interventions that only the rich can afford, adding genetic advantages to the environmental advantages already benefiting privileged children. We could even speculate about the members of the privileged stratum of society eventually enhancing themselves and their offspring to a point where the human species, for many practical purposes, splits into two or more species that have little in common except a shared evolutionary history.10 The genetically privileged might become ageless, healthy super-geniuses of flawless physical beauty, who are graced with a sparkling wit and a disarmingly self-deprecating sense of humor, radiating warmth, empathetic charm, and relaxed confidence. The non-privileged would remain as people are today but perhaps deprived of some of their self-respect and suffering occasional bouts of envy. Mobility between the lower and upper classes might disappear, and a child born to poor parents, lacking genetic enhancements, might find it impossible to successfully compete against the super-children of the rich. Even if no discrimination or exploitation of the lower class occurred, there would still be something disturbing about the prospect of a society with such extreme inequalities.

While we have vast inequalities today and regard many of these as unfair, we also accept a wide range of inequalities because we think that they are deserved, have social benefits, or are unavoidable concomitants to free individuals making their own and sometimes foolish choices about how to live their lives. Some of these justifications can also be used to exonerate some inequalities that could result from germ-line engineering. Moreover, the increase in unjust inequalities due to technology is not a sufficient reason for discouraging the development and use of the technology. We must also consider its benefits, which include not only positive externalities but also intrinsic values that reside in such goods as the enjoyment of health, a soaring mind, and emotional well-being.

We can also try to counteract some of the inequality-increasing tendencies of enhancement technology with social policies. One way of doing so would be to widen access to the technology by subsidizing it or providing it for free to children of poor parents. In cases where the enhancement has considerable positive externalities, such a policy may actually benefit everybody, not just the recipients of the subsidy. In other cases, we could support the policy on the basis of social justice and solidarity.

Even if all genetic enhancements were made available to everybody for free, however, this might still not completely allay the concern about inequity. Some parents might choose not to give their children any enhancements. The children would then have diminished opportunities through no fault of their own. It would be peculiar, however, to argue that governments should respond to this problem by limiting the reproductive freedom of the parents who wish to use genetic enhancements. If we are willing to limit reproductive freedom through legislation for the sake of reducing inequities, then we might as well make some enhancements obligatory for all children. By requiring genetic enhancements for everybody to the same degree, we would not only prevent an increase in inequalities but also reap the intrinsic benefits and the positive externalities that would come from the universal application of enhancement technology. If reproductive freedom is regarded as too precious to be curtailed, then neither requiring nor banning the use of reproductive enhancement technology is an available option. In that case, we would either have to tolerate inequities as a price worth paying for reproductive freedom or seek to remedy the inequities in ways that do not infringe on reproductive freedom.

All of this is based on the hypothesis that germ-line engineering would in fact increase inequalities if left unregulated and no countermeasures were taken. That hypothesis might be false. In particular, it might turn out to be technologically easier to cure gross genetic defects than to enhance an already healthy genetic constitution. We currently know much more about many specific inheritable diseases, some of which are due to single-gene defects, than we do about the genetic basis of talents and desirable qualities such as intelligence and longevity, which in all likelihood are encoded in complex constellations of multiple genes. If this turns out to be the case, then the trajectory of human genetic enhancement may be one in which the first thing to happen is that the lot of the genetically worst-off is radically improved, through the elimination of diseases such as Tay-Sachs, Lesch-Nyhan, Down syndrome, and early-onset Alzheimer's disease. This would have a major leveling effect on inequalities, not primarily in the monetary sense, but with respect to the even more fundamental parameters of basic opportunities and quality of life.

7. Are Germ-Line Interventions Wrong Because They Are Irreversible?

Another frequently heard objection against germ-line genetic engineering is that it would be uniquely hazardous because the changes it would bring are irreversible and would affect all generations to come. It would be highly irresponsible and arrogant of us to presume we have the wisdom to make decisions about what should be the genetic constitutions of people living many generations hence. Human fallibility, on this objection, gives us good reason not to embark on germ-line interventions. For our present purposes, we can set aside the issue of the safety of the procedure, understood narrowly, and stipulate that the risk of medical side-effects has been reduced to an acceptable level. The objection under consideration concerns the irreversibility of germ-line interventions and the lack of predictability of their long-term consequences; it forces us to ask if we possess the requisite wisdom for making genetic choices on behalf of future generations.

Human fallibility is not a conclusive ground for resisting germ-line genetic enhancements. The claim that such interventions would be irreversible is incorrect. Germ-line interventions can be reversed by other germ-line interventions. Moreover, considering that technological progress in genetics is unlikely to grind to an abrupt halt any time soon, we can count on future generations being able to reverse our current germ-line interventions even more easily than we can currently implement them. With advanced genetic technology, it might even be possible to reverse many germ-line modifications with somatic gene therapy, or with medical nanotechnology.11 Technologically, germ-line changes are perfectly reversible by future generations.

It is possible that future generations might choose to retain the modifications that we make. If that turns out to be the case, then the modifications, while not irreversible, would nevertheless not actually be reversed. This might be a good thing. The possibility of permanent consequences is not an objection against germ-line interventions any more than it is against social reforms. The abolition of slavery and the introduction of general suffrage might never be reversed; indeed, we hope they will not be. Yet this is no reason for people to have resisted the reforms. Likewise, the potential for everlasting consequences, including ones we cannot currently reliably forecast, in itself constitutes no reason to oppose genetic intervention. If immunity against horrible diseases and enhancements that expand the opportunities for human growth are passed on to subsequent generations in perpetuity, that would be a cause for celebration, not regret.

There are some kinds of changes that we need to be particularly careful about. They include modifications of the drives and motivations of our descendants. For example, there are obvious reasons why we might think it worthwhile to seek to reduce our children's propensity to violence and aggression. We would have to take care, however, that we do not do this in a way that would make future people overly submissive or complacent. We can conceive of a dystopian scenario along the lines of Brave New World, in which people lead shallow lives but have been manipulated to be perfectly content with their sub-optimal existence. If the people transferred their shallow values to their children, humanity could get permanently stuck in a not-very-good state, having foolishly changed itself to lack any desire to strive for something better. This outcome would be dystopian because a permanent cap on human development would destroy the transhumanist hope of exploring the post-human realm. Transhumanists therefore place an emphasis on modifications which, in addition to promoting human well-being, open more possibilities than they close and increase our ability to make subsequent choices wisely. Longer active lifespans, better memory, and greater intellectual capacities are plausible candidates for enhancements that would improve our ability to figure out what we ought to do next. They would be a good place to start.12

Notes

1. See K. Eric Drexler, Nanosystems: Molecular Machinery, Manufacturing, and Computation (New York: John Wiley & Sons, 1992); Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence (New York: Viking, 1999); Hans Moravec, Robot: Mere Machine to Transcendent Mind (New York: Oxford University Press, 1999).

2. See Robert A. Freitas Jr., Nanomedicine, Volume 1: Basic Capabilities (Georgetown, Tex.: Landes Bioscience, 1999).

3. See Robert Ettinger, The Prospect of Immortality (New York: Doubleday, 1964); James Hughes, The Future of Death: Cryonics and the Telos of Liberal Individualism, Journal of Evolution and Technology 6 (2001).

4. See K. Eric Drexler, Engines of Creation: The Coming Era of Nanotechnology (London: Fourth Estate, 1985).

5. See David Lewis, Dispositional Theories of Value, Proceedings of the Aristotelian Society Supp. 63 (1989).

6. See Erik Parens, ed., Enhancing Human Traits: Ethical and Social Implications (Washington, D.C.: Georgetown University Press, 1998).

7. See Leon Kass, Life, Liberty, and Defense of Dignity: The Challenge for Bioethics (San Francisco: Encounter Books, 2002).

8. See Jonathan Glover, What Sort of People Should There Be? (New York: Penguin, 1984); Gregory Stock, Redesigning Humans: Our Inevitable Genetic Future (New York: Houghton Mifflin, 2002); and Allen Buchanan et al., From Chance to Choice: Genetics & Justice (Cambridge, England: Cambridge University Press, 2002).

9. See Robert H. Frank, Luxury Fever: Why Money Fails to Satisfy in an Era of Excess (New York: Free Press, 1999).

10. Cf. Lee M. Silver, Remaking Eden: How Genetic Engineering and Cloning Will Transform the American Family (New York: Avon Books, 1997); and Nancy Kress, Beggars in Spain (New York: Avon Books, 1993).

11. See Freitas, op. cit.

12. For their helpful comments I am grateful to Heather Bradshaw, Robert A. Freitas Jr., James Hughes, Gerald Lang, Matthew Liao, Thomas Magnell, David Rodin, Jeffrey Soreff, Mike Treder, Mark Walker, Michael Weingarten, and an anonymous referee of the Journal of Value Inquiry.

Read the rest here:
Human Genetic Enhancements: A Transhumanist Perspective

Posted in Transhumanist | Comments Off on Human Genetic Enhancements: A Transhumanist Perspective

hCG Diet NJ. The hCG Diet is a physician supervised …

Posted: October 30, 2016 at 5:45 am

by Ramtin Kassir, M.D., F.A.C.S.

The HCG Diet is a physician-supervised weight loss program that can help patients shed 1 to 3 lbs per day by modifying their metabolism and eating habits for long-term results.

HCG (human chorionic gonadotropin) is a hormone that is injected daily (or taken orally with under-the-tongue drops), accompanied by a VLCD (very low calorie diet of approximately 500 calories).

Each course of treatment consists of a minimum of 26 days, with 23 of those days requiring a daily dose of HCG, either through injections or under-the-tongue drops. Treatment can last as long as 43 days (with as many as 40 injections), unless a patient is able to lose 34 to 40 pounds before the allotted time has passed. Patients will not receive HCG injections for the last three days of treatment so the hormone can cycle completely out of their bodies before they resume a normal diet. (It also takes about three days for HCG's effects to "kick in.")

Human chorionic gonadotropin (HCG) is a natural hormone produced by our bodies. It has many functions and is used to treat many medical conditions. It is the hormone that almost completely controls metabolic functions.

Most men and women can use the HCG diet. It is recommended that you coordinate with Dr. Kassir concerning your weight loss, along with the appropriate implementation of the HCG protocol.

HCG, along with the Modified Simeons Diet, causes your hypothalamus to mobilize fat from your abnormal fat cells to make it available for use. While you are only taking in ~500 calories, your hypothalamus is continually releasing the fat stored in your body. It maintains a normal basal metabolic rate and resets the hypothalamus to prevent future regain.

Yes! Men usually get better results than women, and it is just as safe for men as for women. HCG is actually already found in men; in fact, it is present in every human tissue, in males and non-pregnant females alike.

The hypothalamus gland moderates the thyroid, the adrenals, fat storage and, even more importantly, your metabolic rate.

Absolutely! All women experience very high levels of HCG during the nine months of their pregnancy with no adverse effects. In comparison, the amount used for weight loss is minuscule and has very few, if any, side effects.

The HCG is taken in the form of sublingual drops. The drops are placed under the tongue twice a day (morning and afternoon). HCG can also be injected daily.

That will be determined by you and the doctor based upon how much weight you would like to lose. Those decisions are made during your consultation with Dr. Kassir.

That will depend on your starting point. People with more weight to lose will experience a greater loss. On average, Dr. Kassir's patients lose around 15 to 40 lbs. per month.

The HCG is mobilizing your stored fat and making it available to your body as a source of energy. So even though you are consuming fewer calories, your body has access to the energy you have stored in fat cells. Therefore, you are utilizing thousands of calories that already exist in your body each day. Overall, most patients are not hungry and feel very good while on the program.

Yes. The HCG allows you to maintain the diet even while performing hard work or vigorous exercise. By mobilizing and using the abnormal fat stores for energy, you prevent the breakdown of lean muscle.

Some degree of moderation in eating will be necessary due to the tendency to gain weight after any type of weight loss program. However, stability of normal weight is relatively easy because the weight loss is from stored fat and not structural fat.

HCG is sometimes called a weight-loss cure because, after taking it for weight loss, it reprograms your body to use stored fat for energy when your calorie intake is reduced for a period of time. In other words, it helps you to maintain your weight and not gain it back once normal caloric intake is resumed.

Hormone balancing is not necessary. However, those with balanced hormones are seeing greater results and have more success in maintaining that loss, not to mention the overall health benefits and feeling of well-being experienced with balanced hormones.

Originally posted here:
hCG Diet NJ. The hCG Diet is a physician supervised ...

Posted in HCG Diet | Comments Off on hCG Diet NJ. The hCG Diet is a physician supervised …

Home – StemCell ARTS

Posted: October 30, 2016 at 5:44 am

Ready to rid yourself of pain and discomfort without undergoing surgery? Find out if you're a candidate for regenerative procedures.

Are You A Candidate?

Platelet Rich Plasma (PRP) uses concentrated bioactive proteins to help regenerate and heal bone and initiate connective tissue repair.

More About PRP

I was back in the gym within the first month. Over time, the treatments fully healed my shoulders and knees, and allowed me to live my life again Read more

I will continue to come back here because this is the only thing that allowed me to have my life back. Read more

52 Thursdays is a prominent fashion and lifestyle blog, covering both the east and west coasts. The blog has been featured in Northern Virginia Magazine and Brightest Young Things, among many others. The blog focuses on style, beauty, fitness, and travel. Sarah Phillips, co-founder of… read more

When beginning the process of researching regenerative treatment options, patients can be presented with an overwhelming amount of information. This information can often result in more questions. Am I a candidate for regenerative therapies? How can these therapies help treat… read more

For those interested in getting involved in an exercise routine, it is hard not to come across the world of CrossFit. Over the past couple of years, there has been a significant increase in this fitness regimen that combines resistance and… read more

Experiencing a pop or clicking in the hip joint is often something that cannot be ignored. Pain radiating from the hip down the front of the thigh or groin area could be a result of a tear within the hip… read more

Regenerative therapies are gaining more and more popularity within multiple modalities of the health care field. Orthopedic practices, in particular, are seeing an increase in patients due to professional athletes sharing their experiences with stem cells and platelets for treatment of sports injuries… read more

Read more here:
Home - StemCell ARTS

Posted in Virginia Stem Cells | Comments Off on Home – StemCell ARTS

Stem Cell Transplantation – Virginia Cancer

Posted: October 30, 2016 at 5:44 am

Breakthroughs in cancer treatment

High-dose chemotherapy (HDC) and bone marrow or blood stem cell transplantation (SCT) are the best treatments available for many kinds of cancer. To deliver high-dose chemotherapy, stem cells must be collected before treatment for infusion into the patient to support the recovery of the patient's bone marrow.

The SCT procedure was developed more than 35 years ago and was considered such a major development of biomedical science that the individuals responsible were awarded the Nobel Prize in Physiology or Medicine in 1990. Continued refinement has made SCT safer and widely available. In order to determine the role of HDC and SCT for the treatment of cancer, it is important to understand the terminology associated with this increasingly utilized treatment strategy.

Chemotherapy drugs and radiation therapy are used to treat cancer. In certain types of cancer, higher doses of therapy kill more cancer cells than lower doses. When higher doses of therapy kill more cancer than lower doses, doctors say there is a dose-response effect. The delivery of higher doses of therapy is referred to as dose-intensive or high-dose therapy. Unfortunately, the higher doses of therapy used to destroy cancer cells also damage normal cells. The body's normal cells that are most sensitive to destruction by high-dose therapy are the blood-producing stem cells in the bone marrow.

Stem cells are early blood-forming cells that grow and mature in the bone marrow, but can circulate in the blood. When high-dose therapy is used to treat cancer, one of the major side effects is destruction of the stem cells living in the bone marrow. It is important to collect stem cells prior to treatment with high-dose chemotherapy so that the stem cells can then be infused to rescue bone marrow and hasten blood cell production and immune system recovery.

Stem cell transplants are classified based on which individual donates the stem cells and from where the stem cells are collected. Stem cells may be collected from the bone marrow, peripheral blood or umbilical cord. Therefore, the terms bone marrow transplantation, peripheral blood stem cell transplantation and umbilical cord transplantation are utilized. There are important advantages and disadvantages to utilizing stem cells collected from these different sources. The second part of stem cell transplant classification is determined by who donates the stem cells. Stem cells may come from the patient (autologous), an identical twin (syngeneic) or someone other than the patient (allogeneic). Allogeneic stem cells are further classified by whether the individual donating the stem cells is related or unrelated to the patient.

Read the rest here:
Stem Cell Transplantation - Virginia Cancer

Posted in Virginia Stem Cells | Comments Off on Stem Cell Transplantation – Virginia Cancer
