On February 25, the White House hosted a forum on the National Institutes of Health’s Precision Medicine Initiative. This ambitious research study aims to develop targeted drugs and treatments that would vary from individual to individual.
To reach the goal of eventually being able to make specific recommendations for patients based on their own combination of genes, environment and lifestyle, researchers plan to collect that kind of information from one million Americans. The study is this large so that its results can account for diversity among Americans with respect to factors such as ancestry, geography, and social and economic circumstances.
Lots of people would make the same assumption President Obama did — it seems sensible that we would each “own” our genetic information. But the legal reality is quite different. And that could turn out to be a problem, because research projects like the Precision Medicine Initiative rely on participants trusting that their information is protected once they agree to share it.
As scholars with expertise in research ethics, informed consent and health law, we’re conducting research to clarify how different laws apply to information used for genomic research. We’ll identify gaps in those protections and suggest changes that may be necessary.

Do you own your genes?
Contrary to President Obama’s expectations, the few US courts that have considered research participants’ claims of ownership of their biological materials have rejected them.
- John Moore’s doctor used his cells without his knowledge to develop and patent a cell line (cells that could continue to reproduce indefinitely for research). In 1990, the California Supreme Court held that Mr. Moore did not own the cells that had been removed from his body.
- The Greenbergs and other families affected by Canavan disease, an inherited, degenerative and fatal brain disease in children, provided a University of Miami researcher with tissue and blood samples, medical information, and money to develop a genetic test. The researcher patented the associated gene sequence and restricted families’ access to the test unless they paid. In 2003, a federal court rejected the parents’ claims that they owned their genetic samples.
- About 6,000 research participants responded to a letter sent by Dr. William Catalona, the developer of the prostate specific antigen test, and asked that their research samples stored at Washington University be transferred to Northwestern University, where Dr. Catalona had a new job. But a court determined that the research participants had no control over who held their specimens after collection.
The courts that have looked at the question have consistently decided that once we give our biological materials to researchers, the materials and the genetic information they contain belong to the researchers or, more specifically, the institutions that employ them.
A few states have adopted statutes concerning ownership of genes, but they may not alter court decisions. A Florida statute certainly did not make a difference in the Greenbergs’ case.

Short of ownership, what protections exist?
So you don’t own your genes. But there are other protections for participants in the Precision Medicine Initiative and other research projects.
The primary one comes from the Federal Common Rule. It applies to research conducted or funded by 18 federal departments and agencies. Many universities and other institutions apply the Common Rule to their research too. And research on drugs and devices that must be approved by the Food and Drug Administration (FDA) must comply with very similar rules.
Under the Common Rule, with some exceptions, research studies must be reviewed and approved by an Institutional Review Board (IRB): a committee within the university or hospital, for instance, that scrutinizes proposed experiments involving human subjects. In approving a study, the IRB must evaluate, among other things, the adequacy of the consent process and confidentiality protections, whether risks are minimized and are reasonable in relation to the benefits, and whether the selection of subjects is equitable. The IRB provides a check on what researchers can do.
Once the Institutional Review Board approves a study, researchers can start recruiting people to participate. This is where another protection comes in — consent.
The researchers must disclose the research’s purpose, procedures and any risks and benefits of participating. In a study like the Precision Medicine Initiative, the primary risks are informational, not physical. For example, if an insurer learned that a research participant had a gene that increases the risk of Alzheimer’s, it might refuse long-term care coverage.
Based on the risks and benefits (if any) discussed in the consent form, participants can decide whether they want to take part. They may decline to participate if they do not trust the researchers or do not want to share their information.
In some circumstances, the Common Rule doesn’t require participant consent. These exceptions are allowed when the study poses little risk to the participant, often because the information cannot be connected to the individual.
In recent years, these exceptions have been called into question as researchers have repeatedly demonstrated that it is possible to identify people whose information was used in research and thought to be unidentifiable. However, such reidentification requires significant effort and technical skill and, alone, is unlikely to result in harm to participants. Thus, it is not clear that we should forgo the benefits of research conducted under these exceptions because of the theoretical threat to confidentiality.
Beyond these exceptions, some research — such as Facebook’s 2014 study that manipulated some 700,000 users’ newsfeeds to determine the effect of negative or positive words on their emotions — falls outside the Common Rule altogether.
In general, research that is not federally conducted or funded or subject to FDA regulations is not governed by federal research protections. Some states have adopted laws that apply similar protections to research not subject to either the Common Rule or the FDA regulations, but those laws vary considerably from state to state.

Additional protections for research participants
The Health Insurance Portability and Accountability Act’s (HIPAA) privacy rule provides a national standard for protecting the use and disclosure of identifiable health information. The corresponding security rule establishes standards for securing electronic health records, which could include the results of genetic research.
In addition, the Genetic Information Nondiscrimination Act (GINA) prohibits use of genetic information to discriminate against asymptomatic individuals in employment and health insurance decisions. Although it has recognized gaps, GINA provides some protections against discrimination, should genetic information from a research study be disclosed.
As with the Common Rule, state medical privacy and antidiscrimination laws may supplement these federal protections. Thus, the protections afforded to participants may depend greatly on where they live. Moreover, Institutional Review Boards may be unfamiliar with the myriad laws that combine to protect research participants, and with the gaps those laws may leave.
Beyond these legal requirements, the Precision Medicine Initiative may provide participants additional controls over their data on a voluntary basis. For example, participants could reevaluate their preferences for how their data are shared or used, withdraw their consent for future use of their data at any time and control the types of communications they receive about their information.
While these types of protections may fall short of full legal ownership rights over your genetic information, they do go beyond current legal requirements and may be the types of controls to which President Obama was alluding.

What is needed?
We think it is essential for all those involved in research — IRBs, researchers and study participants — to understand what protections are available and what their limitations are.
That’s why we’ve undertaken a comprehensive analysis of federal and state laws that combine to form what we call the “web of protections.” We want to be able to describe how the laws work together, to identify gaps, and to suggest ways to improve those protections, as well as how all this should be described to prospective research participants.
To the extent that the current laws fall short of the types of protections and controls expected by participants in research studies like the Precision Medicine Initiative, we may be able to propose ways that the laws can be updated or supplemented to address concerns like President Obama’s. In this way, we can maintain the public trust on which this research relies.
Leslie E. Wolf, Professor of Law and Director, Center for Law Health and Society, College of Law, Georgia State University; Erin Fuse Brown, Assistant Professor of Law, Georgia State University, and Laura Beskow, Director of the Program for Empirical Bioethics, Associate Professor of Medicine, Duke University
Tufts University biologists have demonstrated (using a frog model*) for the first time that it is possible to prevent tumors from forming (and to normalize tumors after they have formed) by using optogenetics (light) to control bioelectrical signaling among cells.
Light/bioelectric control of tumors
Virtually all healthy cells maintain a more negative voltage in the cell interior compared with the cell exterior. But the opening and closing of ion channels in the cell membrane can cause the voltage to become more positive (depolarizing the cell) or more negative (hyperpolarizing it). Tumor cells tend to sit in an abnormally depolarized state, which makes it possible to detect tumors by their bioelectrical signature before they are otherwise apparent.
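To put rough numbers on those voltages, here is a minimal back-of-the-envelope sketch (textbook electrophysiology, not taken from the Tufts study) using the Nernst equation with typical potassium concentrations; it shows why a healthy cell interior sits tens of millivolts below the exterior, and what “depolarized” means relative to that baseline:

```python
import math

# Nernst equation: E = (R*T / (z*F)) * ln([ion]_out / [ion]_in)
# All values below are illustrative textbook numbers, not data from the paper.
R = 8.314    # gas constant, J/(mol*K)
T = 310.0    # body temperature in kelvin (37 C)
F = 96485.0  # Faraday constant, C/mol
z = 1        # charge of the potassium ion, K+

k_out, k_in = 5.0, 140.0  # typical extracellular / intracellular K+ (mM)

E = (R * T / (z * F)) * math.log(k_out / k_in)
print(f"K+ equilibrium potential: {E * 1000:.0f} mV")  # about -89 mV

# A depolarized (tumor-like) cell sits much closer to 0 mV; hyperpolarizing
# it pushes the interior voltage back toward this negative resting value.
```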
The study was published online in an open-access paper in Oncotarget on March 16.
The use of light to control ion channels has been a ground-breaking tool in research on the nervous system and brain, but optogenetics had not yet been applied to cancer.
The researchers first injected cells in Xenopus laevis (frog) embryos with RNA that encoded a mutant RAS oncogene known to cause cancer-like growths.
The researchers then used blue light to activate a cation channel (one that conducts positively charged ions), which induced an electric current that caused the cells to go from a cancer-like depolarized state to a normal, more negative polarized state. They did the same with a green light-activated proton pump, Archaerhodopsin (Arch). Activation of either agent significantly lowered the incidence of tumor formation and also increased the frequency with which tumors regressed into normal tissue.
“These electrical properties are not merely byproducts of oncogenic processes. They actively regulate the deviations of cells from their normal anatomical roles towards tumor growth and metastatic spread,” said senior and corresponding author Michael Levin, Ph.D., who holds the Vannevar Bush chair in biology and directs the Center for Regenerative and Developmental Biology at Tufts School of Arts and Sciences.
“Discovering new ways to specifically control this bioelectrical signaling could be an important path towards new biomedical approaches to cancer. This provides proof of principle for a novel class of therapies which use light to override the action of oncogenic mutations,” said Levin. “Using light to specifically target tumors would avoid subjecting the whole body to toxic chemotherapy or similar reagents.”
This work was supported by the G. Harold and Leila Y. Mathers Charitable Foundation.
* Frogs are a good model for basic science research into cancer because tumors in frogs and mammals share many of the same characteristics. These include rapid cell division, tissue disorganization, increased vascular growth, invasiveness and cells that have an abnormally positive internal electric voltage.
Abstract of Use of genetically encoded, light-gated ion translocators to control tumorigenesis
It has long been known that the resting potential of tumor cells is depolarized relative to their normal counterparts. More recent work has provided evidence that resting potential is not just a readout of cell state: it regulates cell behavior as well. Thus, the ability to control resting potential in vivo would provide a powerful new tool for the study and treatment of tumors, a tool capable of revealing living-state physiological information impossible to obtain using molecular tools applied to isolated cell components. Here we describe the first use of optogenetics to manipulate ion-flux mediated regulation of membrane potential specifically to prevent and cause regression of oncogene-induced tumors. Injection of mutant-KRAS mRNA induces tumor-like structures with many documented similarities to tumors, in Xenopus tadpoles. We show that expression and activation of either ChR2D156A, a blue-light activated cation channel, or Arch, a green-light activated proton pump, both of which hyperpolarize cells, significantly lowers the incidence of KRAS tumor formation. Excitingly, we also demonstrate that activation of co-expressed light-activated ion translocators after tumor formation significantly increases the frequency with which the tumors regress in a process called normalization. These data demonstrate an optogenetic approach to dissect the biophysics of cancer. Moreover, they provide proof-of-principle for a novel class of interventions, directed at regulating cell state by targeting physiological regulators that can over-ride the presence of mutations.
Knowing the minimum number of genes needed to create life would answer a fundamental question in biology. Scientists at the J. Craig Venter Institute (JCVI) have now designed and synthesized a bacterial cell containing only the genes essential for life.
This “minimal synthetic cell,” JCVI-syn3.0, was reported in an open-access paper published last week in the journal Science. By comparison, the first synthetic cell developed by the scientists, M. mycoides JCVI-syn1.0, has 1.08 million base pairs and 901 genes.*
The new cell contains 531,560 base pairs (the “alphabet” or sequence that makes up the DNA code) and 473 genes — the smallest number of genes of any organism that can be grown in a laboratory, according to the team.
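As a quick sanity check on those figures (a sketch using only the numbers quoted above), the reduction from syn1.0 to syn3.0 works out to roughly half the genome by either measure:

```python
# Figures quoted in the article above.
syn1_bp, syn1_genes = 1_080_000, 901  # M. mycoides JCVI-syn1.0
syn3_bp, syn3_genes = 531_560, 473    # minimal cell JCVI-syn3.0

print(f"Base pairs removed: {1 - syn3_bp / syn1_bp:.0%}")      # ~51%
print(f"Genes removed:      {1 - syn3_genes / syn1_genes:.0%}")  # ~48%
```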
JCVI | JCVI-syn3.0 — Minimal Cell
“All of the…studies over the past 20 years have underestimated the number of essential genes by focusing only on the known world. This is an important observation that we are carrying forward into the study of the human genome,” said senior author and group leader J. Craig Venter, PhD.
For 50 years, researchers have studied essential and non-essential genes in bacteria to help biologists understand the core functions needed for life. In the newer field of synthetic biology, this same information will be able to help scientists design DNA sequences for new synthetic organisms — allowing them to build frameworks for industrial applications of synthetic organisms.
During construction** of JCVI-syn3.0, the team discovered that 149 of the genes had unknown biological functions, even though they proved necessary for a viable, robustly growing cell. Those functions remain an area of continued work for the researchers. Other genes in the minimal synthetic cell were related to reading and expressing the DNA code, structure, and function of the outer cell membrane, and cell metabolism, or to preserving DNA integrity.
However, the team expects to decode the 149 genes in the future. “Our goal is to have a cell for which the precise biological function of every gene is known,” said Clyde Hutchison, PhD, first author and distinguished professor at JCVI.
Another major outcome of the minimal cell program will be new tools and semi-automated processes for synthesizing the DNA sequence needed for whole organisms, according to Daniel Gibson, PhD, an associate professor at JCVI.
“This paper signifies a major step toward our ability to design and build synthetic organisms from the bottom up with predictable outcomes,” he said. “The tools and knowledge gained from this work will be essential to producing next-generation production platforms for a wide range of disciplines.”
This work was funded by SGI, the JCVI endowment, and the Defense Advanced Research Projects Agency’s Living Foundries program.
* The research at JCVI leading to this report began in 1995 with DNA sequencing of the first free-living organism, Haemophilus influenzae, followed by the DNA sequencing of Mycoplasma genitalium. A comparison of these two genomes revealed a common set of 256 genes that the team thought could be a minimal set of genes needed for viability.
In 1999, Hutchison led a team who published a paper describing techniques to identify the non-essential genes in M. genitalium.
The creation of the first synthetic cell (JCVI-syn1.0) in 2010 established a workflow for building and testing designs for the DNA of a whole organism. This included design of a minimal cell from the bottom up, starting with the DNA sequence.
** To create JCVI-syn3.0, the team used an approach of whole genome design (design of the DNA needed for a whole organism) and chemical synthesis followed by genome transplantation to test if the cell was viable. Their first attempt to minimize the genome began with a simple approach using information in the biochemical literature and some limited mutations of DNA, but this did not result in a viable genome. After improving methods, the team discovered a set of “quasi-essential” genes that are necessary for robust growth and that explained the failure of their first attempt.
The team built the genome in eight segments so that each could be tested separately before combining them to generate a minimal genome. The team also explored the order of the genes and how that affects cell growth and viability; they found gene content was more critical than gene order. They went through three cycles of designing, building, and testing, ensuring that the “quasi-essential” genes remained, which in the end resulted in a viable, self-replicating minimal synthetic cell containing just 473 genes.
Abstract of Design and synthesis of a minimal bacterial genome
We used whole-genome design and complete chemical synthesis to minimize the 1079-kilobase-pair synthetic genome of Mycoplasma mycoides JCVI-syn1.0. An initial design, based on collective knowledge of molecular biology combined with limited transposon mutagenesis data, failed to produce a viable cell. Improved transposon mutagenesis methods revealed a class of quasi-essential genes that are needed for robust growth, explaining the failure of our initial design. Three cycles of design, synthesis, and testing, with retention of quasi-essential genes, produced JCVI-syn3.0 (531 kilobase pairs, 473 genes), which has a genome smaller than that of any autonomously replicating cell found in nature. JCVI-syn3.0 retains almost all genes involved in the synthesis and processing of macromolecules. Unexpectedly, it also contains 149 genes with unknown biological functions. JCVI-syn3.0 is a versatile platform for investigating the core functions of life and for exploring whole-genome design.
Alzheimer’s may be the cruelest of brain diseases.
Decades before the first signs of dementia strike, toxic protein clumps called amyloid plaques have been slowly, insidiously building up in the brain. The plaques clog the brain’s waste disposal system and wreak havoc on the delicate molecular machinery that underlies our memories, our history, our personality.
By the time deceptively benign senile moments turn into full-blown dementia, it’s too late to treat.
Our helplessness against dementia is especially frustrating, because scientists know how to slow the disease down if it’s caught early. The answer is passive immunization. Whereas vaccines stimulate the body to produce its own protective antibodies, passive immunization directly supplies antibodies, in this case ones that prevent amyloid proteins from clumping.
In theory, these guardians would circulate the aging brain and protect it from amyloid buildup, especially in people with genetic mutations that increase their chance of developing dementia.
Yet clinical trials using antibody injections have consistently failed. The reasons are many, but one stands out: most antibody injections need to be given at extremely high doses to be moderately effective. And in pharmacology, the dose makes the poison.
With repeated high dose injections, therapeutic anti-amyloid antibodies can cause severe side effects. Unlike small molecule drugs, antibodies are huge proteins that throw our bodies’ immune systems into red alert. Immune cells deploy, seeking out and destroying the therapeutic antibodies long before they reach the brain.
Then there are off-target effects. Once inside the brain, antibodies can in some cases tamper with the brain’s normal functioning by disrupting the normal chatter between neurons.
The solution to these side effects is deceptively simple: forget bombarding the brain with large antibody doses. Instead, deliver the therapeutics in a slow and smooth trickle. But how?
A team at the EPFL in Lausanne, Switzerland may have an answer. A bioactive capsule—about an inch long and packed with genetically engineered cells that steadily pump out anti-amyloid antibodies—is implanted under the skin of susceptible patients long before the first signs of cognitive decline strike.
In a proof-of-concept study published this month in Brain, the team tested their capsule in two transgenic Alzheimer’s mouse models engineered to produce abnormal amounts of human amyloid proteins in the brain.
When implanted into young mice long before symptoms appeared, the encapsulated cells steadily synthesized anti-amyloid antibodies for 10 months and significantly reduced pathological signs of protein clumps in the brain later in life.

A Living Antibody Factory
The capsule — a one-of-a-kind bioengineering feat — is based on previous work from the same lab back in 2014.
The team started with genetic cloning. They introduced genes that encode a type of anti-amyloid antibody, called MAb-11, into a virus.
Next, they infected immortalized mouse cells in a dish with these viruses. The cells took up the extra genes and integrated them into their own genome. Before long, the cells started secreting MAb-11.
The problem then is getting the cells into a mouse without stimulating the recipient’s immune system. That’s where the capsule comes in.
Made from biocompatible porous material that lets nutrients in and antibodies out, the inch-long nugget is seamlessly sealed tight using ultrasonic waves. The capsule has an integrated port that allows scientists to directly inject antibody-producing cells into the inner chamber, where they take root in the chamber’s hydrogel filling.
Because the capsule shields the precious cells from the recipient’s immune system, these antibody-producing cells can live inside the host for months — a highly efficient, living antibody factory right inside the body.

Destroying Plaques
Capsules in hand, the team next implanted them under the skin of two different mouse models of Alzheimer’s disease. Both models were genetically engineered to have mutations often seen in humans at risk for the disease.
To start out, the scientists wanted to see if they could delay the pathological symptoms of Alzheimer’s after the plaques had already formed.
Nine months after implantation, scientists found that the cells had expanded to fill up the capsule. The cells looked healthy and were hard at work, constantly secreting MAb-11 into the bloodstream.
When scientists looked into the mice’s brains, they found that MAb-11 had tagged onto the amyloid proteins, sending out a warning call to the local immune system. Further, they saw microglia — specialized immune cells that patrol the brain and gobble up waste — had burst into activity, engulfing toxic amyloid clumps much more efficiently than microglia from control mice.
As a result, the antibodies significantly reduced the number and size of amyloid plaques throughout the brain. Encouraged, the team next asked the harder question: can the capsule prevent toxic protein buildup before symptoms start?
The result was a striking yes.
When the scientists started treatment 6 months before mice first showed any telltale signs of amyloid plaques, they found MAb-11 reduced toxic plaque buildup by roughly 95% (compared to control mice) ten months after implantation.

Long Road Ahead
The results are undoubtedly encouraging.
Because the cells used to produce antibodies are immortalized, they can be grown at a much more rapid pace than any other type of cells, easily scaling up production. And because the capsule protects the cells from the host’s immune system, in theory the treatment could be given to anyone without risk of rejection.
That said, it’ll be a long road before the capsule reaches market.
For one, scientists found that some mice eventually developed resistance. Bit by bit, their immune systems recognized MAb-11 as an invader and produced antibodies of their own to wipe out the therapeutic protein. Although simultaneous treatment with anti-CD4 (a protein that blocks the anti-drug response) helped, this expensive fix isn’t really practical for long-term use in human patients.
Like a cat-and-mouse game, researchers may have to switch from MAb-11 to another type of anti-amyloid antibody every few months to avoid tolerance.
Then there’s this: the amount of amyloid plaques in the brain doesn’t always correlate with memory decline. Unfortunately, the authors didn’t run their capsule-implanted mice through memory tests, and so (although very likely) it’s hard to say whether the treatment actually slowed memory loss in these mice.
Alzheimer’s disease is notorious for its “graveyard of drugs” — drug candidates that initially showed promise, but ultimately failed in humans.
Although just a first step, this study addresses one of the thorniest issues plaguing the field — prevention. The microcapsule offers a way to begin treatment early. Instead of passively reacting to the toxic protein buildup cascade, we may be able to nip the process in the bud.
But perhaps the most titillating result is this: many neurodegenerative diseases — Parkinson’s, Huntington’s, ALS (of ice bucket challenge fame) — all involve a buildup of toxic proteins. The cell-encapsulating device could, in theory, be a universal bullet against some of the most difficult brain disorders of our time.
The study may just be the first step, but it’s one hell of a big one.
A singularity, according to cosmology, is a core of infinite density and pressure where matter collapses without limit. At the perimeter of this core is the event horizon, where the force of gravity becomes so strong that nothing can escape. Everything that occurs beyond the event horizon is unknown and cannot affect an outside observer: light emitted from inside the horizon can never reach the observer, and anything that passes through the horizon from the observer’s side is never seen again. It is a boundary where the space-time continuum folds in on itself under immense gravity. The most commonly known example of such an event horizon in physics is general relativity’s description of a black hole, a celestial object so dense that no matter or radiation can escape its gravitational field.
Here the focus is not on an event horizon comparable to a black hole or other cosmological events, but on a singularity that stems from human ingenuity and innovation.
In the mid-1940s, the scientist John von Neumann revolutionized the budding field of computing. His stored-program design described elements of computer memory as if they were neurons, framing an analogy between the digital computer and the human brain (Aspray 1990). Von Neumann noted that the storage capacity of computer memory could parallel the importance of memory in biological nervous systems (Aspray 1990). He also described the idea of a technological singularity as a crucial moment in the evolution of the human. While no reference is given as to when von Neumann said this, it is reported in numerous papers that one day in the 1950s, while walking with Stanislaw Ulam, he said that technological acceleration “gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs as we know them could not continue” (Ulam n.d.).
What does it mean that human affairs as we know them could not continue? The literature concerning accelerating technology, from social unrest, environmental devastation and war to the many beneficial breakthroughs, is vast and broad. We might approach the concept of a technological singularity by asking who the singularity is rather than what it is. This essay focuses on a paradigmatic shift and its potential to benefit human life: in particular, the radical extension of biological life, artificial life, and other life forms yet to be identified.
As such, the term human use refers to the idea of human innovation being at the root of technological acceleration. In The Human Use of Human Beings, Norbert Wiener states, “[t]he human species is strong only insofar as it takes advantage of the innate, adaptive, learning faculties that its physiological structure makes possible” (1954, pp. 57-58). More specifically, this “cybernetic view of human nature emphasized the physical structure of the human body and the tremendous potential for learning and creative action that human physiology makes possible” (Bynum 2008).
Self-improving AI / exponential growth
Science, and especially science related to AI, has been a recurring topic of science fiction, with catchwords like self-generating AI, self-generating nanotechnological assemblers, and self-replicating superintelligent machines. These terms create future scenarios which, on the one hand, may seem far-fetched and, on the other, may be worth investigating more closely. Thus, the technological singularity can be viewed as a “hypothesized point in the future variously characterized by the technological creation of self-improving intelligence, unprecedented and rapid technological progress, or some combination of the two” (Anissimov 2008).
Vernor Vinge, a professor of mathematics and the author credited with the term technological singularity, claims that this event horizon is “… a point where our old models must be discarded and a new reality rules” due to “a change comparable to the rise of human life on Earth” (1993). More recently, Vinge stated that the technological singularity could be a combination, a synergy of events: namely the result of artificial intelligence, intelligence amplification, biomedical advances, Internet growth and a digital Gaia (2008, p. 1). For example, unlike a wake-up scenario where a computer quickly surpasses human-level intelligence and autonomously manufactures copies of itself, Vinge suggests that we create and program superhuman artificial intelligence into computers and, simultaneously, biologically enhance our own brains. Additionally, the networking capability of our physical extensions becomes connected and immersive. Here, Vinge envisions a digital Gaia, in which ever more numerous embedded microprocessors become so useful and real that they would be considered a superhuman being (p. 2). In short, this would look like an evolving system wherein all participants — biological and digital — are immersed as each organism and particle interconnects. Standing back from this vision, it might look like a spiral arm of our Milky Way as a system or body. Up close, all points might be individuated or synergetic, or a combination of both.
Kurzweil (1999) takes the technotopia approach that accelerating change and exponential growth will bring about a period of extremely rapid technological progress. Kurzweil argues that the event can be evidenced by a long-term pattern of accelerating change that generalizes Moore’s Law to those technologies which predate integrated circuits, thereby arguing that exponential growth will continue as new technologies are invented. In the near term, such new technologies within the ecological spectrum include artificial tornados used to generate electricity and biofuels in providing alternatives to oil-based fuels. In the more distant future, such new technologies could harness the power of light to pattern surfaces on nanoscale for energy, or nano-sensors for detecting environmental contaminants.
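Kurzweil’s argument is, at bottom, one about compounding. A minimal illustrative sketch (assuming the fixed two-year doubling period described in the Moore’s Law endnote below, a simplification of real hardware history) shows how quickly such a curve outruns linear intuition:

```python
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor after `years` under fixed-period doubling (Moore's Law)."""
    return 2 ** (years / doubling_period)

# Illustrative only: growth relative to a 1965 baseline.
for year in (1975, 1995, 2015):
    print(f"{year}: {growth_factor(year - 1965):>12,.0f}x")
# 1975:           32x  (10 years = 5 doublings)
# 1995:       32,768x  (30 years = 15 doublings)
# 2015:   33,554,432x  (50 years = 25 doublings)
```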
Nevertheless, such conjecture is speculative, and the theories concerning a technological singularity are open to questioning and debate. Some skeptics point to the laws of physics as ruling out an unlimited increase in computer power. Others point to a reinforcing loop of change that can continue only so long before burning itself out. While there is no evidence that an event horizon will occur, it is possible that it could come to pass. In fact, according to computer scientists, evolution theorists, and futurists — and as research and development trends in academia and industry suggest — a singularity is more than possible.
Innovations – Human Use
Taking into account innovation as a selection process of adoption, diffusion, improvement and hybridization, it is often the user who shapes the overall impact. For example, the World Wide Web was invented by Tim Berners-Lee and Robert Cailliau in 1990 as a protocol for exchanging documents among physicists; it was users who turned the invention into a global tool for communication. Likewise, the adoption and diffusion of nuclear science now provides incomparable benefits in magnetic resonance imaging. Teflon’s fluorine-containing polymers have been hybridized to provide the Mars Rover with a durable, environmentally resistant structure. When it comes to accelerating technologies, the possibilities are endless.
According to Dr. Mihail Roco, Senior Advisor of the International Strategy for Nanotechnology Research and Development, “[n]anotechnology, biotechnology, information technology, biomedical and cognitive sciences, and systems approach develop in close interdependence. The synergism among the converging fields will play a determinant role in the birth and growth of new technologies, as sought beginning from the molecular scale” (2001).
On the molecular scale, quantum dots and other nanotechnologies could possess behavioral characteristics allowing them to image, calculate, and track the molecular structure of neural cells. Nano-neural interfaces, including hardware and software, offer potential for high-level information interfaces with neurological cells and the central nervous system. These characteristics come close to what von Neumann noted when he drew parallels between semiconductors and the human central nervous system, and later when his interests turned toward the modeling of the nervous system and the human brain (Aspray 1987).
New models – bio/techno
Alfred North Whitehead, the mathematical logician and philosopher of science, suggests that “…every organism in some way anticipates the future and then chooses one among a number of possible routes to adjust its own behavior to what it expects to encounter. In other words, every organism exhibits some degree of aim or purpose” (Rifkin 1999, p. 208) and becomes a model. Such a model can be seen in Whitehead’s philosophical vision of behavior. It can also be recognized in Wiener’s (1954, pp. 57-58) scientific framework of cybernetics and its potential for viewing organisms as formations when assessing technological advancements. Perhaps “[a] living organism is no longer seen as a permanent form but rather as a network of activity. With this new definition of life, the philosophy of becoming supersedes the philosophy of being …” (pp. 208-209), and life becomes a process bound to a notion of change.
As noted above, Wiener’s cybernetic view of human nature emphasized the physical structure of the human body and its potential for learning and creative action. He writes, “[c]ybernetics takes the view that the structure of the machine or of the organism is an index of the performance” (1954, p. 57). And further that “[hu]man like all other organisms lives in a contingent universe, but man’s advantage over the rest of nature is that he has the physiological and hence the intellectual equipment to adapt himself to radical changes in his environment” (1954, p. 58).
New models – radical life extension
If it is prescient to ask who the singularity is, such an event might be a series of user-mediated innovations resulting from the teaming up of nano-bio-info technologies with neuroscience. Such a convergence offers potential for helping people who have been diagnosed with physical and mental conditions and who have difficulty engaging in daily life activities. Pushing the envelope on this convergence, one scenario might be that human life continues past its maximum biochemical limit, as described by the Hayflick limit theory and as exemplified by Jeanne Calment, who lived to be 122 (Whitney 1997). Until then, humans augment, enhance, adopt and hybridize in attempting to modify life.
For example, bodily augmentations are developed in the field of wearable technologies, and alternative personas assemble in virtual environments, yet these fields do not directly affect a particular human being’s biological makeup. Artificial life, virtual replicas, and digital presence offer alternatives to biology. Wet bioart offers alternatives to the inherent traits of cells and organisms. Performance art offers alternatives to the body as vehicle and the body as material. Body art offers alternatives to physicality and identity. Is there artistic or design-based practice, theory, or academic discourse concerning the scope of human enhancement for the purpose of life extension?
One artistic work that could be viewed as a type of immortal life is that of Cynthia Verspaget. While Verspaget’s Anarchy Cell Line brings to bear issues concerning the legacy of the HeLa cells (the original cells of Henrietta Lacks, who died in 1951 but whose cells continue to live on), the work is a collection of single cells, not an organism or body. The author’s own artistic work concerns the aging biological organism. The project Bone Density investigates regenerating bone tissue at a point when the tissue cells are abnormally degenerating. Each cell is part of the entire organism, but its degeneration is not consequential to the identity and livelihood of the person unless it affects the entire bodily system.
The exploratory experimentation on and manipulation of biological life systems, from single cells to organisms, is drawing increasing attention in artistic practice and theory. As noted, biological art and transbioart practice have reached far into the uncomfortable zone of bioengineering and genetics, where science and medicine reside, creating bio-experiments and offering opinions on the meaning of life. On another side of the creative spectrum, exploratory creations with nanotechnological particles have become a molecular vehicle for establishing artistic practice and theory. In and around these domains one can see the perpetual interconnectivity of what Roy Ascott calls a telematic embrace, where the human use of computer networks is the medium that, “moving beyond object art and time-based art, uses simulation to render what is invisible visible, to bring the virtual, the potential, the unseen, and unrealized into view. In this process, a multiplicity of viewpoints, of worldview, is required and provided by networked perception and intelligence” (2003, p. 231).
Whether through smarter-than-human machines, exponential technological acceleration, or the desire to live longer, the impact of nanotechnology, biotechnology, information technology, biomedical and cognitive sciences, and a systems approach, as reported by Vinge, Kurzweil, and Roco, could bring about a singularity. Apart from the notion of a singularity as introduced by von Neumann, these technologies could also bring about radical life extension. Whitehead’s observation that by anticipating the future we can choose a route and adjust our behavior is apt for developing a practicable approach to an event horizon, should one occur. The idea of human use, that is, our using our intellectual equipment and our ability to adapt, as suggested by Wiener, offers an appropriate set of circumstances for considering extended life. Further, Wiener’s views reflect an ethical and philosophical approach to life. However, while the author’s interest in radical life extension differs from Wiener’s own worldview, it is anticipated by the transhumanist worldview.
Moore’s Law refers to Gordon Moore’s 1965 observation projecting the doubling of transistors on a chip every couple of years. The projection has largely held true since.
The Hayflick limit theory of aging claims there is a limit on the number of times a cell can divide, resulting in a limited cell lifespan.
HeLa cell is the term used to describe the original cancer cells taken from Henrietta Lacks during a cervical cancer operation in 1951. The immortal HeLa cell line is used repeatedly in medical research.
Ascott, R., 2003. Telematic Embrace, Berkeley: University of California Press.
Aspray, W., 1990. John von Neumann and the Origins of Modern Computing. Cambridge: The MIT Press.
DeSanctis, G. & Poole, M.S., 1990. Understanding the use of group decision support systems: the theory of adaptive structuration. In J. Fulk, ed. Organizations and Communication Technology. Newbury Park: Sage.
Extropy chat list, 2008. ExI Singularity Discussion – Human – Takeoff. [E-mail]. Message from M. Anissimov. Sent 13 June 2008, 04:52. Available at: http://lists.extropy.org/pipermail/extropy-chat/ [Accessed 20 June 2008].
Good, I.J., 1965. Speculations concerning the first ultraintelligent machine. In F.L. Alt & M. Rubinoff, eds. Advances in Computers, vol. 6. New York: Academic Press.
Kurzweil, R., 1999. The Age of Spiritual Machines. New York: Viking.
Lansbury, B., 2008. World IP Today: analyzing global patent activity and technology innovations. Thomson Reuters. [internet]. May. Available at: http://scientific.thomsonreuters.com/news/newsletter/2008-05/8452909/ [accessed 5 June 2008].
Rifkin, J., 1999. The Biotech Century. New York: Penguin Putnam.
Roco, M.C., 2001. International Strategy for Nanotechnology Research and Development. Journal of Nanoparticle Research, vol. 3, no. 5-6, pp. 353-360. Springer Netherlands.
Vinge, V., 1993. The Coming Technological Singularity. Whole Earth Review, 10 Dec.
Vinge, V., 2008. Signs of the Singularity. IEEE Spectrum, June 2008. Available at http://www.spectrum.ieee.org/jun08/6306 [accessed 25 June 2008]
Vita-More, N. 2007. Brave Biological Design. Strategies for Engineered Negligible Senescence Conference. Cambridge.
Webb, S., 2002. If the Universe Is Teeming with Aliens: Where Is Everybody? Fifty Solutions to Fermi’s Paradox and the Problem of Extraterrestrial Life. New York: Springer.
Whitney, C.R., 1997. Jeanne Calment, World’s Elder, Dies at 122. New York Times. [internet] 5 Aug. Available at: http://query.nytimes.com/gst/fullpage.html?res=9C01E7D7113DF936A3575BC0A961958260 [accessed 4 June 2008].
Wiener, N., 1954. The Human Use of Human Beings. Boston: Da Capo Press.
The creators of Aipoly hope the app can be helpful for people with severe vision impairments—and perhaps for those trying to learn a new language. They also hope it will be faster than other image-recognition-related apps that rely on the aid of other humans, like Be My Eyes, or that require the Internet, such as TapTapSee.
Ramez Naam’s book Apex won the Philip K. Dick Award for distinguished original science fiction paperback.
A new one-atom-thick flat material made up of silicon, boron, and nitrogen can function as a conductor or semiconductor (unlike graphene) and could upstage graphene and advance digital technology, say scientists at the University of Kentucky, Daimler in Germany, and the Institute for Electronic Structure and Laser (IESL) in Greece.
Reported in Physical Review B, Rapid Communications, the new Si2BN material was discovered in theory (it has not yet been made in the lab). It uses lightweight, inexpensive, earth-abundant elements and is extremely stable, a property many other graphene alternatives lack, says University of Kentucky Center for Computational Sciences physicist Madhu Menon, PhD.
Limitations of other 2D semiconducting materials
A search for new 2D semiconducting materials has led researchers to a new class of three-layer materials called transition-metal dichalcogenides (TMDCs). TMDCs are mostly semiconductors and can be made into digital processors with greater efficiency than anything possible with silicon. However, these are much bulkier than graphene and made of materials that are not necessarily earth-abundant and inexpensive.
Other graphene-like materials have been proposed but lack the strengths of the new material. Silicene, for example, does not have a flat surface and eventually forms a 3D structure. Other materials are highly unstable, some remaining stable for only a few hours at most.
The new Si2BN material is metallic, but by attaching other elements on top of the silicon atoms, its band gap can be changed (from conductor to semiconductor, for example) — a key advantage over graphene for electronics applications and solar-energy conversion.
The presence of silicon also suggests possible seamless integration with current silicon-based technology, allowing the industry to move away from silicon gradually rather than precipitously, notes Menon.
University of Kentucky | Dr. Madhu Menon Proposes New 2D Material
Abstract of Prediction of a new graphenelike Si2BN solid
While the possibility to create a single-atom-thick two-dimensional layer from any material remains, only a few such structures have been obtained other than graphene and a monolayer of boron nitride. Here, based upon ab initio theoretical simulations, we propose a new stable graphenelike single-atomic-layer Si2BN structure that has all of its atoms with sp2 bonding with no out-of-plane buckling. The structure is found to be metallic with a finite density of states at the Fermi level. This structure can be rolled into nanotubes in a manner similar to graphene. Combining first- and second-row elements in the Periodic Table to form a one-atom-thick material that is also flat opens up the possibility for studying new physics beyond graphene. The presence of Si will make the surface more reactive and therefore a promising candidate for hydrogen storage.
Researchers at RMIT University in Australia have developed a cheap, efficient way to grow special copper- and silver-based nanostructures on textiles that can degrade organic matter when exposed to light.
Don’t throw out your washing machine yet, but the work paves the way toward nano-enhanced textiles that can spontaneously clean themselves of stains and grime simply by being put under a light or worn out in the sun.
The nanostructures absorb visible light (via localized surface plasmon resonance — collective electron-charge oscillations in metallic nanoparticles that are excited by light), generating high-energy (“hot”) electrons that cause the nanostructures to act as catalysts for chemical reactions that degrade organic matter.
The challenge for researchers has been to bring the concept out of the lab by working out how to build these nanostructures on an industrial scale and permanently attach them to textiles. The RMIT team’s novel approach was to grow the nanostructures directly onto the textiles by dipping them into specific solutions, resulting in development of stable nanostructures within 30 minutes.
When exposed to light, it took less than six minutes for some of the nano-enhanced textiles to spontaneously clean themselves.
The research was described in the journal Advanced Materials Interfaces.
Scaling up to industrial levels
Rajesh Ramanathan, an RMIT postdoctoral fellow and co-senior author, said the process also had a variety of applications for catalysis-based industries such as agrochemicals, pharmaceuticals, and natural products, and could be easily scaled up to industrial levels. “The advantage of textiles is they already have a 3D structure, so they are great at absorbing light, which in turn speeds up the process of degrading organic matter,” he said.
“Our next step will be to test our nano-enhanced textiles with organic compounds that could be more relevant to consumers, to see how quickly they can handle common stains like tomato sauce or wine,” Ramanathan said.
“There’s more work to do before we can start throwing out our washing machines, but this advance lays a strong foundation for the future development of fully self-cleaning textiles.”
Abstract of Robust Nanostructured Silver and Copper Fabrics with Localized Surface Plasmon Resonance Property for Effective Visible Light Induced Reductive Catalysis
Inspired by high porosity, absorbency, wettability, and hierarchical ordering on the micrometer and nanometer scale of cotton fabrics, a facile strategy is developed to coat visible light active metal nanostructures of copper and silver on cotton fabric substrates. The fabrication of nanostructured Ag and Cu onto interwoven threads of a cotton fabric by electroless deposition creates metal nanostructures that show a localized surface plasmon resonance (LSPR) effect. The micro/nanoscale hierarchical ordering of the cotton fabrics allows access to catalytically active sites to participate in heterogeneous catalysis with high efficiency. The ability of metals to absorb visible light through LSPR further enhances the catalytic reaction rates under photoexcitation conditions. Understanding the modes of electron transfer during visible light illumination in Ag@Cotton and Cu@Cotton through electrochemical measurements provides mechanistic evidence on the influence of light in promoting electron transfer during heterogeneous catalysis for the first time. The outcomes presented in this work will be helpful in designing new multifunctional fabrics with the ability to absorb visible light and thereby enhance light-activated catalytic processes.
Duke University researchers have developed a new form of MRI that’s 10,000 times more sensitive and could record actual biochemical reactions, such as those involved in cancer and heart disease, in real time.
Let’s review how MRI (magnetic resonance imaging) works: MRI takes advantage of a property called spin, which makes the nuclei in hydrogen atoms act like tiny magnets. By generating a strong magnetic field (such as 3 Tesla) and a series of radio-frequency waves, MRI induces these hydrogen magnets in atoms to broadcast their locations. Since most of the hydrogen atoms in the body are bound up in water, the technique is used in clinical settings to create detailed images of soft tissues like organs (such as the brain), blood vessels, and tumors inside the body.
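As a concrete illustration of the scanner numbers mentioned above (standard MRI physics, not specific to the Duke work): the radio frequency at which hydrogen nuclei resonate scales linearly with the magnetic field, at roughly 42.58 MHz per tesla, so a 3-tesla scanner operates near 128 MHz:

```python
GYROMAGNETIC_1H_MHZ_PER_T = 42.58  # textbook value for the hydrogen nucleus

def larmor_mhz(field_tesla: float) -> float:
    """Resonance (Larmor) frequency of hydrogen at a given field strength."""
    return GYROMAGNETIC_1H_MHZ_PER_T * field_tesla

for b0 in (1.5, 3.0, 7.0):  # common clinical and research field strengths
    print(f"{b0} T -> {larmor_mhz(b0):.1f} MHz")
# 1.5 T -> 63.9 MHz, 3.0 T -> 127.7 MHz, 7.0 T -> 298.1 MHz
```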
MRI’s ability to track chemical transformations in the body has been limited by the low sensitivity of the technique. That makes it impossible to detect small numbers of molecules (without using unattainably more massive magnetic fields).
So to take MRI a giant step further in sensitivity, the Duke researchers created a new class of molecular “tags” that can track disease metabolism in real time and last for more than an hour, using a technique called hyperpolarization.* These tags are biocompatible and inexpensive to produce, and they work with existing MRI machines.
“This represents a completely new class of molecules that doesn’t look anything at all like what people thought could be made into MRI tags,” said Warren S. Warren, James B. Duke Professor and Chair of Physics at Duke, and senior author on the study. “We envision it could provide a whole new way to use MRI to learn about the biochemistry of disease.”
Sensitive tissue detection without radiation
The new molecular tags open up a new world for medicine and research by making it possible to detect what’s happening in optically opaque tissue without resorting to expensive positron emission tomography (PET), which uses a radioactive tracer chemical to look at organs in the body and typically works for only about 20 minutes, or to CT X-rays, according to the researchers.
This research was reported in the March 25 issue of Science Advances. It was supported by the National Science Foundation, the National Institutes of Health, the Department of Defense Congressionally Directed Medical Research Programs Breast Cancer grant, the Pratt School of Engineering Research Innovation Seed Fund, the Burroughs Wellcome Fellowship, and the Donors of the American Chemical Society Petroleum Research Fund.
* For the past decade, researchers have been developing methods to “hyperpolarize” biologically important molecules. “Hyperpolarization gives them 10,000 times more signal than they would normally have if they had just been magnetized in an ordinary magnetic field,” Warren said. But while promising, Warren says these hyperpolarization techniques face two fundamental problems: incredibly expensive equipment — around 3 million dollars for one machine — and most of these molecular “lightbulbs” burn out in a matter of seconds.
“It’s hard to take an image with an agent that is only visible for seconds, and there are a lot of biological processes you could never hope to see,” said Warren. “We wanted to try to figure out what molecules could give extremely long-lived signals so that you could look at slower processes.”
So the researchers synthesized a series of molecules containing diazirines — a chemical structure composed of two nitrogen atoms bound together in a ring. Diazirines were a promising target for screening because their geometry traps hyperpolarization in a “hidden state” where it cannot relax quickly. Using a simple and inexpensive approach to hyperpolarization called SABRE-SHEATH, in which the molecular tags are mixed with a spin-polarized form of hydrogen and a catalyst, the researchers were able to rapidly hyperpolarize one of the diazirine-containing molecules, greatly enhancing its magnetic resonance signals for over an hour.
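A rough calculation (assuming simple exponential decay, and using the decay time constants quoted in the paper’s abstract below) shows why a 23-minute time constant translates into signals that remain useful for more than an hour:

```python
import math

ENHANCEMENT = 10_000    # signal boost from hyperpolarization (per the article)
TAU_SINGLET_MIN = 23.0  # 15N2 singlet decay time constant (per the abstract)

def enhancement_left(t_min: float, tau_min: float = TAU_SINGLET_MIN) -> float:
    """Remaining signal enhancement after t minutes of exponential decay."""
    return ENHANCEMENT * math.exp(-t_min / tau_min)

for t in (5, 23, 60):
    print(f"after {t:>2} min: ~{enhancement_left(t):,.0f}x conventional signal")
# Even a full hour in, the tag still carries roughly 700x the conventional
# signal, whereas a seconds-lived tag would have vanished almost immediately.
```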
The scientists believe their SABRE-SHEATH catalyst could be used to hyperpolarize a wide variety of chemical structures at a fraction of the cost of other methods.
Abstract of Direct and cost-efficient hyperpolarization of long-lived nuclear spin states on universal 15N2-diazirine molecular tags
Conventional magnetic resonance (MR) faces serious sensitivity limitations, which can be overcome by hyperpolarization methods, but the most common method (dynamic nuclear polarization) is complex and expensive, and applications are limited by short spin lifetimes (typically seconds) of biologically relevant molecules. We use a recently developed method, SABRE-SHEATH, to directly hyperpolarize 15N2 magnetization and long-lived 15N2 singlet spin order, with signal decay time constants of 5.8 and 23 min, respectively. We find >10,000-fold enhancements generating detectable nuclear MR signals that last for more than an hour. 15N2-diazirines represent a class of particularly promising and versatile molecular tags, and can be incorporated into a wide range of biomolecules without significantly altering molecular function.
To my eyes, professional ethics is a self-defeating field, in that it appears to corrode the ethics of those who participate in it. The basic problem here is one of incentives, an age-old story. The funding and standing of a professional ethicist depend upon finding a continual supply of ethical issues that can be used to justify the ethicist’s continued employment and budget. When it comes to medicine, however, as with most other fields of human endeavor, there is no such supply. All of the true ethical problems in medicine and medical science were solved long ago, and the solutions have been finessed in great detail across centuries of thought and writing. These ethical problems in medicine are few in number and basically boil down to what to do in limited-resource triage situations (the best you can), whether or not to harm people deliberately (no), and whether or not to work towards better medicine (yes). Thus a gainfully employed professional ethicist must invent new, not-actually-real problems pretty much from the get-go, or give up and admit that the job is pointless. Natural selection then ensures that we only see those who prefer the money over the integrity.
We can measure the decay in the ethics of professional ethicists by the degree to which they can contort themselves to produce answers different from those I provided for the few legitimate ethical challenges in medicine above. If you spend any time at all following the progression of aging research and efforts to extend healthy human lives, then you will see a great many ethicists demonstrating the decay of their personal ethics in just this fashion. There are any number of salaried ethicists willing to throw their newly invented logs in front of the wheels of progress in this field, and tell us just how terrible it would be to cure age-related disease and lengthen productive and healthy human lives. The processes of aging and the age-related disease they produce are the greatest cause of pain, suffering, and death in the world by a very large margin. The way to remove all of this pain, suffering, and death is to build therapies that can repair the causes of aging, thereby preventing all resulting disease and disability. That will also extend life, because healthy, youthful people have a very low mortality rate. Yet the cadre of professional ethicists weigh their present position against the lives and livelihoods of billions and go right ahead with their flimsy objections.
That this whole situation exists, that institutions responsible for providing and improving technologies have increasingly indulged a parasitical arm that siphons off funding in order to obstruct the processes of improvement, is yet another sign that the world we live in is far from perfect. It also suggests that human nature leaves much room for improvement in the years to come, once such an engineering project becomes possible. In this article we see an example of the type of "ethics" I find so objectionable, though it isn't as though one has to try hard to find other examples:

…improving hygiene and protecting us from infectious diseases through antibiotics and vaccines. But current research into extending our lives presents an interesting twist. It also raises ethical questions.
I spoke to experts who foresee a coming revolution in medicine, where we treat age instead of disease. To do this, they are trying to figure out how aging works at the molecular level. The goal isn't exactly immortality. Instead, these scientists have noticed that a variety of diseases, from cancers to Alzheimer's, share aging as a major risk factor. Rather than spending a lot of money to treat each disease individually, why not tackle the root cause? What if there was a pill that slowed how our cells age, letting us avoid age-related illnesses - and also likely extending our lives? It's a safe bet that when most of us contemplate the gaping abyss of mortality, we decide we'd like to postpone that fate as long as possible, while remaining mobile, independent, and mentally sharp. If we must die, let us go peacefully in our sleep, at home, when we're at least 100.
But how might such a revolution ripple across society? Life expectancy already varies greatly. It's tied to education, wealth, and even where you live. According to Alexander Capron, an expert in health policy and ethics at the University of Southern California Gould School of Law, life-extending therapies could exacerbate these differences. For example, if these treatments are expensive, or aren't covered by affordable health care plans, only people with disposable cash will have access. This means people with money and resources will have the choice to live longer. Those who don't, won't.
“We can't cherry-pick the costs or savings to focus on,” says Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. Instead, he says, to fairly examine the ethics involved, we should consider impacts at both the individual and societal levels. "Yes, healthier people may mean lower health costs and more productivity, but that's a partial picture at best. We'd also have to consider the impact of extended lives on, say, Social Security, pensions, job openings given fewer retirements, crime from unemployment, natural resources, urban density, copyright durations, prison sentences, and many, many other effects."
Another effect to consider is how families pass on their legacies, says Nigel Cameron, president of the Center for Policy on Emerging Technologies. Life extension would mean more time with extended family. But it will also mean that inheritance and property will transfer later and less often, which could put more pressure on younger generations to acquire property independently. Lifespan extension could also influence politics and social change, with different age groups pushing for different policies. "There are big generational differences in economic and social interests," Cameron says. "The whole thing becomes much more extended if people live longer, much more competitive."
Still, the research that could lead to life extension is happening, so the conversation about its implications should, too. "Personally, I'm cautiously optimistic about life extension research, but we need to be careful to manage the hype and not ignore the risks," says Lin. "Will we ever become immortal? I don't know, and no one else can see that far, either. But even extending our lives another 20-100 years or more, to start with, is a game-changer."
“If there is one dominant myth about the world…it is that we all go around assuming the world is much more of a planned place than it is.”—Matt Ridley, The Evolution of Everything
For those fascinated by the cosmic gears that guide us forward, and for those who question how central humans are in the unfolding story of change, a new(ish) book by Matt Ridley, The Evolution of Everything, offers a convincing narrative for just how far our planning brains are from being master and commander of the fast-paced world in which we now live.
Ridley’s book applies the evolution-centric framing of progress that fans of Kevin Kelly’s What Technology Wants will recognize, with sixteen chapters—each representing a new field to which one can apply his theory. The book is a dream read for aspiring polymaths, with far-ranging topics that span the evolution of the universe, morality, technology, the economy, government, religion, and a remarkably up-to-the-minute summary of the Internet (Blockchain fans rejoice!).
Human society isn’t planned by the human brain, Ridley argues, but is instead the result of emergent, bottom-up, and ultimately unpredictable forces of evolution. It’s a convincing narrative that aims to dispel the arrogance associated with humankind’s craving for—and belief in—our having control.
The book’s central argument is best summed up in a casually used quote Ridley offers from 20th-century philosopher Alain, who says that a bad boat design will result in that boat’s sinking to the bottom of the ocean.
Alain points out, “One could then say, that it is the sea herself who fashions the boat.”
Boat designs evolved over time from those who experimented with putting things in the water to see what could float, then copied and improved the best designs (that is, the ones that didn’t sink). These designs didn’t arrive fully formed inside anyone’s mind, ready for building. We tend to ascribe a clever schematic to the people who worked on our boats, but humans didn’t design boats—the water did.
This ethos is one of progress by trial and error in pursuit of a goal rather than top-down control. In biological evolution the goal is survival given environmental constraints; in boats, it’s not sinking, given the dynamics of water. The goal and constraints define the shape of progress and result in better designs.
It’s a mind-set now being adopted more widely within business innovation circles, as seen by the success of experimentation-based methodologies such as the lean startup approach. As Eric Ries, author of The Lean Startup, first proposed, businesses can radically transform their likelihood of success by reducing central planning and adapting a product to the needs of the customer in an iterative way over time. The customer’s needs become the core designer in building better products.
It’s also this intuitive understanding of trial and error that allows kindergarten students to consistently outperform the planning-minded brains of recent MBA graduates on the popular marshmallow challenge. Kindergarteners instinctively prototype towards an outcome rather than overthink and plan.
The sculpting prowess of evolution is now being used to build even our most advanced technologies as well. With recent advances in artificial intelligence and machine learning, computer science is moving beyond the deliberately designed world of engineering to one where computers self-code to do things such as drive cars, recognize where a picture was taken, and solve centuries-old biology problems.
Machine learning is a way for programmers to assign a task to a piece of software, define what success looks like, and then sit back and wait for the code to teach itself through experimentation.
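A toy version of that loop, in the spirit of Ridley’s boats, is sketched below. The program encodes no knowledge of what a good “hull” looks like, only a success measure; the three parameters and the fitness function are invented for illustration.

```python
import random

def fitness(design):
    """Invented stand-in for 'how well does this boat float?'
    Best possible design here: width=2.0, depth=1.0, ballast=0.5."""
    width, depth, ballast = design
    return -((width - 2.0) ** 2 + (depth - 1.0) ** 2 + (ballast - 0.5) ** 2)

# Start from a random guess; no designer-supplied schematic.
design = [random.uniform(0.0, 3.0) for _ in range(3)]

for _ in range(5000):
    # Copy the current design with small random errors (mutation)...
    candidate = [x + random.gauss(0.0, 0.05) for x in design]
    # ...and keep the variant only if it "sinks less" (selection).
    if fitness(candidate) > fitness(design):
        design = candidate

print("evolved design:", [round(x, 2) for x in design])
```

Nothing in the loop specifies the answer; the constraint (the fitness function, Ridley’s “sea”) does the designing.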
So what does this mean for us—a species that some argue reached its much-vaunted place atop the animal kingdom through an ability to imagine outcomes and execute accordingly? The reality is likely more nuanced than Ridley’s binary of central planning bad, bottom-up good.
Though Ridley demonstrates some self-awareness about the one-sidedness of his arguments, they do push too far at times. Everything from World War I to the 2008 financial crisis is blamed on top-down planning.
Others, however, have convincingly argued that these two events were the ultimate bottom-up disasters (for example, this amazing podcast by Dan Carlin profiles the calamity of unexpected horrors that plunged the world into the First World War). The book also ignores that top-notch central planning has resulted in good business strategy and, at times, good policy. For example, Apple, which understands customer-focused design principles as well as anyone, has succeeded by making its iOS environment a notoriously closed platform. And as distinguished Singularity University faculty member David Roberts often points out, coordinated efforts by the world’s governments have addressed the depletion of the ozone layer with great success.
Things occur from a blending of top-down and bottom-up, and they aren’t always good or bad in the way that Ridley suggests.
The larger point of Ridley’s book is spot-on, however. Humans tend to assume that things are the result of coordinated efforts by people smarter and more qualified than we are. For Ridley, we had better become a more bottom-up society—and fast. He thinks central planning is bad policy, bad for business, and bad for the citizens of the world.
The world does face a growing list of challenges, and Ridley’s magic pill is counterintuitive—namely that humanity ought to get out of its own overcontrolling way. Ridley’s practical guide might best be embodied by Forgetting Sarah Marshall’s do-less mentality for learning to surf. And that mentality may be too much to stomach for a control-thirsty species like ours.
University of Michigan Medical School researchers have discovered a way to convert mouse stem cells (taken from an embryo) that have become “primed” (reached the stage where they can differentiate, or develop into every specialized cell in the body) to a “naïve” (unspecialized) state by simply adding a drug.
This breakthrough has the potential to one day allow researchers to avoid the ethically controversial use of human embryos left over from infertility treatments. To achieve this breakthrough, the researchers treated the primed embryonic stem cells (“EpiSC”) with a drug called MM-401* (a leukemia drug) for a short period of time.
Reverting back to the embryonic state
As the research team reports in the journal Cell Stem Cell, this drug treatment caused more than half of these EpiSC cells to return to a naïve (less specialized) state as “reverted embryonic stem cells” (rESCs). The researchers then bred healthy mice from those rESCs — proving that the drug-treated cells were still viable and had the ability to become any type of cell (achieved pluripotency).
This study is significant because it’s the first time scientists have been able to make stem cells revert to their original state without complications. Also, the drug leaves no trace behind, whereas genetic modification of the stem cells (used by other researchers) may block the stem cells from developing into healthy cells. The researchers only needed to treat the EpiSCs with a single drug and for just a few days.
Past attempts by other teams to return mouse EpiSC cells to the original naïve state have either resulted in a far lower proportion of cells returned to a reverted state, or have produced cells that were not viable. Those past studies also needed to use cocktails of multiple drugs given over the long term.
The work was funded by the National Institutes of Health and the Leukemia and Lymphoma Society.
Human stem cells next
Meanwhile, scientists at the University of Cambridge have just reported (in an open-access paper in the journal Stem Cell Reports) that they, too, have produced naïve stem cells directly from embryos — this time with human cells, for the first time.
“Until now it hasn’t been possible to isolate these naïve stem cells, even though we’ve had the technology to do it in mice for thirty years — leading some people to doubt it would be possible,” says Ge Guo, the study’s first author and research associate in the Stem Cell Potency group at Cambridge.
“But we’ve managed to extract the cells and grow them individually in culture. Naïve stem cells have many potential applications, from regenerative medicine to modeling human disorders.”
Jenny Nichols, PhD, joint senior author of the study, says that one of the most exciting applications of their new technique would be to study disorders that arise from cells that contain an abnormal number of chromosomes. Ordinarily, the body’s cells contain 23 pairs of chromosomes (22 pairs of autosomes and one pair of sex chromosomes), but some children are born with additional copies, which can cause problems. For example, children with Down’s syndrome are born with three copies of chromosome 21.
“Even in many ‘normal’ early-stage embryos, we find several cells with an abnormal number of chromosomes,” explains Nichols. “Because we can separate the cells and culture them individually, we could potentially generate ‘healthy’ and ‘affected’ cell lines. This would allow us to generate and compare tissues of two models, one ‘healthy’ and one that is genetically identical other than the surplus chromosome. This could provide new insights into conditions such as Down’s syndrome.”
The research was supported by the Medical Research Council, Biotechnology and Biological Sciences Research Council, Swiss National Science Foundation, and the Wellcome Trust.
* The drug, MM-401, specifically targets epigenetic chemical markers on histones, the protein “spools” that DNA coils around to create structures called chromatin. These epigenetic changes signal the cell’s DNA-reading machinery and tell it where to start uncoiling the chromatin in order to read it.
A gene called Mll1 is responsible for the addition of these epigenetic changes, which are like small chemical tags called methyl groups. Mll1 plays a key role in the uncontrolled explosion of white blood cells in leukemia, which is why researchers developed the drug MM-401 to interfere with this process. But Mll1 also plays a role in cell development and the formation of blood cells and other cells in later-stage embryos.
Stem cells do not turn on the Mll1 gene until they are more developed. The MM-401 drug blocks Mll1’s normal activity in developing cells so the epigenetic chemical markers are missing. These cells are then unable to continue to develop into different types of specialized cells but are still able to revert to healthy naive pluripotent stem cells.
Abstract of MLL1 Inhibition Reprograms Epiblast Stem Cells to Naive Pluripotency
The interconversion between naive and primed pluripotent states is accompanied by drastic epigenetic rearrangements. However, it is unclear whether intrinsic epigenetic events can drive reprogramming to naive pluripotency or if distinct chromatin states are instead simply a reflection of discrete pluripotent states. Here, we show that blocking histone H3K4 methyltransferase MLL1 activity with the small-molecule inhibitor MM-401 reprograms mouse epiblast stem cells (EpiSCs) to naive pluripotency. This reversion is highly efficient and synchronized, with more than 50% of treated EpiSCs exhibiting features of naive embryonic stem cells (ESCs) within 3 days. Reverted ESCs reactivate the silenced X chromosome and contribute to embryos following blastocyst injection, generating germline-competent chimeras. Importantly, blocking MLL1 leads to global redistribution of H3K4me1 at enhancers and represses lineage determinant factors and EpiSC markers, which indirectly regulate ESC transcription circuitry. These findings show that discrete perturbation of H3K4 methylation is sufficient to drive reprogramming to naive pluripotency.
Abstract of Naive Pluripotent Stem Cells Derived Directly from Isolated Cells of the Human Inner Cell Mass
Conventional generation of stem cells from human blastocysts produces a developmentally advanced, or primed, stage of pluripotency. In vitro resetting to a more naive phenotype has been reported. However, whether the reset culture conditions of selective kinase inhibition can enable capture of naive epiblast cells directly from the embryo has not been determined. Here, we show that in these specific conditions individual inner cell mass cells grow into colonies that may then be expanded over multiple passages while retaining a diploid karyotype and naive properties. The cells express hallmark naive pluripotency factors and additionally display features of mitochondrial respiration, global gene expression, and genome-wide hypomethylation distinct from primed cells. They transition through primed pluripotency into somatic lineage differentiation. Collectively these attributes suggest classification as human naive embryonic stem cells. Human counterparts of canonical mouse embryonic stem cells would argue for conservation in the phased progression of pluripotency in mammals.
Here is another example of recent data on the relationship between levels of physical activity in later life and the health of the brain. With the advent of low-cost accelerometers and more accurate data on activity, it is becoming clear that even the very modest exercise involved in activities such as cleaning or walking shows correlations with health. To the degree that this relationship involves causation, the important mechanisms likely relate to the status of the vascular system: the rate at which tiny blood vessels suffer structural failure and destroy small portions of brain tissue. That is driven by the pace of arterial stiffening, progression of hypertension, and other factors that are slowed by regular exercise and sped up by the consequences of a sedentary lifestyle, such as higher levels of chronic inflammation caused by visceral fat tissue.

A new study shows that a variety of physical activities from walking to gardening and dancing can improve brain volume and cut the risk of Alzheimer's disease by 50%. The researchers studied a long-term cohort of patients in the 30-year Cardiovascular Health Study, 876 in all, across four research sites in the United States. These participants had longitudinal memory follow-up, which also included standard questionnaires about their physical activity habits. The research participants, age 78 on average, also had MRI scans of the brain analyzed by advanced computer algorithms to measure the volumes of brain structures, including those implicated in memory and Alzheimer's such as the hippocampus. The physical activities performed by the participants were correlated to the brain volumes and spanned a wide variety of interests, from gardening and dancing to riding an exercise cycle at the gym. Weekly caloric output from these activities was summarized.
The results of the analysis showed that increasing physical activity was correlated with larger brain volumes in the frontal, temporal, and parietal lobes, including the hippocampus. Individuals who gained this brain benefit from increasing their physical activity saw a 50% reduction in their risk of Alzheimer's dementia. Increasing physical activity also benefited the brain volumes of the roughly 25% of the sample who had mild cognitive impairment associated with Alzheimer's.
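The study's "weekly caloric output" is the sort of summary that can be approximated from standard MET (metabolic equivalent of task) tables. The sketch below only illustrates that bookkeeping: the MET values are typical compendium figures, and the 70 kg body weight and activity diary are invented, not data or methods from the Cardiovascular Health Study.

```python
# Approximate weekly energy expenditure from a self-reported activity diary.
# MET values are typical compendium figures; weight and hours are invented.
MET = {
    "walking": 3.5,
    "gardening": 3.8,
    "dancing": 4.5,
    "exercise cycle": 6.8,
}

weekly_hours = {"walking": 3.0, "gardening": 2.0, "dancing": 1.0}
weight_kg = 70.0

# Standard approximation: kcal ~ MET * body weight (kg) * duration (hours).
weekly_kcal = sum(MET[activity] * weight_kg * hours
                  for activity, hours in weekly_hours.items())
print(f"estimated weekly caloric output: {weekly_kcal:.0f} kcal")
```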
Better methods of detecting the various forms of amyloid as they build up in tissues with age should result in greater support for the development and availability of means to remove this unwanted form of metabolic waste. Amyloids are present in every individual to some degree, that presence increasing with age, and are known to cause or be associated with numerous age-related diseases. This is one of the fundamental forms of damage that causes aging, yet amyloid levels are rarely assessed in healthy individuals, or even in patients with diseases that are relevant but something other than full-blown amyloidosis. Ideally everyone, healthy or not, should undergo amyloid clearance therapies - like that developed and trialed by Pentraxin for transthyretin amyloid - every few years starting in middle age or earlier.

Researchers have developed a molecular probe that can detect an array of different amyloid deposits in several human tissues. This new probe is extremely sensitive and was used at very low concentrations to correctly identify every positive amyloidosis sample when compared to the traditional clinical tests. The probe also picked up some amyloidosis signals that the traditional methods were unable to detect. This result means that the new probe could be used to detect amyloidosis before symptoms present, leading to faster and hence more effective treatment.
Aggregates of amyloid proteins form and deposit in different tissues, where they can disrupt normal function. As the disease progresses and amyloid deposits grow, tissues become irreversibly damaged. Amyloid deposits can be found in many different organs, leading to a wide range of possible symptoms and making diagnosis challenging. To date, the primary mode of diagnosis for amyloidosis has been the Congo red stain. However, evidence from the team shows that their new probe is much more sensitive, being able to detect small amyloid deposits in samples that were previously determined to be amyloid-free.
According to the U.S. Office of Rare Diseases (ORD), amyloidosis is a rare disease, affecting fewer than 200,000 people in the U.S. However, the Amyloidosis Foundation suspects that the figures are underreported and that amyloidosis is not that rare - just rarely diagnosed. A more sensitive diagnostic method would help to uncover the reality of the situation. "Given the sensitivity of the probe, we think this would make an excellent complement to traditional methods and could eventually be a replacement. It could also be used to identify new types of amyloids and presymptomatic patients who are at risk of developing the disease."
Researchers at the University of Houston have developed a new technique for killing bacteria in 5 to 25 seconds using highly porous gold nanodisks and light, according to a study published today in Optical Materials Express. The method could one day help hospitals treat some common infections without using antibiotics, which could help reduce the risk of spreading antibiotic resistance.
Gold nanoparticles are used because they absorb light strongly, converting the photons quickly into heat and reaching temperatures hot enough to destroy various types of nearby cells — including cancer and bacterial cells. Scientists create gold nanoparticles in the lab by dissolving gold, reducing the metal into smaller and smaller disconnected pieces until the size must be measured in nanometers. Once miniaturized, the particles can be crafted into various shapes.
In 2013, corresponding author Wei-Chuan Shih, a professor in the electrical and computer engineering department, and his colleagues created a new type of gold nanoparticle in the form of discs riddled with pores, lending the particles a sponge-like look that helps increase their heating efficiency while maintaining their stability, said Shih.
Zapping with light too
In the new work, the researchers set out to test the antimicrobial properties of their new nanoparticles when activated by light. They grew bacteria in the lab including E. coli and two types of heat-resistant bacteria that thrive in even the most scorching environments such as the hot springs of Yellowstone National Park.
Then, they placed the bacterial cells on the surface of a single-layer coating of the tiny disks and shone near-infrared laser light on them. Afterward, they used cell viability tests and SEM imaging to see what percentage of cells survived the procedure.
Using a thermal imaging camera, the research team showed that the surface temperature of the particles reached up to 180 degrees Celsius nearly instantaneously, “delivering thermal shocks” into the surrounding array — killing all of the bacterial cells within 25 seconds, the researchers report.
E. coli proved most vulnerable to the treatment; all of its cells were dead after only five seconds of laser exposure. The other two types of bacteria required the full 25 seconds, but that’s still much quicker than traditional sterilization methods such as boiling water or using dry-heat ovens, which can take minutes to an hour to work, said Shih. And it’s “considerably shorter” than what other nanoparticle arrays have demonstrated in recent studies, the researchers write. The time needed to achieve similar levels of cell death in those studies ranges from 1 to 20 minutes.
In control trials, the researchers found that neither the gold disks nor light from the laser alone killed nearly as many cells.
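For a rough sense of how illuminated gold nanostructures get that hot, the textbook steady-state result for a single heated sphere in a medium is ΔT = σ_abs·I / (4πκR). All of the numbers below are assumed round values for illustration; the actual disks are non-spherical, densely packed, and heat collectively, which is why the array far exceeds the single-particle estimate.

```python
import math

# Assumed, illustrative parameters -- not values from the Houston study.
sigma_abs = 1e-14   # absorption cross-section of one particle, m^2
intensity = 1e9     # laser intensity at the substrate, W/m^2
kappa = 0.6         # thermal conductivity of the surrounding water, W/(m*K)
radius = 200e-9     # effective particle radius, m

absorbed_power = sigma_abs * intensity                     # watts per particle
delta_T = absorbed_power / (4 * math.pi * kappa * radius)  # kelvin
print(f"single-particle steady-state rise: ~{delta_T:.0f} K")
```

At these assumed values a lone particle warms by only a few kelvin; array temperatures in the 180 °C range arise because thousands of closely spaced disks inside the laser spot add their heat together.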
The technique has important potential biomedical applications, said Shih. Currently, the researchers are investigating using the particles as a simple coating for catheters to help reduce the number of urinary tract infections in hospitals.
“Any sort of light activated procedure would be much easier to implement at the bedside of a patient,” instead of removing and potentially replacing the catheter every time it needs to be cleaned, he said.
Another potential application they’re exploring is integrating the nanoparticles with filter membranes in small water filters, he said, to help improve water quality.

Abstract of Photothermal inactivation of heat-resistant bacteria on nanoporous gold disk arrays
A rapid photothermal bacterial inactivation technique has been developed by irradiating near-infrared (NIR) light onto bacterial cells (Escherichia coli, Bacillus subtilis, Exiguobacterium sp. AT1B) deposited on surfaces coated with a dense, random array of nanoporous gold disks (NPGDs). With the use of cell viability tests and SEM imaging results, the complete inactivation of the pathogenic and heat-resistant bacterial model strains is confirmed within ~25 s of irradiation of the NPGD substrate. In addition to irradiation control experiments to prove the efficacy of the bacterial inactivation, thermographic imaging showed an immediate averaged temperature rise above 200 °C within the irradiation spot of the NPGD substrate. The light-gated photothermal effects on the NPGD substrate offers potential applications for antimicrobial and nanotherapeutic devices due to strong light absorption in the tissue optical window, i.e., the NIR wavelengths, and robust morphological structure that can withstand high instantaneous thermal shocks.
New lip-reading technology developed at the University of East Anglia could help in solving crimes and provide communication assistance for people with hearing and speech impairments.
The visual speech recognition technology, created by Helen L. Bear, PhD, and Prof Richard Harvey of UEA’s School of Computing Sciences, can be applied in “any place where the audio isn’t good enough to determine what people are saying,” Bear said. Those include criminal investigations, entertainment, and especially settings with high levels of noise, such as in cars or aircraft cockpits, she said.
Bear said unique problems with determining speech arise when sound isn’t available — such as on video footage — or if the audio is inadequate and there aren’t clues to give the context of a conversation. Or on those ubiquitous annoying videos with music that masks speech. The sounds ‘/p/,’ ‘/b/,’ and ‘/m/’ all look similar on the lips, but now the machine lip-reading classification technology can differentiate between the sounds for a more accurate translation.
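To see why those three sounds are hard, note that a lip-reader observes visemes (visually indistinguishable groups of phonemes), so “pat,” “bat,” and “mat” present identical lip shapes. The sketch below illustrates the generic ambiguity and one common fallback, scoring candidates with a language model; it is not UEA’s method, which instead trains classifiers to tell the lip shapes themselves apart, and the tiny lexicon and counts are invented.

```python
# /p/, /b/, and /m/ all produce the same bilabial lip closure, so they
# map to a single visual class (viseme).
VISEME = {"p": "BILABIAL", "b": "BILABIAL", "m": "BILABIAL"}

# Invented word frequencies standing in for a language model.
word_freq = {"pat": 120, "bat": 300, "mat": 450}

def visemize(word):
    """Map a word to the viseme sequence a camera would actually see."""
    return tuple(VISEME.get(ch, ch) for ch in word)

def candidates(observed_word):
    """All lexicon words visually identical to the observed one."""
    target = visemize(observed_word)
    return [w for w in word_freq if visemize(w) == target]

def decode(observed_word):
    """Resolve the ambiguity with the (invented) language model."""
    return max(candidates(observed_word), key=lambda w: word_freq[w])

print(candidates("bat"))   # ['pat', 'bat', 'mat'] -- all look the same
print(decode("bat"))       # 'mat' under these made-up counts
```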
“We are still learning the science of visual speech and what it is people need to know to create a fool-proof recognition model for lip-reading, but this classification system improves upon previous lip-reading methods by using a novel training method for the classifiers,” said Bear.
“Lip-reading is one of the most challenging problems in artificial intelligence, so it’s great to make progress on one of the trickier aspects, which is how to train machines to recognize the appearance and shape of human lips,” said Harvey.
The research, part of a three-year project, was supported by the Engineering and Physical Sciences Research Council (EPSRC). The research will be presented at the International Conference on Acoustics, Speech and Signal Processing (ICASSP) in Shanghai.
In the open access paper I'll point out today, the authors provide a high level overview of the evidence that suggests immune cells called astrocytes play a primary role in the progression of age-related neurodegenerative conditions. The immune system of the brain is quite different, somewhat more intricate, and more specialized than its equivalents elsewhere in the body, and those systems are themselves very complex and only partially mapped. The brain is shielded from the sort of haphazard exposure to toxins and pathogens that other tissues must face by the existence of the blood-brain barrier, a shield lining the blood vessels that pass through the brain. The portfolio of tasks carried out by the immune system within that barrier has shifted accordingly. In the brain specialized types of immune cell, neuroglia such as microglia and the aforementioned astrocytes, undertake a very broad range of activities beyond simply sweeping up waste and destroying pathogens, and are tightly integrated into the core functions of the brain. They participate in some of the most important and fundamental neural processes, such as the formation and alteration of connections between neurons, for example.
Most of the common diseases of aging have an inflammatory component. Pathology and degeneration are accelerated by the decline of the immune system into a state of ineffective, constant inflammation. The causes of that decline are discussed elsewhere, and include a sort of misconfiguration perhaps brought on by exposure to persistent pathogens such as cytomegalovirus, and the slowly falling rate of generation of replacement immune cells in adults. Effectively addressing these causes seems a very plausible task for the next couple of decades, based on promising studies in animals from the past few years. In the brain, things are going to be much the same at the high level, but different in the details. A lot of research from recent years points to microglia as the agents of neural inflammation, but the authors here suggest that there is just as much evidence for astrocytes to be involved in the generation of a harmful inflammatory state:

…Parkinson disease (PD), Alzheimer disease (AD), amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD), among others. Therefore, most investigations have been carried out under this belief, leading to the coining of the term "neurodegeneration." In the 90s, a new era began that considered the possibility of a more important role for the so-far-neglected neuroglia. This led to findings opposed to the belief that the astrocytes' sole function is neuronal structural support.
Astrocytes have been found to have a much more active role than the one predicted by earlier investigators. In particular, they are involved in ion exchange with neurons; they are organized as a syncytium that lets them exchange information with other astrocytes in a defined network through different types of Ca2+ signals; and they regulate the release of signaling molecules involved in the production of trophic factors, transmitters, and transporters that, when released to the extracellular medium, modulate synaptic activity and synchronize neuronal functions. They are also involved in extracellular K+ uptake, in synaptogenesis, and in gene expression, adapting, at the same time, the permeability of the blood-brain barrier to neuronal and synaptic needs.
Slowly but relentlessly, the results of several studies have confirmed the existence of an active role played not only by astrocytes but also by microglia. Mounting evidence suggests that astrocytes modulate the microglial response through the establishment of a complex cross-talk between both types of cells, mediated by the production of different chemokines and cytokines. Therefore, we think that the broader term primary degenerative disorders of the central nervous system (PDD CNS) better alludes to the complex pathology of these diseases (in contrast to the classic term neurodegeneration). An early astrocytic dysfunction in the PDDs of the CNS has been broadly observed. We advocate that these observations, obtained from different degenerative pathologies but mostly from experimental animal studies, may be the trees of a forest characterized by primary astrocytic dysfunction as the main process that starts them.
To help with this crisis, artificial intelligence startup X2AI is in the middle of a two week stay in Beirut, Lebanon, where it’s piloting the use of artificial intelligence as a psychotherapy treatment for refugees.