Move over, graphene. “Stanene” — a single layer of tin atoms — could be the world’s first material to conduct electricity with 100 percent efficiency at the temperatures at which computer chips operate, according to a team of theoretical physicists led by researchers from the U.S. Department of Energy’s (DOE) SLAC National Accelerator Laboratory and Stanford University.
Stanene — combining stannum, the Latin name for tin, with the suffix used in graphene — could “increase the speed and lower the power needs of future generations of computer chips, if our prediction is confirmed by experiments that are underway in several laboratories around the world,” said team leader Shoucheng Zhang, a physics professor at Stanford and the Stanford Institute for Materials and Energy Sciences (SIMES), a joint institute with SLAC.
For the past decade, Zhang and colleagues have been calculating and predicting the electronic properties of a special class of materials known as topological insulators, which conduct electricity only on their outside edges or surfaces and not through their interiors. When topological insulators are just one atom thick, their edges conduct electricity with 100 percent efficiency. These unusual properties result from complex interactions between the electrons and nuclei of heavy atoms in the materials.
Their calculations indicated that a single layer of tin would be a topological insulator at and above room temperature, and that adding fluorine atoms to the tin would extend its operating range to at least 100 degrees Celsius (212 degrees Fahrenheit).
Ultimately a silicon substitute?
Zhang said the first application for this stanene-fluorine combination could be for interconnects — wiring that connects the many sections of a microprocessor — allowing electrons to flow as freely as cars on a highway. Traffic congestion would still occur at on- and off-ramps made of conventional conductors, he said. But stanene wiring should significantly reduce the power consumption and heat production of microprocessors.
Manufacturing challenges include ensuring that only a single layer of tin is deposited and keeping that single layer intact during high-temperature chip-making processes.
“Eventually, we can imagine stanene being used for many more circuit structures, including replacing silicon in the hearts of transistors,” Zhang said.
Additional contributors included researchers from Tsinghua University in Beijing and the Max Planck Institute for Chemical Physics of Solids in Dresden, Germany. The research was supported by the Mesodynamic Architectures program of the Defense Advanced Research Projects Agency.
Abstract of Physical Review Letters paper
The search for large-gap quantum spin Hall (QSH) insulators and effective approaches to tune QSH states is important for both fundamental and practical interests. Based on first-principles calculations we find two-dimensional tin films are QSH insulators with sizable bulk gaps of 0.3 eV, sufficiently large for practical applications at room temperature. These QSH states can be effectively tuned by chemical functionalization and by external strain. The mechanism for the QSH effect in this system is band inversion at the Γ point, similar to the case of a HgTe quantum well. With surface doping of magnetic elements, the quantum anomalous Hall effect could also be realized.
The FDA has approved marketing of four diagnostic devices from Illumina (a manufacturer of DNA sequencing machines) for “next generation sequencing” (NGS) — meaning the devices can now quickly and cheaply read and interpret large segments of the genome (the set of genetic information in your body) in a single test.
Two of the devices allow laboratories to sequence a patient’s genome for any purpose, wrote Jeffrey Shuren, M.D., Director of FDA’s Center for Devices and Radiological Health, in an FDA blog post.
“The software compares the patient’s sequence to a normal human genome sequence used for reference and identifies the differences.”
The other two devices can only detect changes in the CFTR gene, which can result in cystic fibrosis, a disease inherited through a faulty CFTR gene from both parents. (More than 10 million Americans are carriers of cystic fibrosis.)
One of these tests could identify men and women with the faulty CFTR gene; the second test looks for other, perhaps unexpected, mutations in the CFTR gene that could be having an impact on the patient’s health, Shuren said.
Personalized medicine and pharmacogenomics
Based on this FDA decision, “clinicians can now selectively look for an almost unlimited number of genetic changes that may be of medical significance,” National Institutes of Health head Francis Collins and FDA head Margaret Hamburg write in an editorial in the New England Journal of Medicine (open access).
For example, “patients diagnosed with a cancer for which there are few therapeutic options may … benefit from drug therapies originally aimed at other cancers that share common driver mutations.”
And what happens when your entire genome is part of your electronic medical record, they suggest? Instead of having to take a DNA sample, ship it, and wait for a lab to run a test, a quick electronic query would provide your physician with the information needed to determine the course of treatment. That includes pharmacogenomics — the use of genomic information to identify the right drug at the right dose for each patient.
But Collins and Hamburg also caution that new genomic findings need to be validated before they can be integrated into medical decision making. “Doctors and other health care professionals will need support in interpreting genomic data and their meaning for individual patients. Patients will want to be able to talk about their genetic information with their doctor … and participate alongside their doctors in making more informed decisions.”
In the United States, where genetically modified strains dominate the most common crops, fights have turned to whether foods that contain GM ingredients should have to be labeled. In Europe, strict rules have kept GM crops almost entirely out.
But the use of genetically modified insects presents a third use case, where public opinion remains unformed. Insects, generally modified not to be able to breed effectively, have been considered as a way to fight malaria. They are now being considered in Europe as a way to prune the population of olive fruit flies, a pest that threatens one of the continent’s most lucrative crops.
Olive fly infestation can make table olives unsellable and degrades the quality of olive oil. The bugs are increasingly resistant to pesticides.
The British company Oxitec has asked for permission to release in Spain flies it has genetically engineered to sabotage their own populations. If approved, it would be the first outdoor trial of a GM insect in the EU.
The Oxitec flies are males that, when they mate with the naturally occurring females, will pass on a gene that causes female offspring to die as larvae. Male offspring will survive and go on to pass the same deadly flaw on to the next generation of female flies. In this way, the company says, the population of the crop-ruining pests will be slashed.
In a 2012 study, Oxitec researchers were able to eliminate unmodified flies in less than two months. But the findings noted that “field confirmation is required.” That’s what Oxitec is hoping to get in Spain.
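To see why a heritable female-lethal trait can crash a population so quickly, here is a minimal toy simulation (a sketch with invented parameters and deliberately simplified genetics, not Oxitec’s actual model): released males are assumed homozygous for the construct, so all of their daughters die as larvae while their sons become carriers, and carriers in turn pass the construct to half of their offspring.

```python
def next_generation(females, wild_males, carrier_males, released, fecundity=2.0):
    """One discrete generation of a toy female-lethal release model.

    females:       wild-type females
    wild_males:    wild-type males
    carrier_males: males carrying one copy of the lethal construct
    released:      engineered homozygous males released this generation
    fecundity:     surviving daughters (and sons) per female under wild-type mating
    """
    males = wild_males + carrier_males + released
    if males == 0 or females == 0:
        return 0.0, 0.0, 0.0
    p_wild = wild_males / males
    p_carrier = carrier_males / males
    p_released = released / males
    # Daughters survive only if they do not inherit the construct:
    # all daughters of released fathers die; half of a carrier's daughters die.
    daughters = females * fecundity * (p_wild + 0.5 * p_carrier)
    # Sons always survive, but may carry the construct onward.
    wild_sons = females * fecundity * (p_wild + 0.5 * p_carrier)
    carrier_sons = females * fecundity * (0.5 * p_carrier + p_released)
    return daughters, wild_sons, carrier_sons


F, W, C = 1000.0, 1000.0, 0.0   # starting wild population (made-up numbers)
for gen in range(10):
    print(f"generation {gen}: ~{F:.0f} females")
    F, W, C = next_generation(F, W, C, released=5000.0)
```

With these invented numbers the female count collapses within a handful of generations, echoing the rapid elimination reported in the 2012 study; real-world dynamics would also depend on mating competitiveness, density effects, and migration, none of which this sketch includes.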
The experiment sounds relatively harmless, since the genetically engineered fruit flies will, by design, die out rather than escape human control. (A GM organism escaping into the wild may sound like a dystopian fantasy out of a Margaret Atwood novel, but it happened earlier this year, when a long-banned strain of genetically modified wheat was found growing in Oregon.)
But the animals would die in the larval stage inside the olives. Left inside the fruit, they would likely end up in the food supply.
That prospect has sparked criticism from Europe’s anti-GMO community.
“Releasing Oxitec’s GM fruit flies is a deeply flawed approach to reducing numbers of these pests, because large numbers of their offspring will die as maggots in the fruit. Not only does this fail to protect the crop, millions of GM fruit fly maggots — most dead, but some alive — will enter the food chain where they could pose risks to human health and the environment,” Helen Wallace, the executive director of GeneWatch UK, said in a news release.
Oxitec emphasizes that its genetic approach would replace widespread use of pesticides.
“The Oxitec solution to insect pest control is the most environmentally friendly and sustainable form of pest control currently available,” the company claims.
But GeneWatch alleges that the modified flies might actually pass pesticide resistance, potentially inherited from the Greek flies that Oxitec modified, on to wild populations.
Oxitec is making the case that new problems require new solutions.
“European agriculture is facing some severe challenges. The burden of agricultural pests is ever present while the number of control approaches is shrinking in the face of insecticide resistance and de-registration of existing chemical treatments. To survive and prosper, European farming will need to evaluate and embrace new solutions and new technologies which are effective, sustainable and safe,” said CEO Hadyn Parry.
The burning question, then, is whether genetic pesticide strategies are new enough to do more than just set off another round of problems like those industrial agriculture faces now, thanks to the last generation’s innovations.
Photos: Art Poskanzer and Giancarlo Dessi via Wikimedia Commons
Telemedicine and online education aim to connect great teachers and skilled doctors to thousands or millions using video. Google’s latest experiment expands the list, placing experts, from chefs to yoga teachers, on call for anyone, anytime.
Built on Google’s video chat service, Google Hangouts, the new offering is called Google Helpouts and was recently launched to the public. In a blog post, Google engineering VP Udi Manber said, “Our goal is simple: help people help each other.”
And to that end, Helpouts already boasts a small but growing list of expert helpers including psychotherapists, makeup artists, career coaches, and electricians.
Helpers set their own rates by the minute, per session, or both and offer to book future appointments or connect instantly (if they’re online). Rates range from free to $75+ per meeting. Customers pay with Google Wallet, and Google takes 20% of the proceeds.
For a company that generally provides free services (on the surface, at least), Helpouts isn’t necessarily cheap. And because first impressions are everything, and a disappointing session might be the end of it for many, Google is offering a money-back guarantee and controlling quality with an up-front screening process.
Interested experts must first apply for an “invitation,” then submit their service for review before being accepted to the site.
No doubt there will be some who go to Helpouts for a guitar lesson or pointers on how to improve pushups and crunches—but by far the most promising services are consultations with physicians.
“Helpouts will allow our patients to get high quality affordable care,” said Dr. Tom Lee, chief executive of Google-funded One Medical Group. “The system is so wasteful.”
Efficiency for both patients and doctors would appear to be the name of the game. In many cases, there’s little reason to make an appointment three weeks in advance and spend an hour in a waiting room for a 10-minute conversation with a doctor.
Google jumped through the regulatory hoops (HIPAA compliance) to ensure the service is as secure as it can be. Further, all doctors must be licensed, in good standing, and thoroughly vetted by a third party hired by Google.
Helpouts is a cool idea: a kind of hybrid of the YouTube instructional video and the online education course, both of which lack interaction with an expert, whose answers to a series of questions might clarify difficult concepts. And while there are other similar services, LiveNinja for example, none command an audience as big as Google’s.
We don’t know if it will take off, but Google thinks it’s a fairly simple algorithm. As Manber told a recent gathering of reporters, “In the end, convenience and efficiency always win.” And we might add quality into the mix. Just saying.
Why MNT nanomachines won’t work, but there’s still plenty of room at the bottom
An Interview with Dr. Richard A.L. Jones
Richard Jones has a first degree and PhD in Physics from Cambridge University. After postdoctoral work at Cornell University, he was a lecturer in physics at Cambridge for eight years before moving to Sheffield University as a Professor of Physics. He is a Fellow of the Royal Society and is a Council member of EPSRC, the UK government body that funds physical science and engineering. He blogs about nanotechnology and science policy at www.softmachines.org, and his book “Soft Machines: nanotechnology and life” is published by Oxford University Press.
H+: You’re a critic of Eric Drexler–one of the most recognizable names people associate with nanotechnology. You say you like Drexler’s ideas from Engines of Creation (1986) but dislike the ideas from his Nanosystems (1992). What’s the difference?
RJ: I’m both a fan of Eric Drexler and a critic – though perhaps it would be most correct to say I’m a critic of many of his fans. Like many people, I was inspired by the vision of Engines of Creation, in outlining what would be possible if we could make functional machines and devices at the nanoscale. If Engines set out the vision in general terms, Nanosystems was a very thorough attempt to lay out one possible concrete realisation of that vision. Looking back at it twenty years on, two things strike me about it. One was already pointed out by Drexler himself – it says virtually nothing about electrons and photons, so the huge potential that nanostructures have to control their interaction, which forms the basis of the actually existing nanotechnologies that underlie electronic and optoelectronic devices, is unexplored. The other has only become obvious since the writing of the book. Engines of Creation draws a lot on the example of cell biology as an existence proof that advanced nanoscale machines, operating with atomic precision, can be made. This represents one of Drexler’s most original contributions to the formation of the idea of nanotechnology – Feynman’s famous 1959 lecture, in contrast, had very little to say about biology. Since Nanosystems was written, though, we’ve discovered a huge amount about the mechanisms of how the nanomachines of biology actually work, and even more importantly, why they work in the way they do; what this tells us is that biology doesn’t use the paradigm of scaling down macroscopic mechanical engineering that underlies Nanosystems. So while it’s right to say that biology gives us an existence proof for advanced nanotechnology, it doesn’t at all support the idea that the mechanical engineering paradigm is the best way to achieve it. The view I’ve come to is that, on the contrary, the project of scaling down mechanical engineering to atomic dimensions will be very much more difficult than many of Drexler’s followers think.
H+: Does that mean you think it is impossible to make MNT-based nanomachines?
RJ: No, not impossible. But compared to the expectations of their more enthusiastic proponents, I think they will be very much more difficult to make, they will probably need special conditions such as ultra-high vacuum and very low temperatures, and as a result their impact will be considerably less than popularly envisaged.
H+: Why would Drexler’s nanomachines need ultra-low temperatures and vacuum conditions to operate?
RJ: Low temperatures are needed to suppress Brownian motion, and the consequential wobbling around of nanoscale mechanisms and machines, which will hinder their precise operation. The amplitude of this wobbling scales (for classical systems) as the square root of absolute temperature, so to reduce it by a factor of 2 needs a temperature of 75 K (i.e. liquid nitrogen temperature), by a factor of 10 needs a temperature of 3 K (i.e. liquid helium temperature). How much wobbling you can live with is going to depend on the design, of course; macroscopic precision engineering depends on fine tolerances, so one should expect design paradigms based on macroscopic mechanical engineering to be quite sensitive to the effect of positional uncertainty from thermal vibrations.
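To make the arithmetic explicit (a quick sketch, assuming a classical harmonic mode at equipartition, so the mean-square displacement obeys \(\langle x^2 \rangle = k_B T / k\), and taking room temperature as 300 K):

\[
\frac{A(T)}{A(300\ \mathrm{K})} = \sqrt{\frac{T}{300\ \mathrm{K}}}
\quad\Longrightarrow\quad
T = \frac{300\ \mathrm{K}}{n^{2}}\ \text{for an }n\text{-fold reduction in amplitude,}
\]

so halving the wobble needs \(T = 300/4 = 75\) K and a tenfold reduction needs \(T = 300/100 = 3\) K, which are the liquid-nitrogen and liquid-helium figures quoted above.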
Ultra-high vacuum will be needed because in ambient conditions (and even more so in aqueous conditions) any molecules that adsorb and get caught up in rolling or rubbing surfaces are at risk of undergoing uncontrolled mechano-chemistry – the forces on the molecules will often be large enough to break covalent bonds, producing reactive species that will greatly increase the friction and lead to damage to the structures.
H+: Even if MNT nanomachines were confined to tightly controlled labs, couldn’t they still lead to a manufacturing revolution in some areas? For example, maybe you could put graphite in one end and have billions of dollars worth of carbon nanotubes come out the other. Even if the nanomachines don’t directly interact with the outside environment, they could have a huge impact.
RJ: This depends on how much it costs to develop and maintain nano-machines in this controlled environment, what throughput you are able to achieve, and how the costs would compare with rival products. There are commercial technologies that depend on ultra-high vacuum or very low temperatures, but they are expensive and inconvenient. I doubt that making nanotubes this way would ever be competitive, even if it were possible. But a large-scale integrated opto-electronic quantum computer might be a different matter.
H+: How is the world at the nanoscale different from the world at the macroscale, how are organic nanomachines (i.e. cells and bacteria) adapted to these conditions, and what challenges do they pose to MNT-based nanomachines?
RJ: Biological nanomachines exist in a world that is warm and wet. At the nanoscale, the way physics works in these conditions is very unfamiliar to our intuitions, developed as they are at the macro-scale. The random jiggling of Brownian motion not only makes nanoscale particles move around randomly, it means that any structures have random internal flexings, vibrations and stretchings. On these small scales, surfaces stick to each other, and water behaves like a very viscous fluid – like the stickiest molasses – would at the macroscale. Biological nanomachines have evolved, not so much just to cope with this challenging environment, but to thrive in it, using design principles that are completely unknown in our macroscopic engineering. So biological motors don’t just live with the fact that they are rather floppy and are continuously randomly flexing and writhing, that’s why they work at all. And often they work at extraordinarily high efficiencies.
H+: So if you made one of Drexler’s MNT nanomachines and injected it into a human being’s bloodstream to clean up arterial plaques, what would happen to it in this alien nano-environment?
RJ: To be fair to Drexler, many of the images one sees of medical nanobots are the products of artists’ imaginations rather than any properly worked-out nano designs. But to think of the problems that a nanoscale robot designed on the principles of mechanical engineering would encounter in the bloodstream, you need to think of something lacking in rigidity, constantly shaking around due to Brownian motion, and attracting sticky molecules from the environment to every free surface. It’s not an environment that a precisely engineered mechanical design would work well in. A mechanical clock, made of rubber, in a washing machine filled with glue would give you some idea.
H+: In 2005, you proposed six important things MNT proponents could do to bolster the feasibility of MNT. They are listed below. How much progress have they made meeting your challenge?
- Do more detailed and realistic computer modeling of hypothesized nanomachine components (gears, shafts, etc.) to determine if they would hold their shapes and not disintegrate or bend if actually built.
- Re-do computer simulations of MNT nanomachines, this time using realistic assumptions about Brownian motion and thermal noise. The nanomachines’ “hard” parts would be more like rubber, and they would experience intense turbulence at all times. Delicate nanomachine mechanisms could be easily destroyed.
- Re-do nanomachine computer simulations to realistically account for friction between the moving parts and for heat buildup. Heat and vibration could destroy nanomachines.
- Do more detailed computer simulations of Drexler’s nano-scale motor to make sure it would actually work. He never modeled it down to the level of individual atoms. The motor is a critical component in Drexler’s theoretical nanomachine designs as it powers many of the moving parts.
- Design a nano-scale valve or pump that selectively moves matter between the nanomachine’s enclosed inner area and the ambient environment. To be useful, nanomachines would need to “ingest” matter, modify it internally, and then expel it, but they would also have to block entry of unwanted matter that would jam up their exposed nano-moving parts. A valve or pump that is 100% accurate at discriminating between good and bad foreign materials is thus needed.
- Flesh out a convincing, multi-year implementation plan for building the first MNT nanomachines. Either top-down or bottom-up approaches may be pursued. In either case, the plan must be technically feasible and must make sense.
RJ: Not a great deal, as far as I can tell. There was some progress made on points 1, 2 and 3 following the introduction of the software tool Nanoengineer by the company Nanorex, but this seems to have come to a halt around 2008. I don’t know of any progress on 4 and 5. The 2007 Technology Roadmap for Productive Nanosystems from Battelle and the Foresight Nanotech Institute has a good list of things that need to be done to achieve progress with a number of different approaches to making functional nanoscale systems, including MNT approaches, but it does not go into a great deal of detail about how to do them.
H+: What are “soft machines”?
RJ: I called my book “Soft Machines” to emphasise that the machines of cell biology work on fundamentally different principles to the human-made machines of the macro-world. Why “soft”? As a physicist, one of my biggest intellectual influences was the French theoretical physicist Pierre-Gilles de Gennes (1932-2007, Nobel Prize for Physics 1991). De Gennes popularised the term “soft matter” for those kinds of materials – polymers, colloids, liquid crystals etc – in which the energies with which molecules interact with each other are comparable with thermal energies, making them soft, mutable and responsive. These are the characteristics of biological matter, so calling the machines of biology “soft machines” emphasises the different principles on which they operate. Some people will also recognise the allusion to a William Burroughs novel (for whom a soft machine is a human being).
H+: What kind of work have you done with soft machines?
RJ: In my own lab we’ve been working on a number of “soft machine” related problems. At the near-term end, we’ve been trying to understand what makes the molecules go where they go when you manufacture a solar cell from solutions of organic molecules – the idea here is that if you understand the self-assembly processes, you can get a well-defined nanostructure that gives you a high conversion efficiency with a process you can use on a very large scale very cheaply. Further away from applications, we’ve been investigating a new mechanism for propelling micro- and nano-scale particles in water. We use a spatially asymmetric chemical reaction so the particle creates a concentration gradient around itself, as a result of which osmotic pressure pushes it along.
H+: What commercial and/or consumer applications might your research and similar research have?
RJ: There’s still a long way to go, but an obvious goal of the work on propulsion is to make particles that can swim towards a target, using a mechanism analogous to that which bacteria use to swim towards food or away from poisons. Then this could be incorporated in something like a drug delivery device.
H+: Putting aside MNT, what other design approaches would be most likely to yield advanced nanomachines?
RJ: If we are going to use the “soft machines” design paradigm to make functional nano machines, we have two choices. We can co-opt what nature does, modifying biological systems to do what we want. In essence, this is what is underlying the current enthusiasm for synthetic biology. Or we can make synthetic molecules and systems that copy the principles that biology uses, possibly thereby widening the range of environments in which it will work. Top-down methods are still enormously powerful, but they will have limits.
H+: So “synthetic biology” involves the creation of a custom-made microorganism built with the necessary organic parts and DNA to perform a desired function. Even if it is manmade, it only uses recognizable, biological parts in its construction, albeit arranged in ways that don’t occur in nature. But the second approach involving “synthetic molecules and systems that copy the principles that biology uses” is harder to understand. Can you give some clarifying examples?
RJ: If you wanted to make a molecular motor to work in water, you could use the techniques of molecular biology to isolate biological motors from cells, and this approach does work. Alternatively, you could work out the principles by which the biological motor worked – these involve shape changes in the macromolecules coupled to chemical reactions – and try to make a synthetic molecule which would operate on similar principles. This is more difficult than hacking out parts from a biological system, but will ultimately be more flexible and powerful.
H+: Why would it be more flexible and powerful?
RJ: The problem with biological macromolecules is that biology has evolved very effective mechanisms for detecting them and eating them. So although DNA, for example, is a marvellous material for building nanostructures and devices from, it’s going to be difficult to use these directly in medicine simply because our cells are very good at detecting and destroying foreign DNA. So using synthetic molecules should lead to more robust systems that can be used in a wider range of environments.
H+: In spite of your admiration for nanoscale soft machines, you’ve said that manmade technology has a major advantage because it can make use of electricity in ways living organisms can’t. Will soft machines use electricity in the future somehow?
RJ: Biology uses electrical phenomena quite a lot – e.g. in our nervous system – but generally this relies on ion transport rather than coherent electron transport. Photosynthesis is an exception, as may be certain electron-transporting structures recently discovered in some bacteria. There’s no reason in principle that the principles of self-assembly shouldn’t be used to connect up electronic circuits in which the individual elements are single conducting or semi-conducting molecules. This idea – “molecular electronics” – is quite old now, but it’s probably fair to say that as a field it hasn’t progressed as fast as people had hoped.
Trends in nanotechnology
H+: In your opinion, what are the most important nanotechnology advances that have happened in the last 10 years?
RJ: What’s excited me most personally has been the remarkable advance of DNA nanotechnology. The visionary work of Ned Seeman showed how powerful the idea of programmed self-assembly of DNA strands to produce atomically precise nanostructures could be; the last ten years has seen DNA used to make single molecule motors and machines and to carry out logical operations. These are true synthetic “soft machines”. At the moment these are still laboratory curiosities, but given the way the cost of synthesising DNA is falling this may soon change.
H+: As a person who has spent his life working on nanotechnology, do you think the field is advancing exponentially or linearly?
RJ: It’s not a single technology, so it’s not correct to think of it advancing at a single rate. Some things haven’t gone as fast as people perhaps expected – e.g. single atom manipulation using scanning probe microscopes – while other areas have leapt ahead – DNA-based nanotechnology, for example. But, in reality, most technologies advance in fits and starts. Phases of exponential growth are very natural – it’s natural to set yourself the target of making some fixed fractional improvement every year, and that’s all exponential growth is. But then at some point some physical limit kicks in and improvement plateaus. You see this pattern, for example, in the improvement of steam engine efficiencies in the 19th century – this grew exponentially for some decades, but at some point the 2nd law of thermodynamics exerted itself to cause the growth rate to tail off.
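In symbols (a one-line sketch of the point about fractional improvement): if capability \(C\) improves by a fixed fraction \(r\) each year, then after \(n\) years

\[
C_n = C_0\,(1+r)^{n} = C_0\,e^{\,n \ln(1+r)},
\]

which is exponential growth in \(n\); the curve flattens into a plateau once \(C_n\) approaches whatever physical ceiling applies, which for steam engines is the Carnot limit set by the second law of thermodynamics.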
H+: More generally, you’ve cast skepticism on the notion that technology is advancing exponentially or at an ever-growing rate. You said the Human Genome Project–often cited as proof that technology is improving exponentially–was a “bubble” and that innovation in energy and biology is actually slowing down. Do you still believe this? Will things accelerate in the future?
RJ: The idea that the Human Genome Project was a “bubble” originated with the econo-physicist Didier Sornette – he refers to it as a “social bubble”, by which he means that many supporters of the project reinforced each other in their enthusiasm for it, in effect creating an exaggerated general perception of its short-term benefits which attracted resources to the project. Of course “bubbles” have long been associated with new technologies; investors are swept up in enthusiasm for the transformative nature of the technology, over-invest in marginal and oversold projects, and often lose their money. The early days of the railways were full of such episodes, and the dot-com bubble is a more recent one. Technological bubbles may not always be entirely a bad thing – we did end up with railway networks and lots of helpful ICT infrastructure, even if innocent people lost their savings. But the fact that we seem to need bubbles to make progress technologically is actually a sign of our failure to more consciously invest for the long term – as a society, we only seem able to make these investments if we delude ourselves that the returns will arrive quicker than is rationally likely to be the case. But building an economic system on self-delusion seems like a bad idea to me. Our short-termism leads also to a particular type of pathology. Innovation in the realm of information – digital innovation – can be done much more quickly than innovation in the material realm – like nanotechnology – or the biological realm. So if we are biased to favouring innovations that bring returns in the short-term we will end up underinvesting in areas like nanotechnology and nanomedicine, and progress in those areas will slow down.
H+: But don’t advances in information technology enable faster advances in materials and biology? Don’t more powerful computers and computer models speed up research on everything else?
RJ: Yes, indeed they do. Information technology allows us to do much, much more, both in gathering data and in analysing large datasets. On the other hand the science we need to do gets more difficult, because the easy stuff has already been done. But for all the excitement over “big data”, I suspect the limiting factor, the rate-limiting step, for generating really new insights and knowledge remains those scarce resources, human ingenuity and creativity.
H+: Engines of Creation (your favorite Drexler book) predicted that highly advanced nanomachines would inevitably be created, that they would be programmable with data, that they would enable atom-by-atom construction of things (including copies of themselves), that they would revolutionize manufacturing, medicine, human longevity and food production, and that they could remove manmade pollutants from the atmosphere but also serve as weapons of mass destruction. Do you agree with all of that?
RJ: I don’t think anything in the future of technology is inevitable. Technology isn’t an autonomous force that follows some deterministic linear trajectory; instead, as we develop our technologies we find a way through a garden of forking paths. Not every conceivable technology that isn’t contrary to the laws of nature is doable given the constraints that bind us, and the technologies that come to pass depend on the history of what we’ve invented already. Above all, a technology may be possible, but someone has still got to make it happen. As Drexler wrote in Engines of Creation, “we need useful dreams to guide our actions”, but we shouldn’t be surprised if things don’t turn out the way we expect.
H+: What types of nanotech-enabled advances will happen in the future?
RJ: Don’t forget that much everyday information technology is already the result of top-down fabrication that is now operating well within the nanoscale. Within twenty years we should have much more sophisticated optoelectronic devices, which combine nanostructured metals, dielectrics and semiconductors to achieve complete control of the interaction of light and electrons. By that time, we should be moving towards the implementation of quantum computers. More mundanely, and perhaps even sooner, I hope we will be able to manufacture efficient solar cells, very cheaply, on a very large scale. On the longer timescale, I would hope that medical nanotechnology would have progressed much further, so we were able to understand and intervene much more purposefully in the molecular level operations of cells. As a result, for example, we might hope that regenerative medicine would be much further advanced.
I’d hope to see much more selective drug delivery, for example cancer therapies that were more effective and with fewer side effects, reliable gene therapy to correct genetic diseases, delivery of small interfering RNA molecules, for example as anti-viral agents, to give just a few examples. In regenerative medicine, I’d hope it would lead to reliable reprogramming of cells and the development of better scaffolds for new tissues.
H+: Over the next 10-20 years, what are your thoughts on Moore’s Law, and how will we build future generations of integrated circuits?
RJ: People have been predicting the end of Moore’s law for some time, and its continuation this long is a powerful lesson in how effective incremental engineering innovation can be, with brilliant innovations like phase-shift lithography pushing current top-down methods to sizes that seemed impossible only a few years ago. So one should be careful about calling an end to this run of success. That said, Moore’s law will certainly come to an end sooner or later, quite possibly in the next ten years. My own suspicion is that what will kill Moore’s law won’t be physics, but economics. As the cost of a single next-generation fab runs into the tens of billions of dollars, at some point that’s going to be indigestible for the companies and shareholders that have to invest in them. This will probably be a hiatus, rather than a complete end to the growth of computing power, though – at some point we will work out how to implement quantum computing in a scalable way.
H+: Gray Goo doomsday scenario: Realistic? Could it ever happen?
RJ: It must surely be possible in principle to make self-replicating, adaptive devices that take matter and energy from the environment to feed their growth and reproduction; that’s what bacteria are. To realise a “Gray Goo” scenario the artificial replicators would have to out-compete the bacterial ecosystem; this will be a tall order given bacteria’s adaptability and effectiveness. But, in any case, this remains a very distant danger.
H+: So what should we fear about nanotechnology?
RJ: There are things to worry about with the development of nanotechnology, for example with the potential toxicity of nanoscale materials and the societal problems from unequal access to technology. But rather than worrying about runaway technology, my biggest fear now is the opposite – that we won’t devote enough resources to get the innovation we need. In this case we’ll end up devoting more of society’s efforts to just getting by, with a worsening environment and with depleted resources. Of course we also need to recognise that some of the downsides of nanotechnology that people have talked about have already arrived – mass surveillance is here already.
H+: But unequal access to technology has always existed: in the early ’90s, only rich people had the newest devices; ten years later, even many poor people had them. The same could be said for new medicines that are at first expensive and then cheap once their patents expire. Why should unequal access to technology be any worse a problem in the future than it is now?
RJ: Whether we have wide access to new technologies or whether that access is restricted to a privileged few is for us collectively to choose, as it’s a matter of politics rather than technology. But history seems to suggest that having more advanced technologies makes societies less, not more, equal. Access to new technologies gives access to power, and people don’t seem very good at sharing power.
H+: What do you think of the label “nanotechnology”? Is it a valid field? What do people most commonly misunderstand about it?
RJ: Nanotechnology, as the term is used in academia and industry, isn’t really a field in the sense that supramolecular chemistry or surface physics are fields. It’s more of a socio-political project, which aims to do to physical scientists what the biotech industry did to life scientists – that is, to make them switch their focus from understanding nature to intervening in nature by making gizmos and gadgets, and then to try and make money from that.
What I’ve found, doing quite a lot of work in public engagement around nanotechnology, is that most people don’t have enough awareness of nanotechnology to misunderstand it at all. Among those who do know something about it, I think the commonest misunderstanding is the belief that it will progress much more rapidly than is actually possible. It’s a physical technology, not a digital one, so it won’t proceed at the pace we see in digital technologies. As all laboratory-based nanotechnologists know, the physical world is more cussed than the digital one, and the smaller it gets the more cussed it seems to be…
H+: How come no one ever talks about building better micromachines? Why is the focus on nanomachines? After all, micromachines would still be small enough to, say, crawl through blood vessels.
RJ: That’s a good question – microsurgery is already clinically important now, and there will be further improvements as robot microsurgery becomes more advanced. But the fundamental argument for nanomedicine is this – we know that the fundamental processes of cell biology take place at the nanoscale, so if we are going to achieve the goal of intervening in biology’s most fundamental processes, this will need to be done at the nanoscale – at the level of individual biological molecules.
H+: Your thoughts on picotechnology and femtotechnology?
RJ: There’s a roughly inverse relationship between the energy scales needed to manipulate matter and the distance scale at which that manipulation takes place. Manipulating matter at the picometer scale is essentially a matter of controlling electron energy levels in atoms, which involves electron-volt energies. This is something we’ve got quite good at, when we make lasers, for example. Things are more difficult when we go smaller. To manipulate matter at the nuclear level – i.e. on femtometer length scales – needs MeV energies, while to manipulate matter at the level of the constituents of hadrons – quarks and gluons – we need GeV energies. At the moment our technology for manipulating objects at these energy scales is essentially restricted to hurling things at them, which is the business of particle accelerators. So at the moment we really have no idea how to do femtotechnology of any kind of complexity, nor do we have any idea whether there is anything interesting we could do with it if we could. I suppose the question is whether there is any scope for complexity within nuclear matter. Perhaps if we were the sorts of beings that lived inside a neutron star or a quark-gluon plasma we’d know.
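A back-of-the-envelope way to see where these scales come from (a sketch using the standard quantum confinement estimate, with \(\hbar c \approx 197\ \mathrm{eV\,nm} = 197\ \mathrm{MeV\,fm}\)): the kinetic energy of a particle of mass \(m\) confined to a region of size \(d\) is roughly

\[
E \sim \frac{\hbar^{2}}{2md^{2}} = \frac{(\hbar c)^{2}}{2\,(mc^{2})\,d^{2}}.
\]

For an electron (\(mc^{2} \approx 0.51\) MeV) confined to atomic dimensions, \(d \approx 0.1\) nm, this gives a few eV; for a nucleon (\(mc^{2} \approx 939\) MeV) confined to \(d \approx 1\) fm, it gives roughly 20 MeV, matching the electron-volt and MeV scales in the answer above.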
H+: In what ways should science policy and the way science is practiced be changed?
RJ: I’m most familiar with science policy in the UK, but I think the UK exhibits some of the difficulties that the USA has, only in a more extreme form. In the last thirty years or so our governments have focused on a “supply-side” science policy – it’s been assumed that if one supports basic science and ensures a supply of skilled and trained manpower this will automatically translate into technological innovation. My concern is that we don’t currently attend to the “demand side” of innovation. In the past we had many laboratories (in both the private sector and the public sector) that connected basic science to the people who could convert technological innovations into new processes and products – one thinks in the USA of great institutions like Bell Laboratories. These applied technology labs have been run down or liquidated, and their place has not been fully taken by the new world of spin-outs and venture capital backed start-ups, whose time horizons are too short to develop truly radical innovations in the material and biological realms. So we need to do something to fill that gap.
H+: What do you think of the transhumanist and Singularity movements?
RJ: These are terms that aren’t always used with clearly understood meanings, by me at least. If by Transhumanism we are referring to the systematic use of technology to better the lot of humanity, then I’m all in favour. After all, the modern Western scientific project began with Francis Bacon, who said its purpose was “an improvement in man’s estate and an enlargement of his power over nature”. And if the essence of Singularitarianism is to say that there’s something radically unknowable about the future, then I’m strongly in agreement. On the other hand, if we consider Transhumanism and Singularitarianism as part of a belief package promising transcendence through technology, with a belief in a forthcoming era of material abundance, superhuman wisdom and everlasting life, then it’s interesting as a cultural phenomenon. In this sense it has deep roots in the eschatologies of the apocalyptic traditions of Christianity and Judaism. These were secularised by Marx and Trotsky, and technologised through, on the one hand, Fyodorov, Tsiolkovsky and the early Russian ideologues of space exploration, and on the other by the British Marxist scientists J.B.S. Haldane and Desmond Bernal. Of course, the fact that a set of beliefs has a colourful past doesn’t mean they are necessarily wrong, but we should be aware that the deep tendency of humans to predict that their wishes will imminently be fulfilled is a powerful cognitive bias.
The Longevity Dividend is an advocacy and education initiative that aims to gather sufficient public and political support to reshape the flow of public funds into aging science: to explicitly aim to slow aging and extend healthy life, and to greatly increase government funding for that goal via the National Institutes of Health. The initiative has been around for some years, but of late the level of organization and public advocacy has stepped up a few notches. This, the continuing success of the SENS Research Foundation in gathering support and allies in the scientific community, and Google's Calico operation are, I think, all signs of the times. Past years of persistent advocacy have succeeded in waking up some portions of the community, and the results are now emerging. This is the entry into the next cycle of development, in which there is a lot more funding and interest for medical research that might potentially extend healthy human life spans.

From "The Longevity Dividend: Geroscience Meets Geropolitics": the authors showcase work in the emerging interdisciplinary field of geroscience, which is based on the knowledge that aging itself is the major risk factor for most chronic diseases prevalent in the older population. "In recent years, researchers studying the biological underpinnings of the aging process have made impressive progress in understanding the genetics, biology, and physiology of aging," said GSA Executive Director and CEO James Appleby, RPh, MPH. "With adequate research support, we could be in reach of a breakthrough similar to those in public health in the 19th century and medicine in the 20th."
While researcher S. Jay Olshansky's article in the PDF linked below is much more ambitious in terms of goals and possibilities than I recall being the case for his public position in the past - there is a table in there that includes the word "immortality," for example - this is still not open support for SENS and rejuvenation of the old through repair therapies. It is support for slowing aging, which implicitly means support for the present slow road in aging research, the drug development and metabolic manipulation that is unlikely to result in great gains, and which will absorb a great deal of time and money in the course of going pretty much nowhere.
Still, a starting point is a starting point. When the Longevity Dividend folk set to work in order to dispel public misconceptions relating to overpopulation and increased infirmity in longer lives - both absolutely unfounded fears - then all efforts to extend life benefit. A rising tide raises all boats, and it is in everyone's interest to inform the public that yes, life extension in fact means health extension, and population will generally grow only slowly as human life spans become much longer.
Although people who benefit from advances in aging science will probably live longer, the extension of healthy life is the primary goal. In addition, reductions in the infirmities of old age and increased economic value to individuals and societies would accrue from the extension of healthy life.
It is only a matter of time before aging science acquires the same level of prestige and confidence that medicine and public health now enjoy, and when that time comes, a new era in human health will emerge. An abundance of formidable obstacles are standing in the way, including strongly held views of how to proceed, a history of association with dubious aging interventions, and misconceptions about the goals in mind and the impact of success on population growth and the environment. Once the air clears and aging science is translated into effective and safe interventions that can be measured and documented to extend our healthy years, the 21st century will bear witness to one of the most important new developments in the history of medicine.
The article by Dan Perry is also worth reading: "Like the rough beast of the famous poem by W. B. Yeats, a scientific consensus that aging might be slowed to avert chronic diseases in older people is slouching toward serious consideration in public policy."
Richard Miller addressed a scientific audience a few years ago with an only slightly tongue-in-cheek assessment of why biogerontology has failed to be embraced as a panacea for age-related diseases and disability among the older population. Miller assessed the obstacles to finding a cure for aging as 85 percent political and 15 percent scientific, and went on to enumerate the political obstacles.
Regardless of which of Miller's hurdles are most daunting, the fact remains that federal funding of biomedical research continues to pursue cures and better treatments for specific diseases, especially for those with vocal constituencies. Recent developments, however, including congressional interest and creation of the trans-NIH Geroscience Interest Group (GSIG), are setting the stage for a determined push for increased federal support for age-modifying research with clinical potential.
SENS and rejuvenation research is still not a part of this funding picture. Those involved are generally much more conservative, or at least feel the need to appear so in public. I think it will take more years of steady growth in funding and support for SENS, and the emergence of one or two important technology demonstrations in rejuvenation resulting from ongoing SENS Research Foundation projects for it to start to feature in discussions of large-scale funding and goals. All funding at this level is political, the public funding much more so of course, and change is slow.
I guess you could say that I’ve always been perplexed by the concept of time. Painfully aware of things being forgotten, from my 6th birthday I began devoting the hours before bedtime to logging the day’s details into a journal, in an OCD-style panic that left me guilt-ridden if I skimmed over even the slightest detail. Perhaps, I worried, that very detail might have a direct effect on my future. And much to the annoyance of my somewhat patient mother, I learnt I was often right. Passing acquaintances or emotions that might have been skipped over in a more concise entry would often be the exact thing I’d be looking for in retrospect.
What was the original reaction or intuition when I met that person or experienced that thing?
As a child I had an obsession with computers that led to my mother banning me from my dad’s office. I took solace on nights when I could tell he was home by the faint clicks of him typing away at his Dell PC (he was an electrical engineering professor at a nearby university). Happy to have the company of the little fat tomboy child he rarely saw, he would let me sit with him, eating snacks that we hid from my mother, whilst he explained circuit boards and C++ to me.
One of the most profound moments of self-awareness in my childhood was the day my dad’s computer crashed. Looking to get his files back quickly before he came home, I was guided through the process of ‘system restore’. Going through the calendar and rolling my dad’s PC back to the specific date he had previously saved, I was struck by the analogy to my life. Here I was, logging in data every day in my journal. And yet, despite in some way being able to relive the emotions through re-reading previous posts, there was no direct ‘system restore’ to connect to preferential versions of myself. How would I ensure continuity? How would I back up to previous moments of inspiration or enthusiasm? I was aware of the potential to become distracted from the original goals that my ‘self’ had set out for me. Even reading my diary from ages 7-8 showed, as was to be expected, a massive change in my outlook. And I was troubled, much like the PC. How do I enable direct continuity between the person I am, was, and will be? How do I system restore myself in the case that I get distracted, or fail, or lose perspective?
My first action was to begin writing letters to my future self. I can’t quite convey the humility of opening a letter on your 18th birthday that you wrote 10 years prior. As adults we look back at our childhood selves and consider them inferior. Yet our childhood views, so pure and untouched by the later mishmash of growing up, hold an honesty and a directness.
‘Dear Riva, Riva here. I am 8. You are 18. You are reading this. Hello. How are you? I hope you are well and happy.’
And so it went on for 5 sides of paper. I’d even included a little side letter for any potential boyfriend I might have. I’ll never forget handing it over and watching this guy read a note from his girlfriend, aged 8, telling him to ‘be good to me’ … accompanied by a carefully drawn picture of Bart Simpson.
Obsessed with metaphysics, I’d later skip school to go to any talk related to time. There was the 16th century nobleman a teacher told me about, who, despite having control of most of the land in his region, began taking apart clocks later in life, wondering why something so basic controlled him. Then there was Dali, with his melting clock, which I learnt was more of a reference to camembert than to metaphysics. The French phrase ‘le temps détruit tout’, and the inscriptions on 15th century wedding rings in London museums that stated ‘memento mori’. What happens if I don’t want to?
I ended up working in technology. I seemed more surprised by it than my friends did, who reminded me of sleepovers where they had woken up to find I had sat in front of their computer the whole night, eating cookies straight out of the pack. I wanted to see what software they had, and moreover I wanted to find their parents’ secret folders. By this point my mother had refused to even let me own a computer. But aged 22, I was studying programming languages in Berlin. It became quite obvious to me that if we could harness the power of technology and direct it towards genuine world problems instead of consumerism, we would be able to do great things. So I invited people to my house to discuss this. We started off with 12 people in my living room; by the time I left Berlin there was a discussion group of nearly 300.
So what’s my point in telling my story? Well, because now, more than ever, I’m worried. Those hopes for the future (humanity living longer, better and happier) are goals not confined to transhumanism in any way. But as I’ve got more involved with venture capital, start-ups and emerging technologies, I’ve noticed how distracted people have become. Perhaps you’ll assert that there’s an ideological optimism in thinking that people even have these original intentions. But I can’t help feeling that we all do, somewhere. It’s just that we lose track: no logging, no continuity, no reflection. I think most of our childhood selves would abhor us now. We become a sponge for everyone else’s ideas, reflecting the world around us. And until we collectively agree that the scene we reflect is utopian, we have to resist that. In some way, society needs to ‘system restore’.
Imagine a state of mind free from any biases: from anything read, felt or experienced. Almost (if not totally) impossible, so we’re left deducing things through logic. But the idea of training oneself to be free from as much cultural, social or personal bias as possible lies, to me, at the foundation of attempting rational thought. For some it comes easily, but for most of us it’s something we need to train ourselves to do. To a degree, we all need to at least attempt to think outside of ourselves and to play devil’s advocate to our own convictions.
At the first Berlin Singularity event, when a new friend proposed the topic of life extension, I was horrified. But then I was even more troubled by the fact that I was horrified. I made myself set out the argument for my offence. Then, one by one, I realised that none of my propositions for concluding that life extension was ‘insane, gross, disgusting, egotistical’ were actually valid. They were all marred by an extreme social and cultural bias. Just because a thing had always been a certain way did not mean that it was valid. Just because we had accepted aging in the past, as a result of not really having any other choice, did not mean it was valid. Accepting aging was totally illogical when we do not accept cancers, or accidents, or any other cause of mortality. I couldn’t find any differences between these diseases and the notion of aging. This wasn’t about being ‘immortal’ (a word I think we all need to shun), but if I loved life (and the lives of those around me), why would I not want to enjoy it, healthier, for as long as I possibly could?
All the money in the world can’t stop time from destroying everything. And although we may never free ourselves from its grasp in some way or another, we’re able now to confront biological time in a way never considered before. We’re all racing against internal clockwork. And just like the nobleman taking apart the clock to try and gain control of the one thing that eluded him, so too are longevity research groups taking apart our internal clockwork and examining the mechanism. It’s the stuff that humanity has always dreamt of. It’s also one of the areas least discussed outside our relatively small circle.
So how do we get people to system restore? Outreach is the biggest downfall in the longevity space. Thinking research is the only financial necessity is the grandest failure of most companies I meet. And when one biotech startup founder told me he ‘didn’t need every Tom, Dick or Harry understanding his research’, I agreed - for all intents and purposes, I’d be quite insane to think that the masses are going to care enough to understand the fine details of his molecular nanotechnology company. But at the same time, we all need to collectively promote our end goals. In simple terms, his company was aiming for a simple thing that everyone could understand - he just needed to step outside the science realm to understand how to communicate it. And by that I mean outside of the circles we move in, collaboratively, and without egoism. Every piece of outreach I work on now, I consider part of a bigger picture: an attempt at a social system restore.
The first step is encouraging the avoidance of distraction. Last year I was asked to lecture business and marketing students at a university in Berlin. I thought about the things they had been taught by other members of staff - the focus on consumerism, the focus on sales. What companies would these students go on to form? What would they potentially go on to fund? Instead of following the past - all the things that once were and have always been - ourselves, our peers and our students need to contemplate that every action, whether business or personal, is a direct contribution to a collective future. It seems obvious but remains poorly followed. I daydream of kids and grandkids and future generations who will have no idea my array of molecules even existed. Just as I was aware as a child that the smallest detail could have a massive impact down the line, so too am I aware of it in adulthood. To think as the Iroquois tribes did: “to look ahead, as one of the first mandates given to us as chiefs, to make sure and to make every decision that we make relate to the welfare and well-being of the seventh generation to come. . . .”
“What about the seventh generation? Where are you taking them? What will they have?”
The transhumanist and life extension movement gets a bad rap for being egotistical. I can’t help but consider it to be the complete opposite. The reason I’m drawn to putting more funding and attention into longevity research is not simply my own benefit. I’ve stated time and time again that as much as I reject aging, I’ll only age knowing I have somehow contributed to making it better for the generations to come. Our lives are our letters to the future. It’s going to take a generation thinking outside of itself for once. As a species, we need to get our goals in check. Consider all the money and time channelled into projects with little or no positive legacy. And the projects that are aiming for positive impacts aren’t telling anyone about it. Speak up. We’re listening.
For every company I work with, I like to give the same gift: a copy of one of Georges Seurat’s paintings. His use of pointillism - a collection of microdots on the canvas in a multitude of colours - seems nonsensical from up close, but from further back it forms a beautifully intricate scene. To me this is the perfect analogy for the biotechnology movement. Each company is a tiny dot, part of a bigger picture, a bigger goal: nonsensical to the masses up close, in the lab, but from further back a busy and exciting landscape.
It’s up to us to collaborate on portraying the bigger picture. And it’s not just about companies putting more money into outreach, it also comes down to personal effort. There shouldn’t be the slightest whiff of egoism in this movement. What we’re aiming for is a better life for humanity. And I’m not sure there could be a better call for a little altruism than that.
Riva-Melissa Tez is the Associate Director of Stuart Calimport’s Longevity Intelligence Communications. Riva-Melissa studied Philosophy at University College London and has a track record of entrepreneurship and social innovation, including the co-creation of a social network for children. Her passion for philosophy got her interested in the deeper implications of technology, and she founded Berlin Singularity, a group focused on bringing pro-longevity and futurist discussions to mainland Europe. Now living in San Francisco, she co-runs Kardashev Communications, a German/US consultancy group promoting and connecting funds to emerging technologies. She is a lecturer at the DAB university in Berlin and regularly presents and writes about the absurdity of death, and the different approaches that science and technology groups are using to tackle this problem worldwide.
The following is the opinion of the author only and does not represent an official position of H+ Magazine or Humanity+.
Transhumanism is a positive philosophy about the future based in optimism, rational thinking and the application of science and technology to improve the human condition. We seek to live longer, stay healthier, and become smarter and even more physically fit. We want to develop tools and technologies to help ourselves and others do the same.
We want to live longer, be healthier and happier, become smarter, keep learning and have more fun. And we propose using science and technology to do it. Does this sound good? Then you are very possibly a transhumanist.
Transhumanism isn’t a cult or religion. There are no canonical texts or predetermined methods. “What it is, is up to us.”
Transhumanism is compatible with various religious beliefs and traditions. But, inevitably, because it is based in science, the conclusions of transhumanism may contradict some elements of traditional religious beliefs. For example, transhumanists seek to extend life and do not accept death as a given fact due to a “creator” or supernatural deity. We do not consider immortality to be the sole domain of faith or deity and we reject the entire idea of “supernatural”.
Anything that actually happens is by definition “natural,” even if not always understood by us. Transhumanism rejects as fallacious any idea that places nature above man or his works; this is known as the Naturalistic Fallacy. We have the ability to become active agents in our own design and the design of our environment. Transhumanists believe that we cannot ignore our responsibility to consider the designs we create, but we also believe strongly in our right and even our duty to create them. We reject the deep ecology movement and ecological terrorism in all forms.
On a related note, transhumanism importantly suggests that we consider the risks arising from failing to develop technologies as well as those that originate from developing them. The Proactionary Principle is presented as an alternative to the conservative Precautionary Principle, which is applied almost everywhere in scientific and technological policy making. We are not ignorant of possible negative consequences or dangerous technologies and ideas. However, this isn’t about gambling.
Fundamentally, transhumanism is an optimistic idea. We suggest that humans can and should improve themselves, and that we can be better off if we do. Transhumanists believe that the future can be bright and better than today. We reject romanticized notions about the past or the natural world. We consider that what appear to be insurmountable obstacles are sometimes found to be illusory limitations in our own understanding. We think worldwide abundance is possible if we put our minds to the task of creating it.
The DIY or “maker” ethic is deeply embedded in transhumanist thinking and extends to the aesthetics of the movement and its members, as well as to the focus on the enhancement of the human body and mind. Because it is based in science, transhumanism is not just an affiliation, idea or belief but also includes a set of tools you can use. There are practical and demonstrable methods available today for achieving various, though not all, transhuman objectives. Many of us are already enhanced, for example.
We reject the notion that an elite should control access to ideas or technologies. The goals of universal abundance and access to knowledge are founding transhumanist principles, and elite control of knowledge makes them impossible. Technology and science progress more in open and free societies, free from surveillance. And since progress in these areas is fundamental to our goals, we also support access to knowledge and lowering the costs of medicines, medical treatments, and so on. Global access to free knowledge and technology is a critical element of a future abundant transhumanist society.
Transhumanism rejects the idea of a fixed and unchanging human nature. We observe that what we call our “self” is in part a social construct that exists outside of the body, even though the brain is the seat of intelligence and everything that makes us “us”. Since we can now alter the architecture of our bodies and brains, we can become more. Millions of people are already electronic cyborgs, using technology to keep them alive or to see and hear. We also consider that the elements that make humans special (intelligence, consciousness, and the qualia of experience) arise in other living beings and might also arise in man-made artifacts: artificial intelligences and robots.
The Principle of Morphological Freedom is a fundamental concept of transhumanism. We do not accept limitations of law or religion on who we are or what we might become. Racism is clearly unacceptable from this perspective, as are gender and sexual biases and discrimination. However, the principle also requires us to develop new philosophical ideas, moral codes, and legal systems. Consider: are we also free to make ourselves damaged and dysfunctional, or toxic and dangerous? We don’t know all the answers.
Transhumanists accept uncertainty and are not looking for answers to all questions from a universal infallible source. Therefore, transhumanism is not a dead tradition but a living and growing philosophy and movement. We are creating it together right now.
While transhumanism has roots in the 1960s drug culture and countercultural thinkers such as Timothy Leary and Terence McKenna, those links are now largely historical. Transhumanism is not a new age religion or a replacement for religion. It is also not about getting high, although we do seek to raise our hedonic setting, enhance our perception and cognition, and extend our physical performance, and we might use chemicals or other methods to do it. Related to the notion of Morphological Freedom is the no less important idea of Cognitive Freedom.
Transhumanists vary in opinion on various details and topics. For example, not everyone considers the emergence of a greater than human intelligence in a machine to be a near term probability. Others support Kurzweil’s estimates of 2045 or even think he is too conservative. Some transhumanists are cryonicists, but others consider cryonics to be at best a gamble that might not pay off. Many transhumanists are vegans or follow a paleo diet. Some are avowed carnivores. Your mileage may vary.
One area where transhumanists differ is politics. There are transhumanists all over the political spectrum, but what we agree on are essential ideas such as liberty and freedom. Dictatorships and authoritarianism are not compatible with transhumanist ideas such as Morphological Freedom and Cognitive Liberty, for example. But beyond that, our existing ideas about how to organize society and act politically are being rapidly altered through the use of technology and media. Consider that the best possible political decision-making system for a future transhuman society may have no precedent in human history.
However, some individuals have tried to connect transhumanism with far right or far left political ideologies, including both communism and various nationalist ideologies. Transhumanists do consider the use of gene therapies and genetic alteration to cure disease and to enhance ourselves, but we reject the notions of the Eugenics movement and all related, hateful philosophies tied to those ideas. Such co-option is very dangerous to our project, and transhumanists should be aware that multiple attempts have been made to use our ideas to promote other movements. We universally reject the politics of hate and fear.
Some have confused transhumanism with its darker cousin, cyberpunk. Transhumanism is not a literary genre, although there are some excellent transhumanist science fiction books. We don’t believe that the future must be a dystopia or that armageddon is inevitable. Transhumanist ideas are often portrayed in such settings, which leads to a lot of mistaken impressions. Dwelling on such negative scenarios can short-circuit your ability to think rationally, especially about potentially positive far-future concepts. We therefore propose a rational optimism, a positive futurism, and we suggest avoiding dystopic thinking altogether. Instead, how about engaging in real practical work to bring our ideas into reality?
Modern transhumanism is not about mystical beliefs but rather is based on ideas that arose in the late 20th century, starting with FM 2030 in the 1960s and the Extropian movement in the 1980s and 1990s. Modern transhumanism has no true historical links to alchemy or the Western hermetic traditions, although some individuals have tried to make these connections. The alchemists and others did explore many ideas similar to transhumanism, but from a non-scientific perspective.
We don’t reject the wisdom of the ages, but we do respect the progress that application of science and reason have given us. What were once imaginary or magical ideas are commonly made real today through technology. And many more amazing things will soon be possible.
Anyone promoting a return to a pastoral “natural” past, or promoting fear of dystopia or of the future more generally, is not a transhumanist, no matter how they describe their ideas. Those who deserve the name “transhumanist” are trying to create a better, happier, and safer world using technology.
War does not advance the transhuman project, and is one of the greatest dangers we face. But at the same time military developments have pushed the envelope of what is possible and imaginable. Advocates of rapidly advancing change often forget that their ideas can produce conflict or that conflicts can produce advancement.
Finally, transhumanists use reason and rational thinking to make decisions. We want to make good decisions, and we are interested in tools and technologies that help us do that. But we also know that applying compassion and connecting with a community of others are integral to our project. I therefore encourage you to DIY, erase fear, and engineer joy. Invent your own future, become your own avatar. And have fun!
Equal parts art and science, passion and rationality, Jason Silva continues to curate the exponential in a new series called “The Future of Us” on AOL.
Silva told us the new stuff will be recognizable to those who’ve seen “Shots of Awe” or his philosophical espresso shots. Sponsorship by Chevrolet came with creative freedom and the funds to improve the end product.
In making the videos, Silva collaborated with friend Barry Ptolemy, director of “Transcendent Man,” and exchanged Central Park for Big Sur and Malibu.
“I got to take the content from my philosophical espresso shots and build on it—the production value, shoot with two cameras instead of one camera, invest more in stock footage—just take it to the next level.”
In the first episode, Silva sketches a few broad strokes. Our exponential future, he says, is in biotechnology, nanotechnology, and robotics. Episode two dives into biotech—how mind arose from flesh and is, in turn, creating flesh anew.
Quoting Freeman Dyson, he says, “In the near future, a new generation of artists are going to write genomes with the fluency that Blake and Byron wrote verses.”
The third installment covers nanotech, and the latest episode remixes Silva’s “Patterns” video exploring the artificial division between natural and manmade.
Silva told us, “The idea that nature and technology are separate is receding so fast it’s increasingly becoming obvious that there’s a continuum between the born and the made, and we’re just in the middle of it.”
The series will be eight episodes long, released weekly. Silva’s film shorts still attract plenty of eyeballs, and with AOL, he’s exposing a whole new audience to exponential tech and the singularity.
Talking to Silva, you get the feeling guerrilla filmmaking and performance art are his greatest passions. “I make the videos because I have the urge to create.” Art and science, he says, are two sides of the same coin.
Silva is at his best synthesizing, contextualizing, and spinning the story of science, tech, and philosophy for a non-technical audience. And hey, everyone is non-technical, until something inspires them to learn more.
Image Credit: Jason Goodman
UT Dallas computer scientists have developed a technique for creating 3D images faster and more accurately.
The method uses anisotropic (irregular) triangles — triangles with sides that vary in length depending on their direction — to create 3D “mesh” computer graphics that more accurately approximate the shapes of the original objects, and in a shorter amount of time than current techniques.
These types of images are used in movies, video games and computer modeling of various phenomena, such as the flow of water or air across the Earth, the deformation and wrinkles of clothes on the human body, or in mechanical and other types of engineering designs.
Researchers hope this technique will also lead to greater accuracy in models of human organs to more effectively treat human diseases, such as cancer.
“Anisotropic mesh can provide better simulation results for certain types of problems, for example, in fluid dynamics,” said Dr. Xiaohu Guo, associate professor of computer science in the Erik Jonsson School of Engineering and Computer Science whose team created the technique.
The technique finds a practical application of the Nash embedding theorem, which was named after mathematician John Forbes Nash Jr., subject of the film A Beautiful Mind.
How to generate an image up to 125 times faster
The computer graphics field represents shapes in the virtual world through triangle mesh. Traditionally, it is believed that isotropic triangles — where each side of the triangle has the same length regardless of direction — are the best representation of shapes.
However, the aggregate of these uniform triangles can create edges or bumps that are not on the original objects. Because triangle sides can differ in length in anisotropic images, this technique gives the user the flexibility to represent object edges or folds more accurately, as the sketch below illustrates.
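To make the idea concrete, here is a minimal Python sketch (our illustration, not the team’s code) of how an edge is measured under an anisotropic Riemannian metric, the quantity a metric-adapted mesher tries to equalize across all edges:

```python
# Toy illustration: measuring an edge under an anisotropic Riemannian metric.
# A metric-adapted mesher tries to give every edge unit length *in the metric*,
# so triangles stretch in directions the metric deems "cheap" and shrink
# across directions of high curvature (e.g., across a clothing wrinkle).
import numpy as np

# Hypothetical metric: cheap along x, expensive along y (strong curvature
# across y, as across a wrinkle running in the x direction).
M = np.diag([1.0, 100.0])

def metric_length(edge, M):
    """Length of an edge vector under metric M: sqrt(e^T M e)."""
    return np.sqrt(edge @ M @ edge)

e_along = np.array([1.0, 0.0])   # edge running along the wrinkle
e_across = np.array([0.0, 1.0])  # edge running across it

print(metric_length(e_along, M))   # 1.0  -> already "unit" size
print(metric_length(e_across, M))  # 10.0 -> must be ~10x shorter
# A metric-adapted triangle here is therefore ~10:1 elongated along x,
# exactly the anisotropy that lets fewer triangles capture the fold.
```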
Guo and his team found that replacing isotropic triangles with anisotropic triangles in the particle-based method of creating images resulted in smoother representations of objects. Depending on the curvature of the objects, the technique can generate the image up to 125 times faster than common approaches.
For example, Guo’s approach took 155 seconds to create a circular image, while a common approach took more than 19,500 seconds (about 125 times longer) to generate an image of similar quality.
Objects rendered with anisotropic triangles are more accurate, and the improvement is most noticeable to the human eye in the wrinkles and movement of clothes on human figures.
The next step of this research is moving from representing the surface of 3D objects to representing 3D volume. “If we are going to create accurate representations of human organs, we need to account for the movement of cells below the organ’s surface,” Guo said.
Zichun Zhong, research assistant in computer science and PhD candidate at UT Dallas, was also involved in this research. Researchers from the University of Hong Kong, Inria Nancy Grand Est in France, Nvidia Corporation in California and UT Southwestern Medical Center also participated.
“These types of images are used in movies, video games, Computer-Aided Design (CAD), Computer-Aided Manufacturing (CAM), computational fluid dynamics (CFD) fields, scientific visualization, architecture design, etc.,” Zhong explained to KurzweilAI.
“It can work better to capture the behavior of physical phenomena, such as the flow of water or air across the Earth, the deformation and wrinkles of clothes on the human body, or in mechanical and other types of engineering designs. Medical scientists hope this technique will also lead to greater accuracy in models of human organs to more effectively treat human diseases, such as cancer.
“If we can find good commercial partners and attractive applications, it will appear on the market within several months.”
Abstract of SIGGRAPH 2013 presentation
This paper introduces a particle-based approach for anisotropic surface meshing. Given an input polygonal mesh endowed with a Riemannian metric and a specified number of vertices, the method generates a metric-adapted mesh. The main idea consists of mapping the anisotropic space into a higher dimensional isotropic one, called “embedding space”. The vertices of the mesh are generated by uniformly sampling the surface in this higher dimensional embedding space, and the sampling is further regularized by optimizing an energy function with a quasi-Newton algorithm. All the computations can be re-expressed in terms of the dot product in the embedding space, and the Jacobian matrices of the mappings that connect different spaces. This transform makes it unnecessary to explicitly represent the coordinates in the embedding space, and also provides all necessary expressions of energy and forces for efficient computations. Through energy optimization, it naturally leads to the desired anisotropic particle distributions in the original space. The triangles are then generated by computing the Restricted Anisotropic Voronoi Diagram and its dual Delaunay triangulation. We compare our results qualitatively and quantitatively with the state-of-the-art in anisotropic surface meshing on several examples, using the standard measurement criteria.
I noticed this study today which reinforces the well-known correlation between wealth and longevity, but more strongly reinforces the point that people age at different rates. If you have developed more pronounced age-related disease or disability at 70, then your odds of reaching 90 are not so good in comparison to your more healthy peers. Aging is damage, and age-related disease and degeneration is the visible manifestation of that damage. Insofar as we have control over the pace of aging, that is a matter of good lifestyle choices: exercise, calorie restriction, good use of preventative medical resources, and so forth. There is also the matter of supporting research so as to improve the capabilities of the medical technologies available in your old age - a factor much more important than the others when it comes to determining your expected length of life.

To identify factors associated with survival to the age of 90 years old in 70+ elderly people [we examined data from] 75 randomly selected administrative communities in Gironde and Dordogne (France) [containing members of the] PAQUID prospective cohort on brain and functional ageing. A sub-sample of 2,578 community dwellers aged 70 years and over at baseline in 1988 [were followed] over 20 years. Data on socio-material environments, lifestyle, health, perceived health, and family background were collected at home every 2-3 years over 20 years, with a prospective update of vital status. Participants were compared according to their survival status (subjects who reached 90 compared to those who did not).
Some factors associated with survival were common to both genders, whereas some others appeared gender specific. For men, tenant status (hazard ratio, HR=1.46), former or current smoking (HR=1.17), disability (respective HR of 1.50, 1.78 and 2.81 for mild, moderate and severe level), dementia (HR=1.51), a recent hospitalisation (HR=1.32), dyspnoea (HR=1.32), and cardiovascular symptoms (HR=1.15) were associated with lower chance of becoming nonagenarian. Conversely, regular physical activity (HR=0.74) was associated with higher chance of survival.
For women, the presence of a professional help (HR=1.19), living arrangements (HR=1.29 and HR=1.33), disability (respective HR of 1.55, 1.95 and 2.70 for mild, moderate and severe disability), dementia (HR=1.54), a recent hospitalisation (HR=1.19), diabetes (HR=1.49), and dyspnoea (HR=1.20) were associated with lower chance of becoming nonagenarian. Conversely, satisfaction of level income (HR=0.87), comfortable housing (HR=0.81), length of living in the dwelling (HR=0.80 upper to 6 years), regular physical activity (HR=0.89) and a medium (HR=0.79) or good (HR=0.68) subjective health, were associated with higher chance of becoming nonagenarian.
Researchers at Scripps Institution of Oceanography at UC San Diego have developed a method for greatly enhancing biofuel production in tiny marine algae by genetically engineering a key fat-regulating enzyme.
The researchers say a significant roadblock in algal biofuel research concerns the production of lipid oils, the fat molecules that store energy and can be processed into fuel: algae mainly produce the desired lipid oils when they are starved of nutrients.
Yet if they are limited in nutrients, they don’t grow well. With a robust diet, algae grow well, but they produce carbohydrates instead of the desired lipids for fuel.
Genetically engineering diatoms
As reported in this week’s online edition of the Proceedings of the National Academy of Sciences (open access), Scripps graduate student Emily Trentacoste and her colleagues used a data set of genetic expression (called “transcriptomics” in laboratories) to target a specific enzyme inside a group of microscopic algae known as diatoms (Thalassiosira pseudonana).
By metabolically engineering a “knock-down” of fat-reducing enzymes called lipases, the researchers were able to increase lipids (oils) without compromising growth. The genetically altered strains they developed, the researchers say, could be produced broadly in other species.
“These results demonstrate that targeted metabolic manipulations can be used to increase accumulation of fuel-relevant molecules, with no negative effects on growth,” said Trentacoste. “We have shown that engineering this pathway is a unique and practical approach for increasing lipid yields.”
“Scientifically this is a huge achievement,” said Mark Hildebrand, a marine biology professor at Scripps and a coauthor of the study. “Five years ago people said you would never be able to get more lipids without affecting growth negatively. This paper shows that there isn’t an intrinsic barrier and gives us hope of more new things that we can try — it opens the door to a lot more work to be done.”
Faster, cheaper production
In addition to lowering the cost of biofuel production by increasing lipid content, the new method has led to advances in the speed of algal biofuel crop production due to the efficient screening process used in the new study.
“Maintaining high growth rates and high biomass accumulation is imperative for algal biofuel production on large economic scales,” the authors note in the paper.
“Increasing lipid accumulation in microalgae is a major priority to boost the economic viability of algal biofuels, but growth and biomass are also important characteristics in large-scale production systems,” Trentacoste told KurzweilAI. “The specific enzyme targeted in this study is conserved throughout eukaryotes, and could be targeted in other production strains as well, thus these methods could be applied to many algal biofuel systems.
“A U.S. provisional patent application has been filed in relation to this invention,” she said. “Interested licensees can contact Dr. Donald Kakuda (firstname.lastname@example.org) at University of California-San Diego’s Technology Transfer Office.”
The National Institutes of Health, California Energy Commission, Air Force Office of Scientific Research, Department of Energy, and National Science Foundation supported the research.
Abstract of Proceedings of the National Academy of Sciences paper
Biologically derived fuels are viable alternatives to traditional fossil fuels, and microalgae are a particularly promising source, but improvements are required throughout the production process to increase productivity and reduce cost. Metabolic engineering to increase yields of biofuel-relevant lipids in these organisms without compromising growth is an important aspect of advancing economic feasibility. We report that the targeted knockdown of a multifunctional lipase/phospholipase/acyltransferase increased lipid yields without affecting growth in the diatom Thalassiosira pseudonana. Antisense-expressing knockdown strains 1A6 and 1B1 exhibited wild-type–like growth and increased lipid content under both continuous light and alternating light/dark conditions. Strains 1A6 and 1B1, respectively, contained 2.4- and 3.3-fold higher lipid content than wild-type during exponential growth, and 4.1- and 3.2-fold higher lipid content than wild-type after 40 h of silicon starvation. Analyses of fatty acids, lipid classes, and membrane stability in the transgenic strains suggest a role for this enzyme in membrane lipid turnover and lipid homeostasis. These results demonstrate that targeted metabolic manipulations can be used to increase lipid accumulation in eukaryotic microalgae without compromising growth.
Researchers have demonstrated that acarbose, a drug used to treat type 2 diabetes, extends life in mice. This should probably be taken as speculative until more studies are run, as another treatment for type 2 diabetes, metformin, has shown erratic results on life span in rodent studies. Like metformin, the mechanism of action for acarbose involves influencing glucose metabolism, though in a completely different way.
Studies of this nature take place because the high costs of regulation in medical development, along with the reluctance of regulators to approve anything new these days, make it more cost-effective to find marginal new uses for existing drugs than to go out and develop new therapies or new classes of therapies. It is unfortunate that so much research time is diverted into channels that cannot result in radical breakthroughs or great advances.

A drug commonly used to treat type 2 diabetes increases the median lifespan of male mice by 22 percent. The effects of the drug known as acarbose were smaller in female mice, producing only a 5 percent increase in lifespan. The study also found that the effect on maximum lifespan was similar in male and female mice, increasing longevity by 11 percent and 9 percent, respectively. "The new results on acarbose support the idea that drugs may someday be developed to prevent many diseases while also slowing the aging process itself."
Acarbose [is] believed to work by slowing the digestion of starches, which prevent[s] rapid increases in blood sugar levels after meals. Most of the mice in the study die of some form of cancer. Authors say the longer lifespan of the acarbose-treated mice suggests that the drug may, through unknown pathways, help to prevent cancer as aging proceeds. [Because] acarbose is known to be safe for long-term human use, it may be possible for clinical researchers to evaluate its effects on aging and age-related diseases, both in people who take the drug to treat their diabetes, and in healthy volunteers. "Further studies in mice may shed light [on] how the cellular and physiological connections between acarbose and control of glucose levels may influence the pace of aging."
Kano is a computer you make yourself. Simple as Lego, powered by Pi.
- Kano Books, illustrated and intuitive
- Kano OS and Levels on 8GB SD card
- DIY Speaker
- Raspberry Pi Model B
- Kano Keyboard Combo
- Custom case
- Card mods and stencils
- Cables: HDMI*, Mini-USB
- Smart power plug (all region pins available)
- WiFi powerup
The software combines Kano OS, a distribution of Debian Linux, with an interface that feels a bit like a console game. It runs six Kano Levels, software projects to make Pong, Snake, Minecraft, videos, and music.
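For a flavor of what those levels teach, here is a minimal, hypothetical sketch of a Pong-style game loop in Python (our illustration, assuming pygame is installed; it is not Kano’s actual lesson code):

```python
# A minimal Pong-style loop in the spirit of Kano's "make Pong" level.
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()
ball = pygame.Rect(320, 240, 12, 12)
paddle = pygame.Rect(300, 460, 80, 10)
vx, vy = 4, 4

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    keys = pygame.key.get_pressed()
    if keys[pygame.K_LEFT]:
        paddle.x -= 6
    if keys[pygame.K_RIGHT]:
        paddle.x += 6
    ball.x += vx
    ball.y += vy
    if ball.left < 0 or ball.right > 640:
        vx = -vx                      # bounce off the side walls
    if ball.top < 0 or ball.colliderect(paddle):
        vy = -vy                      # bounce off the ceiling or the paddle
    if ball.top > 480:                # missed the paddle: restart in the middle
        ball.topleft = (320, 240)
    screen.fill((0, 0, 0))
    pygame.draw.rect(screen, (255, 255, 255), paddle)
    pygame.draw.ellipse(screen, (255, 255, 255), ball)
    pygame.display.flip()
    clock.tick(60)
pygame.quit()
```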
In another leap for 3D printing, researchers at the USC Viterbi School of Engineering have developed a faster 3D printing process that allows for 3D-printing multi-material objects in minutes instead of hours.
Fabrication time and the complexity of multi-material objects have been a hurdle to widespread use of 3D printing.
Speeding up printing
USC Viterbi researchers developed improved mask-image-projection-based stereolithography (MIP-SL) to drastically speed up the fabrication of homogeneous 3D objects. In the MIP-SL process, a 3D digital model of an object is sliced by a set of horizontal planes and each slice is converted into a two-dimensional mask image.
The mask image is then projected onto a photocurable liquid resin surface and light is projected onto the resin to cure it in the shape of the related layer.
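The slicing step is easy to picture in code. Here is a simplified Python sketch (our illustration, not the USC software) that slices an implicit sphere into one binary mask image per layer; in the real process, each mask would be projected onto the resin to cure that layer:

```python
# Simplified MIP-SL slicing sketch: an implicit model stands in for a real
# mesh, and in-memory arrays stand in for projector frames.
import numpy as np

def slice_to_masks(radius=1.0, layer_thickness=0.1, resolution=128):
    """Slice an implicit sphere into one binary mask image per layer."""
    xs = np.linspace(-1.2, 1.2, resolution)
    X, Y = np.meshgrid(xs, xs)
    masks = []
    z = -radius
    while z <= radius:
        # A pixel belongs to the slice if it lies inside the sphere at height z.
        inside = X**2 + Y**2 <= radius**2 - z**2
        masks.append(inside.astype(np.uint8) * 255)  # white = cure resin here
        z += layer_thickness
    return masks

masks = slice_to_masks()
print(len(masks), masks[0].shape)  # e.g. ~21 layers of 128x128 masks
```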
The USC Viterbi team also developed a two-way movement design for bottom-up projection so that the resin could be quickly spread into uniform thin layers. As a result, production time was cut from hours to a few minutes.
In their latest paper, the team successfully applies this more efficient process to the fabrication of heterogeneous objects (which comprise different materials that cure at different rates).
This new 3D printing process will allow for dental and robotics models, for example, to be fabricated more cost- and time-efficiently than ever before.
“Multi-material printers are commercially available from Stratasys (Objet Connex). However, only limited materials (photocurable resins) can be used since liquid resins need to pass through small nozzles. Our approach may expand the selections of base materials that are used in multi-material printing,” Chen explained to KurzweilAI.
“Our system provides more design freedoms for product designers and may enable them to design components with better performance or multi-functions,” added Chen, a professor in the Daniel J. Epstein Department of Industrial and Systems Engineering and the study’s lead researcher.
“It is still in the research phase. We will actively commercialize it through licensing to existing companies or creating a new company in the future.”
Chen and his team next plan to investigate how to develop an automatic design approach for heterogeneous material distribution for user-specified physical properties and how to improve the fabrication speed.
The study was partially supported by the National Science Foundation.
Abstract of Proceedings of the 2013 International Mechanical Engineering Congress & Exposition (IMECE 2013) paper
Heterogeneous object modeling and fabrication has been studied in the past few decades. Recently the idea of digital materials has been demonstrated by using Additive Manufacturing (AM) processes. Our previous study illustrated that the mask-image-projection based Stereolithography (MIP-SL) process is promising in fabricating such heterogeneous objects. In the paper, we present an integrated framework for modeling and fabricating heterogeneous objects based on the MIP-SL process. Our approach can achieve desired grading transmission between different materials in the object by considering the fabrication constraints of the MIP-SL process. The MIP-SL process planning of a heterogeneous model and the hardware setup for its fabrication are also presented. Test cases including physical experiments are performed to demonstrate the possibility of using heterogeneous materials to achieve desired physical properties. Future work on the design and fabrication of objects with heterogeneous materials is also discussed.
A computer program called the Never Ending Image Learner (NEIL) is now running 24 hours a day at Carnegie Mellon University, searching the Web for images, doing its best to understand them. And as it builds a growing visual database, it is gathering common sense on a massive scale.
NEIL leverages recent advances in computer vision that enable computer programs to identify and label objects in images, to characterize scenes and to recognize attributes, such as colors, lighting and materials, all with a minimum of human supervision. In turn, the data it generates will further enhance the ability of computers to understand the visual world.
But NEIL also makes associations between these things to obtain common sense information: cars often are found on roads, buildings tend to be vertical, and ducks look sort of like geese.
“Images are the best way to learn visual properties,” said Abhinav Gupta, assistant research professor in Carnegie Mellon’s Robotics Institute. “Images also include a lot of common sense information about the world. People learn this by themselves and, with NEIL, we hope that computers will do so as well.”
Since late July, the NEIL program has analyzed three million images, identifying 1,500 types of objects in half a million images and 1,200 types of scenes in hundreds of thousands of images. It has connected the dots to learn 2,500 associations from thousands of instances.
You can view NEIL’s findings at the project website (or help train it): http://www.neil-kb.com.
World’s largest structured visual knowledge base
One motivation for the NEIL project is to create the world’s largest visual structured knowledge base, where objects, scenes, actions, attributes and contextual relationships are labeled and catalogued.
“What we have learned in the last 5-10 years of computer vision research is that the more data you have, the better computer vision becomes,” Gupta said.
Some projects, such as ImageNet and Visipedia, have tried to compile this structured data with human assistance. But the scale of the Internet is so vast — Facebook alone holds more than 200 billion images — that the only hope to analyze it all is to teach computers to do it largely by themselves.
Abhinav Shrivastava, a Ph.D. student in robotics at Carnegie Mellon and a member of the project team, said NEIL can sometimes make erroneous assumptions that compound mistakes, so people need to be part of the process. A Google Image search, for instance, might convince NEIL that “pink” is just the name of a singer, rather than a color.
“People don’t always know how or what to teach computers,” he observed. “But humans are good at telling computers when they are wrong.”
People also tell NEIL what categories of objects, scenes, etc., to search and analyze. But sometimes, what NEIL finds can surprise even the researchers. Gupta and his team had no idea that a search for F-18 would identify not only images of a fighter jet, but also of F18-class catamarans.
As its search proceeds, NEIL develops subcategories of objects — cars come in a variety of brands and models. And it begins to notice associations — that zebras tend to be found in savannahs, for instance, and that stock trading floors are typically crowded.
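NEIL’s actual pipeline is far more sophisticated, but the flavor of this association mining can be shown with a toy co-occurrence counter in Python (a sketch of ours, with hypothetical labels, not NEIL’s code):

```python
# Toy sketch: proposing common-sense relationships from label co-occurrence.
from collections import Counter
from itertools import combinations

# Hypothetical per-image label sets, as an object/scene detector might emit.
images = [
    {"zebra", "savannah"},
    {"zebra", "savannah", "acacia"},
    {"car", "road"},
    {"car", "road", "building"},
    {"zebra", "zoo"},
]

pair_counts = Counter()
label_counts = Counter()
for labels in images:
    label_counts.update(labels)
    pair_counts.update(combinations(sorted(labels), 2))

# Propose "A tends to co-occur with B" when the pair appears in most images
# containing A (a crude conditional-probability threshold).
for (a, b), n in pair_counts.items():
    if n / label_counts[a] >= 0.5 and n >= 2:
        print(f"{a} tends to co-occur with {b} ({n} images)")
# Prints the zebra/savannah and car/road associations, not the one-off pairs.
```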
NEIL is computationally intensive, the research team noted. The program runs on two clusters of computers that include 200 processing cores.
“NEIL’s knowledge can be used wherever machine perception is required (e.g., image retrieval, robotics applications, object and scene recognition, describing images, visual properties of objects, and even visual surveillance),” Gupta explained to KurzweilAI.
“NEIL has analyzed more than 5 million images and built a database of 0.5 million images and 3000 relationships in 4 months. The NEIL visual knowledge base also includes visual models of concepts (e.g., car, crowded, trading floor) and relationships between concepts (e.g., cars have wheels, trading floors are crowded). These models and relationships will be made available for academic research use. We also invite academic users to submit concepts that they would like NEIL to learn and later use these models for their own research.
“Once the technology is mature (hopefully in the near future), we expect NEIL’s knowledge base to have multiple commercial applications.”
The research is supported by the Office of Naval Research and Google Inc.
Abstract of International Conference on Computer vision (ICCV) paper
NEIL (Never Ending Image Learner) is a computer program that runs 24 hours per day and 7 days per week to automatically extract visual knowledge from Internet data. NEIL uses a semi-supervised learning algorithm that jointly discovers common sense relationships (e.g., “Corolla is a kind of/looks similar to Car”,“Wheel is a part of Car”) and labels instances of the given visual categories. It is an attempt to develop the world’s largest visual structured knowledge base with minimum human labeling effort. As of 10th October 2013, NEIL has been continuously running for 2.5 months on 200 core cluster (more than 350K CPU hours) and has an ontology of 1152 object categories, 1034 scene categories and 87 attributes. During this period, NEIL has discovered more than 1700 relationships and has labeled more than 400K visual instances.
ICE, the Institute for Customer Experience, has issued a nice Slideshare presentation on transhumanism and transhumanist technologies.
“Transhumanism is the belief or theory that the human race can evolve beyond its current physical and mental limitations by means of science and technology. The more we explored this subject, the more we got fascinated to see how people are riding on the current era technologies to surpass the capabilities of human body. If the current explorations in transhumanism are anything to go by, then, we believe the future will be very exciting!
ICE would love to hear your feedback, comments and suggestions on this presentation. Please mail comments and suggestions to email@example.com.

Transhumans: Technology Powered Superhumans from Institute of Customer Experience
© Institute of Customer Experience, 2012. Republished with permission.
The Institute of Customer Experience (ICE) is a not-for-profit initiative by Human Factors International (HFI) started in 2012 with a vision to create a knowledge platform for designers, technopreneurs and innovators.
Learn more here: http://ice.humanfactors.com
Cryonics is the science and industry of preserving the physical structure of the mind on death, indefinitely preventing its decay through low-temperature storage. At some point future technology will be capable of restoring a preserved individual to active life - and given what we know about aging and the pace of development in biotechnology, it is likely that this will be long past the point at which degenerative aging is cured, and complete control over growth, disease, and regeneration is achieved. Those are arguably easier challenges than that of restoring a vitrified brain into a new body. The difficulty is irrelevant if you can wait for decades or centuries, of course. Time is on the side of the cryopreserved provided that the institutions of cryonics continue for the long term.
The Alcor Life Extension Foundation is one of the small number of cryonics providers, a long-term venture dating back four decades to the early days in which cryonics moved from overambitious amateur venture to a more professional medical undertaking. If you take a look at the Alcor News blog, you'll find a link to a recent Nova video segment that didn't run on air, but can be viewed online, in which host David Pogue meets with Alcor President and CEO Max More to tour the facility and learn about the field of "cryonics."
NARRATOR: Near the hot desert just outside of Phoenix, Arizona is a company called Alcor. Despite the high temperature outside, within, over 100 human bodies are being preserved at very low temperatures. Host David Pogue met with the president and CEO Max More to learn about the field of cryonics.
DAVID POGUE: So who's in this gallery here?
MAX MORE: These are some of our patients. We call them patients because we don't regard them as dead people. The idea is that what we call death today is somewhat of an arbitrary line. Really it's today's doctors giving up and saying, "There's nothing more I can do for this person and I'm letting them go." What we're doing is we're saying, "Let's not quit there. Let's give the future a chance to bring these patients back."
As it turns out, Alcor has a YouTube channel these days. I shouldn't be at all surprised - any organization of any size either has a channel or should have a channel, with YouTube or a similar service. It's an obvious step when it comes to outreach and education, provided your budget rises to at least the modest level required to produce informative videos of a suitable quality. So if you take a look you'll find a brace of videos from the Alcor 40 conference held last year, as well as a series of FAQ videos to explain cryonics and its role in medicine. For example:

In this in-depth analysis of cryonics, Alcor President Max More explores how cryonics is, in fact, simply an extension of critical care medicine.
There’s no shortage of smartphone apps to help people track their health. And in recent months, medical apps have started growing up, leaving behind the novelty of attaching probes to a smartphone to offer, they hope, serious clinical tools.
Last month in a Ted Talk, Shiv Gaglani showed that a standard physical exam can now be done using only smartphone apps and attachments. From blood pressure cuff to stethoscope and otoscope — the thing the doctor uses to look in your ears — all of the doctor’s basic instruments are now available in “smart” format.
Gaglani described the project, called Smartphone Physical, as putting together “the new physicians handbag of the 21st century.”
The work has generated a lot of interest and will likely become the basis for a company. Working with fellow medical students Michael Hoaglin and Michael Batista, Gaglani has identified best-in-class devices and apps. The team is setting standards to continue to expand the list with apps and add-ons that are proven to work. They’re also exploring ways to make all of the individual tools work better together.
The current list of tools and explanatory website aim to raise awareness among doctors about the existence of the new technologies. There are no financial partnerships involved.
Some of the devices introduce aspects of crowdsourcing. For instance, the CellScope otoscope calls on a reference database of thousands of pictures of inner ears, whereas a single doctor will see only a fraction of that over the course of his or her career.
Some of the instruments go beyond the tests performed in a physical exam. For example, SpiroSmart allows the iPhone to measure lung function by analyzing lip reverberation. (Traditional spirometers measure the air the patient exhales.)
The AliveCor ECG device reads cardiac activity, like its conventional counterpart, with electrodes. The electrodes and a single coin-size battery are all that’s added to the smartphone to create a machine accurate enough to be approved by the FDA.
Smaller, especially when it’s also cheaper, can be game changing.
For instance, MobiSante’s ultrasound wand substitutes the clunky and expensive imaging machine with a smartphone and a desktop application, making it both smaller and cheaper than traditional equipment. The company hopes the move will make ultrasound devices available in the 60 percent of the world where they’re currently not.
Even genetic diagnosis, or identifying a microbe by its genetic code, can now be done far from the lab using an add-on qPCR thermal cycler. (Singularity Hub has also covered a small standalone DNA amplifier from Lava Amp).
The devices don’t just give doctors more tools; they make it possible for patients to track their own health indicators between doctor’s visits. Blood pressure and lung function are prime examples. Of course, putting medical instruments in patients’ hands could exacerbate what Gaglani casually referred to as “cyberchondria.” (Yes, WebMD addicts, doctors are onto you.)
“There will certainly be a subset of patients who will try to interpret the data themselves. That’s going to happen, and there’s going to be some psychological ramifications for those patients,” Gaglani told Singularity Hub.
But patients also benefit by uploading their data and being able to document trends for the doctor during a visit. Research shows that patients who feel that their doctor is giving them personalized, rather than generic, advice are more likely to follow it. And doctors like simple instruments that allow them to focus more on patients and less on equipment during office visits, said Smartphone Physical’s Hoaglin.
Still, there are some glitches to iron out before smartphone physicals really take off. The doctor who tries to make his or her smartphone a mobile clinic of sorts currently has to navigate a number of different apps to drive each of the instruments. Some work for both Android and iPhone, but several only work on Apple devices. The data from each of the devices and apps also flow to different cloud platforms.
That’s an obvious target for Smartphone Physical as the project tries to convert what are now novelty devices into reliable medical equipment.
Images courtesy Cellscope, Think Labs and MobiSante
In another major new application of graphene, Columbia Engineering researchers have taken advantage of graphene’s special properties — its mechanical strength and electrical conductivity — to develop a nanomechanical system that can create FM signals — in effect, the world’s smallest FM radio transmitter.
“This is an important first step in advancing wireless signal processing and designing ultrathin, efficient cell phones,” Mechanical Engineering Professor James Hone said. The miniaturized devices can also be put on the same chip that’s used for data processing, he added.
NEMS replacing MEMS
Graphene, a single atomic layer of carbon, is the strongest material known to man, and also has electrical properties superior to the silicon used to make the chips found in modern electronics.
The combination of these properties makes graphene an ideal material for nanoelectromechanical systems (NEMS), which are scaled-down versions of the microelectromechanical systems (MEMS) used widely for sensing of vibration and acceleration. For example, Hone explains, MEMS sensors figure out how your smartphone or tablet is tilted to rotate the screen.
In this new study, the team took advantage of graphene’s mechanical “stretchability” to tune the output frequency of their custom oscillator, creating a nanomechanical version of an electronic component known as a voltage controlled oscillator (VCO).
With a VCO, explains Hone, it is easy to generate a frequency-modulated (FM) signal, which is used for FM radio broadcasting. The team built a graphene NEMS whose frequency was about 100 megahertz, which lies right in the middle of the FM radio band (87.7 to 108 MHz).
They used low-frequency musical signals (both pure tones and songs from an iPhone) to modulate the 100 MHz carrier signal from the graphene, and then retrieved the musical signals again using an ordinary FM radio receiver.
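Frequency modulation itself is straightforward signal math: the carrier’s phase accumulates the integral of the message signal. Here is a scaled-down numpy/scipy sketch (our illustration, using audio-rate stand-ins for the 100-MHz carrier; the Hilbert-transform demodulator stands in for the FM receiver):

```python
# FM at toy scale: s(t) = cos(2*pi*fc*t + 2*pi*kf * integral of m(t)).
# All frequencies are scaled down from the graphene device's 100 MHz band.
import numpy as np
from scipy.signal import hilbert

fs = 48_000                      # sample rate (Hz)
fc = 10_000                      # stand-in carrier (Hz)
kf = 2_000                       # frequency deviation per unit amplitude (Hz)

t = np.arange(0, 0.1, 1 / fs)
message = np.sin(2 * np.pi * 440 * t)          # a 440 Hz musical tone

integral_m = np.cumsum(message) / fs           # running integral of m(t)
fm_signal = np.cos(2 * np.pi * fc * t + 2 * np.pi * kf * integral_m)

# Crude receiver: instantaneous frequency from the analytic signal's phase.
inst_phase = np.unwrap(np.angle(hilbert(fm_signal)))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)
recovered = (inst_freq - fc) / kf              # approximately the 440 Hz tone
```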
“This device is by far the smallest system that can create such FM signals,” says Hone.
While graphene NEMS will not be used to replace conventional radio transmitters, they have many applications in wireless signal processing.
“Today’s cell phones have more computing power than systems that used to occupy entire rooms,” explained Electrical Engineering Professor Kenneth Shepard.
“However, some types of devices, particularly those involved in creating and processing radio-frequency signals, are much harder to miniaturize.
“These ‘off-chip’ components take up a lot of space and electrical power. In addition, most of these components cannot be easily tuned in frequency, requiring multiple copies to cover the range of frequencies used for wireless communication.”
Graphene NEMS can address both problems: they are very compact and easily integrated with other types of electronics, and their frequency can be tuned over a wide range because of graphene’s tremendous mechanical strength.
“There is a long way to go toward actual applications in this area,” notes Hone, “but this work is an important first step.” The Hone and Shepard groups are now working on improving the performance of the graphene oscillators to have lower noise. At the same time, they are also trying to demonstrate integration of graphene NEMS with silicon integrated circuits, making the oscillator design even more compact.
This work is supported by Qualcomm Innovation Fellowship 2012 and the U.S. Air Force, using facilities at the Cornell Nano-Scale Facility and the Center for Engineering and Physical Science Research (CEPSR) Clean Room at Columbia University.
Abstract of Nature Nanotechnology paper
Oscillators, which produce continuous periodic signals from direct current power, are central to modern communications systems, with versatile applications including timing references and frequency modulators. However, conventional oscillators typically consist of macroscopic mechanical resonators such as quartz crystals, which require excessive off-chip space. Here, we report oscillators built on micrometre-size, atomically thin graphene nanomechanical resonators, whose frequencies can be electrostatically tuned by as much as 14%. Self-sustaining mechanical motion is generated and transduced at room temperature in these oscillators using simple electrical circuitry. The prototype graphene voltage-controlled oscillators exhibit frequency stability and a modulation bandwidth sufficient for the modulation of radiofrequency carrier signals. As a demonstration, we use a graphene oscillator as the active element for frequency-modulated signal generation and achieve efficient audio signal transmission.