Netflix has tried a number of different ways to improve its recommendations, and it recently announced that it is increasingly using artificial intelligence to do so. It thus joins the ranks of Facebook, Google and Amazon in using artificial intelligence to pore over users’ habits in hopes of learning what really makes them tick — and thereby to sell them goods and services they might not have found on their own.
But Netflix isn’t buying the computing hardware to run an artificial intelligence system of its own. Instead, it will host its artificial neural networks on competitor Amazon’s Web Services cloud.
Artificial intelligence requires a lot of computing power, which has heretofore limited its use as a business strategy to Internet giants. Netflix, with just a tenth of Facebook’s profits in the last quarter of 2013, is nowhere near as big, although it has boosted profits and users since its failed effort to split streaming and DVD rentals.
While Netflix’s servers have just one or two GPU cards apiece, Amazon’s cloud service will allow the company to tap just the computing power it needs, and only when it needs it.
“[We] wanted to avoid needing special machines in a dedicated data center and instead leverage the full, on-demand computing power we can obtain from AWS,” Netflix said in a blog post.
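To make the "on-demand" idea concrete, here is a minimal, hypothetical sketch of how a team might provision and release a GPU instance from AWS EC2 programmatically using the boto3 SDK. The AMI ID and instance type are illustrative placeholders, not Netflix's actual configuration.

```python
# Hypothetical sketch of on-demand GPU provisioning on AWS EC2 with boto3.
# The AMI ID and instance type below are placeholders, not Netflix's real setup.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single GPU instance only for the duration of a training job.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image with GPU drivers preinstalled
    InstanceType="g2.2xlarge",         # a GPU instance type AWS offered in 2014
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched GPU instance:", instance_id)

# ... run the neural-network training job on the instance ...

# Release the capacity (and stop paying for it) as soon as the job completes.
ec2.terminate_instances(InstanceIds=[instance_id])
```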
Some of Netflix’s AI work is already at play in the recommendations it delivers to viewers. But it’s adding firepower at the same time that the rise of streaming has given it more insight into how viewers watch programs — even when they hit pause. The company has also used aggregated user data to guide its development of original programming. The New York Times famously reported that the popularity of political dramas, Kevin Spacey, and the original British version of “House of Cards” created a Venn diagram from which the award-winning remake emerged.
In some ways then, Netflix’s next original program will be a barometer of its accomplishments in artificial intelligence: If the program is better than “House of Cards,” its neural networks are getting somewhere. If it’s as bad as “Hemlock Grove,” we won’t have to worry about Netflix unearthing our guilty secrets anytime soon.
Future Day at Steam3 Conference, Texas
Bryan Alexander (author of “The New Digital Storytelling”) will be speaking.
A future-focused event to explore and develop multi-dimensional, immersive approaches to the future of education!
Sydney Futurists are hosting a Google Hangout discussion.
Details on Meetup
There is a limit of ten (10) people who can participate in the Hangout directly, i.e., interacting with the others via audio and video. If you want to be one of those ten people, go here at 6pm Saturday:
and click the “Play” button. If you are not one of the initial ten, you can still watch in real time and submit text questions to the participants via YouTube here:
Future Day 2014 Lunch at Thanksgiving Point
Future Day is 1 March, and the Mormon Transhumanist Association invites you to join us for lunch and casual conversation about the future at the Thanksgiving Point Tower Deli in Lehi, Utah, at 11:45am on 1 March 2014. Bring your friends and family, and of course your thoughts and questions about a future of radical flourishing in compassion and creation through technology and religion.
Springfield, Illinois – please contact Matthew for more details.
Likely to include some fuzzy math lessons!
Future Day @BIL in San Francisco will be celebrated – Marc Ciotola (ex-NASA) will coordinate Future Day at BIL this year.
Transhuman Visions 2.0
Future Day will be celebrated at the Transhuman Visions 2.0 conference – with H+ representatives Natasha Vita-More and Linda Glenn.
Future Day – Millennium Project Online
Futurists’ Worldwide 24-Hour Discussion to Celebrate World Future Day March 1st to be Hosted Online by The Millennium Project.
The contacts for the leaders of the collaborating organizations are:
– Association of Professional Futurists, Cindy Frewen, Chair cfw(at)frewenarchitects(dot)com
– Club of Amsterdam: Felix B Bopp, Chairman – felix(at)clubofamsterdam(dot)com
– Humanity+: Adam Ford, Secretary tech101(at)gmail(dot)com
– The Millennium Project, Jerome Glenn, CEO, Jerome.Glenn(at)Millennium-Project(dot)org
– World Future Society: Tim Mack, President tmack(at)wfs(dot)org
– World Futures Studies Federation, Jennifer Gidley, President wfsf.president(at)jennifergidley(dot)com
Wherever you are in the world, you are invited at 12:00 noon [March 1st] in your time zone to join this global conversation about the future.
Hong Kong
There was a raucous pre-Future Day lunch gathering at Hong Kong Polytechnic University, Feb 28 2014, featuring Future Day founder Dr. Ben Goertzel, the OpenCog Hong Kong team and assorted HK futurists. Discussion centered on the planned future release of a (peaceful) army of OpenCog-powered toy robots to learn from, teach and entertain the children of the world; and ultimately transcend human intelligence.
Colombia
Facebook Page: https://www.facebook.com/events/499832553470301/
To mark “International Future Day,” we are organizing, for this March 1st, a short talk at a university on “THE FUTURE OF AGING.”
We don’t have the venue yet, but as soon as we do, we will let you know.
March 1st 2014 — 7 pm, Paris, bar/restaurant “Demain, c’est loin” (“tomorrow, it’s far”) — 9 rue Julien Lacroix. FutureDay, with an emphasis on longevity.
Edouard, Longévité & Santé
Harcourt Butler Technological Institute, Kanpur
Nawabganj – Kanpur
Date/Time: 1:00 pm – 4:00 pm
Location: Harcourt Butler Technological Institute, Kanpur
March 1st is unofficially world “Future Day.” Help make forward thinking into a tradition by participating in our first gathering of the Seattle Futurist Society!
The initial impetus for this gathering came from the Reddit Futurology community. We are hoping to extend the technological, sociological, political, economic and transhuman themes that are commonly discussed online.
We will start with keynote speakers, followed by Q&A, and an open debate/postulation platform. Content will be streamed online for remote questions and discussion.
Brazil – São Paulo
Downtown Future Pizza and Beer – please contact Andre for details.
Future Day Sweden (Stockholm)
For Future Day in Stockholm this year, a number of futures-focused organisations will arrange a collaborative event to illustrate alternative futures. The day will be organised as an experiential game, where participants collect insights about different futures, followed by a co-creative process where teams craft possible future scenarios.
Time: Saturday March 1 from 11AM to 3PM.
Future Day Stockholm 2014 is an event which will be co-created by Människa+, Stockholm Futurists, Guerilla Office, CoHero Game Design, and other organisations.
The event is free, including general admission to Tekniska Museet, where the event is held.
Future Day Edmonton
Celebration of the Future Day holiday (http://www.futureday.org/) at CCIS L1-160, combining those involved, past and present, in the Technology and Future of Medicine course LABMP 590 (http://www.singularitycourse.com/, http://www.youtube.com/user/kimsolez), and those involved in past Leonard Cohen Night events (http://www.leonardcohennights.org/index2.html) and in our planned 2016 event. Participation by Skype is most welcome. We anticipate Tori Sheldon Skyping with us at 4 pm Edmonton time. Entertainment by Mallory Chipman and Joel Crichton. Special presentation by Ken Chapman.
Update: the room number has changed to CCIS L1-440 on the University of Alberta campus (https://www.facebook.com/events/139684502884517/). If you are so inclined, please join us for this event by Skype! Saturday March 1st, 4 pm to 10 pm Mountain Standard Time. Student presentation by William Parker on the Qualcomm Tricorder XPRIZE. The Edmonton Journal is doing a feature story on the LABMP 590 course and this event, and will be sending at least a photographer, and possibly a reporter as well.
Future Day Melbourne (Australia)
At Unitarian Hall, East Melbourne – starts at 10:30.
There will be a number of speakers: Colin Kline (AI and Futurology), Andrew Dun, Matt Fisher (Cryonics), Patrick Robotham (xRisk), James Fodor (Whole Brain Emulation) and many more, covering topics from technological progress to strategic forecasting.
Organised by Adam Ford
Organised by Jose Cordeiro & Fernando Ortega
Stuart Armstrong, a former mathematician currently employed as a philosopher at Oxford University's Future of Humanity Institute, has recently released an elegant little booklet titled Smarter Than Us. The theme is the importance of AGI to the future of the world. While not free, the booklet is available for purchase online in PDF form for a suggested donation of $5 and a minimum donation of 25 cents.
Armstrong wrote Smarter Than Us at the request of the Machine Intelligence Research Institute, formerly called the Singularity Institute for AI -- and indeed, the basic vibe of the booklet will be very familiar to anyone who has followed SIAI/MIRI and the thinking of its philosopher-in-chief Eliezer Yudkowsky. Armstrong, like the SIAI/MIRI folks, is an adherent of the school of thought that the best way to work toward an acceptable future for humans is to try to figure out how to create superintelligent AGI systems that are provably going to be friendly to humans, even as the systems evolve and use their intelligence to drastically improve themselves.
The booklet is clearly written -- very lucid and articulate, and pleasantly lacking the copious use of insider vocabulary that marks much of the writing of the MIRI community. It's worth reading as an elegant representation of a certain perspective on the future of AGI, humanity and the world.
Having said that, though, I also have to add that I find some of the core ideas in the book highly unrealistic.
The title of this article summarizes one of my main disagreements. Armstrong seriously seems to believe that doing analytical philosophy (specifically, moral philosophy aimed at formalizing and clarifying human values so they can be used to structure AGI value systems) is likely to save the world.
I really doubt it!
The Promise and Risk of AGI
Armstrong and I are both lapsed mathematicians, and we both agree generally with 20th century mathematician I.J. Good's sentiment that "the first intelligent machine is the last invention humanity will ever make." In fact Armstrong makes a stronger statement, to wit:
Over the course of a generation or two from the first creation of AI—or potentially much sooner— the world will come to resemble whatever the AI is programmed to prefer. And humans will likely be powerless to stop it.
I actually think this goes too far -- it assumes that the first highly powerful AGI on Earth is going to have a desire to reshape the world according to its preferences. It may not. It may well feel that it's better just to leave much of the world as-is, and proceed with its own business. But in any case, there's no doubt Armstrong gets the transformative power AGI is going to have. Like me, he believes that human-level AGI will transform human society massively; and that it will also fairly rapidly invent superhuman AGI, which will have at least the potential to -- if it feels like it -- transform things much more massively.
Armstrong thinks this seems pretty risky. Rightly enough, he observes that if someone happened to have a breakthrough and create a superhuman AGI today, we would really have no way to predict what this AGI would wreak upon the human world. Heaven? Hell? Something utterly incomprehensible? Quick annihilation?
Saving the World with Analytical Philosophy
I agree with Armstrong that creating superhuman AGI, with our present level of knowledge, would be extremely risky and uncertain.
Where I don't agree with him is regarding the solution to this problem. His view, like that of his MIRI comrades, is that the best approach is to try to create an AGI whose "Friendliness" to humans can be formally proved in some way.
This notion wraps up a lot of problems, of which the biggest are probably:
- It's intuitively, commonsensically implausible that we're going to be able to closely predict or constrain the behavior of a mind massively more intelligent than ourselves
- It seems very hard to constrain the future value system and interests of an AGI system that is able to rewrite its own source code and rebuild its own hardware. Such an AGI seems very likely to self-modify into something very different from what its creators intended, working around any constraints they placed on it in ways they didn't predict
- Proving anything rigorous and mathematical, and also useful, about superintelligent self-modifying AGIs in the real world seems beyond the scope of current mathematics. It may or may not be possible, but we don't seem to have the mathematical tools for it presently.
- Even if we could somehow build an AGI that could be mathematically proven to never revise its value system even as it improves its intelligence -- how would we specify its initial value system?
Many of Armstrong's friends at MIRI are focusing on Problem 3, trying to prove theorems about superintelligent self-modifying AGIs. So far they haven't come up with anything remotely useful -- though the quest has helped them generate some moderately interesting math, which doesn't however tell you anything about actual AGI systems in the (present or future) real world.
Armstrong, on the other hand, spends more time on Problem 4. This is an aspect of the overall problem that MIRI/FHI have not spent much time on so far. The most discussed solution to come out of this group is Yudkowsky's notion of "Coherent Extrapolated Volition", which has many well documented flaws, including some discussed here.
One of Armstrong's conclusions regarding Problem 4 is that, as he puts it, "We Need to Get It All Exactly Right." Basically, he thinks we need to quite precisely formally specify the set of human values, because otherwise an AGI is going to incline toward creating its own values, which may not be at all agreeable to us. As he puts it:
Okay, so specifying what we want our AIs to do seems complicated. Writing out a decent security protocol? Also hard. And then there’s the challenge of making sure that our protocols haven’t got any holes that would allow a powerful, efficient AI to run amok.
But at least we don’t have to solve all of moral philosophy . . . do we?
Unfortunately, it seems that we do.
"Solving moral philosophy" seems to me an outcome extraordinarily unlikely to eventuate. Generally, it seems to me that the discipline of philosophy has never really been about solving problems; it's more about raising issues, questioning assumptions, and provoking interesting thought and discussion.... The odds seem very very high that the problems of moral philosophy are not "solvable" in any useful sense, and that in fact they largely represent basic contradictions and confusions at the heart of human nature... Some of my thoughts about these contradictions and confusions are in a recent blog post of mine. As I see it, the contradictions at the heart of human morality are part of what drives human progress forward. As inconsistent, unsolvable, perplexing human morality struggles and fails to make itself precise and consistent, it helps push us onward...
My father Ted Goertzel, a sociologist, gave a talk in 2012 at the Future of Humanity Institute's AGI Safety and Impacts conference, which was coupled with the AGI-12 AGI research conference, part of the AGI conference series I organize each year. (Come to AGI-14 in Quebec City Aug 1-4 2014 if you can, by the way!) .... During his talk, he posed the following question to the FHI folks (I can't remember the exact wording, but here is the gist):
When, in human history, have philosophers philosophized an actually workable, practical solution to an important real-world problem and saved the day?
Nobody in the audience had an answer.
My dad has always been good at bringing things down to Earth.
Mathematics, on the other hand, will almost surely be part of any future theory of AGI, and will likely be very helpful with AGI development one day.
However, this doesn't necessarily mean that highly mathematical approaches to AGI are the best route at this stage. We must remember that mathematical rigor is of limited value unto itself. A mathematical theory describing something of no practical relevance (e.g. Hutter's AIXI, an AGI design that requires infinitely powerful computers; or recent MIRI papers on Lobian issues, etc.) is not more valuable than a non-rigorous theory that says useful things about practical situations. Sure, an irrelevant math theory can sometimes be a step on the way to a powerfully relevant math theory; but often an irrelevant math theory is just a step on the way to more irrelevant math theories ...
Oftentimes, in the development of a new scientific area, a high-quality non-rigorous theory comes first -- e.g. Faraday's field lines, Feynman diagrams, or Darwin's theory of natural selection -- and then after some time has passed, a rigorous theory comes along. Chasing rigor can sometimes be a distraction from actually looking at the real phenomena at hand.... Put differently: Studying "what can most easily be formalized" (e.g. AGI on hypothetical infinitely powerful machines) is not necessarily a good intermediate step to studying the slippery, currently not-tractably-formalizable aspects of reality.
A Muted Argument Against Pragmatic AGI Approaches
It's worth noting that the aspect of MIRI/SIAI's perspective that I've found most annoying, and argued against here, doesn't rear its head in Armstrong's booklet in any direct way. I'm referring to the idea -- which I've labeled "The Singularity Institute's Scary Idea" -- that an AI, if not created according to a rigorous mathematical theory of Friendliness, is almost certain to kill all humans.
Armstrong's booklet is much more rational on this point, and takes the position, roughly paraphrasing, that: If we don't conceive a much better theory of AGI and the world than we have now, we're really not going to have any reliable way to predict what will happen if a powerful AGI is released upon the world....
This is harder to argue with. "We don't know, and that's scary" is a lot more sensible than "If we can't formally prove it won't kill us, then it definitely will kill us."
Rather than positing, as some SIAI/MIRI supporters have, that if a system like my OpenCog were completed to the level of human-level general intelligence it would inevitably kill everybody due to its lack of a provably safe architecture, Armstrong makes a milder critique of the OpenCog style approach:
Other approaches, slightly more sophisticated, acknowledge the complexity of human values and attempt to instil them into the AI indirectly. The key features of these designs are social interactions and feedback with humans. Through conversations, the AIs develop their initial morality and eventually converge on something filled with happiness and light and ponies. These approaches should not be dismissed out of hand, but the proposers typically underestimate the difficulty of the problem and project too many human characteristics onto the AI. This kind of intense feedback is likely to produce moral humans. (I still wouldn’t trust them with absolute power, though.) But why would an alien mind such as the AI react in comparable ways? Are we not simply training the AI to give the correct answer in training situations?
... [T]hough it is possible to imagine a safe AI being developed using the current approaches (or their descendants), it feels extremely unlikely.
I can answer at least one of his questions from the above. No, if one taught an OpenCog system (for instance) values via interacting with it in real-life situations, we would NOT be "simply training the AI to give the correct answer in training situations." We would be doing something much subtler -- as would be easily observable via studying the internal state of the AGI's mind.
Theory or Experiment First?
I do agree with Armstrong that, before we launch superhuman AGIs upon the world, it would be nice to have a much better theory of how they operate and how they are likely to evolve. No such theory is going to give us any guarantees about what future superhuman AGIs will bring, but the right theory may help us bias and sculpt the future possibilities.
However, I think the most likely route to such a theory will be experimentation with early-stage AGI systems....
I have far more faith in this sort of experimental science than in "philosophical proofs", which I find generally tend to prove whatever the philosopher doing the proof intuitively believed in the first place...
Of course it seems scary to have to build AGI systems and play with them in order to understand AGI systems well enough to build ones whose growth will be biased in the directions we want. But so it goes. That's the reality of the situation. Life has been scary and unpredictable from the start. We humans should be used to it by now!
Solving all the problems of moral philosophy and using the solution to program the value system of an AGI system that has been mathematically proven incapable of drifting from its initial value system as it self-modifies and increases its intelligence -- this ain't gonna happen. I think Armstrong actually realizes this -- but he figures that the further we can go in his suggested direction the better, even if what actually happens is a hybrid of this sort of rigorous approach with a practical engineering/education approach to AGI.
Armstrong's booklet ends a bit anticlimactically, with a plea to donate money to MIRI or FHI, so that their philosophical and formal exploration of issues related to the future of AGI can continue. Actually I agree these institutions should be funded -- while I disagree with many of their ideas, I'm glad they exist, as they keep a certain interesting dialogue active. But I think massively more funding should go into the practical creation and analysis of AGI systems. This, not abstract philosophizing, is going to be the really useful source of insights into the future of AGI and its human implications.
Stuart Armstrong was kind enough to respond to the originally posted version of this article (which was the same as the current version but without this section), in the comments area below the article. He said:
What I expect from formal "analytic philosophy" methods:
1) A useful decomposition of the issue into problems and subproblems (e.g. AI goal stability, AI agency, reduced impact, correct physical models of the universe, correct models of fuzzy human concepts such as human beings, convergence or divergence of goals, etc...)
2) Full or partial solutions to some of the subproblems, ideally of general applicability (so they can be added easily to any AI design).
3) A good understanding of the remaining holes.
And lastly:
4) Exposing the implicit assumptions in proposed (non-analytic) solutions to the AI risk problem, so that the naive approaches can be discarded and the better approaches improved.
Of all these, I think (4) is the only one that's reasonably likely to come about. Philosophy is an awful lot better at finding holes and hidden assumptions than at finding solutions. Of course (4) in itself could be a fantastic, perhaps even critical value-added.
Don't get me wrong, I love philosophy --- the list of philosophers whose works greatly influenced my thinking about AI and cognition would be very long ... Nietzsche, Peirce, Husserl, Dharmakirti, Dignaga, Huang Po, Leibniz, Baudrillard, Wittgenstein, Whitehead, Russell, Benjamin Whorf, Gregory Bateson, Bucky Fuller ... the list would go on and on.... Modern academic philosophy doesn't charm me so much, but David Chalmers, Galen Strawson and Nick Bostrom are definitely among those worth reading.... I think philosophy is great at raising issues, exposing assumptions, and inspiring thought in novel directions.... I just don't see it as being very valuable for definitively "solving problems" ... If philosophy is to have value in our collective attempt to navigate the Singularity, it's IMO more likely to be via inspiring the minds of the scientists who create scientific theories of AGI, in the period when we have early-stage AGIs to interact with and study, and hence have sufficient relevant empirical data to ground such theories...
Japanese researchers at the National Institute for Materials Science and Shinshu University have developed a way to shrink capacitors — key components that store energy — further, which could accelerate the development of more compact, high-performance next-gen electronic devices. The study appears in the journal ACS Nano.
Takayoshi Sasaki and colleagues note that current technology has almost reached its limit in terms of materials and processing, which in turn limits the performance that manufacturers can achieve. In response, researchers have gone to the nanoscale, but “nanocapacitors” are not easy to make. They require harsh, difficult-to-use methods and even then, they may not work that well.
Layers of different types of oxide nanosheets
So Sasaki’s team developed an easier way to make high-performance “ultrathin” capacitors. The researchers found that they could use gentle techniques and mild conditions to create a sandwich consisting of layers of two different types of oxide nanosheets to produce an ultrathin capacitor.
In addition, the new capacitor has a capacitance density as high as ∼27.5 μF (microfarads) per square centimeter, approximately 2,000 times higher than that of currently available commercial products.
The fabrication process, involving layer-by-layer assembly of the nanosheets without costly fabrication lines or special annealing processes for metal electrode layers, is another big benefit. Furthermore, the all-nanosheet capacitors can be readily assembled on plastic or flexible substrates.
Better than graphene
The researchers say that, in the future, the ultrathin capacitors could be used in printed circuit boards and in DRAM memory storage devices, for example. They also speculate that “the virtually infinite varieties of oxide nanosheets, which can be used to assemble various nanosheet architectures, suggest that 2D heterointerfaces will offer an unprecedented versatility for the realization of new 2D states and molecularly thin film devices even beyond graphene.”
The authors acknowledge funding from the Japan Science and Technology Agency and MEXT, Japan.
Abstract of ACS Nano paper
All-nanosheet ultrathin capacitors of Ru0.95O2^(0.2–)/Ca2Nb3O10^–/Ru0.95O2^(0.2–) were successfully assembled through facile room-temperature solution-based processes. As a bottom electrode, conductive Ru0.95O2^(0.2–) nanosheets were first assembled on a quartz glass substrate through a sequential adsorption process with polycations. On top of the Ru0.95O2^(0.2–) nanosheet film, Ca2Nb3O10^– nanosheets were deposited by the Langmuir–Blodgett technique to serve as a dielectric layer. Deposition parameters were optimized for each process to construct a densely packed multilayer structure. The multilayer buildup process was monitored by various characterizations such as atomic force microscopy (AFM), ultraviolet–visible absorption spectra, and X-ray diffraction data, which provided compelling evidence for regular growth of Ru0.95O2^(0.2–) and Ca2Nb3O10^– nanosheet films with the designed multilayer structures. Finally, an array of circular films (50 μm) of Ru0.95O2^(0.2–) nanosheets was fabricated as top electrodes on the as-deposited nanosheet films by combining the standard photolithography and sequential adsorption processes. Microscopic observations by AFM and cross-sectional transmission electron microscopy, as well as nanoscopic elemental analysis, visualized the sandwich metal–insulator–metal structure of Ru0.95O2^(0.2–)/Ca2Nb3O10^–/Ru0.95O2^(0.2–) with a total thickness less than 30 nm. Electrical measurements indicate that the system really works as an ultrathin capacitor, achieving a capacitance density of 27.5 μF cm^–2, which is far superior to currently available commercial capacitor devices. This work demonstrates the great potential of functional oxide nanosheets as components for nanoelectronics, thus contributing to the development of next-generation high-performance electronic devices.
A team of researchers at MIT, Oak Ridge National Laboratory, and in Saudi Arabia succeeded in creating subnanoscale pores in a sheet of graphene, a development that could lead to ultrathin filters for improved desalination or water purification. Their findings are published in the journal Nano Letters.
The new work, led by graduate student Sean O’Hern and associate professor of mechanical engineering Rohit Karnik, is the first step toward actual production of such a graphene filter.
Making these minuscule holes in graphene — a hexagonal array of carbon atoms, like atomic-scale chicken wire — occurs in a two-stage process. First, the graphene is bombarded with gallium ions, which disrupt the carbon bonds. Then, the graphene is etched with an oxidizing solution that reacts strongly with the disrupted bonds — producing a hole at each spot where the gallium ions struck. By controlling how long the graphene sheet is left in the oxidizing solution, the MIT researchers can control the average size of the pores.
A big limitation in existing nanofiltration and reverse-osmosis desalination plants, which use filters to separate salt from seawater, is their low permeability: Water flows very slowly through them. The graphene filters, being much thinner, yet very strong, can sustain a much higher flow. “We’ve developed the first membrane that consists of a high density of subnanometer-scale pores in an atomically thin, single sheet of graphene,” O’Hern says.
For efficient desalination, a membrane must demonstrate “a high rejection rate of salt, yet a high flow rate of water,” he adds. One way of doing that is decreasing the membrane’s thickness, but this quickly renders conventional polymer-based membranes too weak to sustain the water pressure, or too ineffective at rejecting salt, he explains.
With graphene membranes, it becomes simply a matter of controlling the size of the pores, making them “larger than water molecules, but smaller than everything else,” O’Hern says — whether salt, impurities, or particular kinds of biochemical molecules.
Permeability 50 times greater for a high flow rate
The permeability of such graphene filters, according to computer simulations, could be 50 times greater than that of conventional membranes, as demonstrated earlier by a team of MIT researchers led by graduate student David Cohen-Tanugi of the Department of Materials Science and Engineering. But producing such filters with controlled pore sizes has remained a challenge. The new work, O’Hern says, demonstrates a method for actually producing such material with dense concentrations of nanometer-scale holes over large areas.
“We bombard the graphene with gallium ions at high energy,” O’Hern says. “That creates defects in the graphene structure, and these defects are more chemically reactive.” When the material is bathed in a reactive oxidant solution, the oxidant “preferentially attacks the defects,” and etches away many holes of roughly similar size. O’Hern and his co-authors were able to produce a membrane with 5 trillion pores per square centimeter, well suited to use for filtration. “To better understand how small and dense these graphene pores are, if our graphene membrane were to be magnified about a million times, the pores would be less than 1 millimeter in size, spaced about 4 millimeters apart, and span over 38 square miles, an area roughly half the size of Boston,” O’Hern says.
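A minimal arithmetic sketch, using the approximate figures quoted above (0.4-nanometer pores, roughly 5 trillion pores per square centimeter, a centimeter-sized sheet), checks the million-fold magnification analogy; the input values come from the article and abstract, not from the underlying dataset.

```python
# Sanity check of the "magnified about a million times" analogy, using figures
# quoted in the article (assumed approximate values, not the raw data).
import math

SCALE = 1.0e6                       # "magnified about a million times"
pore_diameter_cm = 0.4e-7           # ~0.4 nm pore diameter (from the abstract)
pore_density_per_cm2 = 5.0e12       # ~5 trillion pores per square centimeter
sheet_area_cm2 = 1.0                # centimeter-sized graphene sheet

scaled_pore_mm = pore_diameter_cm * SCALE * 10.0          # cm -> mm after magnification
spacing_cm = 1.0 / math.sqrt(pore_density_per_cm2)        # mean center-to-center spacing
scaled_spacing_mm = spacing_cm * SCALE * 10.0
scaled_area_km2 = sheet_area_cm2 * SCALE**2 / 1.0e10      # 1 km^2 = 1e10 cm^2
scaled_area_mi2 = scaled_area_km2 / 2.59                  # 1 mi^2 is about 2.59 km^2

print(f"pore:    {scaled_pore_mm:.1f} mm")       # ~0.4 mm, "less than 1 millimeter"
print(f"spacing: {scaled_spacing_mm:.1f} mm")    # ~4.5 mm, "about 4 millimeters apart"
print(f"area:    {scaled_area_mi2:.0f} mi^2")    # ~39 mi^2, "over 38 square miles"
```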
With this technique, the researchers were able to control the filtration properties of a single, centimeter-sized sheet of graphene: Without etching, no salt flowed through the defects formed by gallium ions. With just a little etching, the membranes started allowing positive salt ions to flow through. With further etching, the membranes allowed both positive and negative salt ions to flow through, but blocked the flow of larger organic molecules. With even more etching, the pores were large enough to allow everything to go through.
Scaling up the process to produce useful sheets of the permeable graphene, while maintaining control over the pore sizes, will require further research, O’Hern says.
Karnik says that such membranes, depending on their pore size, could find various applications. Desalination and nanofiltration may be the most demanding, since the membranes required for these plants would be very large. But for other purposes, such as selective filtration of molecules — for example, removal of unreacted reagents from DNA — even the very small filters produced so far might be useful.
“For biofiltration, size or cost are not as critical,” Karnik says. “For those applications, the current scale is suitable.”
The work also included researchers at Oak Ridge National Laboratory and King Fahd University of Petroleum and Minerals (KFUPM). The project received support from the Center for Clean Water and Clean Energy at MIT and KFUPM and the U.S. Department of Energy.
University of Manchester researchers recently developed a similar approach to desalination, using one-atom-wide graphene-oxide (GO) capillaries in multilayer GO membranes (laminates), as KurzweilAI reported, but it requires more work to eliminate smaller salt molecules.
Abstract of Nano Letters paper
We report selective ionic transport through controlled, high-density, subnanometer diameter pores in macroscopic single-layer graphene membranes. Isolated, reactive defects were first introduced into the graphene lattice through ion bombardment and subsequently enlarged by oxidative etching into permeable pores with diameters of 0.40 ± 0.24 nm and densities exceeding 10^12 cm^–2, while retaining structural integrity of the graphene. Transport measurements across ion-irradiated graphene membranes subjected to in situ etching revealed that the created pores were cation-selective at short oxidation times, consistent with electrostatic repulsion from negatively charged functional groups terminating the pore edges. At longer oxidation times, the pores allowed transport of salt but prevented the transport of a larger organic molecule, indicative of steric size exclusion. The ability to tune the selectivity of graphene through controlled generation of subnanometer pores addresses a significant challenge in the development of advanced nanoporous graphene membranes for nanofiltration, desalination, gas separation, and other applications.
Yes. A space elevator appears possible and space elevator infrastructure could indeed be built via a major international effort, a study conducted by experts under the auspices of the International Academy of Astronautics has found, Space.com writer Leonard David reports.
Two technologies pacing the development of the space elevator and its other components are an ultra-strong space tether and lightweight solar cells, according to study lead editor Peter Swan.
David quotes Arthur C. Clarke: “The space elevator will be built ten years after they stop laughing…and they have stopped laughing!”
A copy of “Space Elevators: An Assessment of the Technological Feasibility and the Way Forward” is available through Virginia Edition Publishing Company at: www.virginiaedition.com/sciencedeck.
There is some debate over whether rapamycin administration actually slows aging or only reduces cancer risk in mice: both sides argue the point from rigorous studies, but unlike many other compounds and methodologies the evidence for life extension in mice is strong and reproducible. These are debates over the cause of that life extension.
The recent paper quoted below comes from researchers who favor manipulation of mTOR as a way forward to treat aging, and who argue that rapamycin does slow aging. But again, from my point of view all such efforts to develop drugs to alter metabolism to modestly extend life are the slow, expensive road to a poor end result. We should be focused on building therapies to repair the damage that causes aging, an end result that is both of greater utility and can meaningfully help old people. There is not much use in a way to slow aging when you are already old.
Target of Rapamycin (TOR) is involved in cellular and organismal aging. Rapamycin extends lifespan and delays cancer in mice. It is important to determine the minimum effective dose and frequency of its administration that still extends lifespan and prevents cancer. Previously we tested 1.5 mg/kg of rapamycin given subcutaneously 6 times per two weeks followed by a two-week break. This intermittent treatment prolonged lifespan and delayed cancer in cancer-prone female FVB/N HER-2/neu mice.
Here, the dose was decreased from 1.5 mg/kg to 0.45 mg/kg per injection. This treatment was started at the age of 2 months (group Rap-2), 4 months (Rap-4), and 5 months (Rap-5). Three control groups received the solvent from the same ages. Rapamycin significantly delayed cancer and decreased tumor burden in Rap-2 and Rap-5 groups, increased mean lifespan in Rap-4 and Rap-5 groups, and increased maximal lifespan in Rap-2 and Rap-5 groups. In Rap-4 group, mean lifespan extension was achieved without significant cancer prevention.
The complex relationship between life-extension and cancer-prevention depends on both the direct effect of rapamycin on cancer cells and its anti-aging effect on the organism, which in turn prevents cancer indirectly. We conclude that total doses of rapamycin that are an order of magnitude lower than standard total doses can detectably extend life span in cancer-prone mice.
The history of results achieved while trying to extend mouse life span via manipulation of sirtuins with drugs is not particularly impressive, all told, characterized by an inability to replicate early results, a lack of effectiveness, and challenges from the rest of the scientific community. Nonetheless sirtuins play a role in numerous cellular mechanisms of general interest, so research continues in that sense.
Here one of the later drug candidates for sirtuin manipulation is claimed to modestly extend mean mouse life span - but based on the history you should probably not be terribly excited by this news, even if you consider the development of drugs to slow aging by metabolic manipulation to be a useful activity rather than a distraction from better forms of longevity science:
The prevention or delay of the onset of age-related diseases prolongs survival and improves quality of life while reducing the burden on the health care system. Activation of sirtuin 1 (SIRT1), an NAD+-dependent deacetylase, improves metabolism and confers protection against physiological and cognitive disturbances in old age. SRT1720 is a specific SIRT1 activator that has health and lifespan benefits in adult mice fed a high-fat diet.
We found extension in lifespan, delayed onset of age-related metabolic diseases, and improved general health in mice fed a standard diet after SRT1720 supplementation. Inhibition of proinflammatory gene expression in both liver and muscle of SRT1720-treated animals was noted. SRT1720 lowered the phosphorylation of NF-κB pathway regulators in vitro only when SIRT1 was functionally present. Combined with our previous work, the current study further supports the beneficial effects of SRT1720 on health across the lifespan in mice.
“The future is already here — it’s just not evenly distributed,” William Gibson famously said. Future Day aims to do something about that.
The global event begins at noon March 1 in New Zealand (6 PM Friday Feb. 28 in New York), when Stephanie Pride of the World Futures Studies Federation launches the Future Day Google Hangout, and continues in cities around the world.
Groups in dozens of locations are participating, including Beijing, New Delhi, Rome, London, Melbourne, Dubai, Berlin, Tokyo, Mexico City, Amsterdam, Sydney, Hong Kong, Paris, Nawabganj (Kanpur, India), São Paulo, Stockholm, and Edmonton; and in Colombia, Venezuela, Peru, Israel, and Iran.
U.S. locations include Hawaii, Los Angeles, Thanksgiving Point (Utah), Seattle, Springfield (Illinois), San Francisco (@BIL), Washington, D.C., and Piedmont, California (Transhuman Visions 2.0 conference).
KurzweilAI has also contacted DJ Steve Aoki, who will be performing Saturday in São Paulo and Votuporanga, Brazil.*
Tweet: @futureday @MillenniumProj #FutureDay
* UPDATE Feb. 28 10:48 PM EST: This just in from DJ Steve Aoki, in Sao Paulo, Brazil for his concert Saturday: “It’s no secret that I’m fascinated by future technology and science. We need to do everything in our power to make the future a radically better place, by any means necessary.” Links:
UPDATE Mar 1 12:31 AM EST: World Future Day Meetups
For scientists to determine whether a cell is functioning properly, they have typically had to destroy it with X-rays, possibly producing false accounts of how the cell actually works.
Now, researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have created a new probe that freezes cells to “see” at greater detail without damaging the sample.*
Traditional X-ray fluorescence methods look at cells that have either been immersed in water or dehydrated. For wet specimens at room temperature, the radiation can break the bonds linking molecules together and cause them to scatter, changing the sample’s structure.
For dehydrated specimens, potassium and other diffusible ions are washed away during chemical fixation, which kills the cell and loosens the cell membrane, allowing ions to escape. Moreover, when the sample is dehydrated, the cell can shrink, distort or even collapse.
To address this issue, Argonne researchers developed a hard X-ray fluorescence nanoprobe called the Bionanoprobe, which makes three-dimensional images that map out the locations of trace elements, like iron or potassium, in frozen biological samples.
“We don’t want to dry the sample; we want to keep it hydrated,” says Si Chen, principal author of the study. “We plunge the sample into liquid ethane at very high speeds and then look at the frozen sample directly.”
In a process reminiscent of cryonics, rapidly cooling biological specimens to temperatures of -260°F preserves the natural state of a cell’s organelles and trace elements while retaining the water in the sample. The Bionanoprobe’s vacuum chamber eliminates frosting and convective heating and automatically acquires tomographic (sectioned-image) data sets.
The Bionanoprobe can also produce extremely high-resolution images at the smallest scales — below 100 nanometers. Chen uses X-ray optics called zone plates to focus the X-ray beam down to a minuscule spot. A simple scan produces an image with a full fluorescence spectrum for each scanning step.
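To illustrate what a full fluorescence spectrum at every scanning step yields in practice, here is a hedged, self-contained sketch (not the Bionanoprobe's actual software, and using made-up stand-in data) of turning a raster-scanned spectrum stack into per-element maps by integrating counts around characteristic emission lines.

```python
# Illustrative sketch: building elemental maps from a raster scan that records a
# full fluorescence spectrum per pixel. All array sizes and data are stand-ins.
import numpy as np

ny, nx, n_channels = 64, 64, 2048             # scan grid and detector channels (assumed)
energy_keV = np.linspace(0.0, 20.0, n_channels)
spectra = np.random.poisson(1.0, size=(ny, nx, n_channels)).astype(float)  # fake counts

def elemental_map(spectra, energy_keV, line_keV, window_keV=0.15):
    """Sum counts in a narrow window around an emission line, per scan pixel."""
    mask = np.abs(energy_keV - line_keV) <= window_keV
    return spectra[:, :, mask].sum(axis=2)

fe_map = elemental_map(spectra, energy_keV, line_keV=6.40)   # Fe K-alpha line
k_map = elemental_map(spectra, energy_keV, line_keV=3.31)    # K K-alpha line
print(fe_map.shape, k_map.shape)  # (64, 64) maps, one value per scan position
```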
A recent study created X-ray fluorescence images of an immortal cervical cancer cell line called HeLa cells. The samples were plunge-frozen, chemically fixed and then treated with an iron oxide core in a titanium dioxide shell nanocomposite, which allowed researchers to determine if the nanocomposites actually made it into the cell nucleus.
Gale Woloschak, professor at Northwestern University’s Feinberg School of Medicine, who conducted the study, had created nanoparticles that target and kill cancer cells, but when the researchers wanted to see where the nanoparticles actually wound up in the cell, they ran into trouble with traditional X-ray methods.
“This is the problem,” says Woloschak. “If you think of how two-dimensional X-ray imaging works, X-rays penetrate through the entire cell, so it’s hard to determine whether the nanoparticles are above, below or inside the nucleus. What the Bionanoprobe does is give us a three-dimensional image — we could actually see that the nanoparticles were imbedded in the nucleus.”
The work is reported in “The Bionanoprobe: hard X-ray fluorescence nanoprobe with cryogenic capabilities,” published in the Journal of Synchrotron Radiation. Funding was provided by the National Institutes of Health.
* Two other Department of Energy labs, Pacific Northwest (PNNL) and Lawrence Livermore National Laboratories (LLNL), took another approach to the X-ray problem by using free-electron lasers to create images that accurately reflect the known structure of proteins (KurzweilAI news article here).
Abstract of Journal of Synchrotron Radiation paper
Hard X-ray fluorescence microscopy is one of the most sensitive techniques for performing trace elemental analysis of biological samples such as whole cells and tissues. Conventional sample preparation methods usually involve dehydration, which removes cellular water and may consequently cause structural collapse, or invasive processes such as embedding. Radiation-induced artifacts may also become an issue, particularly as the spatial resolution increases beyond the sub-micrometer scale. To allow imaging under hydrated conditions, close to the ‘natural state’, as well as to reduce structural radiation damage, the Bionanoprobe (BNP) has been developed, a hard X-ray fluorescence nanoprobe with cryogenic sample environment and cryo transfer capabilities, dedicated to studying trace elements in frozen-hydrated biological systems. The BNP is installed at an undulator beamline at sector 21 of the Advanced Photon Source. It provides a spatial resolution of 30 nm for two-dimensional fluorescence imaging. In this first demonstration the instrument design and motion control principles are described, the instrument performance is quantified, and the first results obtained with the BNP on frozen-hydrated whole cells are reported.
Piedmont, California, near Oakland, will be the traditional setting for a radical “deep-future” symposium titled “TRANSHUMAN VISIONS 2.0 — East Bay” on Saturday March 1. Inside the 1950s-era Veterans Memorial Auditorium perched above the town’s Police Department, there will be 12 hours (9 am — 9 pm) of serious brain-boosting ideas, keynoted by eminent transhumanists Natasha Vita-More and Max More.
Here’s how event producer Hank Pellissier described it to KurzweilAI:
To kick it off, NaturalStacks, the conference sponsors, will be providing 450 free doses of their CILTEP “smart drug.” Speakers will include Turchin Alexei* (Moscovite Immortalist), Gennady and Wendy Stolyarov (co-authors of the children’s book Death is Wrong), Kevin Russell (techno-optimist), Monica Anderson (artificial intelligence revolutionary), Zoltan Istvan (The Transhumanist Wager), Brian Wang (Next Big Future) — who will be discussing the “genius babies” of BGI laboratories in China (known as the world’s largest genomic sequencing center), Egil Asprem (Norwegian esotericist), Andre Watson (on nanotechnology), Linda M. Glenn (on bioethics), Brad Carmack (Mormon transfigurist), blogger Michael Anissimov*, the award-winning Terra Nova Robotics Club of Pacifica, Hank Pellissier, and John Smart — who heads non-profit Brain Preservation Foundation and stunned the audience at the first Transhuman Visions conference when he assuredly stated that “everybody in this room will live to be 300 years old, if they want to,” and “Psychotropics & Transhumanists” about the psychedelic drug Ayahuasca by NYC futurist Gray Scott. We will conclude with “Immortal Music” — pop songs about living forever. Tickets ($35 and lower) can be purchased via EventBrite.
*According to a report on The Proactionary Transhumanist blog on the “first ever street action for transhumanism in the United States,” on Wed. Feb. 26, activists Turchin Alexei, Jason Xu, and Michael Anissimov “occupied the large green Android bot [at the Googleplex], holding up signs saying, ‘Immortality now,’ ‘Viva Calico,’ and ‘Google, please, solve Death.’” (They politely left after a campus police request.) Simultaneously, in Union Square in New York, transhumanist activist Sarah Jordan Amechazurra was holding a sign saying “Google, please, solve Death.”
As you might guess, this isn't really a popular science effort, but rather an entry into the time-honored documentary genre of giving screen time to strong characters in an industry largely unfamiliar to the public, people who are forging their way against the flow, working to achieve great and unusual things. There's a blog and PDF press kit if you want to look further.
You can also get a sense of the thing from the trailer, but I'll use this as a springboard to note a very real challenge when it comes to advocacy and fundraising for efforts to develop the means to treat and reverse degenerative aging. The public at large, and even people who take a little time to investigate the work of the research community, largely cannot tell the difference between serious efforts that might actually work, such as the work of the SENS Research Foundation and its allies, and scientific-sounding efforts that are in fact just ways to sell supplements that cannot possibly do anything meaningful to the course of aging, which is what has become of Sierra Sciences.
Sierra Sciences was at one point a serious effort to investigate manipulation of telomeres and telomerase as a means to treat aging, but at some point venture capital demands profits. Hence the slide of this company, like others before it, from legitimate research venture to just another group selling packaged herb extracts. Somewhere back in the day someone figured out that if you sound like a scientist people will buy what you sell regardless of how dubious your pitch is. It works even better if you actually used to be a scientist - so that's what we tend to see in this sort of situation. It's a damn shame, but it is what it is.
So you have a film equating de Grey, who coordinates a well-supported disruption of the status quo in aging research, complete with ongoing research projects aimed at the creation of actual, real rejuvenation over the next few decades, with Andrews, who is a scientist turned supplement seller - yet another in the long series of people to leave the rails of doing meaningful research in favor of hawking marginal and frankly dubious products here and now. These two people and the broader efforts they represent couldn't be more different. One is a shot at rejuvenation, and the other has made himself irrelevant to that goal.
This is a microcosm of the reasons why much of the mainstream scientific community are exceedingly unhappy with the "anti-aging" marketplace. When folk in the street - and journalists who know better, but who live and die by page view counts - don't take the time to distinguish between fraudulent "anti-aging" products and legitimate laboratory research, and the largest megaphones are wielded by supplement sellers, then the fundraising environment for aging research becomes challenging.
The future of longevity is not herbal supplements, never was herbal supplements, and never will be herbal supplements. Anyone trying to sell you a supposedly longevity-enhancing ingested product here and now, today, has left the real road to human rejuvenation far behind. All they have to sell are wishes, dreams, and lies. The only valid, viable way forward is to fund the right sort of research: the development of targeted therapies capable of repairing or reversing the known root causes of aging, and stop-gap treatments such as stem cell therapies that can temporarily reverse some of the consequences of aging to a degree that merits the high cost of development. Nothing exists today that can accomplish that first goal, and it will be at least two decades before early rejuvenation therapies emerge, even assuming great progress in fundraising over that time.
So to meander to a conclusion: there is probably no such thing as bad publicity. The more the public hears about the prospects for treating aging, the more likely it is that some people will come to favor that goal, and the easier it becomes for scientists to raise funds for new ventures or to expand existing SENS programs. But I, not in the target audience of course, would much prefer to see that done in a more discriminating way than the example herein.
In 2014, mobile phones often included a camera, video recorder, and calculator. In 2045, your mobile phone (or wearables, ingestibles, etc.) will include a 3D printer (nano-assemblers), super-intelligence, the capability for extrasolar travel, an entire hospital within it, and a library containing the entirety of human knowledge.
Mobile phones in 1983 were large bricks costing $3,500 to $3,900, but they could only make calls. In 2014, a slimline mobile phone could be bought for only $6, and it usually included a radio, games, and a calculator, and sometimes a camera too! Prepare for the Singularity! Can you imagine the world in 2045? 2014 is a good point in time to perform this price comparison: it is 31 years since 1983, when mobile phones first became publicly available, and 31 years before 2045 (a date commonly associated with the Singularity). A lot can change in 31 years, especially when you realize the rate of progress is a lot faster than it was in 1983. We have a clear example of 31 years of progress.
What will the next 31 years produce?
### The 1983 price of $3,900 was sourced from CNN: http://edition.cnn.com/2010/TECH/mobile/07/09/cooper.cell.phone.inventor/ (http://archive.is/ybiF8). The price of $3,500 was sourced from the BBC: http://www.bbc.co.uk/news/technology-22013228 (http://archive.is/wPUA5). The UK price of £5 was sourced from the following links (original links may change prices or die; note the archived page links for the correct 2014 prices): http://www.argos.co.uk/static/Product/partNumber/5187760.htm (http://archive.is/KRqBx), or http://direct.asda.com/Alcatel-10-10-Mobile-Phone---T-Mobile/008738220,default,pd.html (http://archive.is/3dD8A). The US price of $6 was sourced from the following link (original link may change price or die; note the archived page link for the correct 2014 price): http://www.walmart.com/ip/payLo-Kyocera-Kona-Prepaid-Cell-Phone/26923422 (http://archive.is/fXgcw). See also http://www.maclife.com/article/gallery/visual_history_cell_phone and http://www.washingtonpost.com/wp-srv/special/business/a-gadgets-life/
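As a rough, back-of-the-envelope check of the 1983-to-2014 comparison above, here is a short sketch using the article's own figures; the naive extrapolation to 2045 is purely illustrative and assumes the past rate of price decline simply repeats.

```python
# Back-of-the-envelope check of the handset price drop, using the article's figures.
PRICE_1983_USD = 3_500.0   # low end of the quoted 1983 handset price
PRICE_2014_USD = 6.0       # cheapest 2014 handset cited
YEARS = 31

reduction_factor = PRICE_1983_USD / PRICE_2014_USD
annual_decline = 1.0 - (PRICE_2014_USD / PRICE_1983_USD) ** (1.0 / YEARS)

print(f"1983 -> 2014 price reduction: ~{reduction_factor:.0f}x")      # ~583x
print(f"Equivalent average decline:   ~{annual_decline:.1%} per year")  # ~18.6%/year

# If the same reduction factor held for the next 31 years (a big if):
print(f"Naive 2045 extrapolation:     ~${PRICE_2014_USD / reduction_factor:.3f}")
```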
The Singularity Hub Membership Program is celebrating its second anniversary and the community is more passionate about the future than ever!
That’s right, the membership program launched in 2012 and has been a great way for tech enthusiasts, innovators, futurists, entrepreneurs, and Singularity University alumni to connect with each other, watch videos on disruptive technology, and support Singularity Hub. In that time, many Hub readers have joined this exclusive program and 2014 is set to be our best year ever.
To celebrate an already incredible program, we’ve added even more features:
- Members-Only Forums: Now available through the main website, members can learn more about each others’ interests, startups, and initiatives every day!
- New Payment System: Our much easier-to-use payment system is live. Sign up for a monthly, quarterly, or annual plan – you can even give a membership as a gift to a friend or loved one!
- Topical Google Hangouts: Soon-to-be-launched Google Hangouts will help members from across the world get together virtually to discuss the latest robots, medical breakthroughs, virtual-reality gadgets and more!
These features are sure to provide even more content and connections to inspire members. But this is just the beginning – we have a lot in store for Singularity Hub this year, and members will get a front-row seat to it all.
So if you’re passionate about emerging technologies, the grand challenges facing humanity, and being part of the Singularity University community online, check out our membership page today and learn all that the program has to offer, then sign up and join us!
With robotics companies devising machines to handle a range of tasks that are dangerous for humans, some have designed machines to navigate tunnels. But in addition to civil engineers, the tunnel-dwelling robots have landed a high-profile client: the United States Border Patrol.
Drug smugglers use a network of underground tunnels, connected with sewage systems and drainage pipes, to ferry their cargo into the United States from Mexico. Since 1990, the Tucson Sector of the Border Patrol has discovered more than 100 tunnels housing more than 17,500 pounds of drugs.
“Tunnel robots are essential in Tucson Sector Border Patrol’s enforcement operations. They can reach places that are too small for agents to search and help to safeguard agents from chemicals in street runoff and air quality issues in confined and underground locations,” a spokesperson told Singularity Hub.
When possible, the agency uses robots instead of agents to look for signs of human activity along the tunnels, such as footprints or bad joints in the piping. The robots spare agents the risks of chemical exposure, collapsing tunnels or potential violence from surprised smugglers who happen to be in the tunnels when Border Patrol arrives.
The robots can be set up to enter a tunnel in 15 minutes, less time than it would take an agent to gear up, the agency said, and can also travel the length of a tunnel in about one-sixth the time it takes agents to do it. The number of tunnels discovered each year has remained about the same since the robots were put to work.
Three of the four robots in use by the agency along the entire southern border work in the Tucson sector. The agency uses the 80-pound Versatrax 300 VLR, made by Canadian company Inuktun, and the 18-pound Pointman Tactical Robot made by New Mexico-based ARA. The Versatrax specializes in pipes and requires a generator, while the Pointman is all-terrain and runs on a battery. Both send a video feed back to the operator.
The agency paid just under $200,000 for the four robots. It says it will continue buying robots on a case-by-case basis, potentially acquiring a variety of machines to tackle different terrains and situations.
Some believe the new tools ought to be vetted before they are put into wider use.
Brian Buchner, president of the National Association for Civilian Oversight of Law Enforcement, commended the Border Patrol for trying to keep its agents safe. But he suggested that robots, like any other new policing tool, need to be vetted for possible problems.
“The use of robots in law enforcement, like any other tool an officer or agent has at their disposal — pepper spray, a baton, or a TASER, for example — requires comprehensive policies and procedures governing their use as well as proper training, supervision, and oversight of the officers who will actually be using them,” he said. “Technological advancements can make law enforcement safer for officers and the public, but they can also be abused.”
Photos: Jamie Lantzy via Wikimedia Commons, ARA, Inuktun
Cancer and Alzheimer's disease appear to be inversely related: if you have cancer, your odds of Alzheimer's are lower, and vice versa. Since the lifestyle risk factors for both are essentially the same, this is an interesting finding, to say the least. Researchers are digging into the biochemical mechanisms that might explain this state of affairs:

There is epidemiological evidence that patients with certain Central Nervous System (CNS) disorders have a lower than expected probability of developing some types of Cancer. We tested here the hypothesis that this inverse comorbidity is driven by molecular processes common to CNS disorders and Cancers, and that are deregulated in opposite directions.
We conducted transcriptomic meta-analyses of three CNS disorders (Alzheimer's disease, Parkinson's disease and Schizophrenia) and three Cancer types (Lung, Prostate, Colorectal) previously described with inverse comorbidities. A significant overlap was observed between the genes upregulated in CNS disorders and downregulated in Cancers, as well as between the genes downregulated in CNS disorders and upregulated in Cancers. We also observed expression deregulations in opposite directions at the level of pathways.
Our analysis points to specific genes and pathways, the upregulation of which could increase the incidence of CNS disorders and simultaneously lower the risk of developing Cancer, while the downregulation of another set of genes and pathways could contribute to a decrease in the incidence of CNS disorders while increasing the Cancer risk. These results reinforce the previously proposed involvement of the PIN1 gene, Wnt and P53 pathways, and reveal potential new candidates, in particular related with protein degradation processes.
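The key evidence here is a larger-than-chance overlap between gene lists deregulated in opposite directions. As a hedged illustration (the study's own gene counts and statistics are not reproduced here), this is how such an overlap is commonly scored with a hypergeometric test; all numbers below are hypothetical.

```python
# Illustrative only: scoring the overlap between two gene sets with a
# hypergeometric test. All counts are hypothetical, not the study's data.
from scipy.stats import hypergeom

genome_size = 20000     # total genes considered (assumed)
up_in_cns = 800         # genes upregulated in CNS disorders (hypothetical)
down_in_cancer = 1200   # genes downregulated in cancers (hypothetical)
overlap = 150           # genes present in both lists (hypothetical)

# Expected overlap under random sampling, and the chance of seeing at
# least the observed overlap by luck alone.
expected = up_in_cns * down_in_cancer / genome_size
p_value = hypergeom.sf(overlap - 1, genome_size, up_in_cns, down_in_cancer)
print(f"Expected overlap by chance: {expected:.0f} genes")
print(f"Observed overlap: {overlap} genes, hypergeometric p = {p_value:.2g}")
```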
Numerous studies now show that stem cell populations in old tissues remain large, and researchers have explored a few of the mechanisms that explain why these stem cells are no longer as active in tissue maintenance as they were in youth. In a number of cases researchers have been able to demonstrate partial reversal of this decline by altering the signaling environment, overriding the age-related changes that seem to be responsible without addressing their underlying causes, which are no doubt reactions to rising levels of cellular damage.
It is likely that researchers will find that naive applications of this sort of restoration of stem cell activity greatly raise cancer risk, as cancer suppression is probably the reason why stem cells evolved this diminished response to the damage of aging - our longevity is thought by many researchers to be a balancing act between cancer risk and levels of tissue maintenance in an environment of steadily rising damage. The ability to detect and selectively and safely treat cancer is improving rapidly, however, so a blunt restoration of stem cell activity may well turn out to be an acceptable stop-gap approach to improve health in old age:

Previous studies have demonstrated an age-related decline in the size of the neural stem cell (NSC) pool and a decrease in neural progenitor cell proliferation; however, the mechanisms underlying these changes are unclear. In contrast to previous reports, we report that the number of NSCs is unchanged in the old age subependyma and the apparent loss is because of reduced proliferative potential in the aged stem cell niche.
Transplantation studies reveal that the proliferation kinetics and migratory behavior of neural precursor cells are dependent on the age of the host animal and independent of the age of the donor cells suggesting that young and old age neural precursors are not intrinsically different. Factors from the young stem cell niche rescue the numbers of NSC colonies derived from old age subependyma and enhance progenitor cell proliferation in vivo in old age mice. Finally, we report a loss of Wnt signaling in the old age stem cell niche that underlies the lack of expansion of the NSC pool after stroke.
NASA’s Kepler mission announced Wednesday the discovery of 715 new planets. These newly-verified worlds orbit 305 stars, revealing multiple-planet systems much like our own solar system.
Nearly 95 percent of these planets are smaller than Neptune, which is almost four times the size of Earth. This discovery marks a significant increase in the number of known small-sized planets more akin to Earth than previously identified exoplanets (planets outside our solar system).
“The Kepler team continues to amaze and excite us with their planet hunting results,” said John Grunsfeld, associate administrator for NASA’s Science Mission Directorate in Washington. “That these new planets and solar systems look somewhat like our own, portends a great future when we have the James Webb Space Telescope in space to characterize the new worlds.”
New statistical technique for verifying planets
Since the discovery of the first planets outside our solar system roughly two decades ago, verification has been a laborious planet-by-planet process. Now, scientists have a statistical technique that can be applied to many planets at once when they are found in systems that harbor more than one planet around the same star.
To verify this bounty of planets, a research team co-led by Jack Lissauer, planetary scientist at NASA’s Ames Research Center in Moffett Field, Calif., analyzed stars with more than one potential planet, all of which were detected in the first two years of Kepler’s observations — May 2009 to March 2011.
The research team used a technique called verification by multiplicity, which relies in part on the logic of probability. Kepler observes 150,000 stars, and has found a few thousand of those to have planet candidates. If the candidates were randomly distributed among Kepler’s stars, only a handful would have more than one planet candidate. However, Kepler observed hundreds of stars that have multiple planet candidates. Through a careful study of this sample, these 715 new planets were verified.
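The multiplicity argument is, at heart, a counting exercise. As a rough sketch (the 3,000-candidate figure below is just a round stand-in for "a few thousand"), a simple Poisson model shows how rarely chance alone would stack two or more candidates on the same star:

```python
# Toy model of the multiplicity argument: scatter a few thousand candidate
# signals at random across Kepler's ~150,000 target stars and count how many
# stars would end up with two or more purely by chance.
from math import exp

n_stars = 150_000
n_candidates = 3_000          # round stand-in for "a few thousand" candidates

lam = n_candidates / n_stars  # mean candidates per star under random scattering
p_two_or_more = 1 - exp(-lam) - lam * exp(-lam)   # Poisson P(count >= 2)
print(f"Stars with 2+ candidates expected by chance: {n_stars * p_two_or_more:.0f}")
# The answer is a few dozen at most, far short of the hundreds of
# multi-candidate stars Kepler actually observes - so the candidates in
# multis are overwhelmingly real planets rather than random false positives.
```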
These multiple-planet systems are fertile grounds for studying individual planets and the configuration of planetary neighborhoods, providing clues to planet formation.
Four of these new planets are less than 2.5 times the size of Earth and orbit in their sun’s habitable zone, defined as the range of distance from a star where the surface temperature of an orbiting planet may be suitable for life-giving liquid water.
One of these new habitable zone planets, called Kepler-296f, orbits a star half the size and 5 percent as bright as our sun. Kepler-296f is twice the size of Earth, but scientists do not know whether the planet is a gaseous world, with a thick hydrogen-helium envelope, or a water world surrounded by a deep ocean.
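As a rough rule of thumb (and only a rule of thumb; true habitable-zone limits depend on the star's spectrum and the planet's atmosphere), the distance at which a planet receives Earth-like stellar flux scales with the square root of the star's luminosity. Applying that to the article's figure of about 5 percent of the Sun's brightness for Kepler-296:

```python
# Rule-of-thumb estimate: Earth-equivalent insolation distance scales as the
# square root of stellar luminosity (in units of the Sun's luminosity).
from math import sqrt

luminosity = 0.05                 # Kepler-296 brightness relative to the Sun
distance_au = sqrt(luminosity)    # distance in astronomical units
print(f"Earth-equivalent insolation at roughly {distance_au:.2f} AU")
# About 0.22 AU - well inside Mercury's orbit around the Sun - which is why a
# planet can sit in the habitable zone of such a dim star despite a tight orbit.
```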
This latest discovery brings the confirmed count of planets outside our solar system to nearly 1,700. The papers describing these findings will be published March 10 in The Astrophysical Journal and are available for download.
Abstract of arXiv paper (Jack J. Lissauer et al.)
We extend the statistical analysis of Lissauer et al. (2012, ApJ 750, 112), which demonstrates that the overwhelming majority of Kepler candidate multiple transiting systems (multis) represent true transiting planets, and develop therefrom a procedure to validate large numbers of planet candidates in multis as bona fide exoplanets. We show that this statistical framework correctly estimates the abundance of false positives already identified around Kepler targets with multiple sets of transit-like signatures based on their abundance around targets with single sets of transit-like signatures. We estimate the number of multis that represent split systems of one or more planets orbiting each component of a binary star system. Using the high reliability rate for multis, more than one dozen particularly interesting multi-planet systems are validated in a companion paper by Rowe et al. (2014, ApJ, this issue). We note that few very short period (P < 1.6 days) planets orbit within multiple transiting planet systems and discuss possible reasons for their absence. There also appears to be a shortage of planets with periods exceeding a few months in multis.
Abstract of arXiv paper (Jason F. Rowe et al.)
The Kepler mission has discovered over 2500 exoplanet candidates in the first two years of spacecraft data, with approximately 40% of them in candidate multi-planet systems. The high rate of multiplicity combined with the low rate of identified false positives indicates that the multi-planet systems contain very few false-positive signals due to other systems not gravitationally bound to the target star (Lissauer, J. J., et al., 2012, ApJ 750, 131). False positives in the multi-planet systems are identified and removed, leaving behind a residual population of candidate multi-planet transiting systems expected to have a false-positive rate of less than 1%. We present a sample of 340 planetary systems that contain 851 planets that are validated to substantially better than the 99% confidence level; the vast majority of these have not been previously verified as planets. We expect ~2 unidentified false positives, making our sample of planets very reliable. We present fundamental planetary properties of our sample based on a comprehensive analysis of Kepler light curves and ground-based spectroscopy and high-resolution imaging. Since we do not require spectroscopy or high-resolution imaging for validation, some of our derived parameters for a planetary system may be systematically incorrect due to dilution from light from additional stars in the photometric aperture. Nonetheless, our result nearly doubles the number of verified exoplanets.
Johns Hopkins researchers have trained the immune systems of mice to fight melanoma, a deadly skin cancer, by using nanoparticles designed to target cancer-fighting immune cells. The experiments, described in ACS Nano on February 24, represent a significant step toward using nanoparticles and magnetism to treat a variety of conditions, the researchers say.
“By using small enough particles, we could, for the first time, see a key difference in cancer-fighting cells, and we harnessed that knowledge to enhance the immune attack on cancer,” said Jonathan Schneck, M.D., Ph.D., a professor of pathology, medicine and oncology at the Johns Hopkins University School of Medicine‘s Institute for Cell Engineering.
Schneck’s team has pioneered the development of artificial white blood cells (“artificial antigen-presenting cells” or aAPCs), which show promise in training animals’ immune systems to fight diseases such as cancer. To do that, the aAPCs must interact with immune cells known as naive T cells that are already present in the body, awaiting instructions about which specific invader they will battle.
The aAPCs bind to specialized receptors on the T cells’ surfaces, “presenting” the T cells with distinctive proteins called antigens. This process activates the T cells, programming them to battle a specific threat such as a virus, bacteria, or tumor, as well as to make more T cells.
The team had been working with microscale particles, which are about one-hundredth of a millimeter across. But, says Schneck, aAPCs of that size are still too large to get into some areas of a body and may even cause tissue damage because of their relatively large size. In addition, the microscale particles bound equally well to naive T cells and others, so the team began to explore using much smaller nanoscale aAPCs.
Since size and shape are central to how aAPCs interact with T cells, Karlo Perica, a graduate student in Schneck’s laboratory, tested the impact of these smaller particles. He found that the nano-aAPCs bound roughly twice as much T cell receptor on activated T cells, whose receptors cluster together on the cell surface, as on naive T cells.
Magnetic field-based cell clustering activates T cells
To see whether there indeed was a relationship between activation and receptor clustering, Perica applied a magnetic field to the cells, causing the iron-based nano-aAPCs to attract one another and cluster together, bringing the receptors with them. The clustering did indeed activate the naive T cells, and it made the activated cells even more active — effectively ramping up the normal immune response.
To examine how the increased activation would play out in living animals, Perica treated a sample of T cells with nano-aAPCs targeting those T cells that were programmed to battle melanoma. The researchers next exposed the treated cells to a magnetic field and then put them into mice with skin tumors.
The tumors in mice treated with both nano-aAPCs and magnetism stopped growing, and by the end of the experiment, they were about 10 times smaller than those of untreated mice, the researchers found. In addition, they report, six of the eight magnetism-treated mice survived for more than four weeks showing no signs of tumor growth, compared to zero of the untreated mice.
“We were able to fine-tune the strength of the immune response by varying the strength of the magnetic field and how long it was applied, much as different doses of a drug yield different effects,” says Perica. “We think this is the first time magnetic fields have acted like medicine in this way.”
In addition to its potential medical applications, Perica notes that combining nanoparticles and magnetism may give researchers a new window into fundamental biological processes. “In my field, immunology, a major puzzle is how T cells pick out the antigen they’re targeting in a sea of similar antigens in order to find and destroy a specific threat,” he says. “Receptors are key to that action, and the nano-aAPCs let us detect what the receptors are doing.”
“We have a bevy of new questions to work on now: What’s the optimal magnetic ‘dose’? Could we use magnetic fields to activate T cells without taking them out of the body? And could magnets be used to target an immune response to a particular part of the body, such as a tumor’s location?” Schneck adds. “We’re excited to see where this new avenue of research takes us.”
A Miltenyi Biotec researcher was also involved in the study.
This work was supported by the National Institute of Allergy and Infectious Diseases, the National Cancer Institute, Miltenyi Biotec, and the Cancer Research Institute.
Abstract of ACS Nano paper
Iron–dextran nanoparticles functionalized with T cell activating proteins have been used to study T cell receptor (TCR) signaling. However, nanoparticle triggering of membrane receptors is poorly understood and may be sensitive to physiologically regulated changes in TCR clustering that occur after T cell activation. Nano-aAPC bound 2-fold more TCR on activated T cells, which have clustered TCR, than on naive T cells, resulting in a lower threshold for activation. To enhance T cell activation, a magnetic field was used to drive aggregation of paramagnetic nano-aAPC, resulting in a doubling of TCR cluster size and increased T cell expansion in vitro and after adoptive transfer in vivo. T cells activated by nano-aAPC in a magnetic field inhibited growth of B16 melanoma, showing that this novel approach, using magnetic field-enhanced nano-aAPC stimulation, can generate large numbers of activated antigen-specific T cells and has clinically relevant applications for adoptive immunotherapy.
Richard Kramer of the University of California, Berkeley and his colleagues have invented a “photoswitch” chemical named DENAQ that confers light sensitivity on normally light-insensitive retinal ganglion cells, restoring light perception in blind mice.*
An earlier photoswitch called AAQ, investigated by the researchers in 2012 (reported by KurzweilAI), required very bright ultraviolet light, which can be damaging, and it dissipated from the eye within a day after injection.
But just one injection of DENAQ into the eye confers light sensitivity for several days with ordinary white light.
As described in a study appearing in the February 19 issue of the Cell Press journal Neuron, the compound may be a potential drug candidate for treating patients suffering from degenerative retinal disorders.
Experiments on mice with functional, nonfunctional, or degenerated rods and cones showed that DENAQ only affects retinal ganglion cells if the rods and cones have already died. It appears that degeneration in the outer retina leads to changes in the electrophysiology of the inner retina that enable DENAQ photosensitization, while the presence of intact photoreceptors prevents DENAQ action.
“Further testing on larger mammals is needed to assess the short- and long-term safety of DENAQ and related chemicals,” says Kramer. “It will take several more years, but if safety can be established, these compounds might ultimately be useful for restoring light sensitivity to blind humans.”
* Progressive degeneration of photoreceptors — the rods and cones of the eyes — causes blinding diseases such as retinitis pigmentosa and age-related macular degeneration. The retina has three layers of nerve cells, but only the outer layer contains the rod and cone cells that respond to light, enabling us to see the world. When the rods and cones die during the course of degenerative blinding diseases, the rest of the retina remains intact but unable to respond to light. Even though the innermost layer’s nerve cells, called retinal ganglion cells, remain connected to the brain, they no longer transmit information useful for vision.
Abstract of Neuron paper
- DENAQ photosensitizes blind retinas to white light with intensity similar to daylight
- Photosensitization in vivo lasts for days after a single intraocular injection
- DENAQ photosensitizes retinal ganglion cells only if the rods and cones have degenerated
- DENAQ restores light-elicited behavior and enables visual learning in blind mice
Retinitis pigmentosa (RP) and age-related macular degeneration (AMD) are blinding diseases caused by the degeneration of rods and cones, leaving the remainder of the visual system unable to respond to light. Here, we report a chemical photoswitch named DENAQ that restores retinal responses to white light of intensity similar to ordinary daylight. A single intraocular injection of DENAQ photosensitizes the blind retina for days, restoring electrophysiological and behavioral responses with no toxicity. Experiments on mouse strains with functional, nonfunctional, or degenerated rods and cones show that DENAQ is effective only in retinas with degenerated photoreceptors. DENAQ confers light sensitivity on a hyperpolarization-activated inward current that is enhanced in degenerated retina, enabling optical control of retinal ganglion cell firing. The acceptable light sensitivity, favorable spectral sensitivity, and selective targeting to diseased tissue make DENAQ a prime drug candidate for vision restoration in patients with end-stage RP and AMD.