Age-related deterioration in blood vessels and the broader cardiovascular system generates damage in the brain. Blood vessel walls are elastic, a property that depends on the molecular structure of the proteins making up the extracellular matrix in that tissue. This structure is progressively degraded by the presence of sugary metabolic waste known as advanced glycation end-products (AGEs), which leads to the formation of cross-links between proteins and a consequent loss of elasticity. Stiffening of blood vessels causes hypertension and many of the cellular and molecular mechanisms involved overlap with those that speed the progression of atherosclerosis, a condition in which blood vessel walls become sources of chronic inflammation and are remodeled into fatty deposits by abnormal cellular activity. All of this causes a rising number of structural failures in the small blood vessels of the brain. Each one is effectively a tiny, unnoticed stroke, killing cells in a minuscule area of the brain. This harm adds up over time and is one of the contributing causes of age-related cognitive impairment.
A recently published paper suggests that more of the age-related changes observed in the brain may be due to vascular degeneration than previously thought. If so, this implies that research aimed at removing cross-links has a greater importance, as do efforts to block the very early causes of atherosclerosis, such as the generation of oxidized lipids due to mitochondrial DNA damage. It also places a greater value on the basics of cardiovascular health in general: fitness, exercise, resilience, and so forth. When it comes to longevity and medicine, we must protect the brain: all other parts of the body could, in theory, be completely rebuilt or replaced if that becomes necessary, but the structure of the brain is the structure of the self. Lose that and there is nothing that can be done. Retain it and, even if you must be cryopreserved as a last resort, there is still a chance at a future.
The paper in question suggests that age-related differences observed with functional magnetic resonance imaging (fMRI) may be due to vascular (blood vessel) changes, rather than changes in neuronal activity itself. Given the large number of fMRI studies used to assess the ageing brain, this has important consequences for understanding how the brain changes with age and challenges current theories of ageing. A fundamental problem of fMRI is that it measures neural activity indirectly, through changes in regional blood flow. Thus, without careful correction for age differences in vascular reactivity, differences in fMRI signals can be erroneously regarded as neuronal differences. An important line of research focuses on controlling for noise in fMRI signals using additional baseline measures of vascular function. However, such methods have not been widely used, possibly because they are impractical to implement in studies of ageing.
An alternative candidate for correction makes use of resting state fMRI measurements, which are easy to acquire in most fMRI experiments. While this method has been difficult to validate in the past, the unique combination of an impressive data set of 335 healthy volunteers across the lifespan, collected as part of the CamCAN project, allowed researchers to probe the true nature of ageing effects on resting state fMRI signal amplitude. Their research showed that age differences in signal amplitude during a task are of a vascular, not neuronal, origin.
The use of resting state fluctuation amplitude (RSFA) is predicated on its sensitivity to vascular rather than neural factors. The effects of ageing on RSFA were significantly mediated by vascular factors, but, importantly, not by the variability in neuronal activity. The scaling analysis revealed that much of the effect of age in task-based fMRI activation studies does not survive correction for changes in vascular reactivity, and is likely to have been overestimated in previous fMRI studies of ageing.
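To make the logic of the correction concrete, here is a minimal sketch of an RSFA-style scaling in Python. The array names and synthetic data are illustrative assumptions, not the authors' pipeline; the point is only that dividing task activation by each voxel's resting-state fluctuation amplitude factors out a purely vascular component.

```python
import numpy as np

# Minimal sketch of an RSFA-style scaling correction (assumed names and
# synthetic data): divide each voxel's task activation by the amplitude of
# its resting-state fluctuations, so that purely vascular differences in
# signal amplitude are factored out.

def rsfa_scale(task_beta, rest_bold, eps=1e-9):
    """task_beta: (n_voxels,) task activation estimates.
    rest_bold: (n_timepoints, n_voxels) resting-state BOLD series."""
    rsfa = rest_bold.std(axis=0)      # resting-state fluctuation amplitude
    return task_beta / (rsfa + eps)   # vascular-scaled activation

rng = np.random.default_rng(0)
vascular = rng.uniform(0.5, 2.0, 1000)           # per-voxel vascular reactivity
rest = rng.normal(0, 1, (200, 1000)) * vascular  # resting signal scales with it
beta = 0.8 * vascular                            # "activation" that is purely vascular
print(beta.std(), rsfa_scale(beta, rest).std())  # spread collapses after scaling
```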
U.S. Executive Study Examines Disruptive Innovation
New York – March 5, 2015 – Big Think, the online video network where the world’s leading thinkers examine the most essential ideas of our age, and Singularity University (SU), an education and business accelerator, today announced the results of the 2015 Disruptive Innovation Survey.
Big Think and Singularity University partnered to conduct this first-ever study into the traits of exponential leadership to fully understand the impact of exponential technology on global business. The survey was developed from and informed by Salim Ismail’s ‘Exponential Organizations’ and SU co-founder and CEO Peter Diamandis’ and Steven Kotler’s new book, ‘Bold’.
The study of 1,283 U.S.-based executives revealed the chief executives who are viewed as today’s leaders in disruptive innovation:
- Elon Musk of Tesla Motors, Larry Page of Google, Jeff Bezos of Amazon.com and Richard Branson of Virgin Group named as top four disruptive innovators leading large organizations.
- Yancey Strickler of Kickstarter, Travis Kalanick of Uber, Brian Chesky of Airbnb and Chance Barnett of Crowdfunder named as top four most effective leaders achieving disruptive innovation in young, start-up companies.
- Chris Anderson of TED, Salman Khan of Khan Academy, Joanne Liu of Doctors Without Borders, Sue Desmond-Hellmann of The Bill & Melinda Gates Foundation named as top four most effective leaders achieving disruptive innovation in non-profit organizations.
A selection of the chief executives named as 2015’s top exponential leaders are continuing the conversation surrounding disruptive innovation by taking part in exclusive Big Think video interviews, including Richard Branson, Chris Anderson, Yancey Strickler and Chance Barnett.
Respondents also identified 3D printing, additive manufacturing and nanomaterials as the top technologies that will have the most disruptive impact on business in the next three years. In addition, 63 percent of respondents identified experimentation and “failing fast” as the most important practice in creating disruptive innovation.
“Disruptive innovation is a polarizing force and company leaders must understand how they can benefit from such change,” said Peter Hopkins, Co-Founder and President, Big Think. “At Big Think, we bring actionable insights from the greatest minds of our time and we are excited to share the results as leaders work to disruptively innovate for success.”
“Within the next decade, as many as 40 percent of today’s S&P companies may be gone – disrupted by rapidly advancing technologies and the entrepreneurs adapting quickly to this new environment,” said Rob Nail, Associate Founder and CEO, Singularity University. “The leaders we identified in this survey have provided the early models of how to not only survive, but thrive in this new era of exponential organizations.”
Full survey results can be accessed here.
The survey drew a total of 1,283 United States-based executive respondents.
About Big Think
Big Think is a video-centric online learning platform, where 2,000 of the world’s leading thinkers explore the most essential ideas of our age. Created for an astute and engaged audience, Big Think’s 12,000+ exclusive videos are short, sharp and elegantly filmed and go beyond the lecture to provide expert knowledge and actionable insights. Big Think’s videos have been viewed more than 100 million times and the Big Think YouTube channel has over 1 million subscribers. Big Think’s videos are also available on Facebook, where the brand boasts over a million ‘Likes’. In addition, Big Think produces exclusive content for corporate subscribers to Big Think Edge, providing major companies with videos and supplemental materials tailored to their employees’ needs, to help them build the expertise they need to innovate and compete. For more information, please visit www.bigthink.com.
About Singularity University
Singularity University’s (SU) mission is to educate, inspire and empower a generation of leaders to apply exponential technologies to address humanity’s grand challenges. As a California Benefit Corporation, SU is committed to creating positive global impact through three core areas: Education, Innovation and Community. Since 2009, SU has hosted entrepreneurs, industry leaders and government officials from more than 85 countries and has prepared individuals and organizations for exponential technology changes through a series of events, conferences and education programs and within its business accelerator, SU Labs. SU’s Founding Corporate Partners include Genentech, Autodesk, Cisco, ePlanet Ventures, Google, Kauffman Foundation and Nokia.
After Babar malware, security researchers detected a new strain of malware dubbed Casper that appears to be linked to the French intelligence service.
Surveillance is the primary goal of intelligence agencies worldwide. A few weeks ago, cyber security researchers detected a new malware, dubbed Babar, that is considered a product of French intelligence. According to the experts, the Babar malware was used by the General Directorate for External Security (DGSE) for surveillance and cyber espionage operations.
The General Directorate for External Security is France’s external intelligence agency, controlled by the French Ministry of Defence and in charge of intelligence activities and national security. Casper was discovered by Canadian malware researchers who linked it to the agency.
Babar is a powerful spyware package capable of eavesdropping on online conversations held via popular messaging platforms, including Skype, MSN and Yahoo Messenger, as well as logging keystrokes and monitoring victims’ web activities. Babar was used to spy on several Iranian nuclear research institutes and universities, but also to monitor the activities of European financial institutions. The name Babar appears in one of the documents leaked by NSA whistleblower Edward Snowden; the secret slides, produced by the Canadian intelligence agency, linked Babar to the French government.
Now, security experts have spotted a new malware strain, dubbed Casper, a spyware program designed to track Internet users for surveillance purposes. The Casper malware was used by the attackers to compromise target systems, spy on them, and drop other advanced persistent malware.
“According to the report, which Motherboard reviewed in advance, Casper was hosted on a hacked Syrian government website in April of last year. The incident caught the attention of some security researchers because the attackers used two zero-day vulnerabilities to infect victims” states a blog post published by the Motherboard news portal.
The report analyzed by Motherboard revealed that Casper was designed by a French hacking group, linked to the French government, to conduct several espionage campaigns over the last few years. As explained in the report the hacker behind Casper had access to two zero-day exploits that were used in an instance detected in April 2014.
Security researchers believe that Casper required a significant development effort in terms of resources and financial investment, a hallmark of government-built malware.
Babar and Casper have the same root
Malware specialists have discovered several similarities between Casper and Babar, and the experts suggest that Casper was “likely” developed by the same group behind Babar. It seems that these and other hacking tools are part of the French cyber arsenal.
In December 2014, the security firm Cyphort Labs detected a sophisticated strain of malware implementing a very complex evasion technique. “The malware is dubbed ‘EvilBunny’ and is designed to be an execution platform for Lua scripts injected by the attacker.” According to the new report written by Joan Calvet, a malware researcher at anti-virus maker ESET, EvilBunny, together with other hacking tools like the NBOT tool, suggests ties to the French intelligence services.
“We have reasons to believe that French intelligence has been using—or is even still using—at least four different malware families,” Marion Marschalek, another researcher who worked with Calvet and Paul Rascagneres in investigating the malware, told Motherboard.
Casper is just the most recent tool to be linked to the Animal Farm group. Security researchers believe the group behind Casper has been active since at least 2009 or 2010.
“Other security researchers agree that Casper, perhaps named after the famous cartoon ‘friendly ghost,’ was likely created by the French government and its spying agency, the General Directorate for External Security (DGSE). They refer to the hacking group as ‘Animal Farm’ because of each malware’s animal-like and cartoon-inspired names,” continues the Motherboard post.
Costin Raiu, director of the Global Research and Analysis Team at Kaspersky Lab, confirmed that his team has been tracking Animal Farm since 2013, and he has no doubt about the nature of the hacking collective behind Casper.
“When you have such a large-scale operation going on for several years using multiple zero-days without any kind of financial outcome,” Raiu told Motherboard, “it’s obvious that it’s nation-state sponsored—it has to be.”
France’s Defense Ministry did not respond to Motherboard’s requests for comment.
Pierluigi Paganini is Chief Information Security Officer at Bit4Id, a leading firm in identity management, and a member of the ENISA (European Union Agency for Network and Information Security) Threat Landscape Stakeholder Group. He is also a security evangelist, security analyst and freelance writer. Editor-in-Chief at “Cyber Defense Magazine,” Pierluigi is a cyber security expert with over 20 years of experience in the field and a Certified Ethical Hacker (EC-Council, London). A passion for writing and a strong belief that security is founded on sharing and awareness led Pierluigi to found the security blog “Security Affairs,” recently named a Top National Security Resource for the US. Pierluigi is a member of “The Hacker News” team and writes for major publications in the field such as Cyber War Zone, ICTTF, Infosec Island, Infosec Institute, The Hacker News Magazine and many other security magazines. He is the author of the books “The Deep Dark Web” and “Digital Virtual Currency and Bitcoin.”
With emerging technology comes great responsibility. Robot image via www.shutterstock.com.
On July 31, 2012, a massive blackout swept across northeast India. At 1 pm local time, a power line in the state of Madhya Pradesh became overloaded and tripped out. As the supply grid struggled to pick up the slack, other lines went down. By 1:03, a cascading series of failures had pushed the electricity supply grid into a state of chaos, resulting in the largest blackout in human history. An estimated 600 million people temporarily lost power as a result of the collapse.
This blackout is a stark reminder of how vulnerable we all are to the chaotic collapse of the many complex technological systems we rely on. Yet we continue to develop powerful new technologies at a rapid rate, with little thought as to how their very complexity and interconnectedness may cause them to unravel in the future.
Neuromorphic chip with a maximum load equivalent to 400 billion synaptic operations per second per watt. Day Donaldson, CC BY
The sheer audacity of our technological prowess is reflected in this year’s list of Top Ten Emerging Technologies from the World Economic Forum (WEF), now in its fourth year. The list spans from advances in genetic engineering and the use of rapid DNA sequencing and digitization for personalized health care, to artificial intelligence, neuromorphic computer chips that mimic the human brain, and advanced robotics.
The list, which is intended to raise awareness of the potential benefits and possible pitfalls of leading emerging technological trends, is an impressive testament to how rapidly our ability to control and manipulate the world around us is changing. Yet it also spotlights the inherent dangers of runaway innovation and the need for the responsible development of emerging technologies.
World Economic Forum Top Ten Technologies 2015. World Economic Forum/Andrew Maynard, CC BY-NC-SA
Innovation isn’t automatically for the good
Take artificial intelligence (AI), for instance – one of the WEF Top Ten technologies. Both Elon Musk and Stephen Hawking have warned against the risks of out-of-control AI. Earlier this year, they signed an open letter highlighting the opportunities and challenges around developing robust and beneficial AI. The letter acknowledges that, without foresight and due consideration, AI’s promise of substantially augmenting human intelligence as we seek to eradicate disease, make better use of available resources and improve quality of life, could be undermined by unanticipated drawbacks. Imagine, for instance, the dangers of maliciously programmed autonomous weapons, or intelligent machines that don’t understand or respect human values.
Other technologies in the WEF Top Ten present similar challenges. Drones raise concerns around security and privacy. The digital genome brings us closer to Gattaca-like discrimination practices based on our DNA. Precise genetic engineering techniques enable the ethically complex re-design and re-invention of living organisms. Additive manufacturing methods such as 3-D printing raise new challenges in how novel materials and processes are used safely.
These technologies are not inherently safe and secure. Yet they’re critically important because of their potential benefits.
Even basics like clean drinking water aren’t guaranteed for all people. Shawn, CC BY-NC-SA
We need tech to address world problems
Despite possible downsides, society needs technology innovation. We live in a world where one billion people are still without basic sanitation. Over six million children die each year before reaching the age of five. Over 14% of the world’s people live on less than US$1.25 a day. More than 2.5 billion people are at risk from infectious diseases like dengue and malaria. Ensuring sufficient water, food and energy to sustain acceptable living standards will become increasingly challenging in the coming years.
Technology innovation alone will not solve these and other challenges. But without it, many will not be solved at all.
So we can’t afford to slam the brakes on emerging technologies. Each of the WEF Top Ten technologies holds the promise of advances that will change lives for the better. From novel recyclable plastics, to advanced manufacturing processes, to the possibility of personally tailored health programs, these technologies represent the tip of an innovation iceberg that could help eradicate disease, alleviate poverty and inequality, and help address some of the most pressing challenges of our time.
Yet good intentions alone will not ensure we see the benefits of these technological breakthroughs.
Tech doesn’t emerge into a vacuum
In January of this year, the Stockholm Resilience Center released research indicating that we’re rapidly pushing our planet toward a harmful tipping point. The Center previously identified nine planetary boundaries within which humanity can continue to develop and thrive. They warned that crossing these boundaries “could generate abrupt or irreversible environmental changes.” According to researchers, four of these boundaries (including climate change and biosphere integrity) have now been crossed as a result of human activity.
Emerging technologies are amplifying this coupling between our actions and our environment. Whether through new approaches to reducing environmental impact (the development of fuel-cell technologies or innovative recyclable materials, for instance) or technologies that have the potential to radically alter human-environment interactions (for instance, applications of AI and distributed manufacturing), there is an intimate and complex dynamic between planetary vulnerabilities and new technologies.
Strong and weak interdependencies between the 2015 World Economic Forum Top Ten Emerging Technologies. World Economic Forum/Andrew Maynard, CC BY-NC-SA
Adding to this complexity, emerging technologies are highly interdependent on one another. AI development, for instance, affects and is affected by neuromorphic technologies, which in turn are relevant to drones, next-generation robotics and even the digital genome. This, in turn, is connected to precision genetic engineering and from there, we rapidly reconnect back to AI.
The result is a massively interconnected socio-techno-environmental system whose very complexity leaves it vulnerable to rapid, chaotic and potentially catastrophic collapse. As with other complex systems, it’s one that will appear stable and predictable – until, suddenly, it isn’t. And this leaves us with a problem.
We need to think about emerging technologies – including additive manufacturing, aka 3D printing – to anticipate problems. Creative Tools, CC BY
How to innovate responsibly
In 2013, Stilgoe, Owen and Macnaghten published a framework for “responsible innovation” that could help reduce at least some of the vulnerabilities associated with a rapid rise in powerful emerging technologies. As part of a growing body of research on responsible innovation, they proposed a series of recommendations aimed at enabling innovators, policy makers and others to take care of the future through “collective stewardship of science and innovation in the present.” At the center of their recommendations were four “dimensions” of responsible innovation:
- Anticipation of emerging issues
- Reflexivity over how well or poorly existing approaches are ensuring responsible development of new technologies
- Inclusion of key stakeholders in making decisions
- Responsiveness to emergent risks and opportunities
Governments and investors are beginning to take such ideas increasingly seriously – for instance, the European Union Horizon 2020 framework program for research and innovation includes a specific work program on Responsible Research and Innovation.
Yet if we are to avoid our collective inventiveness pushing us off the metaphorical precipice of chaotic failure, much more is needed.
New technologies – like these carbon nanotubes – must be developed with an eye toward how they interact with other tech. Pacific Northwest National Laboratory, CC BY-NC-SA
Crucially, we need to ensure that informed responsibility is built into the process of innovation from the ground up – starting with researchers and innovators, and continuing through investors and consumers. Complementing this, society must stimulate and support innovation that leads to socially, economically and environmentally sustainable progress.
The greatest challenge we face, however, is in moving away from considering emerging technologies in isolation, and toward understanding and responding to the highly complex, nonlinear and potentially chaotic interplay between technology innovation, society and the environment.
This pushes us into uncharted waters. Over the past decade, there’s been substantial progress in understanding the risks and benefits of individual technologies such as nanotechnology, which is revolutionizing how we design new materials from atoms up, or synthetic biology, which is opening the door to digitally manipulating genetic information, and uploading it back into living organisms. Yet just as understanding a single transmission line in the Indian supply grid wouldn’t have helped avert the 2012 collapse, so understanding the risks and benefits of each emerging technology in turn will not help avoid future catastrophic failure.
To build a resilient tech-based future, we need new ideas, new research and new tools that will enable us to realize the benefits of technology innovation, while keeping us a safe distance from potentially catastrophic collapse. It’s a tough challenge, and one that will demand unprecedented levels of interdisciplinary investment, collaboration and creativity. Yet the price of not innovating responsibly is one that may just be too large to live with.
Andrew Maynard directs the University of Michigan Risk Science Center. His research and expertise covers the responsible development and use of emerging technologies, innovative approaches to addressing emergent risks, communicating about science and risk to diverse audiences. He has worked extensively in the field of nanotechnology, and was one of the early leaders of the US National Nanotechnology Initiative. His work also extends to other areas of emerging technology such as synthetic biology and geoengineering, although he is particularly interested in innovation trends that don’t fall into neat categories.
Maynard writes and speaks widely on technology innovation, and has testified before congressional committees on a number of occasions. He’s also served on National Academy panels and is co-chair of the World Economic Forum Global Agenda Council on Nanotechnology. As a Professor of Environmental Health Science, Maynard teaches Risk Assessment, Science Communication, and Environmental Health Policy. He also teaches Entrepreneurial Ethics, as part of the University of Michigan Master of Entrepreneurship program.
More broadly, Maynard is active in exploring new approaches to science.
This article originally appeared here, with the title “Responsible development of new technologies critical in complex, connected world”
Many people believe that medical control over aging will be stunningly expensive, and thus indefinite extension of healthy life will only be available to a wealthy elite. This is far from the case. If you look at the SENS approach to repair therapies, the treatments, when realized, will be mass-produced infusions of cells, proteins, and drugs. Everyone will get the same treatments because everyone ages due to the same underlying cellular and molecular damage. You'll need one round of treatments every ten to twenty years, and they will be given by a bored clinical assistant. No great attention will be needed by highly trained and expensive medical staff, as all of the complexity will be baked into the manufacturing process. Today's closest analogs are the comparatively new mass-produced biologics used to treat autoimmune conditions, and even in the wildly dysfunctional US medical system these cost less than ten thousand dollars for a treatment.
Rejuvenation won't cost millions, or even hundreds of thousands. It will likely cost less than many people spend on overpriced coffee over the course of two decades of life, and should fall far below that level. When the entire population is the marketplace for competing developers, costs will eventually plummet to those seen for decades-old generic drugs and similar items produced in factory settings: just a handful of dollars per dose. The poorest half of the world will gain access at that point, just as today they have access to drugs that were far beyond their reach when initially developed.
Nonetheless, many people believe that longevity-enhancing therapies will only be available to the wealthy, and that this will be an important dynamic in the future. Inequality is something of a cultural fixation at the moment, and it is manufactured as a fantasy where it doesn't exist in reality. This is just another facet of the truth that most people don't really understand economics, either in the sense of predicting likely future changes, or in the sense of what is actually taking place in the world today:
The attitude now towards disease and old age and death is that they are basically technical problems. It is a huge revolution in human thinking. Throughout history, old age and death were always treated as metaphysical problems, as something that the gods decreed, as something fundamental to what defines humans, what defines the human condition and reality. Even a few years ago, very few doctors or scientists would seriously say that they are trying to overcome old age and death. They would say no, I am trying to overcome this particular disease, whether it's tuberculosis or cancer or Alzheimer's. Defeating disease and death, this is nonsense, this is science fiction.
But the new attitude is to treat old age and death as technical problems, no different in essence from any other disease. It's like cancer, it's like Alzheimer's, it's like tuberculosis. Maybe we still don't know all the mechanisms and all the remedies, but in principle, people always die due to technical reasons, not metaphysical reasons. In the Middle Ages, you had an image of how a person dies: suddenly, the Angel of Death appears, and touches you on the shoulder and says, "Come. Your time has come." And you say, "No, no, no. Give me some more time." And Death says, "No, you have to come." And that's it, that is how you die.
We don't think like that today. People never die because the Angel of Death comes, they die because their heart stops pumping, or because an artery is clogged, or because cancerous cells are spreading in the liver or somewhere. These are all technical problems, and in essence, they should have some technical solution. And this way of thinking is now becoming very dominant in scientific circles, and also among the ultra-rich who have come to understand that, wait a minute, something is happening here. For the first time in history, if I'm rich enough, maybe I don't have to die.
Death is optional. And if you think about it from the viewpoint of the poor, it looks terrible, because throughout history, death was the great equalizer. The big consolation of the poor throughout history was that okay, these rich people, they have it good, but they're going to die just like me. But think about the world, say, in 50 years, 100 years, where the poor people continue to die, but the rich people, in addition to all the other things they get, also get an exemption from death. That's going to bring a lot of anger.
And again, I don't want to give a prediction, 20 years, 50 years, 100 years, but what you do see is it's a bit like the boy who cried wolf, that, yes, you cry wolf once, twice, three times, and maybe people say yes, 50 years ago, they already predicted that computers will replace humans, and it didn't happen. But the thing is that with every generation, it is becoming closer, and predictions such as these fuel the process.
The same thing will happen with these promises to overcome death. My guess, which is only a guess, is that the people who live today, and who count on the ability to live forever, or to overcome death in 50 years, 60 years, are going to be hugely disappointed. It's one thing to accept that I'm going to die. It's another thing to think that you can cheat death and then die eventually. It's much harder. While they are in for a very big disappointment, in their efforts to defeat death, they will achieve great things. They will make it easier for the next generation to do it, and somewhere along the line, it will turn from science fiction to science, and the wolf will come.
Computer scientists at Saarland University and Carnegie Mellon University are studying the potential use of the human body as a touch sensitive surface for controlling mobile devices. They have developed flexible silicone rubber stickers with pressure-sensitive sensors that fit snugly to the skin.
By operating these touch input stickers, users can use their own bodies to control mobile devices. Because of the flexible material used, the sensors can be manufactured in a variety of shapes, sizes, and personalized designs, and can be used for a variety of applications, such as extending the functions of a smart watch.
The iSkin touch-sensitive stickers use electrically conducting sensors, made from flexible, stretchable silicone, that can be worn anywhere on the skin.
Researchers have developed a variety of flexible, stretchable skin sensors, as KurzweilAI has reported. This new development focuses on sensors as mobile-device interfaces.
Give me some skin
“The stickers allow us to enlarge the input space accessible to the user, as they can be attached practically anywhere on the body,” explains Martin Weigel, a PhD student in the team led by Jürgen Steimle at Saarland University. Applying pressure to the sticker could, for example, answer an incoming phone call or adjust the volume of a music player. Or a keyboard sticker could be used to type and send messages. A sticker could be rolled up and put in a pocket, explains Steimle.
Users can also design customized (bespoke) designs for iSkin patches on a computer, using a simple graphics program to create different shapes.
“The patches are ‘skin-friendly,’ as they are attached to the skin with a biocompatible, medical-grade adhesive,” said Steimle. “Users can decide where they want to position the sensor patch and how long they want to wear it.”
Currently the sensor stickers are connected via cable to a computer system. According to Steimle, microchips may in the future allow the skin-worn sensor patches to communicate wirelessly with other mobile devices.
The researchers will present their iSkin project March 16 to 20 at the CeBIT computer expo in Hanover, Germany, and at the SIGCHI conference in April in Seoul, Korea. This work has been partially funded by the Cluster of Excellence on Multimodal Computing and Interaction within the German Federal Excellence Initiative.
Max Planck Institute for Informatics and Saarland University | iSkin: Flexible, Stretchable and Visually Customizable On-Body Touch Sensors for Mobile Computing
Abstract of iSkin: Flexible, stretchable and visually customizable on-body touch sensors for mobile computing
We propose iSkin, a novel class of skin-worn sensors for touch input on the body. iSkin is a very thin sensor overlay, made of biocompatible materials, and is flexible and stretchable. It can be produced in different shapes and sizes to suit various locations of the body such as the finger, forearm, or ear. Integrating capacitive and resistive touch sensing, the sensor is capable of detecting touch input with two levels of pressure, even when stretched by 30% or when bent with a radius of 0.5 cm. Furthermore, iSkin supports single or multiple touch areas of custom shape and arrangement, as well as more complex widgets, such as sliders and click wheels. Recognizing the social importance of skin, we show visual design patterns to customize functional touch sensors and allow for a visually aesthetic appearance. Taken together, these contributions enable new types of on-body devices. This includes finger-worn devices, extensions to conventional wearable devices, and touch input stickers, all fostering direct, quick, and discreet input for mobile computing.
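As a rough illustration of what “two levels of pressure” means on the software side, here is a toy Python sketch that maps a normalized sensor reading to touch events. The thresholds and the 0-to-1 reading scale are invented for illustration; a real patch would need per-user calibration.

```python
# Toy two-level pressure classifier for an iSkin-like patch.
# Thresholds and the normalized reading are assumptions, not device values.
LIGHT_TOUCH = 0.2   # assumed threshold for a light touch
FIRM_TOUCH = 0.6    # assumed threshold for a firm press

def classify(reading: float) -> str:
    if reading >= FIRM_TOUCH:
        return "firm"
    if reading >= LIGHT_TOUCH:
        return "light"
    return "none"

for r in (0.05, 0.35, 0.80):
    print(f"{r:.2f} -> {classify(r)}")
```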
Scientists at the University of Luxembourg and the Japanese electronics company TDK have extended the sensitivity of a conductive oxide film used in solar cells into the near-infrared region, allowing cells to use more of the sun’s energy and thus generate a higher current.
Similar attempts have been made before, but this is the first time that these films were both prepared by a one-step process and, at the same time, stable in air, the researchers say.
“The films made at the University of Luxembourg have been exposed to air for one and a half years and are still as conductive as when they were freshly prepared,” says Prof. Susanne Siebentritt, head of the laboratory for photovoltaics at the University of Luxembourg.
Transparent conductive oxides are used in devices combining electronics and light, like LEDs, solar cells, and photodetectors. They combine the properties of metals, which are the best electrical conductors known, with those of oxides, which usually are transparent but not conductive, as for example glass. In solar cells the film has to be conductive because it constitutes the upper electrode. At the same time it has to be transparent so sunlight can reach the layer underneath, where the current is formed.
The oxides forming this film can be made conductive by deliberately adding impurities. Zinc oxide with aluminum added is a widely used example. In this case, the aluminum adds free electrons to the zinc oxide, which are responsible for the conductivity. However, these free electrons also absorb infrared light. That means that less of the sun’s energy can pass through.
The team from the University of Luxembourg and TDK has modified the process used to make the film to make pure zinc oxide more conductive “… [using] a sputter process. This makes the material conductive even without aluminum,” explains Siebentritt.
This method enables fewer, but faster-moving, free electrons. “With this result, the conductivity is similar to the one with aluminum, but it enables a much better transparency in the infrared region, as fewer free electrons also cause less absorption. That makes solar cells more efficient,” the researchers note.
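The tradeoff the researchers describe can be made concrete with the simple Drude relation σ = n·e·μ: halving the carrier density n while doubling the mobility μ leaves conductivity unchanged, but free-carrier absorption in the infrared, which grows with n, drops. A back-of-the-envelope sketch with purely illustrative numbers (not measurements from the paper):

```python
# Drude-model back-of-the-envelope: two films with the same conductivity,
# one with many slower electrons (AZO-like), one with fewer, faster electrons.
# All numbers are illustrative assumptions.
e = 1.602e-19  # elementary charge, coulombs

def conductivity(n_per_cm3, mu_cm2_per_Vs):
    return n_per_cm3 * e * mu_cm2_per_Vs  # siemens per cm

azo = conductivity(5e20, 25)       # high carrier density, modest mobility
rf_zno = conductivity(2.5e20, 50)  # half the carriers, twice the mobility
print(azo, rf_zno)  # identical conductivity...
# ...but free-carrier (NIR) absorption scales with n, so the second film
# absorbs roughly half as much infrared light.
```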
The findings are published in the journal Progress in Photovoltaics.
Abstract of Highly conductive ZnO films with high near infrared transparency
We present an approach for deposition of highly conductive nominally undoped ZnO films that are suitable for the n-type window of low band gap solar cells. We demonstrate that low-voltage radio frequency (RF) biasing of growing ZnO films during their deposition by non-reactive sputtering makes them as conductive as when doped by aluminium (ρ ≤ 1·10⁻³ Ω cm). The films prepared with additional RF biasing possess lower free-carrier concentration and higher free-carrier mobility than Al-doped ZnO (AZO) films of the same resistivity, which results in a substantially higher transparency in the near infrared region (NIR). Furthermore, these films exhibit good ambient stability and lower high-temperature stability than the AZO films of the same thickness. We also present the characteristics of Cu(InGa)Se2, CuInSe2 and Cu2ZnSnSe4-based solar cells prepared with the transparent window bilayer formed of the isolating and conductive ZnO films and compare them to their counterparts with a standard ZnO/AZO bilayer. We show that the solar cells with nominally undoped ZnO as their transparent conductive oxide layer exhibit an improved quantum efficiency for λ > 900 nm, which leads to a higher short circuit current density JSC. This aspect is specifically beneficial in preparation of the Cu2ZnSnSe4 solar cells with band gap down to 0.85 eV; our champion device reached a JSC of nearly 39 mA cm⁻², an open circuit voltage of 378 mV, and a power conversion efficiency of 8.4%. Copyright © 2015 John Wiley & Sons, Ltd.
In tests involving half of the catalytic reaction that takes place in fuel cells, a team led by materials scientist Pulickel Ajayan and chemist James Tour discovered that graphene nanoribbon versions with about 10 percent boron and nitrogen were efficient in catalyzing an “oxygen reduction reaction” — a step in producing energy from feedstocks like methanol.
The research appeared in the American Chemical Society journal Chemistry of Materials. The reactions in most current fuel cells are catalyzed by platinum, but platinum’s high cost has prompted the search for alternatives, Ajayan said.
Ajayan’s Rice lab has excelled in turning nanostructures into macroscopic materials, like the oil-absorbing sponges invented in 2012 or, more recently, solid nanotube blocks with controllable densities and porosities. The new research combines those abilities with the Tour lab’s 2009 method to unzip nanotubes into conductive graphene nanoribbons.
Graphene edges are where the action is
As KurzweilAI has reported, researchers have come to realize that graphene’s potential as a catalyst doesn’t lie along the flat face but along the exposed edges, where molecules prefer to interact. The Rice team chemically unzipped carbon nanotubes into ribbons and then collapsed them into porous, three-dimensional aerogels, simultaneously decorating the ribbons’ edges with boron and nitrogen molecules.
The new material provides an abundance of active sites along the exposed edges for oxygen reduction reactions. Fuel cells turn hydrogen (or sources of hydrogen like methanol) into electricity. The primary waste products are carbon dioxide for methanol and just water for hydrogen.
Abstract of Boron- and nitrogen-substituted graphene nanoribbons as efficient catalysts for oxygen reduction reaction
We show that nanoribbons of boron- and nitrogen-substituted graphene can be used as efficient electrocatalysts for the oxygen reduction reaction (ORR). Optimally doped graphene nanoribbons made into three-dimensional porous constructs exhibit the highest onset and half-wave potentials among the reported metal-free catalysts for this reaction and show superior performance compared to commercial Pt/C catalyst. Furthermore, this catalyst possesses high kinetic current density and four-electron transfer pathway with low hydrogen peroxide yield during the reaction. First-principles calculations suggest that such excellent electrocatalytic properties originate from the abundant edges of boron- and nitrogen-codoped graphene nanoribbons, which significantly reduce the energy barriers of the rate-determining steps of the ORR reaction.
“How do we democratize creation without killing everyone is basically the question.”
– Austen Heinz, Cambrian Genomics
“…It ALSO is the question that you should have an answer to before you start asking for millions of dollars to create tiny dinosaurs!”
– John Iadarola, ThinkTank
When you discover you are capable of anything, it’s time to check your motivations.
High technology requires sophisticated ethics, probing inquiry, deep insight…and this lesson grows in its significance and urgency as our society accelerates into an age of godlike power. Even though we’d like to think that science is the pure and noble search for knowledge, no science ever happens independently of funding, and the trend of late is that most magical new tools emerge into our world serving someone’s bottom line. When profit is the motive, and that profit’s measured by a quarterly report, there isn’t time enough for wisdom.
Complex systems thinking gave us biotech, but isn’t granted any say in how our corporations breed synthetic life that will, inevitably, turn our last wild places into artificial landscapes. Innovating hastily, our haste will be imprinted in the story of Promethean technologies that alter evolution’s course on Earth.
It’s not that GMOs are evil, or that we are “trespassing on God’s domain.” It’s that when we declare necessity the mother of invention, what we think we need determines how the tools we make are used. And when the consciousness that wields these tools is acting from “the optical illusion” of its separateness from nature, trying to control the universe to guarantee its own impossible security against the threat of death, we get Jurassic Park and Frankenstein, Moreau, Monsanto, Fukushima, Faust. The genie just reflects the wishes of its master – and there is always going to be a blind spot when we don’t create from wholeness, as a celebration of our true identity as Origin.
We can, and must, proceed as sparks of that-which-always-was and still explodes in novelty each moment. What would life-as-art look like if we made organisms from the living truth of our non-separation? How would full acceptance of our daunting new responsibility as this planet’s keystone species – embracing all the vast, far-reaching consequences of our every gesture – change the kinds of forms we bring into the world? Loving what we make, acknowledging our art and engineering as the action of an ongoing creative process, and seeing our techniques as how the universe explores itself, we have the opportunity to be good parents for whatever we bring through to take our place.
To orient ourselves in this mature and sensible alternative to global suicide, we’d hold the question: What would spiritually awakened genetic engineers decide to synthesize? Or:
What Would Buddha Splice?
Michael Garfield is a paleontologist, live painter, electronic guitarist, and performance philosopher. He is a writer at Globalish, where this article originally appeared, and Editor-in-Chief of SolPurpose. Michael has previously written several articles for h+ Magazine, including the noteworthy Psychedelic Transhumanists.
He currently resides in Austin, Texas.
University of Minnesota researchers have found that an ultrathin black phosphorus film — only 20 layers of atoms — allows for high-speed data communication on nanoscale optical circuits. Black phosphorus is a crystalline form of the element phosphorus.
The devices showed vast improvement in efficiency over comparable devices using graphene.
The work by University of Minnesota Department of Electrical and Computer Engineering Professors Mo Li and Steven Koester and graduate students Nathan Youngblood and Che Chen was published Monday March 2 in Nature Photonics.
Chip-makers are attempting to cram more processor cores on a single chip, but getting all those processors to communicate with each other has been a key challenge for researchers. So the goal is to find materials that will allow high-speed, on-chip communication using light.
Due to its unique properties, black phosphorus can be used to detect light very effectively, making it desirable for optical applications. The University of Minnesota team created intricate optical circuits in silicon and then laid thin flakes of black phosphorus over these structures.
Rivals germanium without the limits for optical circuits
The University of Minnesota team demonstrated that the performance of the black phosphorus photodetectors even rivals that of comparable devices made of germanium — considered the gold standard in on-chip photodetection. Germanium, however, is difficult to grow on silicon optical circuits, while black phosphorus and other two-dimensional materials can be grown separately and transferred onto any material, making them much more versatile.
The team also showed that the devices could be used for real-world applications by sending high-speed optical data over fibers and recovering it using the black phosphorus photodetectors. The group demonstrated data speeds up to 3 gigabits per second.
“Even though we have already demonstrated high speed operation with our devices, we expect higher transfer rates through further optimization,” said Nathan Youngblood, the lead author of the study. “Since we are the first to demonstrate a high speed photodetector using black phosphorus, more work still needs to be done to determine the theoretical limits for a fully optimized device.”
While black phosphorus has much in common with graphene — another two-dimensional material — the materials have significant differences, the most important of which is the existence of an energy gap, often referred to as a “band gap.”
Materials with a band gap, known as semiconductors, are a special group of materials that only conduct electricity when the electrons in that material absorb enough energy for them to “jump” the band gap. This energy can be provided through heat, light, and other means.
While graphene has proven useful for a wide variety of applications, its main limitation is its lack of a band gap. This means that graphene always conducts a significant amount of electricity, and this “leakage” makes graphene devices inefficient. In essence, the device is “on” and leaking electricity all the time.
Black phosphorus, on the other hand, has a widely-tunable band gap that varies depending on how many layers are stacked together. This means that black phosphorus can be tuned to absorb light in the visible range but also in the infrared. This large degree of tunability makes black phosphorus a unique material that can be used for a wide range of applications—from chemical sensing to optical communication.
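The connection between band gap and absorbed wavelength follows from the photon energy relation, λ_cutoff = hc/E_gap. A quick calculation, using approximate literature band-gap values for black phosphorus rather than figures from this study:

```python
# Cutoff wavelength a semiconductor can absorb: lambda = h*c / E_gap.
# Band-gap values are approximate literature figures, not from this paper.
HC_EV_NM = 1239.84  # h*c expressed in eV*nm

for label, e_gap_ev in [("monolayer, ~2.0 eV", 2.0), ("bulk, ~0.3 eV", 0.3)]:
    print(f"{label}: cutoff ~{HC_EV_NM / e_gap_ev:.0f} nm")
# ~620 nm (visible) for a monolayer vs ~4100 nm (mid-infrared) for bulk --
# stacking layers tunes absorption from the visible into the infrared.
```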
Additionally, black phosphorus is a “direct-band” semiconductor, meaning it has the potential to efficiently convert electrical signals back into light. Combined with its high performance photodetection abilities, black phosphorus could also be used to generate light in an optical circuit, making it a one-stop solution for on-chip optical communication, with no restriction to a specific substrate or wavelength.
The University of Minnesota research was funded by the Air Force Office of Scientific Research and the National Science Foundation.
Abstract of Waveguide-integrated black phosphorus photodetector with high responsivity and low dark current
Layered two-dimensional materials have demonstrated novel optoelectronic properties and are well suited for integration in planar photonic circuits. Graphene, for example, has been utilized for wideband photodetection. However, because graphene lacks a bandgap, graphene photodetectors suffer from very high dark current. In contrast, layered black phosphorous, the latest addition to the family of two-dimensional materials, is ideal for photodetector applications due to its narrow but finite bandgap. Here, we demonstrate a gated multilayer black phosphorus photodetector integrated on a silicon photonic waveguide operating in the near-infrared telecom band. In a significant advantage over graphene devices, black phosphorus photodetectors can operate under bias with very low dark current and attain an intrinsic responsivity up to 135 mA W−1 and 657 mA W−1 in 11.5-nm- and 100-nm-thick devices, respectively, at room temperature. The photocurrent is dominated by the photovoltaic effect with a high response bandwidth exceeding 3 GHz.
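For a sense of scale, photocurrent is responsivity times optical power, I = R·P. Using the responsivities quoted in the abstract and an assumed 1 mW of optical input (the power level is an illustrative assumption):

```python
# Photocurrent from responsivity: I = R * P.
# Responsivities are the abstract's figures; 1 mW input power is assumed.
P_WATTS = 1e-3  # assumed optical power reaching the detector

for thickness_nm, r_mA_per_W in [(11.5, 135), (100, 657)]:
    print(f"{thickness_nm} nm device: {r_mA_per_W * P_WATTS:.3f} mA of photocurrent")
```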
Roboticist and aerospace engineer Julie Shah and her team at MIT are developing next-generation assembly line robots that are smarter and more adaptable than robots available on today’s assembly lines.
The team is designing the robots with artificial intelligence that enables them to learn from experience, so the robots will be more responsive to human behavior. The more robots can sense the humans around them and make adjustments, the safer and more effective the robots will be on the assembly line.
The research is supported by NSF award #1426799, National Robotics Initiative (NRI)/Collaborative Research: Models and Instruments for Integrating Effective Human-Robot Teams into Manufacturing.
NSF | MIT roboticists are developing smarter assembly-line robots
Quell is an FDA-cleared, doctor-recommended, 100% drug-free wearable device clinically proven to relieve chronic pain.
How Quell works
The Quell device is placed in a sport band, and an electrode is snapped onto the back of the device. The band is then wrapped around your upper calf with the electrode in direct contact with your skin. Quell is powerful enough to stimulate the nerves in the upper calf, carrying neural pulses to the brain and tapping into the body’s natural pain relief response. Endogenous opioids are then released in the spine, where they block pain signals. The company claims users will experience pain relief within 15 minutes.
They’ve launched an Indiegogo campaign to engage with early adopters and this looks pretty interesting.
Indiegogo campaign: https://www.indiegogo.com/projects/quell-the-world-s-first-pain-relief-wearable
By measuring a series of diffraction patterns from virus particles injected into an XFEL beam, researchers at Stanford’s Linac Coherent Light Source (LCLS) have determined the first three-dimensional virus structure obtained with an X-ray free-electron laser, using a mimivirus.
X-ray crystallography has solved the vast majority of the structures of proteins and other biomolecules. The success of the method relies on growing large crystals of the molecules, which isn’t possible for all molecules.
“Free-electron lasers provide femtosecond X-ray pulses with a peak brilliance ten billion times higher than any previously available X-ray source,” the researchers note in a paper in Physical Review Letters. “Such a large jump in one physical quantity is very rare, and can have far reaching implications for several areas of science. It has been suggested that such pulses could outrun key damage processes and allow structure determination without the need for crystallization.”
The current resolution of the technique (about 100 nanometers) would be sufficient to image important pathogenic viruses like HIV, influenza and herpes, and further improvements may soon allow researchers to tackle the study of single proteins, the scientists say.
Mimivirus is one of the largest known viruses. The viral capsid is about 450 nanometers in diameter and is covered by a layer of thin fibres. A 3D structure of the viral capsid exists, but the 3D structure of the inside was previously unknown.
Abstract for Three-dimensional reconstruction of the giant mimivirus particle with an x-ray free-electron laser
We present a proof-of-concept three-dimensional reconstruction of the giant Mimivirus particle from experimentally measured diffraction patterns from an X-ray free-electron laser. Three-dimensional imaging requires the assembly of many two-dimensional patterns into an internally consistent Fourier volume. Since each particle is randomly oriented when exposed to the X-ray pulse, relative orientations have to be retrieved from the diffraction data alone. We achieve this with a modified version of the expand, maximize and compress (EMC) algorithm and validate our result using new methods.
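The EMC algorithm named in the abstract alternates three steps: expand the current model into views at sampled orientations, maximize by probabilistically assigning each measured pattern to the orientations it matches, and compress the weighted, back-oriented data into an updated model. Below is a heavily simplified one-dimensional analogue, with circular shifts standing in for 3D orientations; it sketches the idea only and is not the authors' code.

```python
import numpy as np

# Toy 1D analogue of expand-maximize-compress (EMC): recover a signal from
# many noisy copies observed at unknown circular shifts ("orientations").
rng = np.random.default_rng(1)
true_model = np.array([0.0, 1.0, 4.0, 1.0, 0.0])
n = len(true_model)
data = [np.roll(true_model, rng.integers(n)) + rng.normal(0, 0.2, n)
        for _ in range(200)]

model = rng.random(n)  # random starting guess
for _ in range(30):
    views = [np.roll(model, k) for k in range(n)]                # Expand
    accum = np.zeros(n)
    for d in data:
        logp = np.array([-np.sum((d - v) ** 2) for v in views])  # Maximize:
        p = np.exp(logp - logp.max()); p /= p.sum()              # orientation probabilities
        for k in range(n):                                       # Compress: average the
            accum += p[k] * np.roll(d, -k)                       # back-shifted data
    model = accum / len(data)

print(np.round(model, 2))  # converges to a circular shift of true_model
```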
Computer chips’ clocks have stopped getting faster, so chipmakers are instead giving chips more cores, which can execute computations in parallel.
Now, in simulations involving a 64-core chip, MIT computer scientists have improved a system that cleverly distributes data around multicore chips’ memory banks — increasing system computational speeds by 46 percent while reducing power consumption by 36 percent.
“Now that the way to improve performance is to add more cores and move to larger-scale parallel systems, we’ve really seen that the key bottleneck is communication and memory accesses,” says Daniel Sanchez, the TIBCO Founders Assistant Professor in MIT’s Department of Electrical Engineering and Computer Science.
“A large part of what we did in the previous project was to place data close to computation. But what we’ve seen is that how you place that computation has a significant effect on how well you can place data nearby.”
The problem of jointly allocating computations and data is very similar to one of the canonical problems in chip design known as “place and route.” The place-and-route problem begins with the specification of a set of logic circuits, and the goal is to arrange them on the chip so as to minimize the distances between circuit elements that work in concert.
This problem is what’s known as NP-hard, meaning that as far as anyone knows, for even moderately sized chips, all the computers in the world couldn’t find the optimal solution in the lifetime of the universe. Nonetheless, chipmakers have developed a number of algorithms that, while not absolutely optimal, seem to work well in practice.
Adapted to the problem of allocating computations and data in a 64-core chip, these algorithms will arrive at a solution in the space of several hours.
As shown in an open-access paper in the Proceedings of the 21st International Symposium on High Performance Computer Architecture, Sanchez and students Nathan Beckmann and Po-An Tsai developed their own algorithm, which finds a solution that is more than 99 percent as efficient as that produced by standard place-and-route algorithms. But it does so in milliseconds.
“What we do is we first place the data roughly,” Sanchez says. “You spread the data around in such a way that you don’t have a lot of [memory] banks overcommitted or all the data in a region of the chip. Then you figure out how to place the [computational] threads so that they’re close to the data, and then you refine the placement of the data given the placement of the threads. By doing that three-step solution, you disentangle the problem.”
In principle, Beckmann adds, that process could be repeated, with computations again reallocated to accommodate data placement and vice versa. “But we achieved 1 percent, so we stopped,” he says. “That’s what it came down to, really.”
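A toy version of that three-step heuristic might look like the following sketch, run on a synthetic 8×8 mesh of cores. The access matrix and all data structures are invented for illustration; this is not the CDCS implementation, and a real scheduler would also enforce one thread per core.

```python
import numpy as np

# Toy sketch of the three-step co-placement heuristic on an 8x8 mesh of 64
# cores, one memory bank per core. access[t, d] = thread t's use of data page d.
rng = np.random.default_rng(0)
N = 64
coords = np.array([(i // 8, i % 8) for i in range(N)])
access = rng.random((N, N)) ** 4  # synthetic, skewed access intensities

def hops(a, b):
    return abs(coords[a] - coords[b]).sum()  # Manhattan distance on the mesh

# Step 1: place data roughly -- spread pages round-robin so no bank is overloaded.
data_loc = np.arange(N)

# Step 2: place each thread on the core minimizing its weighted distance to its data.
thread_loc = np.array([
    min(range(N), key=lambda c: sum(access[t, d] * hops(c, data_loc[d])
                                    for d in range(N)))
    for t in range(N)])

# Step 3: refine the data placement given where the threads ended up.
for d in range(N):
    data_loc[d] = min(range(N), key=lambda c: sum(access[t, d] * hops(thread_loc[t], c)
                                                  for t in range(N)))

total = sum(access[t, d] * hops(thread_loc[t], data_loc[d])
            for t in range(N) for d in range(N))
print("weighted hops after placement:", round(total, 1))
```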
The MIT researchers’ system monitors the chip’s behavior and reallocates data and threads every 25 milliseconds. That sounds fast, but it’s enough time for a computer chip to perform 50 million operations.
During that span, the monitor randomly samples the requests that different cores are sending to memory, and it stores the requested memory locations, in an abbreviated form, in its own memory circuit.
Every core on a chip has its own cache — a local, high-speed memory bank where it stores frequently used data. On the basis of its samples, the monitor estimates how much cache space each core will require, and it tracks which cores are accessing which data.
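In software terms, the monitor's job could be sketched like this, assuming a stream of (core, address) memory requests; the sampling rate and data structures are invented stand-ins for the hardware described, not MIT's design.

```python
import random
from collections import defaultdict

# Toy model of the monitor: sample a small fraction of memory requests and
# keep a compact per-core record of which 64-byte cache lines were touched.
SAMPLE_RATE = 0.01
sampled_lines = defaultdict(set)  # core -> set of sampled cache-line addresses

def observe(core: int, address: int) -> None:
    if random.random() < SAMPLE_RATE:
        sampled_lines[core].add(address >> 6)  # truncate to a cache-line address

def relative_demand(core: int) -> int:
    # Crude signal of how much cache space a core wants; enough to rank
    # cores against each other, not an unbiased working-set estimator.
    return len(sampled_lines[core])

for _ in range(100_000):
    observe(core=random.randrange(4), address=random.randrange(1 << 20))
print([relative_demand(c) for c in range(4)])
```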
The monitor does take up about 1 percent of the chip’s area, which could otherwise be allocated to additional computational circuits. But Sanchez believes that chipmakers would consider that a small price to pay for significant performance improvements.
“There was a big National Academy study and a DARPA-sponsored [information science and technology] study on the importance of communication dominating computation,” says David Wood, a professor of computer science at the University of Wisconsin at Madison. “What you can see in some of these studies is that there is an order of magnitude more energy consumed moving operands around to the computation than in the actual computation itself. In some cases, it’s two orders of magnitude. What that means is that you need to not do that.”
The MIT researchers “have a proposal that appears to work on practical problems and can get some pretty spectacular results,” Wood says. “It’s an important problem, and the results look very promising.”
Abstract of Scaling Distributed Cache Hierarchies through Computation and Data Co-Scheduling
Cache hierarchies are increasingly non-uniform, so for systems to scale efficiently, data must be close to the threads that use it. Moreover, cache capacity is limited and contended among threads, introducing complex capacity/latency tradeoffs. Prior NUCA schemes have focused on managing data to reduce access latency, but have ignored thread placement; and applying prior NUMA thread placement schemes to NUCA is inefficient, as capacity, not bandwidth, is the main constraint. We present CDCS, a technique to jointly place threads and data in multicores with distributed shared caches. We develop novel monitoring hardware that enables fine-grained space allocation on large caches, and data movement support to allow frequent full-chip reconfigurations. On a 64-core system, CDCS outperforms an S-NUCA LLC by 46% on average (up to 76%) in weighted speedup and saves 36% of system energy. CDCS also outperforms state-of-the-art NUCA schemes under different thread scheduling policies.
Researchers at Queen’s University Belfast, the University of Manchester, and the STFC Daresbury Laboratory are developing new software to increase the ability of supercomputers to process big data faster while minimizing increases in power consumption.
To do that, computer scientists in the Scalable, Energy-Efficient, Resilient and Transparent Software Adaptation (SERT) project are using “approximate computing” (also known as “significance-based computing”) — a form of “overclocking” that trades reliability for reduced energy consumption.
The idea is to operate hardware slightly above the threshold voltage (also called near-threshold voltage, NTV), actually allowing components to operate in an unreliable state — and assuming that software and parallelism can cope with the resulting timing errors that will occur — using increased iterations to reach convergence, for example.
“We also investigate scenarios where we distinguish between significant and insignificant parts [of programs] and execute them selectively on reliable or unreliable hardware, respectively,” according to the authors of a paper in the Computer Science – Research and Development journal. “We consider parts of the algorithm that are more resilient to errors as ‘insignificant,’ whereas parts in which errors increase the execution time substantially are marked as ‘significant.’”
Software methods for improving error resilience include checkpointing for failed tasks and replication to identify silent data corruption. “This new software … [means] complex computing simulations which would take thousands of years on a desktop computer will be completed in a matter of hours,” according to the project’s Principal Investigator, Professor Dimitrios Nikolopoulos from Queen’s University Belfast.
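The claim that iterative algorithms can absorb hardware unreliability at the cost of extra iterations is easy to demonstrate in miniature. Below is a toy Jacobi solver in which the update step suffers occasional injected perturbations, standing in for near-threshold timing errors, while the convergence check stays on reliable hardware; the fault model is invented for illustration and is not the SERT project's code.

```python
import numpy as np

# Toy significance-driven execution: Jacobi iteration for Ax = b where the
# "insignificant" update step tolerates injected faults, while the
# "significant" convergence check remains reliable.
rng = np.random.default_rng(42)
n = 50
A = np.eye(n) * 4 + rng.random((n, n)) * 0.02  # diagonally dominant: Jacobi converges
b = rng.random(n)
D = np.diag(A)
R = A - np.diag(D)

def jacobi(error_rate=0.0, tol=1e-6, max_iter=100_000):
    x = np.zeros(n)
    for it in range(1, max_iter + 1):
        x = (b - R @ x) / D                             # "insignificant": faults tolerated
        faults = rng.random(n) < error_rate
        x[faults] += rng.normal(0, 0.01, faults.sum())  # injected timing-error noise
        if np.linalg.norm(A @ x - b) < tol:             # "significant": reliable check
            return it
    return max_iter

print("reliable hardware:", jacobi(0.0), "iterations")
print("faulty hardware:  ", jacobi(0.01), "iterations (still converges)")
```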
The SERT project, due to start this month, has just been awarded almost £1 million from the U.K. Engineering and Physical Sciences Research Council.
The researchers are simulating detailed models of natural phenomena such as ocean currents, the blood flow of a human body, and global weather patterns to help address some of the big global challenges, including sustainable energy, the rise in global temperatures, and worldwide epidemics.
Abstract of On the potential of significance-driven execution for energy-aware HPC
Dynamic voltage and frequency scaling (DVFS) exhibits fundamental limitations as a method to reduce energy consumption in computing systems. In the HPC domain, where performance is of highest priority and codes are heavily optimized to minimize idle time, DVFS has limited opportunity to achieve substantial energy savings. This paper explores if operating processors near the transistor threshold voltage (NTV) is a better alternative to DVFS for breaking the power wall in HPC. NTV presents challenges, since it compromises both performance and reliability to reduce power consumption. We present a first-of-its-kind study of a significance-driven execution paradigm that selectively uses NTV and algorithmic error tolerance to reduce energy consumption in performance-constrained HPC environments. Using an iterative algorithm as a use case, we present an adaptive execution scheme that switches between near-threshold execution on many cores and above-threshold execution on one core, as the computational significance of iterations in the algorithm evolves over time. Using this scheme on state-of-the-art hardware, we demonstrate energy savings ranging between 35% and 67%, while compromising neither correctness nor performance.