A Brooklyn-based startup called Modern Meadow has raised $40 million to become a top source of leather for the world’s makers of fashion and accessories, luggage, sporting goods, upholstery and furniture.
Harvard researchers have designed nanoscale electronic scaffolds (support structures) that can be seeded with cardiac cells to produce a new “bionic” cardiac patch for replacing damaged cardiac tissue with pre-formed tissue patches. The patch also functions as a more sophisticated pacemaker: in addition to electrically stimulating the heart, the new design can change the pacemaker stimulation frequency and the direction of signal propagation.
In addition, because its electronic components are integrated throughout the tissue (instead of being located on the surface of the skin), it could detect arrhythmia far sooner, and “operate at far lower (safer) voltages than a normal pacemaker, [which] because it’s on the surface, has to use relatively high voltages,” according to Charles Lieber, the Mark Hyman, Jr. Professor of Chemistry and Chair of the Department of Chemistry and Chemical Biology.
Early arrhythmia detection, monitoring responses to cardiac drugs
“Even before a person started to go into large-scale arrhythmia that frequently causes irreversible damage or other heart problems, this could detect the early-stage instabilities and intervene sooner,” he said. “It can also continuously monitor the feedback from the tissue and actively respond.”
The patch might also find use, Lieber said, as a tool to monitor responses to cardiac drugs, or to help pharmaceutical companies screen the effectiveness of drugs under development.
The bionic cardiac patch can also be a unique platform to study the tissue behavior evolving during some developmental processes, such as aging, ischemia, or differentiation of stem cells into mature cardiac cells.
Although the bionic cardiac patch has not yet been implanted in animals, “we are interested in identifying collaborators already investigating cardiac patch implantation to treat myocardial infarction in a rodent model,” he said. “I don’t think it would be difficult to build this into a simpler, easily implantable system.”
Could one day deliver cardiac patch/pacemaker via injection
Using the injectable electronics technology he pioneered last year, Lieber even suggested that similar cardiac patches might one day simply be delivered by injection. “It may actually be that, in the future, this won’t be done with a surgical patch,” he said. “We could simply do a co-injection of cells with the mesh, and it assembles itself inside the body, so it’s less invasive.”
“I think one of the biggest impacts would ultimately be in the area that involves replacement of damaged cardiac tissue with pre-formed tissue patches,” Lieber said. “Rather than simply implanting an engineered patch built on a passive scaffold, our work suggests it will be possible to surgically implant an innervated patch that would now be able to monitor and subtly adjust its performance.”
In the long term, Lieber believes, the development of nanoscale tissue scaffolds represents a new paradigm for integrating biology with electronics in a virtually seamless way.
The study is described in a June 27 paper published in Nature Nanotechnology.
Abstract of Three-dimensional mapping and regulation of action potential propagation in nanoelectronics-innervated tissues
Real-time mapping and manipulation of electrophysiology in three-dimensional (3D) tissues could have important impacts on fundamental scientific and clinical studies, yet realization is hampered by a lack of effective methods. Here we introduce tissue-scaffold-mimicking 3D nanoelectronic arrays consisting of 64 addressable devices with subcellular dimensions and a submillisecond temporal resolution. Real-time extracellular action potential (AP) recordings reveal quantitative maps of AP propagation in 3D cardiac tissues, enable in situ tracing of the evolving topology of 3D conducting pathways in developing cardiac tissues and probe the dynamics of AP conduction characteristics in a transient arrhythmia disease model and subsequent tissue self-adaptation. We further demonstrate simultaneous multisite stimulation and mapping to actively manipulate the frequency and direction of AP propagation. These results establish new methodologies for 3D spatiotemporal tissue recording and control, and demonstrate the potential to impact regenerative medicine, pharmacology and electronic therapeutics.
The U.S. Air Force got a wakeup call recently when AI software called ALPHA — running on a tiny $35 Raspberry Pi computer — repeatedly defeated retired U.S. Air Force Colonel Gene Lee, a top aerial combat instructor and Air Battle Manager, and other expert air-combat tacticians at the U.S. Air Force Research Lab (AFRL) in Dayton, Ohio. The contest was conducted in a high-fidelity air combat simulator.
According to Lee, who has considerable fighter-aircraft expertise (and has been flying in simulators against AI opponents since the early 1980s), ALPHA is “the most aggressive, responsive, dynamic and credible AI I’ve seen to date.” In fact, he was shot out of the air every time during protracted engagements in the simulator, he said.
ALPHA’s secret? Custom-designed “genetic fuzzy” algorithms designed for simulated air-combat missions, according to an open-access, unclassified paper published in the authoritative Journal of Defense Management. The paper was authored by a team of industry, Air Force, and University of Cincinnati researchers, including the AFRL Branch Chief.
ALPHA, which now runs on a standard consumer-grade PC, was developed by Psibernetix, Inc., an AFRL contractor founded by University of Cincinnati College of Engineering and Applied Science 2015 doctoral graduate Nick Ernest*, president and CEO of the firm, and a team of former Air Force aerial combat experts, including Lee.
Today’s fighters close in on each other at speeds in excess of 1,500 miles per hour while flying at altitudes above 40,000 feet. The cost for a mistake is very high. Microseconds matter, but an average human visual reaction time is 0.15 to 0.30 seconds, and “an even longer time to think of optimal plans and coordinate them with friendly forces,” the researchers note in the paper.
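A quick back-of-envelope calculation shows why those reaction times matter. The figures below come straight from the article (closing speed over 1,500 mph, reaction time 0.15 to 0.30 seconds); only the standard unit conversion is added:

```python
# How much ground two closing fighters cover during one human visual
# reaction. Speed and reaction-time figures are from the article.

MPH_TO_MS = 0.44704  # metres per second in one mile per hour

closing_speed_ms = 1500 * MPH_TO_MS  # ~670 m/s

for reaction_s in (0.15, 0.30):
    gap_closed_m = closing_speed_ms * reaction_s
    print(f"In {reaction_s:.2f} s the aircraft close ~{gap_closed_m:.0f} m")
```

Even at the fast end of human reaction, the aircraft close roughly 100 meters before a pilot has begun to respond.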
In fact, ALPHA works 250 times faster than humans, the researchers say. Nonetheless, ALPHA’s future role will stop short of fully autonomous combat.
According to the AFRL team, ALPHA will first be tested on “Unmanned Combat Aerial Vehicles (UCAV),” where ALPHA will be organizing data and creating a complete mapping of a combat scenario, such as a flight of four fighter aircraft — which it can do in less than a millisecond.
The AFRL team sees UCAVs as “AI wingmen” capable of engaging in air combat when teamed with manned aircraft.
The UCAVs will include an onboard battle management system able to process situational awareness, determine reactions, select tactics, and manage weapons — simultaneously evading dozens of hostile missiles, taking accurate shots at multiple targets, coordinating actions of squad mates, and recording and learning from observations of enemy tactics and capabilities.
Genetic fuzzy systems
The researchers based the design of ALPHA on a “genetic fuzzy tree” (GFT) — a subtype of “fuzzy logic” algorithms. The GFT is described in another open-access paper in Journal of Defense Management by Ernest and University of Cincinnati aerospace professor Kelly Cohen.
“Genetic fuzzy systems have been shown to have high performance, and a problem with four or five inputs can be solved handily,” said Cohen. “However, boost that to a hundred inputs, and no computing system on planet Earth could currently solve the processing challenge involved — unless that challenge and all those inputs are broken down into a cascade of sub-decisions.
“Most AI programming uses numeric-based control and provides very precise parameters for operations,” he said. In contrast, the AI algorithms that Ernest and his team developed are language-based, with if/then scenarios and rules able to encompass hundreds to thousands of variables. This language-based control, or fuzzy logic, can be verified and validated, Cohen says.
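The paper does not publish ALPHA's actual rule base, but the flavor of language-based fuzzy control can be sketched in a few lines. Everything below (the rule, the membership functions, the numbers) is an invented toy, not ALPHA's logic:

```python
# Toy fuzzy-logic sketch: a linguistic if/then rule evaluated with
# triangular membership functions. The rule and all thresholds are
# invented for illustration; ALPHA's rule base is not public.

def tri(x, a, b, c):
    """Triangular membership: 0 at a and c, peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def threat_level(distance_km, closure_ms):
    # Fuzzify crisp inputs into linguistic terms.
    close = tri(distance_km, 0, 0, 10)     # "target is close"
    fast = tri(closure_ms, 0, 700, 1400)   # "closure is fast"
    # Rule: IF close AND fast THEN threat is high (min implements AND).
    return min(close, fast)

print(threat_level(2.0, 700.0))  # close, fast target -> 0.8 (high threat)
```

Because each rule is a readable if/then statement over named linguistic terms, a human expert can inspect and verify the controller, which is the validation advantage Cohen describes.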
The “genetic” part of the “genetic fuzzy tree” system started with numerous automatically generated versions of ALPHA that proved themselves against a manually tuned version of ALPHA. The successful strings of code were then “bred” with each other, favoring the stronger, or highest performance versions.
In other words, only the best-performing code was used in subsequent generations. Eventually, one version of ALPHA rises to the top in terms of performance, and that’s the one that is utilized.
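The breeding-and-selection loop described above is a standard genetic algorithm. A minimal sketch, with an invented bit-string encoding and a toy fitness function standing in for ALPHA's air-combat scoring:

```python
import random

random.seed(0)

# Minimal genetic-algorithm sketch of "breeding" high performers.
# The bit-string genomes and the fitness function are invented
# stand-ins, not ALPHA's actual encoding.

def fitness(genome):
    return sum(genome)  # toy objective: maximise the number of 1-bits

def evolve(pop_size=20, genome_len=16, generations=30):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]           # keep the strongest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)    # "breed" two survivors
            cut = random.randrange(1, genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(genome_len)       # single point mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # approaches genome_len as generations pass
```

With elitist selection the best genome never gets worse, so successive generations converge on high-fitness code, much as successive versions of ALPHA rose to the top.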
“In terms of emulating human reasoning, I feel this is to unmanned aerial vehicles as IBM’s Deep Blue vs. Kasparov was to chess,” said Cohen. Or as AlphaGo was to Go.
* Support for Ernest’s doctoral research was provided by the Dayton Area Graduate Studies Institute and the U.S. Air Force Research Laboratory.
Abstract of Genetic Fuzzy based Artificial Intelligence for Unmanned Combat Aerial Vehicle Control in Simulated Air Combat Missions
Breakthroughs in genetic fuzzy systems, most notably the development of the Genetic Fuzzy Tree methodology, have allowed fuzzy logic based Artificial Intelligences to be developed that can be applied to incredibly complex problems. The ability to have extreme performance and computational efficiency as well as to be robust to uncertainties and randomness, adaptable to changing scenarios, verified and validated to follow safety specifications and operating doctrines via formal methods, and easily designed and implemented are just some of the strengths that this type of control brings. Within this white paper, the authors introduce ALPHA, an Artificial Intelligence that controls flights of Unmanned Combat Aerial Vehicles in aerial combat missions within an extreme-fidelity simulation environment. To this day, this represents the most complex application of a fuzzy-logic based Artificial Intelligence to an Unmanned Combat Aerial Vehicle control problem. While development is on-going, the version of ALPHA presented within was assessed by Colonel (retired) Gene Lee who described ALPHA as “the most aggressive, responsive, dynamic and credible AI (he’s) seen-to-date.” The quality of these preliminary results in a problem that is not only complex and rife with uncertainties but also contains an intelligent and unrestricted hostile force has significant implications for this type of Artificial Intelligence. This work adds immensely to the body of evidence that this methodology is an ideal solution to a very wide array of problems.
With its tiny brain (and no cortex), the elephantnose fish (Gnathonemus petersii)* achieves performance comparable to that of humans or other mammals in certain tasks, according to zoologists at the University of Bonn and a colleague from Oxford.
To perceive objects in the water, the fish uses electrolocation (similar to the echolocation of bats), aided by an electrical organ in its tail, which emits electrical impulses, and numerous electrical sensor organs in its skin. It also uses its visual sense.
Curiously, in an experiment** in which the animals became familiar with an object in an aquarium with the visual sense, they were also able to recognize it again using the electrical sense, although they had never perceived it electrically before.
In the experiment, when the two senses delivered different information in the close range of up to two centimeters, the fish trusted only the electrical information and were then “blind” to the visual stimuli. In contrast, for more distant objects, the animals relied above all on their eyes. And they perceived the environment best by using their visual and electrical senses in combination.
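That behavior amounts to weighting each sense by its reliability at the current distance. A sketch of such reliability weighting follows; the weight curve and its use of the ~2 cm crossover from the experiment are loose stand-ins for the fish's actual behavior, not a fitted model:

```python
# Sketch of reliability-based sensory weighting, in the spirit of the
# fish experiment: the electric sense dominates at close range, vision
# at long range. The weight curve itself is invented for illustration.

def electric_weight(distance_cm):
    """Weight given to the electric sense: 1.0 at contact, near 0 far away."""
    crossover_cm = 2.0  # article: electric sense dominates within ~2 cm
    return crossover_cm / (crossover_cm + distance_cm)

def combined_estimate(electric_est, visual_est, distance_cm):
    w = electric_weight(distance_cm)
    return w * electric_est + (1 - w) * visual_est

# Conflicting cues (electric says 1.0, vision says 0.0):
print(combined_estimate(1.0, 0.0, distance_cm=0.5))   # electric dominates
print(combined_estimate(1.0, 0.0, distance_cm=20.0))  # vision dominates
```

This kind of reliability weighting is the same principle the abstract calls dynamic weighting of sensory inputs to minimize uncertainty.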
“This ability has only been found in mammals, suggesting such a high-level function might be associated with complex mammalian brain structures. Furthermore, the modality-specific inputs are weighted dynamically according to their reliability at different ranges,” the researchers note in a paper published online in PNAS.
“A transfer between the different senses was previously known only for certain highly developed mammals, such as monkeys, dolphins, rats, and humans”, says Professor Gerhard von der Emde at the Institute of Zoology at the University of Bonn. “In a dark, unfamiliar apartment, people feel their way forward to avoid stumbling. When the light goes on, the obstacles felt are recognized by the eye without any problem.”
The secret is in the cerebellum
So exactly how does Gnathonemus petersii achieve this surprising level of intelligence? A clue is provided by Emmanuel Gilissen on the website of The Royal Museum for Central Africa. He explains that the African mormyrids, or elephant-nose fishes, were noted for having unusually large brains more than a century ago. “For a mean body mass of 26 g, the mean brain weight of Gnathonemus petersii reaches 0.53 g, almost three times its expected mean value of 0.19 g, as calculated from the relationship between brain size and body size in teleost fish (Kaufman, 2003).
“This character is probably, at least in part, related to their ability to sense prey and to communicate by generating and perceiving electric fields (Nieuwenhuys and Nicholson, 1969). In contrast with mammals, it is the cerebellum, and not the telencephalon that is greatly enlarged in these fishes. … In elephant-nose fishes, the valvula cerebelli covers most of the rest of the brain. In contrast, in another highly derived brain such as the human brain, it is the telencephalon, and more specifically the neocortex, a telencephalic structure unique to mammals, that entirely covers the rest of the brain. …
“In the electric fish Gnathonemus petersii, the brain is responsible for approximately 60% of body O2 consumption, a figure three times higher than that for any other vertebrate studied so far, including human.”
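The brain-mass figures quoted above are easy to check against the claim of “almost three times”:

```python
# Checking the quoted figures: measured vs. expected brain mass for
# Gnathonemus petersii (expected value from the teleost brain-body
# scaling relationship cited as Kaufman, 2003).

measured_g = 0.53
expected_g = 0.19

print(round(measured_g / expected_g, 1))  # -> 2.8, "almost three times"
```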
So are these electric-sensing genius fish up there with the macaw?
* Gnathonemus petersii is widespread in the flowing waters of West Africa and hunts insect larva at dawn and dusk.
** The elephantnose fish was in an aquarium connected to two different chambers; the animal could choose. Behind openings to the chambers there were differently shaped objects: a sphere or a cuboid. The fish learned to steer toward one of these objects by being rewarded with insect larvae. Subsequently, it searched for this object again, to obtain the reward again. When does the fish use a particular sense? To answer this question, the researchers repeated the experiments in absolute darkness. Now the fish could rely only on its electrical sense. As shown by images taken with an infrared camera, it was able to recognize the object only at short distances. With the light on the fish was most successful, because it was able to use its eyes and the electrical sense for the different distances. To find out when the fish used its eyes alone, the researchers made the objects invisible to the electrical sense. Now, the sphere and cuboid to be discriminated had the same electrical characteristics as the water.
Abstract of Cross-modal object recognition and dynamic weighting of sensory inputs in a fish
Most animals use multiple sensory modalities to obtain information about objects in their environment. There is a clear adaptive advantage to being able to recognize objects cross-modally and spontaneously (without prior training with the sense being tested) as this increases the flexibility of a multisensory system, allowing an animal to perceive its world more accurately and react to environmental changes more rapidly. So far, spontaneous cross-modal object recognition has only been shown in a few mammalian species, raising the question as to whether such a high-level function may be associated with complex mammalian brain structures, and therefore absent in animals lacking a cerebral cortex. Here we use an object-discrimination paradigm based on operant conditioning to show, for the first time to our knowledge, that a nonmammalian vertebrate, the weakly electric fish Gnathonemus petersii, is capable of performing spontaneous cross-modal object recognition and that the sensory inputs are weighted dynamically during this task. We found that fish trained to discriminate between two objects with either vision or the active electric sense, were subsequently able to accomplish the task using only the untrained sense. Furthermore we show that cross-modal object recognition is influenced by a dynamic weighting of the sensory inputs. The fish weight object-related sensory inputs according to their reliability, to minimize uncertainty and to enable an optimal integration of the senses. Our results show that spontaneous cross-modal object recognition and dynamic weighting of sensory inputs are present in a nonmammalian vertebrate.
An artificial synapse that emulates a biological synapse while requiring less energy has been developed by Pohang University Of Science & Technology (POSTECH) researchers* in Korea.
A human synapse consumes an extremely small amount of energy (~10 fJ or femtojoules** per synaptic event).
The researchers have fabricated an organic nanofiber (ONF), or organic nanowire (ONW), electronic device that emulates the important working principles and energy consumption of biological synapses while requiring only ~1 fJ per synaptic event. The ONW also emulates the morphology (form) of a synapse.
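Scaled up, that order-of-magnitude difference matters. Using the article's figures (~10 fJ per biological synaptic event vs. ~1 fJ for the device) and the roughly 10^14 synapses of a human brain mentioned in the footnotes:

```python
# Back-of-envelope scaling of per-event synaptic energy to brain scale.
# Figures from the article: ~10 fJ (biological) vs ~1 fJ (ONW device),
# and ~1e14 synapses in a human brain.

FJ = 1e-15  # joules per femtojoule

bio_per_event = 10 * FJ
onw_per_event = 1 * FJ
n_synapses = 1e14

print(f"biological: {bio_per_event * n_synapses:.1f} J per brain-wide event")
print(f"ONW device: {onw_per_event * n_synapses:.1f} J per brain-wide event")
# -> 1.0 J vs 0.1 J: a tenfold saving at full scale
```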
The morphology of ONFs is similar to that of nerve fibers, which form crisscrossing grids to enable the high memory density of a human brain.*** The researchers say the highly aligned ONFs can be massively produced with precise control over alignment and dimension; and this morphology may make possible the future construction of the high-density memory of a neuromorphic (brain-form-emulating) system.****
The researchers say they have emulated important working principles of a biological synapse, such as paired-pulse facilitation (PPF), short-term plasticity (STP), long-term plasticity (LTP), spike-timing dependent plasticity (STDP), and spike-rate dependent plasticity (SRDP).
The artificial synapse devices provide a new research direction in neuromorphic electronics and open a new era of organic electronics with high memory density and low energy consumption, the researchers claim. Potential applications include neuromorphic computing systems, AI systems for self-driving cars, analysis of big data, cognitive systems, robot control, medical diagnosis, stock-trading analysis, remote sensing, and other smart human-interactive systems and machines in the future, they suggest.
This research was supported by the Pioneer Research Center Program and Center for Advanced Soft-Electronics as a Global Frontier Project, funded by the Korean Ministry of Science, ICT, and Future Planning.
The research was published as an open-access paper in Science Advances, a new sister journal of Science.
* Prof. Tae-Woo Lee, Wentao Xu, and Sung-Yong Min, PhD, with the Dept. of Materials Science and Engineering at POSTECH
** A fJ (femtojoule) is 10^-15 joule (a joule is one watt-second).
*** ~10^14 synapses.
**** Previous attempts to realize synaptic functions in single electronic devices include resistive random access memory (RRAM), phase change memory (PCM), conductive bridges, and synaptic transistors.

Abstract of Organic core-sheath nanowire artificial synapses with femtojoule energy consumption
Emulation of biological synapses is an important step toward construction of large-scale brain-inspired electronics. Despite remarkable progress in emulating synaptic functions, current synaptic devices still consume energy that is orders of magnitude greater than do biological synapses (~10 fJ per synaptic event). Reduction of energy consumption of artificial synapses remains a difficult challenge. We report organic nanowire (ONW) synaptic transistors (STs) that emulate the important working principles of a biological synapse. The ONWs emulate the morphology of nerve fibers. With a core-sheath–structured ONW active channel and a well-confined 300-nm channel length obtained using ONW lithography, ~1.23 fJ per synaptic event for individual ONW was attained, which rivals that of biological synapses. The ONW STs provide a significant step toward realizing low-energy–consuming artificial intelligent electronics and open new approaches to assembling soft neuromorphic systems with nanometer feature size.
In seven months, Barack Obama will leave the White House as president of the United States. He’s going to need a job. In an interview with Bloomberg on June 13, he hinted at the possibility of joining entrepreneurs and venture capitalists in Silicon Valley.
Picture a restaurant future in which a T-1000 scrambles eggs in a frying-pan hand while a Jetson-esque robot waits tables out front, her sass-meter dialed to a Siri-torching 11. Then un-think that, because that’s not how it’s going to be at all. As more and more restaurants implement automation, robotics, and human-replacing tech into their business models, the rough outline of our food’s future is emerging. And it definitely doesn’t involve Skynet (most likely).
The post How Robots in Restaurants Will Change the Way We Eat and Live appeared first on Singularity University.
Imagine if doctors could precisely insert a tiny amount of a custom drug into a specific circuit in your brain and improve your depression (or other mood problems) — instead of treating the entire brain.
That’s exactly what Duke University researchers have explored in mice. Stress-susceptible animals that appeared depressed or anxious were restored to relatively normal behavior this way, according to a study appearing in the forthcoming July 20 issue of Neuron.
The plan was to define specific glitches in the neural circuits and then use a drug to fix them. The ambitious goal: go from a protein, to a signaling activity, to a cell, to a circuit, to activity that happens across the whole brain, to actual behavior.
1. Identify the key neurons in the prefrontal cortex
The researchers first determined how the prefrontal cortex is used as a pacemaker for the limbic system, said lead researcher Kafui Dzirasa, an assistant professor of psychiatry and behavioral sciences, and neurobiology.
The team started by precisely placing arrays of 32 electrodes in four brain areas of the mice (see illustration above). Then they recorded brain activity as these mice were subjected to a stressful situation called chronic social defeat.* This allowed the researchers to observe the activity between the prefrontal cortex and three areas of the limbic system that are implicated in major depression.
To interpret the complicated data coming from the electrodes, the team used machine learning algorithms — identifying which parts of the data seemed to be the timing control signal between the prefrontal cortex and the limbic system — and then zeroed in on the individual neurons involved in that cortical signal and its corresponding circuit.
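The study's network model was inferred with machine learning across 32-electrode arrays; as a much simpler illustration of the underlying idea (finding a lead/lag timing relationship between two recorded channels), here is a plain cross-correlation on synthetic data. This toy is not the team's method:

```python
import math

# Much-simplified illustration of extracting a timing (lead/lag)
# relationship between two recorded channels via cross-correlation
# on synthetic signals.

def xcorr_lag(a, b, max_lag):
    """Return the lag (in samples) at which shifting b best matches a."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(a[i] * b[i - lag]
                    for i in range(len(a)) if 0 <= i - lag < len(b))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic "PFC" channel and a "limbic" channel delayed by 5 samples.
pfc = [math.sin(0.3 * t) for t in range(200)]
limbic = [0.0] * 5 + pfc[:-5]

print(xcorr_lag(limbic, pfc, max_lag=20))  # -> 5: limbic lags PFC
```

A consistent lag of one region behind another is one signature of the kind of pacemaker relationship the researchers describe between the prefrontal cortex and the limbic system.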
2. Inject a drug to restore function
They then applied engineered molecules called DREADDs (Designer Receptors Exclusively Activated by Designer Drugs), developed by University of North Carolina at Chapel Hill pharmacologist Bryan Roth, in very tiny amounts (0.5 microliter). A drug that attaches only to that DREADD is then administered, giving the researchers control over the circuit.
They found that direct stimulation of PFC-amygdala neural circuitry with DREADDs normalized PFC-dependent limbic synchrony in stress-susceptible animals and restored normal behavior.
The researchers suggest that their findings also demonstrate an interdisciplinary approach that can be used to identify the large-scale network changes that underlie complex emotional pathologies and the specific network nodes that can be used to develop targeted interventions.
“These subcortical circuits are the key regulators of our emotional life,” said Helen Mayberg, a professor of psychiatry, neurology and radiology at Emory University who was not involved in this research. “What’s great about this paper is that they use different approaches to see a circuit that’s relevant to a lot of disorders,” said Mayberg, who has been pioneering deep-brain stimulation of very specific sites in the human prefrontal cortex to treat mood disorders.
But she cautions that to assess anything like “mood” in a mouse, one can only infer from its behaviors. “It’s hard to do, even in a human,” she said.
This work was supported by funding from National Institutes of Mental Health and a research incubator award from the Duke Institute for Brain Sciences.
* The mice were repeatedly exposed to larger aggressive animals for 10–15 consecutive days. At the end of this protocol, animals exhibit multiple depressive endophenotypes including hedonic dysfunction, circadian dysregulation, anxiety, and psychomotor retardation.
Abstract of Dysregulation of Prefrontal Cortex-Mediated Slow-Evolving Limbic Dynamics Drives Stress-Induced Emotional Pathology
Circuits distributed across cortico-limbic brain regions compose the networks that mediate emotional behavior. The prefrontal cortex (PFC) regulates ultraslow (<1 Hz) dynamics across these networks, and PFC dysfunction is implicated in stress-related illnesses including major depressive disorder (MDD). To uncover the mechanism whereby stress-induced changes in PFC circuitry alter emotional networks to yield pathology, we used a multi-disciplinary approach including in vivo recordings in mice and chronic social defeat stress. Our network model, inferred using machine learning, linked stress-induced behavioral pathology to the capacity of PFC to synchronize amygdala and VTA activity. Direct stimulation of PFC-amygdala circuitry with DREADDs normalized PFC-dependent limbic synchrony in stress-susceptible animals and restored normal behavior. In addition to providing insights into MDD mechanisms, our findings demonstrate an interdisciplinary approach that can be used to identify the large-scale network changes that underlie complex emotional pathologies and the specific network nodes that can be used to develop targeted interventions.
Physical exercise after learning improves memory and memory traces if the exercise is done four hours later, and not immediately after learning, according to findings recently reported (open-access) in the Cell Press journal Current Biology.
It’s not yet clear exactly how or why delayed exercise has this effect on memory. However, earlier studies of laboratory animals suggest that naturally occurring chemical compounds in the body known as catecholamines, including dopamine and norepinephrine, can improve memory consolidation, say the researchers at the Donders Institute at the Radboud University Medical Center in the Netherlands. One way to boost catecholamines is through physical exercise.
The researchers tested the effects of a single session of physical exercise after learning on memory consolidation and long-term memory. Seventy-two study participants learned 90 picture-location associations over a period of approximately 40 minutes before being randomly assigned to one of three groups: one group performed exercise immediately, the second performed exercise four hours later, and the third did not perform any exercise. The exercise consisted of 35 minutes of interval training on an exercise bike at an intensity of up to 80 percent of participants’ maximum heart rates. Forty-eight hours later, participants returned for a test to show how much they remembered while their brains were imaged via magnetic resonance imaging (MRI). The researchers found that those who exercised four hours after their learning session retained the information better two days later than those who exercised either immediately or not at all.
The researchers plan to follow up with another study of the timing and molecular underpinnings of exercise and its influence on learning and memory in more detail.
The researchers were supported by a grant from the European Research Council.
In a related study with mice, published June 2 in eLife, researchers note that exercise is known to be accompanied by an increase in brain-derived neurotrophic factor (BDNF) in the hippocampus, which is associated with cognitive improvement and the alleviation of depression and anxiety.
But how? It is known that a substance called β-hydroxybutyrate (DBHB), produced in the liver from fatty acids, serves as an alternative energy source when glucose (blood sugar) levels are reduced. In their research (with mice on a running wheel for 30 days vs. no exercise), they found that the resulting increase of DBHB blocked the action of histone deacetylase (HDAC) enzymes, which normally inhibit the production of BDNF.
Confirming that, injecting DBHB directly into the brains of mice also led to an increase in BDNF.
Abstract of Physical Exercise Performed Four Hours after Learning Improves Memory Retention and Increases Hippocampal Pattern Similarity during Retrieval
Persistent long-term memory depends on successful stabilization and integration of new memories after initial encoding [1, 2]. This consolidation process is thought to require neuromodulatory factors such as dopamine, noradrenaline, and brain-derived neurotrophic factor [3–7]. Without the release of such factors around the time of encoding, memories will decay rapidly [3, 5, 6, 8]. Recent studies have shown that physical exercise acutely stimulates the release of several consolidation-promoting factors in humans [9–14], raising the question of whether physical exercise can be used to improve memory retention [15–17]. Here, we used a single session of physical exercise after learning to exogenously boost memory consolidation and thus long-term memory. Three groups of randomly assigned participants first encoded a set of picture-location associations. Afterward, one group performed exercise immediately, one 4 hr later, and the third did not perform any exercise. Participants otherwise underwent exactly the same procedures to control for potential experimental confounds. Forty-eight hours later, participants returned for a cued-recall test in a magnetic resonance scanner. With this design, we could investigate the impact of acute exercise on memory consolidation and retrieval-related neural processing. We found that performing exercise 4 hr, but not immediately, after encoding improved the retention of picture-location associations compared to the no-exercise control group. Moreover, performing exercise after a delay was associated with increased hippocampal pattern similarity for correct responses during delayed retrieval. Our results suggest that appropriately timed physical exercise can improve long-term memory and highlight the potential of exercise as an intervention in educational and clinical settings.
Abstract of Exercise promotes the expression of brain derived neurotrophic factor (BDNF) through the action of the ketone body β-hydroxybutyrate
Exercise induces beneficial responses in the brain, which is accompanied by an increase in BDNF, a trophic factor associated with cognitive improvement and the alleviation of depression and anxiety. However, the exact mechanisms whereby physical exercise produces an induction in brain Bdnf gene expression are not well understood. While pharmacological doses of HDAC inhibitors exert positive effects on Bdnf gene transcription, the inhibitors represent small molecules that do not occur in vivo. Here, we report that an endogenous molecule released after exercise is capable of inducing key promoters of the Mus musculus Bdnf gene. The metabolite β-hydroxybutyrate, which increases after prolonged exercise, induces the activities of Bdnf promoters, particularly promoter I, which is activity-dependent. We have discovered that the action of β-hydroxybutyrate is specifically upon HDAC2 and HDAC3, which act upon selective Bdnf promoters. Moreover, the effects upon hippocampal Bdnf expression were observed after direct ventricular application of β-hydroxybutyrate. Electrophysiological measurements indicate that β-hydroxybutyrate causes an increase in neurotransmitter release, which is dependent upon the TrkB receptor. These results reveal an endogenous mechanism to explain how physical exercise leads to the induction of BDNF.
The discovery power of the gene chip is coming to nanotechnology, as a Northwestern University research team develops a tool to rapidly test millions — and perhaps even billions — of different nanoparticles at one time to zero in on the best nanoparticle for a specific use.
When materials are miniaturized, their properties — optical, structural, electrical, mechanical and chemical — change, offering new possibilities. But determining what nanoparticle size and composition are best for a given application, such as catalysts, biodiagnostic labels, pharmaceuticals and electronic devices, is a daunting task.
“As scientists, we’ve only just begun to investigate what materials can be made on the nanoscale,” said Northwestern’s Chad A. Mirkin, a world leader in nanotechnology research and its application, who led the study. “Screening a million potentially useful nanoparticles, for example, could take several lifetimes. Once optimized, our tool will enable researchers to pick the winner much faster than conventional methods. We have the ultimate discovery tool.”
Combinatorial libraries of nanoparticles
Using a Northwestern technique that deposits materials on a surface, Mirkin and his team figured out how to make combinatorial libraries of nanoparticles in a controlled way. (A combinatorial library is a collection of systematically varied structures encoded at specific sites on a surface.) Their study was published today (June 24) by the journal Science.
The nanoparticle libraries are much like a gene chip, Mirkin says, where thousands of different spots of DNA are used to identify the presence of a disease or toxin. Thousands of reactions can be done simultaneously, providing results in just a few hours. Similarly, Mirkin and his team’s libraries will enable scientists to rapidly make and screen millions to billions of nanoparticles of different compositions and sizes for desirable physical and chemical properties.
“The ability to make libraries of nanoparticles will open a new field of nanocombinatorics, where size — on a scale that matters — and composition become tunable parameters,” Mirkin said. “This is a powerful approach to discovery science.”
Using just five metallic elements — gold, silver, cobalt, copper and nickel — Mirkin and his team developed an array of unique structures by varying every elemental combination. In previous work, the researchers had shown that particle diameter also can be varied deliberately on the 1- to 100-nanometer length scale.
More than half never existed before on Earth
Some of the compositions can be found in nature, but more than half of them have never existed before on Earth. And when pictured using high-powered imaging techniques, the nanoparticles appear like an array of colorful Easter eggs, each compositional element contributing to the palette.
To build the combinatorial libraries, Mirkin and his team used Dip-Pen Nanolithography, a technique developed at Northwestern in 1999, to deposit onto a surface individual polymer “dots,” each loaded with different metal salts of interest. The researchers then heated the polymer dots, reducing the salts to metal atoms and forming a single nanoparticle. The size of the polymer dot can be varied to change the size of the final nanoparticle.
The researchers used the tool to systematically generate a library of 31 nanostructures using the five different metals. They then used advanced electron microscopes to spatially map the compositional trajectories of the combinatorial nanoparticles.
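Thirty-one is exactly the number of non-empty combinations that can be drawn from five elements (2⁵ − 1 = 31): every unary, binary, ternary, quaternary, and quinary mixture. A minimal sketch of that enumeration (the element symbols match the study; the code itself is purely illustrative):

```python
from itertools import combinations

metals = ["Au", "Ag", "Co", "Cu", "Ni"]

# Every non-empty subset of the five metals: unary through quinary mixtures
library = [combo
           for r in range(1, len(metals) + 1)
           for combo in combinations(metals, r)]

print(len(library))  # 2**5 - 1 = 31 distinct compositions
```

Each tuple in `library` corresponds to one compositional class in the nanoparticle library; independently varying particle diameter within each class then adds size as a second tunable parameter.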
The next materials to power fuel cells, efficiently harvest solar energy, or create new chips
Scientists can now begin to study these nanoparticles as well as build other useful combinatorial libraries consisting of billions of structures that subtly differ in size and composition. These structures may become the next materials that power fuel cells, efficiently harvest solar energy and convert it into useful fuels, and catalyze reactions that take low-value feedstocks from the petroleum industry and turn them into high-value products useful in the chemical and pharmaceutical industries.
Mirkin is a member of the Robert H. Lurie Comprehensive Cancer Center of Northwestern University as well as co-director of the Northwestern University Center for Cancer Nanotechnology Excellence. He also is a professor of medicine, chemical and biological engineering, biomedical engineering and materials science at Northwestern.
The research was supported by GlaxoSmithKline, the Air Force Office of Scientific Research, and the Asian Office of Aerospace R&D.
Abstract of Polyelemental nanoparticle libraries
Multimetallic nanoparticles are useful in many fields, yet there are no effective strategies for synthesizing libraries of such structures, in which architectures can be explored in a systematic and site-specific manner. The absence of these capabilities precludes the possibility of comprehensively exploring such systems. We present systematic studies of individual polyelemental particle systems, in which composition and size can be independently controlled and structure formation (alloy versus phase-separated state) can be understood. We made libraries consisting of every combination of five metallic elements (Au, Ag, Co, Cu, and Ni) through polymer nanoreactor–mediated synthesis. Important insights into the factors that lead to alloy formation and phase segregation at the nanoscale were obtained, and routes to libraries of nanostructures that cannot be made by conventional methods were developed.
The World Economic Forum’s annual list of this year’s breakthrough technologies, published today, includes “socially aware” open AI, grid-scale energy storage, perovskite solar cells, and other technologies with the potential to “transform industries, improve lives, and safeguard the planet.” The WEF’s specific interest is to “close gaps in investment and regulation.”
“Horizon scanning for emerging technologies is crucial to staying abreast of developments that can radically transform our world, enabling timely expert analysis in preparation for these disruptors. The global community needs to come together and agree on common principles if our society is to reap the benefits and hedge the risks of these technologies,” said Bernard Meyerson, PhD, Chief Innovation Officer of IBM and Chair of the WEF’s Meta-Council on Emerging Technologies.
The list also provides an opportunity to debate human, societal, economic or environmental risks and concerns that the technologies may pose — prior to widespread adoption.
One of the criteria used by council members during their deliberations was the likelihood that 2016 represents a tipping point in the deployment of each technology. So the list includes some technologies that have been known for a number of years, but are only now reaching a level of maturity where their impact can be meaningfully felt.
The top 10 technologies that make this year’s list are:
- Nanosensors and the Internet of Nanothings — With the Internet of Things expected to comprise 30 billion connected devices by 2020, one of the most exciting areas of focus today is nanosensors capable of circulating in the human body or being embedded in construction materials. They could use DNA and proteins to recognize specific chemical targets, store a few bits of information, and then report their status by changing color or emitting some other easily detectable signal.
- Next-Generation Batteries — One of the greatest obstacles holding renewable energy back is matching supply with demand, but recent advances in energy storage using sodium-, aluminum-, and zinc-based batteries make it feasible for mini-grids to provide clean, reliable, around-the-clock energy to entire villages.
- The Blockchain — With venture investment related to the online currency Bitcoin exceeding $1 billion in 2015 alone, the economic and social impact of blockchain’s potential to fundamentally change the way markets and governments work is only now emerging.
- 2D Materials — Plummeting production costs mean that 2D materials like graphene are emerging in a wide range of applications, from air and water filters to new generations of wearables and batteries.
- Autonomous Vehicles — The potential of self-driving vehicles for saving lives, cutting pollution, boosting economies, and improving quality of life for the elderly and other segments of society has led to rapid deployment of key technology forerunners along the way to full autonomy.
- Organs-on-chips — Miniature models of human organs could revolutionize medical research and drug discovery by allowing researchers to observe the behavior of biological mechanisms in ways never before possible.
- Perovskite Solar Cells — This new photovoltaic material offers three improvements over the classic silicon solar cell: it is easier to make, can be used virtually anywhere and, to date, keeps improving in power-conversion efficiency.
- Open AI Ecosystem — Shared advances in natural language processing and social awareness algorithms, coupled with an unprecedented availability of data, will soon allow smart digital assistants to help with a vast range of tasks, from keeping track of one’s finances and health to advising on wardrobe choice.
- Optogenetics — Recent developments mean light can now be delivered deeper into brain tissue, something that could lead to better treatment for people with brain disorders.
- Systems Metabolic Engineering — Advances in synthetic biology, systems biology, and evolutionary engineering mean that the list of building block chemicals that can be manufactured better and more cheaply by using plants rather than fossil fuels is growing every year.
To compile this list, the World Economic Forum’s Meta-Council on Emerging Technologies, a panel of global experts, “drew on the collective expertise of the Forum’s communities to identify the most important recent technological trends. By doing so, the Meta-Council aims to raise awareness of their potential and contribute to closing gaps in investment, regulation and public understanding that so often thwart progress.”
Elon Musk’s proposal for Tesla to buy SolarCity in an all-stock deal is being met with the usual moans of disbelief. Still, the core of the complaints — that Musk is making an endless capital call on the markets to fund some futuristic idea of a solar utility giant — is essentially correct. And that’s good news. You can argue with his execution — or whether this combination with Tesla was the right way to do this — but the vision is big and bold and the technology is moving in his direction.
Metastasis (spread of cancer) is one of the biggest challenges in cancer treatment. It is often not the original tumor that kills, but secondary growths. But a key question in cancer research has been how vulnerable cancer cells are able to survive once they break away from a tumor to spread around the body.
“Metastasis is currently incurable and remains one of the key targets of cancer research,” said lead researcher Stéphanie Kermorgant, PhD, from Barts Cancer Institute at Queen Mary University of London (QMUL). “Our research advances the knowledge of how two key molecules communicate and work together to help cancer cells survive during metastasis. We’re hoping that this might lead to the discovery of new drugs to block the spread of cancer within the body.”
The study, published in an open-access paper in Nature Communications, examined the changes that occur in cancer cells as they break away from tumors, using cell cultures, mice, and zebrafish*. The research revealed a previously unknown survival mechanism in cancer cells and found that molecules known as “integrins” could be key.
Integrins are proteins on the cell surface that attach to and interact with the cell’s surroundings. “Outside-in” and “inside-out” signaling by integrins is known to help the cancer cells attach themselves to their surroundings.
Key discovery: “inside-in” signaling
But the study suggests that when the cancer cells are floating, as they do during metastasis, the integrins switch from their adhesion role to take on an entirely new form of communication that has never been seen before: “Inside-in” signaling, in which integrins signal within the cell.
The researchers discovered that an integrin called beta-1 (β1) pairs up with another protein called c-Met and they move inside the cell together. The two proteins then travel to an unexpected location within the cell that is normally used to degrade and recycle cell material. Instead, the location is used for a new role of cell communication and the two proteins send a message to the rest of the cell to resist against death while floating during metastasis.
Using both breast and lung cells, the team found that metastases were less likely to form when β1 and c-Met were blocked from entering the cell together or were prevented from moving to the special location within the cell.
Integrins are already major targets for cancer treatment with drugs either being tested or in use in the clinic. Most integrin inhibitor drugs target their adhesive function and block them on the surface of the cancer cell. The researchers say that the limited success of these drugs could be partly explained by this newly discovered role of integrins within the cancer cell.
Keeping the integrins out of the cancer cell
But a new strategy could be to prevent the integrin from going inside the cell in the first place. The researchers hope that these insights could lead to the design of better therapies against metastasis and more effective treatment combinations that could prevent and slow both tumor growth and spread.
The research was funded by the UK Medical Research Council, Breast Cancer Now, Rosetrees Trust, British Lung Foundation, Cancer Research UK and Barts Charity.
* The team carried out part of their animal research work on zebrafish embryos to implement the principle of 3Rs (refine, reduce, replace) on their research on mice. Zebrafish provide a similar tumor microenvironment to humans, meaning fewer tests need to be carried out in mice and any future experiments in mice will have been optimized to have minimal toxicity. They are aiming to reduce the number of mice used by at least 90 per cent and ultimately use zebrafish to completely replace the use of mice.
Abstract of Beta 1-integrin–c-Met cooperation reveals an inside-in survival signalling on autophagy-related endomembranes
BCIQMUL | BCI Tumor Biology: integrins and metastasis
Receptor tyrosine kinases (RTKs) and integrins cooperate to stimulate cell migration and tumour metastasis. Here we report that an integrin influences signalling of an RTK, c-Met, from inside the cell, to promote anchorage-independent cell survival. Thus, c-Met and β1-integrin co-internalize and become progressively recruited on LC3B-positive ‘autophagy-related endomembranes’ (ARE). In cells growing in suspension, β1-integrin promotes sustained c-Met-dependent ERK1/2 phosphorylation on ARE. This signalling is dependent on ATG5 and Beclin1 but not on ATG13, suggesting ARE belong to a non-canonical autophagy pathway. This β1-integrin-dependent c-Met-sustained signalling on ARE supports anchorage-independent cell survival and growth, tumorigenesis, invasion and lung colonization in vivo. RTK–integrin cooperation has been assumed to occur at the plasma membrane requiring integrin ‘inside-out’ or ‘outside-in’ signalling. Our results report a novel mode of integrin–RTK cooperation, which we term ‘inside-in signalling’. Targeting integrin signalling in addition to adhesion may have relevance for cancer therapy.
A new study helps explain how brain structure and chemistry relate to “fluid intelligence” — the ability to adapt to new situations and solve problems one has never encountered before.
The study, reported in an open-access paper in the journal NeuroImage, observed two facets of fluid intelligence*:
- Verbal or spatial reasoning was linked to higher concentrations of a compound called NAA (N-acetyl aspartate) in the medial parietal and posterior cingulate cortices of the brain. NAA is a byproduct of glucose metabolism and a marker of energy production in the brain. It was measured with magnetic resonance spectroscopy.
- Number-related problem-solving was linked to brain volume in all subjects, measured using magnetic resonance imaging (MRI).
The analysis involved 211 research subjects, making it the largest study to date linking brain chemistry and intelligence in living humans. A follow-up analysis revealed that this pattern of findings was observed for males and females when analyzed separately.
A similar separation of reasoning abilities has been demonstrated in previous studies, but more studies will be needed to confirm and extend the findings, the researchers said.
The work was conducted in the Decision Neuroscience Laboratory at the Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign.
“Our findings contribute to a growing body of evidence to suggest that intelligence reflects multiple levels of organization in the brain — spanning neuroanatomy, for example, brain size; and neurophysiology, such as brain metabolism — and that specific properties of the brain provide a powerful lens to investigate and understand the nature of specific intellectual abilities,” said Aron Barbey, an affiliate of the Beckman Institute and of the Carl R. Woese Institute for Genomic Biology.
The Intelligence Advanced Research Projects Activity at the Office of the Director of National Intelligence supported this research.
* Three canonical tests of Gf (BOMAT, Number Series, and Letter Sets) and three working memory tasks (Reading, Rotation, and Symmetry span tasks).
Abstract of Dissociable brain biomarkers of fluid intelligence
Cognitive neuroscience has long sought to understand the biological foundations of human intelligence. Decades of research have revealed that general intelligence is correlated with two brain-based biomarkers: the concentration of the brain biochemical N-acetyl aspartate (NAA) measured by proton magnetic resonance spectroscopy (MRS) and total brain volume measured using structural MR imaging (MRI). However, the relative contribution of these biomarkers in predicting performance on core facets of human intelligence remains to be well characterized. In the present study, we sought to elucidate the role of NAA and brain volume in predicting fluid intelligence (Gf). Three canonical tests of Gf (BOMAT, Number Series, and Letter Sets) and three working memory tasks (Reading, Rotation, and Symmetry span tasks) were administered to a large sample of healthy adults (n = 211). We conducted exploratory factor analysis to investigate the factor structure underlying Gf independent from working memory and observed two Gf components (verbal/spatial and quantitative reasoning) and one working memory component. Our findings revealed a dissociation between two brain biomarkers of Gf (controlling for age and sex): NAA concentration correlated with verbal/spatial reasoning, whereas brain volume correlated with quantitative reasoning and working memory. A follow-up analysis revealed that this pattern of findings is observed for males and females when analyzed separately. Our results provide novel evidence that distinct brain biomarkers are associated with specific facets of human intelligence, demonstrating that NAA and brain volume are independent predictors of verbal/spatial and quantitative facets of Gf.
Duke University researchers have designed a new computer processor that’s optimized for robot motion planning (for example, for quickly picking up and accurately moving an object in a cluttered environment while evading obstacles). The new processor can plan an optimal motion path up to 10,000 times faster than existing systems while using a small fraction of the required power.
The new processor is fast enough to plan and operate in real time, and power-efficient enough to be used in large-scale manufacturing environments with thousands of robots, according to George Konidaris, assistant professor of computer science and electrical and computer engineering at Duke.
“When you think about a car assembly line, the entire environment is carefully controlled so that the robots can blindly repeat the same movements over and over again,” said Konidaris. “The car parts are in exactly the same place every time, and the robots are contained within cages so that humans don’t wander past.”
But for uncontrolled environments (such as homes), robot motion planning has to be a lot smarter and able to learn in real time. That would save the time and expense of custom-engineering the environment around the robot, said Konidaris, who presented the new work yesterday (June 20) at a conference called Robotics: Science and Systems in Ann Arbor, Mich.
Duke Robotics | Robotic Motion Planning
Collision detection in real time
Most existing approaches for robot motion planning rely on general-purpose CPUs or computationally faster but more power-hungry graphics processors (GPUs). Instead, the Duke team specifically designed a new processor for motion planning.
“While a general-purpose CPU is good at many tasks, it cannot compete with a processor specially designed for just a single task,” said Daniel Sorin, professor of electrical and computer engineering and computer science at Duke.
Konidaris and Sorin’s team designed their new processor to perform collision detection — the most time-consuming aspect of motion planning — requiring thousands of collision checks in parallel. “We streamlined our design and focused our hardware and power budgets on just the specific tasks that matter for motion planning,” Sorin said.
The key was to use an FPGA (field-programmable gate array) integrated circuit, which can be configured by a designer for customized uses.
The technology works by breaking down the arm’s operating space into thousands of 3D volumes called voxels (volume pixels). The algorithm then determines whether or not an object is present in one of the voxels contained within pre-programmed motion paths. Thanks to the specially designed hardware, the technology can check thousands of motion paths simultaneously, and then stitch together the shortest motion path possible using the “safe” options remaining.
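The voxel check described above can be sketched in a few lines. This is a minimal sequential illustration only: the Duke processor evaluates thousands of paths in parallel in FPGA hardware, and the path names, lengths, and voxel coordinates below are hypothetical toy data.

```python
# Hypothetical sketch of voxel-based collision filtering.
# Each precomputed motion path carries the set of voxels its swept
# volume occupies; a path collides if that set intersects the voxels
# currently occupied by obstacles.
def plan(paths, occupied):
    """paths: dict of name -> (length, set of (x, y, z) swept voxels).
    occupied: set of (x, y, z) voxels containing obstacles.
    Returns the shortest collision-free path name, or None."""
    safe = {name: length
            for name, (length, voxels) in paths.items()
            if not (voxels & occupied)}          # per-path collision check
    return min(safe, key=safe.get) if safe else None

# Toy example: two precomputed paths; an obstacle blocks the direct one.
paths = {
    "direct": (3.0, {(0, 0, 0), (1, 0, 0), (2, 0, 0)}),
    "detour": (4.5, {(0, 0, 0), (0, 1, 0), (1, 1, 0), (2, 0, 0)}),
}
print(plan(paths, occupied={(1, 0, 0)}))  # -> detour (direct is blocked)
```

Because each path's check is independent of the others, the hardware can run all of them simultaneously, which is why the check time stays flat as the number of paths grows.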
“The state of the art prior to our work used high-performance, commodity graphics processors that consume 200 to 300 watts,” said Konidaris. “And even then, it was taking hundreds of milliseconds, or even as much as a second, to find a motion plan. We’re at less than a millisecond, and less than 10 watts. Even if we weren’t faster, the power savings alone will add up in factories with thousands, or even millions, of robots.”
Konidaris also notes that the technology opens up new ways to use motion planning. “Previously, planning was done once per movement, because it was so slow,” he said, “but now it is fast enough that it could be used as a component of a more complex planning algorithm, perhaps one that sequences several simpler motions or plans ahead to reason about the movement of several objects.”
The new processor’s speed and power efficiency could create many opportunities for automation. So Konidaris, Sorin and their students have formed a spinoff company, Realtime Robotics, to commercialize the technology. “Real-time motion planning could really be a game-changer for robotics,” said Konidaris.
This research was supported by the Defense Advanced Research Projects Agency and the National Institutes of Health.
Abstract of Robot Motion Planning on a Chip
We describe a process that constructs robot-specific circuitry for motion planning, capable of generating motion plans approximately three orders of magnitude faster than existing methods. Our method is based on building collision detection circuits for a probabilistic roadmap. Collision detection for the roadmap edges is completely parallelized, so that the time to determine which edges are in collision is independent of the number of edges. We demonstrate planning using a 6-degree-of-freedom robot arm in less than 1 millisecond.
Graphene can be transformed in the lab from a semimetal into a semiconductor if it is confined into nanoribbons narrower than 10 nm (with controlled orientation and edges), but scaling it up for commercial use has not been possible. Until now.
University of Wisconsin-Madison scientists have discovered how to synthesize narrow, long “one-dimensional” (1-D) nanoribbons (sub-10 nanometers wide) directly on a conventional germanium semiconductor wafer.
That narrow width is not possible with the optical and electron-beam lithography techniques conventionally used in making chips, and integrating graphene nanoribbons onto insulating or semiconducting wafers has also been difficult.
The breakthrough was extremely slow growth (under 5 nanometers per hour), using a new variation of a technique called chemical vapor deposition (CVD), allowing nanoribbons with length-to-width aspect ratios greater than 70 to grow on the surface of a germanium wafer (and with the required smooth “armchair” edges — see the image on the right above).
In addition, this new fabrication process is compatible with existing semiconductor fabrication infrastructure. Appears promising. Let’s see which chipmakers go for it.
The research is described in an open-access article just published in Nature Communications.
Abstract of Direct oriented growth of armchair graphene nanoribbons on germanium
Graphene can be transformed from a semimetal into a semiconductor if it is confined into nanoribbons narrower than 10 nm with controlled crystallographic orientation and well-defined armchair edges. However, the scalable synthesis of nanoribbons with this precision directly on insulating or semiconducting substrates has not been possible. Here we demonstrate the synthesis of graphene nanoribbons on Ge(001) via chemical vapour deposition. The nanoribbons are self-aligning 3° from the Ge⟨110⟩ directions, are self-defining with predominantly smooth armchair edges, and have tunable width to <10 nm and aspect ratio to >70. In order to realize highly anisotropic ribbons, it is critical to operate in a regime in which the growth rate in the width direction is especially slow, <5 nm h−1. This directional and anisotropic growth enables nanoribbon fabrication directly on conventional semiconductor wafer platforms and, therefore, promises to allow the integration of nanoribbons into future hybrid integrated circuits.
The SingularityU The Netherlands Summit is aimed at bringing awareness about exponential technologies and their impact on business and policy. To learn more, click here.
Australian researchers have discovered that an existing medication could have promise in preventing breast cancer in women carrying a faulty BRCA1 gene, who are at high risk of developing aggressive breast cancer.
Currently, many women with this mutation choose surgical removal of breast tissue and ovaries to reduce their chance of developing breast and ovarian cancer. Notably, in May 2013, actress Angelina Jolie, who reportedly had an estimated 87 per cent risk of breast cancer and 50 per cent risk of ovarian cancer, chose to have a double mastectomy with breast reconstruction.
Women with the mutation have an approximately 65% cumulative risk of developing breast cancer by age 70, the researchers note, based on a 2003 combined analysis of 22 studies.
A drug option
But now, another option may be possible, as 16 scientists (most in Australia) report in an advance online paper in Nature Medicine this week.
The researchers discovered that pre-cancerous cells could be identified by a marker protein called RANK. A concurrent study led by an Austrian group had also identified the importance of RANK.
This was an important breakthrough, they said, because an inhibitor of the RANK signalling pathway was already in clinical use: the drug denosumab. The researchers suggest the drug may have potential to prevent breast cancer from developing.
If confirmed in clinical studies, this would provide a non-surgical option to prevent breast cancer in women with elevated genetic risk.
“This is potentially a very important discovery for women who carry a faulty BRCA1 gene, who have few other options,” said Walter and Eliza Hall Institute of Medical Research professor Geoffrey J. Lindeman. “Current cancer prevention strategies for these women include surgical removal of the breasts and/or ovaries, which can have serious impacts on people’s lives.
“To progress this work, denosumab would need to be formally tested in clinical trials in this setting as it is not approved for breast cancer prevention,” he said.
The research was published this week in Nature Medicine and was supported by The National Breast Cancer Foundation, The Qualtrough Cancer Research Fund, The Joan Marshall Breast Cancer Research Fund, the Australian Cancer Research Foundation, Cancer Council Victoria, the Cancer Therapeutics Cooperative Research Centre, an Amgen Preclinical Research Program Grant, the National Health and Medical Research Council, the Victorian Cancer Agency, and the Victorian Government Operational Infrastructure Support Scheme.
The Walter and Eliza Hall Institute | ‘Holy grail’ of breast cancer prevention in high-risk women may be in sight
Abstract of RANK ligand as a potential target for breast cancer prevention in BRCA1-mutation carriers
Individuals who have mutations in the breast-cancer-susceptibility gene BRCA1 (hereafter referred to as BRCA1-mutation carriers) frequently undergo prophylactic mastectomy to minimize their risk of breast cancer. The identification of an effective prevention therapy therefore remains a ‘holy grail’ for the field. Precancerous BRCA1mut/+ tissue harbors an aberrant population of luminal progenitor cells, and deregulated progesterone signaling has been implicated in BRCA1-associated oncogenesis. Coupled with the findings that tumor necrosis factor superfamily member 11 (TNFSF11; also known as RANKL) is a key paracrine effector of progesterone signaling and that RANKL and its receptor TNFRSF11A (also known as RANK) contribute to mammary tumorigenesis, we investigated a role for this pathway in the pre-neoplastic phase of BRCA1-mutation carriers. We identified two subsets of luminal progenitors (RANK+ and RANK−) in histologically normal tissue of BRCA1-mutation carriers and showed that RANK+ cells are highly proliferative, have grossly aberrant DNA repair and bear a molecular signature similar to that of basal-like breast cancer. These data suggest that RANK+ and not RANK− progenitors are a key target population in these women. Inhibition of RANKL signaling by treatment with denosumab in three-dimensional breast organoids derived from pre-neoplastic BRCA1mut/+ tissue attenuated progesterone-induced proliferation. Notably, proliferation was markedly reduced in breast biopsies from BRCA1-mutation carriers who were treated with denosumab. Furthermore, inhibition of RANKL in a Brca1-deficient mouse model substantially curtailed mammary tumorigenesis. Taken together, these findings identify a targetable pathway in a putative cell-of-origin population in BRCA1-mutation carriers and implicate RANKL blockade as a promising strategy in the prevention of breast cancer.
Local Motors, creator of the world’s first 3D-printed cars, has developed the first self-driving “cognitive” vehicle, using IBM Watson Internet of Things (IoT) for Automotive.
The vehicle, dubbed “Olli,” can carry up to 12 people. It uses IBM Watson and other systems to improve the passenger experience and allow natural interaction with the vehicle. Olli will be used on public roads locally in Washington DC and later this year in Miami-Dade County.
Olli is the first vehicle to use the cloud-based cognitive computing capability of IBM Watson IoT to analyze and learn from high volumes of transportation data, produced by more than 30 sensors embedded throughout the vehicle. Sensors will be added and adjusted continually as passenger needs and local preferences are identified.
Four Watson developer APIs — Speech to Text, Natural Language Classifier, Entity Extraction and Text to Speech — will enable passengers to interact conversationally with Olli while traveling from point A to point B, discussing topics about how the vehicle works, where they are going, and why Olli is making specific driving decisions.
Watson empowers Olli to understand and respond to passengers’ questions as they enter the vehicle, such as destinations (“Olli, can you take me downtown?”) or specific vehicle functions (“how does this feature work?” or even “are we there yet?”). Passengers can also ask for recommendations on local destinations such as popular restaurants or historical sites based on analysis of personal preferences.
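The flow described above is essentially speech-to-text, intent classification, response generation, and text-to-speech. A minimal sketch of the classify-and-respond step, with a simple keyword matcher standing in for the Watson Natural Language Classifier (all intent names, keywords, and replies here are hypothetical illustrations, not the actual Watson configuration or API):

```python
# Hypothetical sketch of Olli's interaction loop. The real system calls
# IBM Watson cloud services (Speech to Text, Natural Language Classifier,
# Text to Speech); the keyword classifier below is an illustrative stand-in.
INTENTS = {
    "destination": ("take me", "go to", "downtown"),
    "vehicle_info": ("how does", "feature", "work"),
    "trip_status": ("are we there", "how long"),
}

def classify(utterance: str) -> str:
    """Stand-in for the Natural Language Classifier call."""
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(k in text for k in keywords):
            return intent
    return "chit_chat"

def respond(utterance: str) -> str:
    """Map the classified intent to a canned reply (illustrative only)."""
    replies = {
        "destination": "Setting a route now.",
        "vehicle_info": "Here is how that feature works...",
        "trip_status": "We will arrive in about five minutes.",
        "chit_chat": "Happy to chat while we ride!",
    }
    return replies[classify(utterance)]

print(respond("Olli, can you take me downtown?"))  # -> Setting a route now.
```

In the deployed vehicle, the classified intent would also gate actions (setting a route, explaining a maneuver), and the reply string would be passed to a text-to-speech service rather than printed.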
“Cognitive computing provides incredible opportunities to create unparalleled, customized experiences for customers, taking advantage of the massive amounts of streaming data from all devices connected to the Internet of Things, including an automobile’s myriad sensors and systems,” said Harriet Green, General Manager, IBM Watson Internet of Things, Commerce & Education.
Miami-Dade County and Las Vegas are also exploring a pilot program in which several autonomous vehicles would be used to transport people around Miami and Las Vegas.
IBM Internet of Things | Local Motors Debuts “Olli,” the First Self-driving Vehicle to Tap the Power of IBM Watson
IBM Internet of Things | Harnessing vehicle safety data with analytics