The quest to create camouflaging metamaterials that can “see” colors and automatically blend into the background is one step closer to reality, thanks to a breakthrough color-display technology unveiled this week by Rice University’s Laboratory for Nanophotonics (LANP).
The new full-color display technology uses aluminum nanorods to create the vivid red, blue and green hues found in today’s top-of-the-line LCD televisions and monitors.
The technology is described in a new study in the Early Edition of the Proceedings of the National Academy of Sciences (PNAS) (open access).
The breakthrough is the latest in a string of recent discoveries by a Rice-led team that set out in 2010 to create metamaterials capable of mimicking the camouflage abilities of cephalopods — the family of marine creatures that includes squid, octopus and cuttlefish.
“Our goal is to learn from these amazing animals so that we could create new materials with the same kind of distributed light-sensing and processing abilities that they appear to have in their skins,” said LANP Director Naomi Halas, a co-author of the PNAS study.
She is the principal investigator on a $6 million Office of Naval Research grant for a multi-institutional team that includes marine biologists Roger Hanlon of the Marine Biological Laboratory in Woods Hole, Mass., and Thomas Cronin of the University of Maryland, Baltimore County.
“We know cephalopods have some of the same proteins in their skin that we have in our retinas, so part of our challenge, as engineers, is to build a material that can ‘see’ light the way their skin sees it, and another challenge is designing systems that can react and display vivid camouflage patterns,” Halas said.
The key: precision placement of plasmonic aluminum nanorods
LANP’s new color display technology delivers bright red, blue and green hues from five-micron-square pixels that each contain several hundred aluminum nanorods. By varying the length of the nanorods and the spacing between them, LANP researchers Stephan Link and Jana Olson showed they could create pixels that produced dozens of colors, including rich tones of red, green and blue that are comparable to those found in high-definition LCD displays.
“Aluminum is useful because it’s compatible with microelectronic production methods, but until now the tones produced by plasmonic aluminum nanorods have been muted and washed out,” said Link, associate professor of chemistry at Rice and the lead researcher on the PNAS study. “The key advancement here was to place the nanorods in an ordered array.”
Olson said the array setup allowed her to tune the pixel’s color in two ways, first by varying the length of the nanorods and second by adjusting the spacing between them.
“This arrangement allowed us to narrow the output spectrum to one individual color instead of the typical muted shades that are usually produced by aluminum nanoparticles,” she said.
Olson’s five-micron-square pixels are about 40 times smaller than the pixels used in commercial LCD displays. To make the pixels, she used aluminum nanorods that each measured about 100 nanometers long by 40 nanometers wide. She used electron-beam deposition to create arrays — regular arrangements of nanorods — in each pixel.
She was able to fine-tune the color produced by each pixel by using theoretical calculations by Rice physicists Alejandro Manjavacas, a postdoctoral researcher, and Peter Nordlander, professor of physics and astronomy.
“Alejandro created a detailed model of the far-field plasmonic interactions between the nanorods,” Olson said. “That proved very important because we could use that to dial in the colors very precisely.”
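The role the array’s spacing plays in this far-field coupling can be illustrated with a textbook approximation: in a periodic array at normal incidence, diffractive coupling sharpens the plasmon line near the first-order Rayleigh-anomaly wavelength, roughly the array period times the refractive index of the surrounding medium. The sketch below uses that simple formula with hypothetical periods; it is illustrative only, not the detailed model Manjavacas built.

```python
# Illustrative sketch: first-order Rayleigh anomaly wavelength for a
# periodic array at normal incidence is roughly lambda = n * d, where
# n is the refractive index of the surrounding medium and d the period.
# All values are hypothetical, chosen only to land in the visible range.

def rayleigh_anomaly_nm(period_nm: float, n_medium: float = 1.5) -> float:
    """First-order diffraction (lattice) wavelength at normal incidence."""
    return n_medium * period_nm

# Tuning the array period shifts the lattice mode across the visible spectrum:
for period in (300, 350, 400):  # nm, hypothetical spacings
    print(f"period {period} nm -> lattice mode near {rayleigh_anomaly_nm(period):.0f} nm")
```

This is why adjusting the spacing, not just the rod length, gives a second independent knob for dialing in the color.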
A future LCD display
Halas and Link said the research team hopes to create an LCD display that uses many of the same components found in today’s displays, including liquid crystals, polarizers and individually addressable pixels.
The photonic aluminum arrays would be used in place of the colored dyes that are found in most commercial displays. Unlike dyes, the arrays won’t fade or bleach after prolonged exposure to light, and the inherent directionality of the nanorods provides another advantage.
“Because the nanorods in each array are aligned in the same direction, our pixels produce polarized light,” Link said. “This means we can do away with one polarizer in our setup, and it also gives us an extra knob that we can use to tune the output from these arrays. It could be useful in a number of ways.”
Link and Halas said they hope to further develop the display technology and eventually to combine it with other new technologies that the squid skin team has developed both for sensing light and for displaying patterns on large polymer sheets. For example, Halas and colleagues published a study in Advanced Materials in August about an aluminum-based CMOS-compatible photodetector technology for color sensing.
In addition, University of Illinois at Urbana-Champaign co-principal investigator John Rogers and colleagues published a proof-of-concept study in PNAS in August about new methods for creating flexible black-and-white polymer displays that can change color to match their surroundings.
“We hope to eventually bring all of these technologies together to create a new material that can sense light in full color and react with full-color camouflage displays,” Halas said.
The research was funded by the Department of Defense through the Office of Naval Research’s Basic Research Challenge program and by the Welch Foundation.
Abstract of Proceedings of the National Academy of Sciences paper
Aluminum is abundant, low in cost, compatible with complementary metal-oxide semiconductor manufacturing methods, and capable of supporting tunable plasmon resonance structures that span the entire visible spectrum. However, the use of Al for color displays has been limited by its intrinsically broad spectral features. Here we show that vivid, highly polarized, and broadly tunable color pixels can be produced from periodic patterns of oriented Al nanorods. Whereas the nanorod longitudinal plasmon resonance is largely responsible for pixel color, far-field diffractive coupling is used to narrow the plasmon linewidth, enabling monochromatic coloration and significantly enhancing the far-field scattering intensity of the individual nanorod elements. The bright coloration can be observed with p-polarized white light excitation, consistent with the use of this approach in display devices. The resulting color pixels are constructed with a simple design, are compatible with scalable fabrication methods, and provide contrast ratios exceeding 100:1.
The first definitive defeat for a classical computer by a quantum computer could one day be achieved with a quantum device that runs an algorithm known as “boson sampling,” recently developed by researchers at MIT.
Boson sampling uses single photons of light and optical circuits to take samples from an exponentially large probability distribution, which has been proven to be extremely difficult for classical computers.
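The classical difficulty stems from matrix permanents: the probability of each photon-detection outcome is governed by the permanent of a submatrix of the optical circuit’s unitary, and the best known classical algorithms for permanents scale exponentially with photon number. A minimal sketch using Ryser’s formula:

```python
# Minimal sketch of why boson sampling is classically hard: each
# detection outcome's probability involves the permanent of a submatrix
# of the circuit's unitary, and Ryser's formula below -- among the best
# known classical methods -- still takes O(2^n) time in the matrix size n.
from itertools import combinations

def permanent(M):
    """Permanent via Ryser's formula (exponential-time inclusion-exclusion)."""
    n = len(M)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

# For the identity matrix the permanent is 1 (photons exit where they entered).
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(permanent(I3))  # 1.0
```

Unlike the determinant, no sign cancellations help here, which is the root of the conjectured classical intractability.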
The snag: how to generate the dozens of single photons needed to run the algorithm.
Now researchers at the Centre for Quantum Photonics (CQP) at the University of Bristol with collaborators from the University of Queensland (UQ) and Imperial College London say they have discovered how.
“We realized we could chain together many standard two-photon sources in such a way as to give a dramatic boost to the number of photons generated,” said CQP research leader Anthony Laing, a research fellow at the Centre for Quantum Photonics in the University of Bristol’s School of Physics.
Details of the research are in a paper published in Physical Review Letters.
Sources of single photons
First author Austin Lund of the Centre for Quantum Computation and Communication Technology, School of Mathematics and Physics, University of Queensland, explained to KurzweilAI that “we have shown that a variant of the boson-sampling problem can be realized using sources of photons from spontaneous parametric down conversion (SPDC).
“These are sources of photons which can be thought of as distinguishable pairs of photons. Usually experiments will measure one of these photons which then heralds the presence of the partner photon.
“Though this can be a good source of high quality photons, unfortunately, it is necessarily probabilistic with the probability dropping exponentially in the number of photons desired.
“We have shown that this exponential drop in probability does not matter for the case of Boson Sampling. We have shown that SPDC style resources can be used to prove that a slightly modified version of Boson Sampling is hard for a classical computer to calculate.
“Since the original Boson Sampling algorithm study in 2010 there has been great interest in the quantum computing community to understand what particular aspect of quantum mechanics enhances the computing power for this particular algorithm.
“There has also been interest in the quantum optics community due to the simple nature of this algorithm for implementing the required interactions between photons. Indeed very small scale implementations of boson sampling have been achieved. Our result adds to this work by potentially allowing a simpler path to a large scale implementation which may challenge the limits of classical computers simulating this problem.”
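The combinatorial boost from chaining many heralded sources can be sketched with a simple binomial calculation. All numbers below are hypothetical, chosen only to show the scale of the effect, and are not taken from the paper:

```python
# Sketch of the boost from chaining heralded SPDC sources (hypothetical
# numbers). A fixed bank of n dedicated sources all fire together with
# probability p**n; with k >> n sources attached to the circuit, *any*
# n of them firing suffices, which is binomially more likely.
from math import comb

p = 0.1   # per-pulse heralding probability of one SPDC source (hypothetical)
n = 10    # photons needed for the sampling run
k = 100   # number of chained sources (hypothetical)

p_fixed = p ** n                                     # n dedicated sources
p_chain = comb(k, n) * p ** n * (1 - p) ** (k - n)   # exactly n of k fire

print(f"fixed sources:   {p_fixed:.3e}")
print(f"chained sources: {p_chain:.3e}  ({p_chain / p_fixed:.1e}x boost)")
```

With these toy numbers, the per-run success probability climbs from one in ten billion to roughly one in ten, which is the sense in which the exponential drop “does not matter” for this variant of the problem.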
Time frame: five to 20 years
So when might this happen? Lund told KurzweilAI that their focus has been on “pushing the boundaries of science and to what can be achieved. It is always difficult to give reasonable estimates on time frames as they generally depend on factors largely outside the control of scientists. Though we have seen demonstrations of pretty much all the bits and pieces needed to produce a device that outperforms a classical computer.
“The major challenge is in achieving the scale required, i.e., dozens of photons into hundreds of paths. There are people ready to take on this challenge awaiting the resources needed. If that happens soon, then in 5 years we may see the first quantum computing device which has a provable speed up over a classical computer. Perhaps a more likely time frame is 20 years. I’m keen to be surprised on this though.”
The research was supported by the Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology, the Army Research Office (ARO), the Engineering and Physical Sciences Research Council (EPSRC), the European Research Council (ERC), the Centre for Nanoscience and Quantum Information (NSQI), the U.S. Air Force Office of Scientific Research (AFOSR) and the U.S. Army Research Laboratory (ARL).
Abstract of Physical Review Letters paper
We pose a randomized boson-sampling problem. Strong evidence exists that such a problem becomes intractable on a classical computer as a function of the number of bosons. We describe a quantum optical processor that can solve this problem efficiently based on a Gaussian input state, a linear optical network, and nonadaptive photon counting measurements. All the elements required to build such a processor currently exist. The demonstration of such a device would provide empirical evidence that quantum computers can, indeed, outperform classical computers and could lead to applications.
MIT researchers have developed an algorithm for bounding that they’ve successfully implemented in a robotic cheetah.
In experiments on an indoor track, the robot sprinted up to 10 mph, even continuing to run after clearing a hurdle. The MIT researchers estimate that the current version of the robot may eventually reach speeds of up to 30 mph — half the top speed of the natural cheetah, the fastest land animal on Earth.
Running like Usain Bolt
As it ramps up to top speed, a natural cheetah pumps its legs in tandem, bounding until it reaches a full gallop. The key to the bounding algorithm is in programming each of the robot’s legs to exert a certain amount of force in the split second during which it hits the ground to maintain a given speed. In general, the faster the desired speed, the more force must be applied to propel the robot forward.
Sangbae Kim, an associate professor of mechanical engineering at MIT, hypothesizes that this force-control approach to robotic running is similar, in principle, to the way world-class sprinters race.
“Many sprinters, like Usain Bolt, don’t cycle their legs really fast,” Kim says. “They actually increase their stride length by pushing downward harder and increasing their ground force, so they can fly more while keeping the same [running] frequency.”
Kim says that by adapting a force-based approach, the cheetah-bot is able to handle rougher terrain, such as bounding across a grassy field. In treadmill experiments, the team found that the robot handled slight bumps in its path, maintaining its speed even as it ran over a foam obstacle.
“Most robots are sluggish and heavy, and thus they cannot control force in high-speed situations,” Kim says. He says what makes the MIT robot so dynamic is a custom-designed, high-torque-density electric motor, designed by Jeffrey Lang, the Vitesse Professor of Electrical Engineering at MIT. These motors are controlled by amplifiers designed by David Otten, a principal research engineer in MIT’s Research Laboratory of Electronics. The combination of such special electric motors and custom-designed, bio-inspired legs allow force control on the ground without relying on delicate force sensors on the feet.
“Bounding is like an entry-level high-speed gait, and galloping is the ultimate gait,” Kim says. “Once you get bounding, you can easily split the two legs and get galloping.”
As an animal bounds, its legs touch the ground for a fraction of a second before cycling through the air again. The percentage of time a leg spends on the ground rather than in the air is referred to in biomechanics as a “duty cycle”; the faster an animal runs, the shorter its duty cycle.
Kim and his colleagues developed an algorithm that determines the amount of force a leg should exert in the short period of each cycle that it spends on the ground. That force, they reasoned, should be enough for the robot to push up against the downward force of gravity, in order to maintain forward momentum.
“Once I know how long my leg is on the ground and how long my body is in the air, I know how much force I need to apply to compensate for the gravitational force,” Kim says. “Now we’re able to control bounding at many speeds. And to jump, we can, say, triple the force, and it jumps over obstacles.”
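Kim’s gravity-compensation reasoning amounts to a simple force balance: averaged over a full stride, the legs’ vertical ground force must equal body weight, so the smaller the duty cycle, the harder each stance phase must push. A minimal sketch (the robot mass and duty-cycle values are hypothetical, not MIT’s figures):

```python
# Force-balance sketch behind the bounding algorithm (hypothetical numbers):
# averaged over a stride, vertical ground force must equal body weight,
# so required stance force scales inversely with the duty cycle.

G = 9.81  # gravitational acceleration, m/s^2

def stance_force(mass_kg: float, duty_cycle: float) -> float:
    """Average vertical force (N) the legs must exert during stance."""
    return mass_kg * G / duty_cycle

mass = 33.0  # kg, hypothetical robot mass
for duty in (0.6, 0.4, 0.2):  # faster running -> smaller duty cycle
    print(f"duty cycle {duty:.1f} -> stance force {stance_force(mass, duty):.0f} N")
```

Tripling the commanded force on top of this baseline, as Kim describes, is what turns a running stride into a jump.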
In experiments, the team ran the robot at progressively smaller duty cycles, finding that, following the algorithm’s force prescriptions, the robot was able to run at higher speeds without falling. Kim says the team’s algorithm enables precise control over the forces a robot can exert while running.
By contrast, he says, similar quadruped robots may exert high force, but with poor efficiency; to generate those forces, they rely on gasoline engines.
“As a result, they’re way louder,” Kim says. “Our robot can be silent and as efficient as animals. The only things you hear are the feet hitting the ground. This is kind of a new paradigm where we’re controlling force in a highly dynamic situation. Any legged robot should be able to do this in the future.”
Kim and his colleagues will present details of the bounding algorithm at the IEEE/RSJ International Conference on Intelligent Robots and Systems, currently in progress in Chicago.
This work was supported by the Defense Advanced Research Projects Agency.
I’ve been writing about the ethics of human enhancement for some time. In the process, I’ve looked at many of the fascinating ethical and philosophical issues that are raised by the use of enhancing drugs. But throughout all this writing, there is one topic that I have studiously avoided. This is surprising given that, in many ways, it is the most fundamental topic of all: do the alleged cognitive enhancing drugs actually work?
One reason for avoiding this topic is that philosophers like to pursue hypotheticals: to imagine possible worlds and trace out their logical implications. And this can be all well and good, but as I have written elsewhere, there is a danger that it leads one to commit the “vice of in-principlism”. That is: the vice of talking about enhancement purely in terms of “well if, in principle, cognitive enhancing drugs worked, then the following would be true…”. This is a vice because there are many real-world substances that are alleged to have an enhancing effect. And it’s important that in all our philosophising we don’t ignore the real-world.
So, anyway, to make up for my historical failure, I am going to try to answer the question now. I do so by summarising three studies on the effects of cognitive enhancing drugs. Two of these are systematic reviews (one including a meta-analysis) of the available experimental literature, the other is a “phenomenological study” that I happen to find interesting. The studies focus on three drugs (or drug types) — Adderall (a mix of amphetamines and dextroamphetamines); Ritalin (methylphenidate) and Provigil (modafinil) — all of which are alleged to have enhancing effects, and are frequently used by students to improve their educational performance.
Are all these students wasting their time? Let’s see.
1. Repantis et al 2010: Systematic Review of Methylphenidate and Modafinil
The first study I am going to look at is a systematic review and meta-analysis by Dimitris Repantis and his colleagues. Their analysis was concerned solely with the cognitive enhancing effects of methylphenidate and modafinil. The authors found 46 studies on methylphenidate and 45 on modafinil that met their inclusion criteria. All of these studies were reviewed, but some did not have sufficient data to be extracted for their statistical analyses.
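For readers unfamiliar with how a meta-analysis pools results across trials, here is a minimal fixed-effect sketch, weighting each trial’s standardized mean difference by the inverse of its variance. The numbers are invented for illustration and are not from Repantis et al.:

```python
# Minimal fixed-effect meta-analysis sketch: pool standardized mean
# differences (e.g. Cohen's d) across trials, weighting each by the
# inverse of its variance. Data below are made up for illustration only.

def pooled_effect(effects, variances):
    """Inverse-variance weighted mean effect size and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# (d, variance) for three hypothetical placebo-controlled trials
effects = [0.45, 0.10, 0.30]
variances = [0.04, 0.02, 0.05]

d_pooled, v_pooled = pooled_effect(effects, variances)
print(f"pooled d = {d_pooled:.2f}, SE = {v_pooled ** 0.5:.2f}")
```

The point of pooling is that large, precise trials dominate the estimate, which is why reviews like this one can reach conclusions that no individual trial supports on its own.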
Their review focused on a variety of enhancing effects, specifically on: (a) mood; (b) motivation; c) wakefulness; (d) attention and vigilance; (e) memory and learning; and (f) executive functions and information processing. It also focused on different kinds of experimental trial and different classes of experimental subject. In the first instance, the focus was on healthy individuals, i.e. not individuals who were taking these drugs for some illness or disorder (e.g. ADHD). These individuals were then divided-up into two further subclasses — non-sleep deprived and sleep-deprived. The reviewers looked at the effects on such individuals in two scenarios: (i) single-dose trials — in which the experimental subjects were given a single dose of the relevant drug; and (ii) repeated-dose trials — in which the experimental subjects were given more than one dose over a period of time.
The summary findings in relation to methylphenidate were as follows:
Single Dose Trials (Non-sleep deprived): A single dose of methylphenidate had a strong enhancing effect in relation to one outcome only: memory. No statistically significant effect was found in relation to attention, mood and executive function. A lack of appropriate baseline measures made it impossible to derive a statistical conclusion in relation to the effect on wakefulness. Only one study looked at the effects on motivation and found some subjectively-reported improvement in willingness to engage in mathematical tasks.
Repeated Dose Trials (Non-sleep deprived): Only two of the included studies looked at repeated usage. Consequently, no statistical analysis could be performed. One of these studies actually looked at two drugs and so the effect of methylphenidate was difficult to determine. The other study involved six weeks of usage by elderly healthy individuals and found some positive effect in relation to fatigue, but nothing in relation to the other parameters of enhancement.
Trials in Sleep-Deprived Individuals: Five of the included studies involved sleep-deprived individuals, two of them involved repeated doses. This wasn’t sufficient to perform a statistical analysis. Still, the results of the studies are interesting. No cognitive enhancing effect was found for single-dosage after one night of sleep deprivation, and in fact it was found that use of the drug may give rise to an overconfidence effect. No positive effects of repeated dosage were found for wakefulness in cases of long-term sleep deprivation (greater than 36 hours) and minimal effects were found in short-term cases (around 4 hours).
The authors conclude that no firm conclusion can be reached about the enhancing effect of methylphenidate at this stage (remember, they were writing in 2010), though they accept that there may be some positive effect in relation to memory. They also note that the popular belief that methylphenidate enhances attention is not confirmed by the meta-analysis.
Turning then to modafinil, the summary findings are as follows:
Single Dose Trials (Non-sleep deprived): A positive effect was found in relation to attention and wakefulness for single-dose trials (the latter is not particularly surprising given how modafinil works), though a negative result for wakefulness was also found as more time elapsed after drug administration. In other words, modafinil may keep you awake for longer, but it may make you more tired at a later point in time. No significant effect was found in relation to mood, memory and motivation. And no analysis could be performed in relation to executive functioning.
Repeated Dose Trials (Non-sleep deprived): Only two of the included studies involved repeated drug administrations. The first found no effect on attentional tasks after an evening and morning administration. The second involved administrations over the course of three days and found both a positive and negative effect on mood (i.e. it made people happier, but also more anxious).
Single Dose Trials (Sleep-deprived): Statistical analysis found a strong-to-moderate positive effect of modafinil in relation to executive function, memory and wakefulness in sleep-deprived individuals. The effect generally declined the longer the period of deprivation continued. No effects were found in relation to mood and attention, and none of the studies looked at motivation.
Repeated Dose Trials (Sleep-deprived): Statistical analysis suggested a strong positive effect of repeated drug administrations on wakefulness (again, not hugely surprising), but no effect in relation to executive functioning and attention. None of the included studies looked at memory, mood or motivation.
The authors conclude that there is evidence of an enhancing effect for modafinil, primarily on attention in non-sleep deprived individuals, and on wakefulness, executive function and memory in sleep deprived individuals. They caution that, with the exception of wakefulness, these positive effects do not seem to be sustained in the long-term (i.e. across repeated drug administrations), and, furthermore, that modafinil may also give rise to an overconfidence effect in sleep-deprived individuals. This is important insofar as modafinil is often touted for use among sleep-deprived professionals, e.g. doctors.
2. Smith and Farah 2011: Are Prescription Stimulants Smart Pills?
The next study I am going to look at is by Elizabeth Smith and Martha Farah. Again, this one is a systematic review of the literature. It is concerned with two major issues: (i) how many people are using these drugs? and (ii) do they actually work? I’ll only be looking at the latter issue in this summary.
Smith and Farah’s review focused on two drug types: methylphenidates (Ritalin) and dextroamphetamines (by itself or in Adderall). They were concerned with alleged enhancing effects on healthy adults, and they reviewed placebo-controlled trials involving oral administration of the relevant drugs. They divided the available literature into four groups, each concerned with the effects of these drugs on different cognitive processes: (i) memory and learning; (ii) working memory; (iii) cognitive control; and (iv) other executive functions.
If you are at all interested in this topic, I would highly recommend reading Smith and Farah’s review. It really is a comprehensive and detailed summary of the available literature (though, bear in mind, there is overlap between this and the Repantis et al review). I cannot do justice to everything they say in this brief summary. I will, however, try my best:
Memory and Learning: The authors looked at 22 studies on the effects of methylphenidate and d-AMP on learning and memory. The studies covered 24 different tasks, some involving declarative memory (i.e. recall of facts), others involving non-declarative (procedural) memory (i.e. skills). They found some weak-to-strong evidence for an enhancing effect on declarative memory tasks, with that effect more pronounced over the long-term. Indeed, many studies suggest that these drugs have little-to-no effect in the short term, but significant effects in the long-term. The findings in relation to non-declarative memory tasks were more mixed, though more inclined to be positive for methylphenidate than for d-AMP.
Working Memory: Working memory is like the brain’s “scratch pad” – a space in which various bits of information can be kept “online” while performing a given cognitive task. The authors looked at 23 studies on this, involving 27 different tasks. The results are far too complex to summarise fairly because they vary depending on the task the experimental subjects were asked to perform. But in general, the studies are mixed, with some showing positive effects, some showing no effect, and none showing a negative effect. The suggestion from the studies seems to be that the enhancing effect is greater on those who are less able to perform the given tasks in the first place.
Cognitive Control: Cognitive control is, in essence, the ability to override and control the brain’s more automatic, learned responses. The authors looked at 13 studies involving 16 control tasks. Again, the results are complex, but overall there were more null results than positive ones, and one finding of impairment. Careful analysis of the results once again suggests that the positive effects are greatest for those who have poor cognitive control in the first place.
Other Executive Functions: This is just a catch-all for other possible enhancing effects covered by the available studies. The authors looked at five such studies, covering eight different tasks, including verbal fluency and grammatical reasoning, as well as Raven’s Progressive Matrices and variants on the Tower of London task. Overall, there were only two positive results, but the small number of tests makes it difficult to draw any firm conclusions.
So what is the upshot of all this? Once again there seems to be some reasonable evidence for a positive effect on long-term memory recall, and some indication that these drugs work better for people with weaker cognitive performance. Still, the authors are cautious. Over a third of the studies reviewed showed no enhancing effect, and there is the danger that negative results are not being published (due to publication bias).
3. Vrecko 2013: Just how “cognitive” is cognitive enhancement?
The final study I want to look at is a bit different from the preceding two. It is not a systematic review. It is a single study, conducted on 24 students at an unnamed, elite university on the East Coast of the United States. The study is interesting to me because it focuses exclusively on what the students taking these drugs think it does to their ability to study (and engage in other types of academic work), and because it argues that the emotional effects of these drugs may be just as important (if not more important) than the cognitive effects. (I would, however, note that disentangling the emotional from the cognitive can be a tricky business).
I think the most valuable part of this study is the quotations it provides from actual student users, and I want to share some of those quotes here. Before I do that, I need to share something of Vrecko’s analytical framework. Using fairly standard methodological protocols, Vrecko noted that his interviewees’ responses suggested that drugs such as Adderall, Ritalin and Modafinil had four mood-enhancing effects. I’ll cover all four here, and include student quotes that illustrate them along the way.
The first mood-enhancing effect was “feeling up”. By this is meant an increased feeling of energy and well-being. With this improvement, the study participants felt more willing to do the kinds of academic work they needed to get done. For example, one student (Sarah) said:
Everything seems better, and more doable. Sometimes, a lot of the time actually, I’ll feel kind of, it’s hard to do anything. When I’m walking to the library I’ll think, if I didn’t have it [Adderall], there’s no way I’d get anything done. I’d just sit there in front of my computer, and be not doing anything.
The second mood-enhancing effect, closely related to the first, was “drivenness”. Students who took the drugs felt more driven to do things. As Vrecko puts it, the energy they had “would build up to a point at which there was a surplus or excess that needed to be discharged through activity”. One student described the feeling like this:
I didn’t want to stop what I was doing until it was completed up to a certain level of my satisfaction. So I wouldn’t ever have to do something and just be, oh, I’m tired, I’ll finish it in the morning. I would just finish it.
Ironically, this could have a negative effect too, particularly when the students became driven to do something other than what they should be doing. As one participant put it:
When I take it, I might feel like, “oh, I’m going to start cleaning my room,” or something else. So when it’s kicking in, I have to make sure I start telling myself, “ok, it’s work time. This is what you’ve got to do, this is why you’re doing it.”
The third mood-enhancing effect was “interestedness”. Several of the participants in Vrecko’s study reported a general inability to get interested in the academic subjects they were taking. But when they took the drugs, things suddenly seemed more interesting:
It just got to where I felt like if I was staring at something I just couldn’t take my eyes away from it—it made studying more interesting.
Another positive effect of this was an increased ability to avoid distractions like e-mail, facebook and chatting with friends in the library.
The fourth and final mood-enhancing effect was “enjoyment”. Students who took the drugs found that they were able to enjoy subjects that had previously seemed dull. (This is, obviously, closely allied to the “interestedness”-effect). One student described a particularly remarkable experience while doing an assignment:
I had this paper to write, for a class on art and Romanticism— pretty much the most boring topic I can imagine. Even just finding books in the library annoyed me, like, “why in the hell am I doing this?” But when I started reading [after having taken 20 mg of Adderall], I remember getting just completely absorbed in one book, and then another, and as I was writing I was making connections between them … And I was like, this is really cool, actually enjoying the process of putting ideas together. I hadn’t had that before.
In summary, Vrecko’s study is an interesting one. It suggests that one of the main effects of these so-called cognitive enhancing drugs may not be on cognition as such, but, rather, on removing the psychological barriers to doing cognitive work. The study is, however, a small one. The sample of students being interviewed may not be representative (e.g. they may be the “weaker” students who, based on the studies discussed above, seem to get the most positive effect). And there is no placebo control: it’s possible that the drugs themselves are little more than a psychological crutch for the students in question.
Let’s now return to the opening question: do the alleged cognitive-enhancing drugs actually work? It’s a difficult question to answer in the abstract. We would need to specify the drug we are interested in and what it would mean for it to “work”. Still, at a general level, I’m slightly more persuaded of their enhancing effects than I was before I read these studies. That may, however, be attributable to my hyper-scepticism prior to doing so.
For me, there are four big takeaways from these studies. The first is that methylphenidate (in particular) seems to have a decent enhancing effect on memory over the long-term. The second is that modafinil seems to have a decent enhancing effect on attention for single doses but not for repeated doses. The third is that the extent of the enhancing effect is likely to be greater for those who struggle more in any given cognitive task. And the fourth is that, for students who persist in using them, the major benefit of these drugs may simply be their ability to remove the psychological barriers to getting things done.
John holds a PhD specialising in the philosophy of criminal law (specifically, criminal responsibility and game theory). He was formerly a lecturer in law at Keele University, interested in technology, ethics, philosophy and law. He is currently a lecturer at the National University of Ireland, Galway (starting July 2014).
This blog post originally appeared here. Republished under Creative Commons License.
They recorded the EEG (brain waves) of awake human participants who were instructed to classify spoken words as either animals or objects by pressing a button, using the right hand for animals and the left hand for objects.
Once the participants were asleep, the testing continued, but with an entirely new list of words to ensure that responses would require the extraction of word meaning rather than a simpler pairing between stimulus and response. The researchers’ observations of brain activity showed that the participants continued to respond accurately to the words (although more slowly) as they slept.
The study also extends earlier work on subliminal processing by showing that speech processing and other complex tasks “can be done not only without being aware of what you perceive, but [also] without being aware at all.” Sid Kouider of Ecole Normale Supérieure suspects that such unconscious processing isn’t limited by the complexity of the task, but by whether it can be made automatic or not.
DARPA has awarded a $2.9 million contract to the Wyss Institute for Biologically Inspired Engineering at Harvard University to further develop the Soft Exosuit, a “wearable robot.”
It will be worn comfortably under clothing to enable soldiers to walk longer distances, reduce fatigue, and minimize risk of injury when carrying heavy loads.
The development is part of DARPA’s Warrior Web program, which seeks to develop technologies to prevent and reduce musculoskeletal injuries for military personnel, but the technologies could also have civilian applications, including first responders, athletes, and the disabled.
The lightweight device is designed to replace power-hungry battery packs and rigid components that can interfere with natural joint movement, as in heavier exoskeleton systems.
“While the idea of a wearable robot is not new, our design approach certainly is,” said Conor Walsh, an Assistant Professor of Mechanical and Biomedical Engineering at the Harvard School of Engineering and Applied Sciences (SEAS) and founder of the Harvard Biodesign Lab.
It is made of soft, functional textiles woven together into a piece of smart clothing that is pulled on like a pair of pants and intended to be worn under a soldier’s regular gear. The suit mimics the action of the leg muscles and tendons when a person walks, and provides small but carefully timed assistance at the joints of the leg without restricting the wearer’s movement.
Soft Robotic Exosuit — Harvard University
The current prototype uses a low-power microprocessor and network of supple strain sensors that act as the “brain” and “nervous system” of the Soft Exosuit, continuously monitoring various data signals, including the suit tension, the position of the wearer (e.g., walking, running, crouched), and more.
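The sense-and-assist loop described above can be sketched in miniature. Everything in the snippet below (function names, gait states, thresholds, and assistance values) is a hypothetical illustration of the general idea, not a detail of the Wyss Institute design:

```python
# Schematic sketch of a sense-and-assist loop: classify the wearer's gait
# from strain-sensor readings, then choose an assistance level for the leg
# joints. All states, thresholds, and values here are made up.
def classify_gait(strain_readings):
    """Crude stand-in for gait classification from strain-sensor data."""
    mean_strain = sum(strain_readings) / len(strain_readings)
    if mean_strain > 0.8:
        return "running"
    if mean_strain > 0.4:
        return "walking"
    return "crouched"

def assistance_level(gait):
    """Map the detected gait to a (hypothetical) assistance torque fraction."""
    return {"running": 0.3, "walking": 0.2, "crouched": 0.1}[gait]

readings = [0.5, 0.6, 0.4]           # simulated strain-sensor samples
gait = classify_gait(readings)
print(gait, assistance_level(gait))  # prints: walking 0.2
```

The real suit would run a loop like this continuously on its low-power microprocessor, with far richer sensing than a single averaged strain value.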
The team will also collaborate with clinical partners to develop a medical version of the suit that can help stroke patients, for example.
Collaborators include researchers at Boston University’s College of Health and Rehabilitation Sciences, Harvard postdocs, and Boston-based New Balance.
Oregon State University researchers combined diatoms (a type of single-celled photosynthetic algae) with self-assembled plasmonic nanoparticles to create a low-cost sensor capable of detecting minuscule amounts of protein or other biomarkers.
Optical biosensors are important in health care for such applications as detecting levels of blood glucose or the presence of antibodies. They are also used for chemical detection in environmental protection.
Existing biosensors often require high-cost fabrication using artificial photonic crystals to make a precisely structured device. But diatoms appear to have just the right kind of intricate structure to integrate with gold or silver nanoparticles and produce a low-cost optical biosensor.
“It’s much lower cost, about 50 cents compared to $50,” said Alan Wang, an assistant professor of electrical engineering in the OSU College of Engineering.
The researchers found that using diatoms also increases the signal by 10 times and the sensitivity by 100 times. The current sensitivity of the OSU biosensor is 1 picogram per milliliter, which is much better than optical sensors used in clinics for detecting glucose, proteins and DNA, which have a sensitivity of 1 nanogram per milliliter.
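The quoted detection limits make for a quick sanity check: a picogram is one thousandth of a nanogram, so a 1 pg/mL limit is a 1,000-fold improvement over 1 ng/mL. A minimal calculation, using only the concentrations stated above:

```python
# Detection limits quoted above, in grams per milliliter.
PG_PER_ML = 1e-12   # 1 picogram/mL: the OSU diatom biosensor
NG_PER_ML = 1e-9    # 1 nanogram/mL: typical clinical optical sensors

# Ratio of the two detection limits: a factor of 1000.
improvement = NG_PER_ML / PG_PER_ML
print(f"Detection limit improves by a factor of about {improvement:.0f}")
```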
The research still needs “significant funding input” to be available commercially, Wang told KurzweilAI in an email.
The research is sponsored by the Oregon Nanoscience and Microtechnologies Institute and Marine Polymer Technologies.
Abstract of IEEE Journal of Selected Topics in Quantum Electronics paper
We present an innovative surface-enhanced Raman spectroscopy (SERS) sensor based on a biological-plasmonic hybrid nanostructure by self-assembling silver (Ag) nanoparticles into diatom frustules. The photonic-crystal-like diatom frustules provide a spatially confined electric field with enhanced intensity that can form hybrid photonic-plasmonic modes through the optical coupling with Ag nanoparticles. The experimental results demonstrate 4-6× and 9-12× improvement of sensitivities to detect the Raman dye for resonance and nonresonance SERS sensing, respectively. Such low-cost and high-sensitivity SERS sensors have significant potentials for label-free biosensing.
Abstract of Optics Express paper
Diatoms are single-celled algae that make photonic-crystal-like silica shells or frustules with hierarchical micro- & nano-scale features consisting of two-dimensional periodic pores. This article reports the use of diatom frustules as an integration platform to enhance localized surface plasmon resonances of self-assembled silver nanoparticles (NPs) on the surface of diatom frustules. Theoretical and experimental results show enhanced localized surface plasmons due to the coupling with the guided-mode resonances of the frustules. We observed 2 × stronger optical extinction and over 4 × higher sensitivity of surface-enhanced Raman scattering of Rhodamine 6G from the NPs-on-diatom than the NPs-on-glass structure.
A group of scientists in Chile has created* artificial biomembranes (mimicking those found in living organisms) on silicon surfaces, a step toward creating bio-silicon interfaces, where biological “sensor” molecules can be printed onto a cheap silicon chip with integrated electronic circuits.
Described in The Journal of Chemical Physics from AIP Publishing, the artificial membranes have potential applications such as detecting bacterial contaminants in food, toxic pollution in the environment, and dangerous diseases.
The idea is to create a “biosensor that can transmit electrical signals through the membrane,” said María José Retamal, a Ph.D. student at Pontificia Universidad Católica de Chile and first author of the paper.
Lipid membranes separate distinct spaces within cells and define walls between neighboring cells — a functional compartmentalization that serves many physiological processes, protecting genetic material, regulating what comes in and out of cells, and maintaining the function of separate organs.
Synthetic membranes that mimic nature offer the possibility of containing membrane proteins — biological molecules that could be used for detecting toxins, diseases and many other biosensing applications.
More work is needed to standardize the process by which proteins are to be inserted in the membranes, to define the mechanism by which an electrical signal would be transmitted when a protein binds its target, and to calibrate how that signal is detected by the underlying circuitry, Retamal said.
* Retamal and her colleagues created the first artificial membrane without using solvents on a silicon support base. They chose silicon because of its low cost, wide availability and because its “hydrophobicity” (how much it repels water) can be controlled chemically, allowing them to build membranes on top.
Next they evaporated a chemical known as chitosan onto the silicon. Chitosan is derived from chitin, a sugar found in the shells of certain crustaceans, like lobsters or shrimp, and whole bags of the powder can be bought from chemical companies worldwide. They chose this ingredient for its ability to form a moisturizing matrix: chitosan is insoluble in water but porous, so it is capable of retaining water.
Finally they evaporated a phospholipid molecule known as dipalmitoylphosphatidylcholine (DPPC) onto the chitosan-covered silicon substrate and showed that it formed a stable “bilayer,” the classic form of a membrane. Spectroscopy showed that these artificial membranes were stable over a wide range of temperatures.
Abstract of The Journal of Chemical Physics paper
The recent combination of nanoscale developments with biological molecules for biotechnological research has opened a wide field related to the area of biosensors. In the last years, device manufacturing for medical applications adapted the so-called bottom-up approach, from nanostructures to larger devices. Preparation and characterization of artificial biological membranes is a necessary step for the formation of nano-devices or sensors. In this paper, we describe the formation and characterization of a phospholipid bilayer (dipalmitoylphosphatidylcholine, DPPC) on a mattress of a polysaccharide (Chitosan) that keeps the membrane hydrated. The deposition of Chitosan (∼25 Å) and DPPC (∼60 Å) was performed from the gas phase in high vacuum onto a substrate of Si(100) covered with its native oxide layer. The layer thickness was controlled in situ using Very High Resolution Ellipsometry (VHRE). Raman spectroscopy studies show that neither Chitosan nor DPPC molecules decompose during evaporation. With VHRE and Atomic Force Microscopy we have been able to detect phase transitions in the membrane. The presence of the Chitosan interlayer as a water reservoir is essential for both DPPC bilayer formation and stability, favoring the appearance of phase transitions. Our experiments show that the proposed sample preparation from the gas phase is reproducible and provides a natural environment for the DPPC bilayer. In future work, different Chitosan thicknesses should be studied to achieve a complete and homogeneous interlayer.
Scientists have combined two unconventional forms of carbon — one shaped like a soccer ball, the other a tiny diamond — to make a rectifier (which conducts electricity in only one direction).
This tiny electronic component could play a key role in shrinking chip components down to the size of molecules to enable faster, more powerful devices.
“We wanted to see what new, emergent properties might come out when you put these two ingredients together to create a ‘buckydiamondoid,’” said Hari Manoharan of the Stanford Institute for Materials and Energy Sciences (SIMES) at the U.S. Department of Energy’s SLAC National Accelerator Laboratory.
“What we got was basically a one-way valve for conducting electricity.”
The research team, which included scientists from Stanford University, Belgium, Germany and Ukraine, reported its results Sept. 9 in Nature Communications (open access).
Many electronic circuits have three basic components: a material that conducts electrons; rectifiers, which commonly take the form of diodes, to steer that flow in a single direction; and transistors to switch the flow on and off.
Buckyballs — short for buckminsterfullerenes — are hollow carbon spheres whose 1985 discovery earned three scientists a Nobel Prize in chemistry. Diamondoids are tiny linked cages of carbon joined, or bonded, as they are in diamonds, with hydrogen atoms linked to the surface, but weighing less than a billionth of a billionth of a carat. Both are subjects of a lot of research aimed at understanding their properties and finding ways to use them.
In 2007, a team led by researchers from SLAC and Stanford discovered that a single layer of diamondoids on a metal surface can emit and focus electrons into a tiny beam. Manoharan and his colleagues wondered: What would happen if they paired an electron-emitting diamondoid with another molecule that likes to grab electrons?
A valve for channeling electron flow
They discovered that the hybrid is an excellent rectifier: the electrical current flowing through the molecule was up to 50 times stronger in one direction, from electron-spitting diamondoid to electron-catching buckyball, than in the opposite direction.
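That asymmetry is usually summarized as a rectification ratio: the forward current divided by the reverse current at the same bias magnitude. A minimal sketch; the current values passed in below are illustrative placeholders, not measurements from the Nature Communications paper:

```python
# Rectification ratio: forward current over reverse current at the same
# bias magnitude. Larger ratios mean a more one-way ("valve-like") device.
def rectification_ratio(i_forward, i_reverse):
    return abs(i_forward) / abs(i_reverse)

# A forward current 50x the reverse current, matching the "up to 50 times
# stronger" asymmetry reported for the buckydiamondoid.
print(rectification_ratio(50.0, 1.0))  # → 50.0
```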
While this is not the first molecular rectifier ever invented, it’s the first one made from just carbon and hydrogen, a simplicity researchers find appealing, said Manoharan, who is an associate professor of physics at Stanford. The next step, he said, is to see if transistors can be constructed from the same basic ingredients.
“Buckyballs are easy to make — they can be isolated from soot — and the type of diamondoid we used here, which consists of two tiny cages, can be purchased commercially,” he said. “And now that our colleagues in Germany have figured out how to bind them together, others can follow the recipe. So while our research was aimed at gaining fundamental insights about a novel hybrid molecule, it could lead to advances that help make molecular electronics a reality.”
Other research collaborators came from the Catholic University of Louvain in Belgium and Kiev Polytechnic Institute in Ukraine. The primary funding for the work came from the U.S. Department of Energy Office of Science (Basic Energy Sciences, Materials Sciences and Engineering Divisions).
Abstract of Nature Communications paper
The unimolecular rectifier is a fundamental building block of molecular electronics. Rectification in single molecules can arise from electron transfer between molecular orbitals displaying asymmetric spatial charge distributions, akin to p–n junction diodes in semiconductors. Here we report a novel all-hydrocarbon molecular rectifier consisting of a diamantane–C60 conjugate. By linking both sp3 (diamondoid) and sp2 (fullerene) carbon allotropes, this hybrid molecule opposingly pairs negative and positive electron affinities. The single-molecule conductances of self-assembled domains on Au(111), probed by low-temperature scanning tunnelling microscopy and spectroscopy, reveal a large rectifying response of the molecular constructs. This specific electronic behaviour is postulated to originate from the electrostatic repulsion of diamantane–C60 molecules due to positively charged terminal hydrogen atoms on the diamondoid interacting with the top electrode (scanning tip) at various bias voltages. Density functional theory computations scrutinize the electronic and vibrational spectroscopic fingerprints of this unique molecular structure and corroborate the unconventional rectification mechanism.
Researchers at Princeton University have “crystallized” light. They are not shining light through crystal — they are actually transforming light into crystal, as part of an effort to develop exotic materials such as room-temperature superconductors.
The researchers locked together photons so that they became fixed in place. “It’s something that we have never seen before,” said Andrew Houck, an associate professor of electrical engineering and one of the researchers. “This is a new behavior for light.”
The results raise intriguing possibilities for a variety of future materials, and also address questions in condensed matter physics — the fundamental study of matter.
“We are interested in exploring — and ultimately controlling and directing — the flow of energy at the atomic level,” said Hakan Türeci, an assistant professor of electrical engineering and a member of the research team. “The goal is to better understand current materials and processes and to evaluate materials that we cannot yet create.”
The team’s findings, reported online Sept. 8 in the journal Physical Review X (open access), are part of an effort to answer fundamental questions about atomic behavior by creating a device that can simulate the behavior of subatomic particles.
Special-purpose quantum computers
Such a tool could be an invaluable method for answering questions about atoms and molecules that are not answerable even with today’s most advanced computers. In part, that’s because current computers operate under the rules of classical mechanics, while the world of atoms and photons obeys the rules of quantum mechanics, which include a number of strange and very counterintuitive features.
One of these odd properties is called “entanglement,” in which multiple particles become linked and can affect each other over long distances. A computer based on the rules of quantum mechanics could help crack problems that are currently unsolvable. But building a general-purpose quantum computer has proven to be incredibly difficult.
Another approach, which the Princeton team is taking, is to build a system that directly simulates the desired quantum behavior. Although each machine is limited to a single task, it would allow researchers to answer important questions without having to solve some of the more difficult problems involved in creating a general-purpose quantum computer.
The device could also allow physicists to explore fundamental questions about the behavior of matter by mimicking materials that only exist in physicists’ imaginations.
An ‘artificial atom’ that makes photons behave like particles
To build their machine, the researchers created a structure made of superconducting materials that contains 100 billion atoms engineered to act as a single “artificial atom.” They placed the artificial atom close to a superconducting wire containing photons.
By the rules of quantum mechanics, the photons on the wire inherit some of the properties of the artificial atom — in a sense linking them. Normally, photons do not interact with each other, but in this system, the researchers are able to create new behavior in which the photons begin to interact in some ways like particles.
“We have used this blending together of the photons and the atom to artificially devise strong interactions among the photons,” said Darius Sadri, a postdoctoral researcher and one of the authors. “These interactions then lead to completely new collective behavior for light — akin to the phases of matter, like liquids and crystals, studied in condensed matter physics.”
It is of course known from double-slit and other experiments that sometimes light behaves like a wave and other times like a particle. But the Princeton researchers have engineered a whole new particle behavior.
“Here we set up a situation where light effectively behaves like a particle in the sense that two photons can interact very strongly,” he said. “In one mode of operation, light sloshes back and forth like a liquid; in the other, it freezes.”
The current device is relatively small, with only two sites where an artificial atom is paired with a superconducting wire. But the researchers say that by expanding the device and the number of interactions, they can increase their ability to simulate more complex systems — growing from the simulation of a single molecule to that of an entire material. In the future, the team plans to build devices with hundreds of sites with which they hope to observe exotic phases of light such as superfluids and insulators.
“There is a lot of new physics that can be done even with these small systems,” said James Raftery, a graduate student in electrical engineering and one of the authors. “But as we scale up, we will be able to tackle some really interesting questions.”
The research team also included Sebastian Schmidt, a senior researcher at the Institute for Theoretical Physics at ETH Zurich, Switzerland. Support for the project was provided by: the Eric and Wendy Schmidt Transformative Technology Fund; the National Science Foundation; the David and Lucile Packard Foundation; the U.S. Army Research Office; and the Swiss National Science Foundation.
Abstract of Physical Review X paper
Here, we report the experimental observation of a dynamical quantum phase transition in a strongly interacting open photonic system. The system studied, comprising a Jaynes-Cummings dimer realized on a superconducting circuit platform, exhibits a dissipation-driven localization transition. Signatures of the transition in the homodyne signal and photon number reveal this transition to be from a regime of classical oscillations into a macroscopically self-trapped state manifesting revivals, a fundamentally quantum phenomenon. This experiment also demonstrates a small-scale realization of a new class of quantum simulator, whose well-controlled coherent and dissipative dynamics is suited to the study of quantum many-body phenomena out of equilibrium.
UT Arlington researchers have discovered a way to cool electrons to -228 °C at room temperature, which could lead to a new type of transistor that can operate at extremely low energy consumption levels.
The process involves passing electrons through a quantum well to cool them and keep them from heating. The team detailed its research in Nature Communications (open access) on Wednesday, Sept. 10.
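The headline figure of -228 °C is simply the abstract's effective electron temperature of ~45 K expressed in Celsius. A one-line conversion confirms it:

```python
# The paper reports an effective electron temperature of ~45 K;
# converting to Celsius recovers the -228 degree figure quoted above.
def kelvin_to_celsius(t_kelvin):
    return t_kelvin - 273.15

print(round(kelvin_to_celsius(45), 2))  # → -228.15
```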
“We are the first to effectively cool electrons at room temperature. Researchers have done electron cooling before, but only when the entire device is immersed into an extremely cold cooling bath … of liquid helium or liquid nitrogen,” said Seong Jin Koh, an associate professor in the UT Arlington Materials Science & Engineering Department, who led the research.
To suppress electron excitation and cool electrons, the team used a unique nanoscale structure with a source electrode, a quantum well, a tunneling barrier, a quantum dot, another tunneling barrier, and a drain electrode.
Usha Varshney, program director in the National Science Foundation’s Directorate for Engineering, which funded the research, said that when implemented in transistors, “these research findings could potentially reduce energy consumption of electronic devices by more than 10 times, compared to the present technology.”
That translates into much smaller, lighter batteries that don’t have to be charged as often.
UT Dallas scientists were also part of the research team. The National Science Foundation and the Office of Naval Research supported the research.
Abstract of Nature Communications paper
Fermi-Dirac electron thermal excitation is an intrinsic phenomenon that limits functionality of various electron systems. Efforts to manipulate electron thermal excitation have been successful when the entire system is cooled to cryogenic temperatures, typically <1 K. Here we show that electron thermal excitation can be effectively suppressed at room temperature, and energy-suppressed electrons, whose energy distribution corresponds to an effective electron temperature of ~45 K, can be transported throughout device components without external cooling. This is accomplished using a discrete level of a quantum well, which filters out thermally excited electrons and permits only energy-suppressed electrons to participate in electron transport. The quantum well (~2 nm of Cr2O3) is formed between source (Cr) and tunnelling barrier (SiO2) in a double-barrier-tunnelling-junction structure having a quantum dot as the central island. Cold electron transport is detected from extremely narrow differential conductance peaks in electron tunnelling through CdSe quantum dots, with full widths at half maximum of only ~15 mV at room temperature.
Modular genetic circuits developed at Rice University, engineered from parts of otherwise unrelated bacterial genomes, can be set up to handle multiple chemical inputs simultaneously with a minimum of interference from their neighbors.
The work, reported in the American Chemical Society journal ACS Synthetic Biology, gives scientists more options as they design synthetic cells for specific tasks, such as production of biofuels, environmental remediation, or treatments for human diseases.
The researchers are creating complex genetic logic circuits similar to those used to build traditional computers and electrical devices. In a simple circuit, if one input and another input are both present (AND gate), the circuit carries out its instruction. With genetic circuitry based on this type of Boolean logic, a genetic logic circuit might prompt the creation of a specific protein when it senses two chemicals — or prompt a cell’s DNA to repress the creation of that protein.
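The transcriptional AND gate described above maps directly onto ordinary Boolean logic. A minimal sketch in Python, with hypothetical chemical inputs standing in for the sensed molecules:

```python
# A transcriptional AND gate in Boolean terms: the output gene is expressed
# only when both chemical inputs are sensed. The inputs here are hypothetical
# stand-ins, not molecules from the ACS Synthetic Biology paper.
def and_gate(chemical_a_present, chemical_b_present):
    """Express the protein only when both inputs are present."""
    return chemical_a_present and chemical_b_present

print(and_gate(True, False))  # gate off: only one input sensed
print(and_gate(True, True))   # gate on: both inputs sensed
```

The paper's chimeric transcription factors extend this same idea to three- and four-input AND gates, with each chimera contributing one sensed input.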
Simple circuits have become easier to create as synthetic biologists develop more tools, but they require more sophisticated tools for complex problems. Rice’s Matthew Bennett and his colleagues are intent upon following a path similar to that of computer programmers, whose capabilities grew from simple Pong to the immersive worlds of modern games.
“One of the ultimate goals of this technology is to allow cells to sense and respond to their environment in programmatic ways,” said Bennett, an assistant professor of biochemistry and cell biology. “We want to be able to program cells to go into an environment and do what they’re supposed to do.
“Right now, one of the main ways we do that is through transcriptional logic gates. These are akin to electronic circuits — the logic gates in our computers. In cells, they work a little bit differently, but there are a lot of parallels.”
Logic gates designed by Bennett’s team and others react in a programmed way when they sense chemicals in their immediate environment. If certain combinations of chemicals are present in the environment, the gate will turn on a gene that may either repress or promote the expression of a protein.
The research, led by Rice graduate student David Shis, drew from a genetic toolbox of chimeric (with parts from different sources) transcription factors. These modular proteins incorporate the gene regulatory capacity of one transcription factor and the environmental sensing capabilities of another.
The researchers demonstrated that as many as four chimeras with the same DNA-binding modules can work together and serve as gates with multiple inputs, either repressing — or overriding the repression of — specific genes. They successfully tested the ability of chimera combinations in the bacteria Escherichia coli to up- or down-regulate the expression of a gene encoding green fluorescent protein.
“Often, when you make a genetic logic gate, you have to have many genes in the background to allow the gate to work,” Bennett said. “We’ve been able to eliminate the need for that by programming transcription factors — which are specific proteins that turn genes on and off — to respond to their environment directly and activate a specific gene in a very modular way.
“We can now program both environmental sensing and downstream genetic regulation into the same module,” he said.
Bennett sees synthetic biology addressing many issues. “We might be able to use cells to report on, or remediate, environmental pollution. Or we might be able to program them to find a tumor in your body and respond to it. To do that, we need to be able to instruct cells to sense the environment of the tumor and, depending on what chemicals the cells detect, respond accordingly.”
Metabolic engineers might find complex synthetic circuits that are able to adjust on the fly, he said. “In fermentation, for example, you might want gene regulation in the cells to change as a process evolves. These new circuits can sense different sugars in the culture and direct gene regulation to maximize production.”
The National Institutes of Health, through the joint National Science Foundation/National Institute of General Medical Sciences Mathematical Biology Program, and the Robert A. Welch Foundation supported the research.
Abstract of ACS Synthetic Biology paper
In prokaryotes, the construction of synthetic, multi-input promoters is constrained by the number of transcription factors that can simultaneously regulate a single promoter. This fundamental engineering constraint is an obstacle to synthetic biologists because it limits the computational capacity of engineered gene circuits. Here, we demonstrate that complex multi-input transcriptional logic gating can be achieved through the use of ligand-inducible chimeric transcription factors assembled from the LacI/GalR family. These modular chimeras each contain a ligand-binding domain and a DNA-binding domain, both of which are chosen from a library of possibilities. When two or more chimeras have the same DNA-binding domain, they independently and simultaneously regulate any promoter containing the appropriate operator site. In this manner, simple transcriptional AND gating is possible through the combination of two chimeras, and multiple-input AND gating is possible with the simultaneous use of three or even four chimeras. Furthermore, we demonstrate that orthogonal DNA-binding domains and their cognate operators allow the coexpression of multiple, orthogonal AND gates. Altogether, this work provides synthetic biologists with novel, ligand-inducible logic gates and greatly expands the possibilities for engineering complex synthetic gene circuits.
Weak repetitive transcranial magnetic stimulation (rTMS) applied to mice can shift abnormal neural connections to more normal locations in the brain, researchers from The University of Western Australia and the Université Pierre et Marie Curie in France have demonstrated.
The discovery has implications for treatment of nervous system disorders related to abnormal brain organization, such as depression, epilepsy, and tinnitus.
To better understand what magnetic stimulation does to the brain, Research Associate Professor Jennifer Rodger from UWA’s School of Animal Biology and colleagues tested a low-intensity version of the therapy — known as low-intensity repetitive transcranial magnetic stimulation (LI-rTMS) — on mice born with abnormal brain organization.
The study is described in the Journal of Neuroscience. Lead author PhD candidate Kalina Makowiecki said the research demonstrated that even at low intensities, pulsed magnetic stimulation could reduce abnormally located neural connections, shifting them towards their correct locations in the brain.
“This reorganization is associated with changes in a brain chemical called BDNF (brain-derived neurotrophic factor) and occurred in several brain regions, across a whole network.
“Our findings greatly increase our understanding of the specific cellular and molecular events that occur in the brain during this therapy and have implications for how best to use it in humans to treat disease and improve brain function,” Makowiecki suggested.
Abstract of Journal of Neuroscience paper
Repetitive transcranial magnetic stimulation (rTMS) is increasingly used as a treatment for neurological and psychiatric disorders. Although the induced field is focused on a target region during rTMS, adjacent areas also receive stimulation at a lower intensity and the contribution of this perifocal stimulation to network-wide effects is poorly defined. Here, we examined low-intensity rTMS (LI-rTMS)-induced changes on a model neural network using the visual systems of normal (C57Bl/6J wild-type, n = 22) and ephrin-A2A5−/− (n = 22) mice, the latter possessing visuotopic anomalies. Mice were treated with LI-rTMS or sham (handling control) daily for 14 d, then fluorojade and fluororuby were injected into visual cortex. The distribution of dorsal LGN (dLGN) neurons and corticotectal terminal zones (TZs) was mapped and disorder defined by comparing their actual location with that predicted by injection sites. In the afferent geniculocortical projection, LI-rTMS decreased the abnormally high dispersion of retrogradely labeled neurons in the dLGN of ephrin-A2A5−/− mice, indicating geniculocortical map refinement. In the corticotectal efferents, LI-rTMS improved topography of the most abnormal TZs in ephrin-A2A5−/− mice without altering topographically normal TZs. To investigate a possible molecular mechanism for LI-rTMS-induced structural plasticity, we measured brain derived neurotrophic factor (BDNF) in the visual cortex and superior colliculus after single and multiple stimulations. BDNF was upregulated after a single stimulation for all groups, but only sustained in the superior colliculus of ephrin-A2A5−/− mice. Our results show that LI-rTMS upregulates BDNF, promoting a plastic environment conducive to beneficial reorganization of abnormal cortical circuits, information that has important implications for clinical rTMS.
The number of new cases in Liberia is “increasing exponentially,” according to a statement Monday by the World Health Organization (WHO), and “many thousands of new cases are expected in Liberia over the coming 3 weeks.”
There’s also a 20% chance that the Ebola epidemic (as it is now called) will reach the U.S. by the end of September, according to experts writing in the journal PLOS Currents: Outbreaks (open access), Medical News Today (MNT) reported today, because Nigeria, where the outbreak has also spread, has many international travel links. An estimated 6,000 passengers fly from Nigeria to the U.S. every week.
However, U.S. health care is expected to halt transmission, limiting outbreaks to isolated cases, according to the PLOS study, and “the risk of transmission of Ebola virus disease during air travel remains low,” WHO advised on August 14.
Meanwhile a fourth patient — a World Health Organization doctor infected with Ebola virus — has arrived in the U.S. from Sierra Leone for treatment at Emory University Hospital, NBC News reported Tuesday (the third, Rick Sacra, M.D., is being treated at the University of Nebraska Medical Center).
And an undisclosed number of people who’ve been exposed to the Ebola virus have been evacuated to the U.S. by Phoenix Air Group, an air ambulance company contracted by the State Department, Yahoo News reported Tuesday.
Meanwhile, a new model by Oxford University, published in the journal eLife Monday, “suggests that Ebola’s animal reservoir, fruit bats, could spread the disease in the animal kingdom and to humans through the dense forest that spans 22 countries,” the Washington Post reported Tuesday.
On the plus side, the Bill & Melinda Gates Foundation announced today (Wednesday Sept. 10) that it will commit $50 million to support the scale-up of emergency efforts to contain the Ebola outbreak in West Africa and interrupt transmission of the virus.