Put rats in an IMAX-like surround virtual world limited to vision only, and the neurons in their hippocampi* seem to fire completely randomly — and more than half of those neurons shut down — as if they had no idea where the rat was, UCLA neurophysicists found in a recent experiment.
Put another group of rats in a real room (with sounds and odors) designed to look like the virtual room, and they were just fine.
“Since so many people are using virtual reality, it is important to understand why there are such big differences,” said Mayank Mehta, a UCLA professor of physics, neurology and neurobiology in the UCLA College and the study’s senior author.
When hippocampus neurons lose rhythm
When people walk or try to remember something, the activity in the hippocampus becomes very rhythmic and these complex, rhythmic patterns appear, Mehta said. Those rhythms facilitate the formation of memories and our ability to recall them. Mehta hypothesizes that in some people with learning and memory disorders, these rhythms are impaired.
The mechanisms by which the brain makes those cognitive maps remain a mystery, but neuroscientists have surmised that the hippocampus computes distances between the subject and surrounding landmarks, such as buildings and mountains. But in a real maze, other cues, such as smells and sounds, can also help the brain determine spaces and distances.
“Neurons involved in memory interact with other parts of the hippocampus like an orchestra,” Mehta said. “It’s not enough for every violinist and every trumpet player to play their music flawlessly. They also have to be perfectly synchronized.”
Mehta believes that by retuning and synchronizing these rhythms, doctors will be able to repair damaged memory, but said doing so remains a huge challenge.
The study was published in the journal Nature Neuroscience. The research was funded by the W.M. Keck Foundation and the National Institutes of Health.
* The hippocampus is a brain region (on both sides of the brain) involved in spatial learning and constructing and using mental maps.
Abstract of Impaired spatial selectivity and intact phase precession in two-dimensional virtual reality
During real-world (RW) exploration, rodent hippocampal activity shows robust spatial selectivity, which is hypothesized to be governed largely by distal visual cues, although other sensory-motor cues also contribute. Indeed, hippocampal spatial selectivity is weak in primate and human studies that use only visual cues. To determine the contribution of distal visual cues only, we measured hippocampal activity from body-fixed rodents exploring a two-dimensional virtual reality (VR). Compared to that in RW, spatial selectivity was markedly reduced during random foraging and goal-directed tasks in VR. Instead we found small but significant selectivity to distance traveled. Despite impaired spatial selectivity in VR, most spikes occurred within ~2-s-long hippocampal motifs in both RW and VR that had similar structure, including phase precession within motif fields. Selectivity to space and distance traveled were greatly enhanced in VR tasks with stereotypical trajectories. Thus, distal visual cues alone are insufficient to generate a robust hippocampal rate code for space but are sufficient for a temporal code.
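For a concrete sense of what “spatial selectivity” means here, below is a minimal sketch of one standard way to quantify it: the spatial-information score (bits per spike) of a cell’s firing-rate map. This is a common metric in the place-cell literature, offered only as an illustration; it is not necessarily the exact analysis used in the paper, and the rate maps below are made up.

```python
# Minimal sketch (illustrative only): spatial information carried by a cell's firing,
# I = sum_i p_i * (r_i / r_mean) * log2(r_i / r_mean), in bits per spike.
import numpy as np

def spatial_information(rate_map, occupancy):
    """rate_map: mean firing rate per spatial bin; occupancy: time spent in each bin."""
    p = occupancy / occupancy.sum()          # probability of the animal being in each bin
    mean_rate = (p * rate_map).sum()         # overall mean firing rate
    nz = rate_map > 0                        # bins with nonzero firing (log of 0 is undefined)
    ratio = rate_map[nz] / mean_rate
    return float((p[nz] * ratio * np.log2(ratio)).sum())

# A sharply place-tuned cell carries far more information per spike than one
# whose firing tells you nothing about position:
bins = np.arange(100)
occupancy = np.ones(100)                                     # equal time spent in every bin
place_cell = np.where((bins >= 30) & (bins < 40), 8.0, 0.1)  # fires mostly in one region
uniform_cell = np.full(100, 1.0)                             # fires everywhere equally
print(spatial_information(place_cell, occupancy))    # high: spatially selective
print(spatial_information(uniform_cell, occupancy))  # ~0: no spatial selectivity
```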
Activated through permanent stress, immune cells in the brain can cause changes to the brain, resulting in mental disorders, a research team headed by professor Georg Juckel, Medical Director of the Ruhr-Universität Bochum (RUB) LWL university clinic, has found. The research was based on psychoneuroimmunology, the study of the interaction between psychological processes and the nervous and immune systems of the human body.
The team focused mainly on microglia, a type of glial cell that acts as the main immune defense in the central nervous system and makes up 10–15% of all cells found within the brain. Under normal circumstances, microglia repair synapses between nerve cells in the brain and stimulate their growth. Repeatedly activated, however, microglia may damage nerve cells and trigger inflammation processes — a risk factor for mental diseases such as schizophrenia, the researchers found.
Interactions between the brain and immune system
“Originally, the brain and the immune system were considered two separate systems,” explains Juckel in RUB’s RUBIN publication. “It was assumed that the brain operates independently from the immune system and has hardly anything to do with it. This, however, is not true.
“Direct neural connections from the brain to organs of the immune system, such as the spleen, do exist. And vice versa, immune cells migrate to the brain, and local immune cells carry out various tasks there, including disposing of damaged synapses. Notably, treatment with an immune-system mediator such as interferon alpha, used in hepatitis C treatment, for example, leads to depression in 20 to 30 percent of patients.”
The RUB studies of microglia focused on patients suffering from multiple sclerosis or Alzheimer’s. “The brain areas affected by inflammation or neurodegeneration are surrounded by a circle of microglial cells,” says Juckel. “In schizophrenia patients, the number of microglial cells is considerably higher than in healthy individuals. Here, the cells cause synaptic links between neurons to degenerate.”
Microglial cells can also be activated via the peripheral immune system (outside the brain). “Acute stress stimulates the immune system. In stress situations, the body readies itself for fight or flight [and] prepares itself for potential injuries,” explains researcher Astrid Friebe, whose team at the LWL clinic lab studies the mechanisms involved in these processes. But under permanent stress, “microglial cells adapt to the new conditions, in a way. The more frequently they get triggered due to stress, the more they are inclined to remain in that mode. This is when microglial cells start to pose a danger to the brain.”
But not every individual who is under permanent stress will develop a mental disorder, the researchers note. They suspect the cause starts in the embryonic stage: U.S. researchers demonstrated in the 1950s that children born to mothers who contracted viral influenza during pregnancy were seven times as likely to develop schizophrenia later in life. The RUB researchers confirmed this hypothesis in animal models.
MIT spinoff Empatica, which is developing a medical-quality wearable device to monitor epileptic seizures* and alert caregivers, has launched an Indiegogo crowdfunding campaign to fund its development.
“When people that have epilepsy wear Embrace, they will get an alert when an unusual event happens, like a convulsive seizure,” the Indiegogo site says. “It will go via their smartphone to parents, roommates or caregivers, so somebody can check on them. Additionally, one of these caregivers can wear a ‘companion’ Embrace too.
“When the two Embraces are within range (e.g., in nearby rooms), the ‘companion’ Embrace worn by the caregivers will vibrate to alert them.”
Developed by a team led by Rosalind W. Picard, PhD, founder and director of the Affective Computing Research Group at MIT Media Lab, the Embrace wristband primarily monitors electrodermal activity (EDA) and movement. The key objective is to prevent sudden unexplained death in epilepsy (SUDEP), the number one cause of death in epilepsy (Institute of Medicine, 2012). Picard tells the dramatic story here of the accidental discovery of pre-seizure EDA surges.
Embrace is also a real-time stress indicator. “If you push yourself too much, the Embrace will gently remind you with a vibration that you need some time to recover,” the Indiegogo site says.
At launch, Embrace will have two apps available: an event detector for people with epilepsy, and an alert that triggers when your electrodermal response climbs to a level you specify (providing personal stress feedback). Embrace also comes with a diary for better understanding your daily life. Empatica plans to develop an API for additional apps.
The funding campaign runs until December 24 with a goal of raising $100,000, but has already raised more than $140,000. Backers who pledge $169 will receive a discounted Embrace device.
* One of every 26 people in the USA will suffer from epilepsy at some point in their lifetime (Institute of Medicine, 2012). Today, approximately 65 million people suffer from epilepsy worldwide.
An international team of scientists has developed a fast, low-cost way of making medical electronic touch sensors by printing conductive silver nanowire inks directly on paper, using a programmable 2D printing platform.
Anming Hu of the University of Tennessee Department of Mechanical, Aerospace and Biomedical Engineering and colleagues point out that paper, which is available worldwide at low cost, makes an excellent surface for lightweight, foldable “paper electronics” that could be made and used nearly anywhere.
Scientists have already fabricated point-of-care diagnostic tests and portable DNA detectors using paper. But these require complicated and expensive lithography manufacturing techniques.
Silver nanowire ink, which is highly conductive and stable, offers a more practical solution, Hu says. His team developed a system for printing a pattern of silver ink on paper within a few minutes and then sintering (hardening) it with the light of a camera flash.
The resulting capacitive touch device was ultrathin and ultralight and responded to touch even when curved, folded, and unfolded 15 times, and rolled and unrolled 5,000 times. It could serve as the basis for many useful applications, the researchers suggest.
The study was published in the journal ACS Applied Materials & Interfaces.
Abstract of Direct Writing on Paper of Foldable Capacitive Touch Pads with Silver Nanowire Inks
Paper-based capacitive touch pads can be fabricated utilizing high-concentration silver nanowire inks needle-printed directly onto paper substrates through a 2D programmable platform. Post deposition, silver nanowire tracks can be photonically sintered using a camera flash to reduce sheet resistance similar to thermal sintering approaches. Touch pad sensors on a variety of paper substrates can be achieved with optimized silver nanowire tracks. Rolling and folding trials, which yielded only modest changes in capacitance and no loss of function, coupled with touch pad functionality on curved surfaces, suggest sufficient flexibility and durability for paper substrate touch pads to be used in diverse applications. A simplified model to predict touch pad capacitance variation ranges with differing touch conditions was developed, with good agreement against experimental results. Such paper-based touch pads have the advantage of simple structure, easy fabrication, and fast sintering, which holds promise for numerous commercial applications including low-cost portable devices where ultrathin and lightweight features, coupled with reliable bending stability are desirable.
New observations with ESO’s Very Large Telescope (VLT) in Chile have revealed alignments over the largest structures ever discovered in the Universe. A European research team has found that the rotation axes of the central supermassive black holes in a sample of quasars are parallel to each other over distances of billions of light-years. The team has also found that the rotation axes of these quasars tend to be aligned with the vast structures in the cosmic web in which they reside.
Quasars are galaxies with very active supermassive black holes at their centers. These black holes are surrounded by spinning discs of extremely hot material that is often spewed out in long jets along their axes of rotation. Quasars can shine more brightly than all the stars in the rest of their host galaxies put together.
A team led by Damien Hutsemékers from the University of Liège in Belgium used the FORS instrument on the VLT to study 93 quasars that were known to form huge groupings spread over billions of light-years, seen at a time when the Universe was about one third of its current age.
“The first odd thing we noticed was that some of the quasars’ rotation axes were aligned with each other — despite the fact that these quasars are separated by billions of light-years,” said Hutsemékers.
The new VLT results also indicate that the rotation axes of the quasars tend to be parallel to the large-scale structure in which they find themselves — a cosmic web of filaments and clumps around huge voids where galaxies are scarce.
So, if the quasars are in a long filament then the spins of the central black holes will point along the filament. The researchers estimate that the probability that these alignments are simply the result of chance is less than 1%.
“A correlation between the orientation of quasars and the structure they belong to is an important prediction of numerical models of the evolution of our Universe. Our data provide the first observational confirmation of this effect, on scales much larger than what had been observed to date for normal galaxies,” adds Dominique Sluse of the Argelander-Institut für Astronomie in Bonn, Germany, and the University of Liège.
The team could not see the rotation axes or the jets of the quasars directly. Instead they measured the polarization of the light from each quasar and, for 19 of them, found a significantly polarized signal. The direction of this polarization, combined with other information, could be used to deduce the angle of the accretion disc and hence the direction of the spin axis of the quasar.
“The alignments in the new data, on scales even bigger than current predictions from simulations, may be a hint that there is a missing ingredient in our current models of the cosmos,” concludes Sluse.
Abstract of Alignment of quasar polarizations with large-scale structures
We have measured the optical linear polarization of quasars belonging to Gpc-scale quasar groups at redshift z ∼ 1.3. Out of 93 quasars observed, 19 are significantly polarized. We found that quasar polarization vectors are either parallel or perpendicular to the directions of the large-scale structures to which they belong. Statistical tests indicate that the probability that this effect can be attributed to randomly oriented polarization vectors is of the order of 1%. We also found that quasars with polarization perpendicular to the host structure preferentially have large emission line widths while objects with polarization parallel to the host structure preferentially have small emission line widths. Considering that quasar polarization is usually either parallel or perpendicular to the accretion disk axis depending on the inclination with respect to the line of sight, and that broader emission lines originate from quasars seen at higher inclinations, we conclude that quasar spin axes are likely parallel to their host large-scale structures.
A novel robotic walker that helps patients carry out therapy sessions to regain their leg movements and natural gait has been invented by a team of researchers led by assistant professor Yu Haoyong from the National University of Singapore Department of Biomedical Engineering.
Survivors of stroke or other neurological conditions such as spinal cord injuries, traumatic brain injuries and Parkinson’s disease often struggle with mobility. To regain their motor functions, these patients are required to undergo challenging physical therapy sessions.
The robotic walker supports a patient’s weight while providing the right amount of force at the pelvis to help the patient walk with a natural gait.
The system also increases productivity of physiotherapists and improves the quality of rehabilitation sessions. Quantitative data can be collected during the therapy sessions to allow doctors and physiotherapists to monitor the progress of the patient’s rehabilitation.
How it works
The robotic walker comprises six modules:
- A suite of body sensors measures the gait of the patient so that the walker can provide the right amount of support to help the patient walk with a natural gait.
- The electrical stimulation unit can deliver targeted electrical current to stimulate the correct muscle at the correct timing to facilitate joint movement.
- The walker can provide assistive force, resistive force, and disturbance force, depending on the training requirements set by the therapists, so patients can go through different training schemes that are often difficult to achieve manually.
- The patient interacts with the walker through a force sensor that detects the user’s intent. The intelligent control system uses this information, as well as the gait information provided by the body sensors, to control the movement of the walker (a sketch of this control loop follows the list).
- The patient can practice gait movements by walking over ground instead of on a treadmill. Such features enable the gait training to be conducted in a natural and intuitive way.
- Gait kinematics and muscle activation pattern data allow for monitoring the progress of the patients’ recovery.
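As a rough illustration of the intent-following control described in the list above, here is a hypothetical sketch in which the pelvic force sensor’s reading is converted into a walker velocity command through a simple admittance-style law, with a gain that can be made assistive or resistive. This is an illustrative scheme only, not the NUS team’s controller; read_interaction_force() and command_velocity() are placeholders standing in for the walker’s hardware interface.

```python
# Hypothetical sketch of an intent-following (admittance-style) control step.
# The hardware interface functions are placeholders, not a real API.
def read_interaction_force() -> float:
    """Placeholder: force (N) the patient applies at the pelvic interface, forward positive."""
    raise NotImplementedError

def command_velocity(v: float) -> None:
    """Placeholder: send a forward velocity command (m/s) to the walker's drive wheels."""
    raise NotImplementedError

def control_step(assist_gain: float = 0.2, admittance: float = 0.01, v_max: float = 0.8) -> None:
    force = read_interaction_force()
    # assist_gain > 0 amplifies the patient's own effort; a negative value acts as
    # a resistive training force, per the training modes described above.
    v = admittance * (1.0 + assist_gain) * force
    command_velocity(max(-v_max, min(v_max, v)))   # clamp to a safe speed range
```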
Besides improving the quality of rehabilitation sessions, the robotic walker will also relieve physiotherapists from the physical strain of assisting patients with the exercises.
Currently, gait training requires one or two physiotherapists to support the patient’s body weight and trunk, and an additional physiotherapist may be needed to move the paretic leg. Such therapy sessions are labor-intensive and ergonomically unfavorable for the physiotherapists, who often suffer from back injuries. This limits the quality, duration and frequency of rehabilitation sessions.
Haoyong is collaborating with spinoff Hope Technik to fine-tune and commercialize the robotic walker. He is also planning to conduct clinical studies to validate the training effects on patients and to develop novel therapy regimes together with clinicians at the National University Hospital.
“Our vision is for the robotic walker to be installed at outpatient clinics and rehabilitation centres to benefit patients who need therapy sessions. There is also a possibility that patients can perform exercises in the comfort of their own homes,” said Haoyong.
China and ‘one or two others’ can shut US electric grids and other critical infrastructure, says NSA director
China and “one or two others” can shut down U.S. electric grids and other critical infrastructure, and are performing electronic reconnaissance on a regular basis, said NSA director Admiral Michael Rogers, testifying Thursday (Nov. 20) at a House Select Intelligence Committee hearing on U.S. efforts to combat cyber threats.
“All of that leads me to believe it is only a matter of when, not if, we are going to see something dramatic,” he said. In cyberspace, “you can literally do almost anything you want, and there is not a price to pay for it.
“China’s economic cyber espionage … has grown exponentially in terms of volume and damage done to our nation’s economic future,” he added. “The Chinese intelligence services that conduct these attacks have little to fear because we have no practical deterrents to that theft. This problem is not going away until that changes.”
Georgia Tech associate professor Mark Riedl has developed a new kind of “Turing test” — a test proposed in 1950 by computing pioneer Alan Turing to determine whether a machine or computer program exhibits human-level intelligence.
Most Turing test designs require a machine to engage in dialogue and convince (trick) a human judge that it is an actual person. But creating certain types of art also requires intelligence, leading Riedl to consider whether that approach might provide a better gauge of whether a machine can replicate human thought.
“It’s important to note that Turing never meant for his test to be the official benchmark as to whether a machine or computer program can actually think like a human,” Riedl said.
“And yet it has, and it has proven to be a weak measure because it relies on deception. This proposal suggests that a better measure would be a test that asks an artificial agent to create an artifact requiring a wide range of human-level intelligent capabilities.”
The Lovelace 2.0 Test
To that end, Riedl has created the Lovelace 2.0 Test of Artificial Creativity and Intelligence.
Here are the basic test rules (a code sketch of the evaluation loop follows the list):
- The artificial agent passes if it develops a creative artifact from a subset of artistic genres deemed to require human-level intelligence and the artifact meets certain creative constraints given by a human evaluator.
- The human evaluator must determine that the object is a valid representative of the creative subset and that it meets the criteria. (The created artifact needs only meet these criteria — it does not need to have any aesthetic value.)
- A human referee must determine that the combination of the subset and criteria is not an impossible standard.
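To make the rules above concrete, here is a minimal sketch of a single Lovelace 2.0 trial written as an evaluation loop. The Challenge type and the agent, evaluator, and referee callables are assumptions introduced for illustration; they are not definitions from Riedl’s paper.

```python
# Minimal sketch (illustrative only) of one Lovelace 2.0 trial.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Challenge:
    genre: str               # creative subset, e.g. "short story"
    constraints: List[str]   # evaluator-supplied constraints, e.g. "involves a cat and a tuba"

def lovelace_2_trial(agent: Callable[[Challenge], object],
                     evaluator_accepts: Callable[[Challenge, object], bool],
                     referee_feasible: Callable[[Challenge], bool],
                     challenge: Challenge) -> bool:
    """The agent passes the trial if the referee deems the challenge feasible and the
    human evaluator judges the artifact a valid member of the genre meeting the constraints."""
    if not referee_feasible(challenge):
        # Rule 3: an impossible standard voids the trial rather than failing the agent.
        raise ValueError("challenge rejected by referee as an impossible standard")
    artifact = agent(challenge)                      # Rule 1: agent creates an artifact
    return evaluator_accepts(challenge, artifact)    # Rule 2: evaluator judges validity
```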
The Lovelace 2.0 Test stems from the original Lovelace* Test as proposed by Bringsjord, Bello and Ferrucci in 2001. The original test required that an artificial agent produce a creative item in such a way that the agent’s designer cannot explain how it developed the creative item. The item, thus, must be created in such a way that is valuable, novel and surprising.
Riedl contends that the original Lovelace test does not establish clear or measurable parameters. Lovelace 2.0, however, enables the evaluator to work with defined constraints without making value judgments such as whether the artistic object created surprise.
Riedl’s paper, available here, will be presented at Beyond the Turing Test, an Association for the Advancement of Artificial Intelligence (AAAI) workshop to be held January 25–29, 2015, in Austin, Texas.
* In honor of Ada Lovelace, considered the world’s first computer programmer.
Northwestern University scientists have demonstrated a simple but powerful tool called NanoFlare that can detect live cancer cells in the bloodstream, potentially long before the cells settle somewhere in the body and form a dangerous tumor.
The NanoFlare technology is the first genetic-based approach that is able to detect live circulating tumor cells out of the complex matrix that is human blood — no easy feat. The NanoFlares are tiny spherical nucleic acids with gold nanoparticle cores outfitted with single-stranded DNA “flares” (glowing markers).
In a breast cancer study, the NanoFlares easily entered cells and lit up the cell if a biomarker target was present, even if only a trace amount.
“This technology has the potential to profoundly change the way breast cancer in particular and cancers in general are both studied and treated,” said Chad A. Mirkin, PhD, a noted nanomedicine expert and a corresponding author of the study.
Mirkin’s colleagues C. Shad Thaxton, MD, and Chonghui Cheng, MD, both of Northwestern University Feinberg School of Medicine, are also corresponding authors.
The research team, in a paper published the week of Nov. 17 in the Proceedings of the National Academy of Sciences (PNAS), reports two key innovations:
- The ability to track tumor cells in the bloodstream based on genetic content located within the cell itself, as opposed to using proteins located on the cell’s surface (current technology)
- The ability to collect the cells in live form, so they may be studied and used to inform researchers and clinicians as to how to treat a disease — an important step toward personalized medicine
“Cancers are very genetically diverse, and it’s important to know what cancer subtype a patient has,” Mirkin said. “Now you can think about collecting a patient’s cells and studying how those cells respond to different therapies. The way a patient responds to treatment depends on the genetic makeup of the cancer.”
Mirkin is the George B. Rathmann Professor of Chemistry in the Weinberg College of Arts and Sciences and professor of medicine, chemical and biological engineering, biomedical engineering and materials science and engineering.
How it works
A NanoFlare is designed to recognize a specific genetic code snippet associated with a cancer. The core nanoparticle, only 13 nanometers in diameter, enters cells, and the NanoFlare seeks its target. If the genetic target is present in the cell, the NanoFlare binds to it and the reporter “flare” is released to produce a fluorescent signal. The researchers then can isolate those cells.
“The NanoFlare turns on a light in the cancer cells you are looking for,” said Thaxton, an assistant professor of urology at Feinberg. “That the NanoFlares are effective in the complex matrix of human blood is a great technical advance. We can find small numbers of cancer cells in blood, which really is like searching for a needle in a haystack.”
Once they identified the cancer cells, the researchers were able to separate them from normal cells. This ability to isolate, culture and grow the cancer cells will allow researchers to zero in on the cancer cells that matter to the health of the patient. Most circulating tumor cells may not metastasize, and analysis of the cancer cells could identify those that will.
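As a toy illustration of that sorting step, the sketch below gates flow-cytometry-style fluorescence values against a threshold set from control (unspiked) blood and flags the bright events as candidate circulating tumor cells. All numbers are invented for illustration and do not come from the PNAS study.

```python
# Toy gating sketch (invented numbers): flag cells whose NanoFlare fluorescence
# exceeds a gate derived from control blood; the bright spiked-in cells are caught,
# along with a small number of false positives from the gate's tail.
import numpy as np

rng = np.random.default_rng(42)
control = rng.lognormal(mean=1.0, sigma=0.3, size=10_000)   # fluorescence of normal blood cells
sample = np.concatenate([
    rng.lognormal(1.0, 0.3, 9_990),                         # mostly normal cells...
    rng.lognormal(3.0, 0.3, 10),                            # ...plus a few bright CTC-like events
])

gate = np.percentile(control, 99.9)                         # gate set from the control sample
candidates = sample[sample > gate]
print(f"gate = {gate:.1f}; candidate CTC events: {candidates.size} of {sample.size}")
```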
“This could lead to personalized therapy where we can look at how an individual’s cells respond to different therapeutic cocktails,” said Mirkin, whose lab developed NanoFlares in 2007.
In the study, the genetic targets were messenger RNA (mRNA) that code for certain proteins known to be biomarkers for aggressive breast cancer cells.
The research team first used the blood of healthy individuals, spiking some of the blood with living breast cancer cells to see if the NanoFlares could detect them. (Unspiked blood was used as a control.)
Cheng, an assistant professor of medicine in hematology/oncology at Feinberg, provided the cell lines and NanoFlare targets that the researchers used to model blood samples taken from breast cancer patients.
The research team tested four different NanoFlares, each with a different genetic target relevant to breast cancer metastasis. The technology successfully detected the cancer cells with less than 1 percent incidence of false-negative results.
Currently, in another study, the researchers are focused on detecting circulating tumor cells in the blood of patients with a diagnosis of breast cancer.
“When it comes to detecting and treating cancer, the mantra is the earlier, the better,” Thaxton said. “This technology may enable us to better detect circulating cancer cells and provides another tool to add to the toolkit of cancer diagnosis.”
Mirkin, Thaxton and Cheng are members of the Robert H. Lurie Comprehensive Cancer Center of Northwestern University.
The National Cancer Institute, the National Institutes of Health, the American Cancer Society, the Air Force Office of Scientific Research, and the Howard Hughes Medical Institute supported the research.
Abstract of NanoFlares for the detection, isolation, and culture of live tumor cells from human blood
Metastasis portends a poor prognosis for cancer patients. Primary tumor cells disseminate through the bloodstream before the appearance of detectable metastatic lesions. The analysis of cancer cells in blood—so-called circulating tumor cells (CTCs)—may provide unprecedented opportunities for metastatic risk assessment and investigation. NanoFlares are nanoconstructs that enable live-cell detection of intracellular mRNA. NanoFlares, when coupled with flow cytometry, can be used to fluorescently detect genetic markers of CTCs in the context of whole blood. They allow one to detect as few as 100 live cancer cells per mL of blood and subsequently culture those cells. This technique can also be used to detect CTCs in a murine model of metastatic breast cancer. As such, NanoFlares provide, to our knowledge, the first genetic-based approach for detecting, isolating, and characterizing live cancer cells from blood and may provide new opportunities for cancer diagnosis, prognosis, and personalized therapy.
Computer software only recently became smart enough to recognize objects in photographs. Now, Stanford researchers using machine learning have created a system that takes the next step, writing a simple story of what’s actually happening in any digital image.
“The system can analyze an unknown image and explain it in words and phrases that make sense,” said Fei-Fei Li, a professor of computer science and director of the Stanford Artificial Intelligence Lab.
“This is an important milestone,” Li said. “It’s the first time we’ve had a computer vision system that could tell a basic story about an unknown image by identifying discrete objects and also putting them into some context.”
At the heart of the Stanford system are algorithms that enable the system to improve its accuracy by scanning scene after scene, looking for patterns, and then using the accumulation of previously described scenes to extrapolate what is being depicted in the next unknown image.
“It’s almost like the way a baby learns,” Li said.
She and her collaborators, including Andrej Karpathy, a graduate student in computer science, describe their approach in a paper submitted in advance of a forthcoming conference on cutting edge research in the field of computer vision.
Eventually these advances could lead to robotic systems that can navigate unknown situations. In the near term, machine-based systems that can discern the story in a picture promise to enable people to search photo or video archives and find specific images.
“Most of the traffic on the Internet is visual data files, and this might as well be dark matter as far as current search tools are concerned,” Li said. “Computer vision seeks to illuminate that dark matter.”
These findings are based on two years of effort that flows from research that Li has been pursuing for a decade. Her work builds on advances that have come, slowly at times, over the last 50 years since MIT scientist Seymour Papert convened a “summer project” to create computer vision in 1966.
It took researchers 20 years to create systems that could take the relatively simple first step of recognizing discrete objects in photographs.
Machine learning algorithms
More recently the emergence of the Internet has helped to propel computer vision. On one hand, the growth of photo and video uploads has created a demand for tools to sort, search and sift visual information. On the other, sophisticated algorithms running on powerful computers have led to electronic systems that can train themselves by performing repetitive tasks, improving as they go.
Computer scientists call this machine learning, and Li likened this to how a child learns soccer by getting out and kicking the ball. A coach might demonstrate how to kick, and comment on the child’s technique. But improvement occurs from within as the child’s eyes, brain, nerves and muscles make tiny adjustments.
Li’s latest algorithms incorporate work that her researchers and others have done. This includes training their system on a visual dictionary, using a database of more than 14 million objects. Each object is described by a mathematical term, or vector, that enables the machine to recognize the shape the next time it is encountered. Those mathematical definitions are linked to the words humans would use to describe the objects, be they cars, carrots, men, mountains or zebras.
Li played a leading role in creating this training tool, the ImageNet project, but her current work goes well beyond memorizing this visual dictionary.
Her team’s new computer vision algorithm trained itself by looking for patterns in a visual dictionary, but this time a dictionary of scenes, a more complicated task than looking just at objects.
This was a smaller database, made up of tens of thousands of images. Each scene is described in two ways: in mathematical terms that the machine could use to recognize similar scenes and also in a phrase that humans would understand. For instance, one image might be “cat sits on keyboard” while another could be “girl rides on horse in field.”
These two databases – one of objects and the other of scenes – served as training material. Li’s machine-learning algorithm analyzed the patterns in these predefined pictures and then applied its analysis to unknown images, using what it had learned to identify individual objects and provide some rudimentary context. In other words, it told a simple story about the image.
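As a toy illustration of the “visual dictionary” idea described above, the sketch below stores each known object as a word paired with a feature vector and describes an unknown image by finding the closest entries by cosine similarity. This is a simplified stand-in, not the Stanford system (which aligns image features with generated sentences); the random vectors here merely stand in for learned features.

```python
# Toy "visual dictionary" lookup: nearest labeled feature vectors by cosine similarity.
import numpy as np

def build_dictionary(labeled_features):
    """labeled_features: list of (word, feature_vector) pairs."""
    words = [w for w, _ in labeled_features]
    vecs = np.stack([v / np.linalg.norm(v) for _, v in labeled_features])
    return words, vecs

def describe(image_vec, words, vecs, top_k=3):
    """Return the top_k dictionary words closest (by cosine similarity) to the image vector."""
    q = image_vec / np.linalg.norm(image_vec)
    sims = vecs @ q
    best = np.argsort(-sims)[:top_k]
    return [(words[i], float(sims[i])) for i in best]

# Example with random vectors standing in for learned features:
rng = np.random.default_rng(0)
dictionary = [("cat", rng.normal(size=128)),
              ("keyboard", rng.normal(size=128)),
              ("horse", rng.normal(size=128))]
words, vecs = build_dictionary(dictionary)
print(describe(rng.normal(size=128), words, vecs))
```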
University of Michigan scientists have come up with a possible explanation for the impressive ability of neurons to perform a wide range of functions.
They explored this using the C. elegans* roundworm. They found that a single neuron in C. elegans regulates both the speed and direction in which the worm moves, shedding light on how the human brain works, say investigators in the lab of Shawn Xu, a faculty member in the University of Michigan Life Sciences Institute.
The trick: the neuron is apparently able to route information through multiple downstream neural circuits, with each circuit controlling a specific behavioral output.
“Understanding how the nervous system and genes lead to behavior is a fundamental question in neuroscience, and we wanted to figure out how C. elegans are able to perform a wide range of complex behaviors with their small nervous systems,” Xu said. “Scientists think that even though humans have billions of neurons, some perform multiple functions.”
Both analog and digital
Xu’s team used a multifaceted approach, integrating calcium imaging, optogenetics, molecular genetics, laser ablation, and electrophysiology at single-neuron resolution. They found that C. elegans synapses encode both analog- and digital-like behavioral outputs.
The model neuron studied, called AIY, regulates at least two distinct motor (movement) outputs: locomotion speed and direction-switch. AIY interacts with two circuits, one that is inhibitory and controls changes in the direction of the worm’s movement, and a second that is excitatory and controls speed.
“It’s important to note that these two circuits have connections with other neurons and may cross-talk with each other,” Xu said. “Neuronal control of behavior is very complex.”
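As a toy illustration of the analog-versus-digital idea, the sketch below routes a single presynaptic activity level through two invented transfer functions: a graded, wide-dynamic-range “speed” output and a sharply thresholded, narrow-dynamic-range “direction-switch” output. It is illustrative only, not the biophysical model in the Cell paper.

```python
# Toy illustration: one presynaptic signal, two downstream transfer functions.
import numpy as np

def speed_output(aiy_activity: float) -> float:
    """Excitatory, graded over a wide range: speed scales smoothly with AIY activity."""
    return float(np.clip(aiy_activity, 0.0, 1.0))

def direction_switch(aiy_activity: float, threshold: float = 0.5, steepness: float = 40.0) -> float:
    """Inhibitory, narrow dynamic range: behaves like an on/off switch around a threshold."""
    return float(1.0 / (1.0 + np.exp(-steepness * (aiy_activity - threshold))))

for a in (0.1, 0.4, 0.6, 0.9):
    print(f"AIY activity {a:.1f} -> speed {speed_output(a):.2f}, switch probability {direction_switch(a):.2f}")
```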
Xu is also a professor of molecular and integrative physiology at the U-M Medical School.
The findings were published online in the journal Cell. The research is also featured on the cover. The research was supported by the National Institutes of Health.
* C. elegans has a simple nervous system, containing only 302 neurons, making it ideal as a model for neurological functions.
Abstract of Cell paper
Model organisms usually possess a small nervous system but nevertheless execute a large array of complex behaviors, suggesting that some neurons are likely multifunctional and may encode multiple behavioral outputs. Here, we show that the C. elegans interneuron AIY regulates two distinct behavioral outputs: locomotion speed and direction-switch by recruiting two different circuits. The “speed” circuit is excitatory with a wide dynamic range, which is well suited to encode speed, an analog-like output. The “direction-switch” circuit is inhibitory with a narrow dynamic range, which is ideal for encoding direction-switch, a digital-like output. Both circuits employ the neurotransmitter ACh but utilize distinct postsynaptic ACh receptors, whose distinct biophysical properties contribute to the distinct dynamic ranges of the two circuits. This mechanism enables graded C. elegans synapses to encode both analog- and digital-like outputs. Our studies illustrate how an interneuron in a simple organism encodes multiple behavioral outputs at the circuit, synaptic, and molecular levels.
Neuroscientists have discovered mechanisms that enable certain brain cells to persuade others to create “the wave” (a wave of standing spectators that travels through a crowd*), which may help understand more about neurocognitive disorders such as dementia, the researchers say.
Inhibitory neurons** can persuade networks of other neurons to imitate their vibrations, setting off global synchronous oscillations in the brain. The neuroscientists, at Imperial College London and the Max Planck Institute for Brain Research, believe these collective synchronous oscillations play a key role in cognitive function.
This study was published (open access) Tuesday (Nov. 18) in the journal Nature Communications.
Claudia Clopath, co-author from the Department of Bioengineering at Imperial College London, says disruptions to the wave may contribute to neurocognitive disorders such as dementia. “Our hope is that ultimately our research will lead to new insights into these disorders and how they can be treated.”
The researchers developed a mathematical model showing the two mechanisms that inhibitory neurons need to convince others to join them in their rhythmical vibrations. The first is the mechanism that enables the inhibitory neurons to vibrate on their own, known as sub-threshold resonance.
The second mechanism is a nanoscopic hole known as a gap-junction. There are many of these on the surface of the inhibitory neuron and they allow neurons to communicate directly with one another, enabling inhibitory neurons to set off a collective vibration.
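For intuition about how coupled intrinsic oscillators can fall into a collective rhythm, here is a generic toy simulation using the classic Kuramoto phase-oscillator model: each inhibitory neuron’s sub-threshold resonance is reduced to an intrinsic frequency, and gap-junction coupling is lumped into a single coupling constant K. This is a textbook illustration of synchronization, not the mathematical model developed in the Nature Communications paper.

```python
# Generic Kuramoto toy: stronger coupling pulls oscillators into a shared rhythm.
import numpy as np

def order_parameter(theta):
    """r = 1 means all oscillators share the same phase; r near 0 means they are scattered."""
    return abs(np.exp(1j * theta).mean())

def simulate(K, n=100, T=20.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    omega = rng.normal(2 * np.pi * 8.0, 2 * np.pi * 0.5, n)   # intrinsic frequencies near 8 Hz
    theta = rng.uniform(0, 2 * np.pi, n)                      # random starting phases
    for _ in range(int(T / dt)):
        z = np.exp(1j * theta).mean()                         # population mean field r * e^(i*psi)
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return order_parameter(theta)

print(f"weak coupling:   r = {simulate(K=0.5):.2f}")   # oscillators drift apart
print(f"strong coupling: r = {simulate(K=20.0):.2f}")  # oscillators lock into a collective rhythm
```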
The fact that inhibitory neurons are able to determine how and when whole networks of neurons will vibrate suggests that they are much more important in brain function than scientists had previously thought, say the researchers.
The next step will be research on inhibitory neurons to fully understand why vibrations are important for cognitive functions. The neuroscientists believe that there may ultimately be a way to manipulate inhibitory neurons to improve how they vibrate, which might one day lead to better treatments for people with neurocognitive diseases.
* Known as the “Mexican wave” outside North America
** Neurons belong to one of two groups: inhibitory (inhibit other neurons from firing) or excitatory (stimulate firing).
ETH Zurich researchers have developed a novel gene regulation method that allows specific brainwaves to control gene expression (conversion of a gene into a protein) for therapeutic purposes.
The concept is a thought-controlled implant that could one day help combat neurological diseases, such as chronic headaches, back pain, and epilepsy.
An EEG-based BCI (brain–computer interface) would detect the patient’s related brainwave patterns, which would be used to trigger a gene switch that modulates the production of certain chemical agents (such as drugs).
The experimental system is described in the journal Nature Communications. The implant was initially tested in cell cultures and mice, and controlled by the brainwaves of various test subjects.
Wireless transmission from brain to implant
The system uses an EEG headset (a MindSet from NeuroSky was used in the experiment) to detect brainwaves, which are wirelessly transmitted via Bluetooth to a microcontroller (an Arduino Uno). The microcontroller drives an inductive field generator (transmitter) that wirelessly powers an inductively linked optogenetic near-infrared* LED for defined periods of time (60 min/30 s). The light illuminates a culture chamber containing genetically modified cells, causing them to produce the desired protein. The protein then diffuses from the culture chamber of the implant into the mouse’s bloodstream. For the tests, the researchers used SEAP, an easy-to-detect human model protein.
To regulate the quantity of released protein, three mental states were used: concentration, meditation, and biofeedback. Test subjects who played Minecraft on the computer (concentration) induced average SEAP values in the bloodstream of the mice. When the subjects were completely relaxed (meditation), the researchers recorded very high SEAP values in the test animals. For biofeedback, the test subjects observed the LED light of the implant in the body of the mouse and were able to consciously switch the light on or off via this visual feedback, which in turn was reflected in the varying amounts of SEAP in the bloodstream of the mice.
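A hypothetical sketch of the control logic in a setup like this might look like the following: a mental-state score from the EEG headset gates the inductive transmitter that powers the implant’s near-infrared LED for a fixed illumination window. The threshold, the timing, and the read_meditation_score() and set_field_generator() functions are placeholders invented for illustration, not the ETH Zurich implementation.

```python
# Hypothetical control-loop sketch; the device interface functions are placeholders.
import time

MEDITATION_THRESHOLD = 60     # assumed 0-100 score above which the implant LED is switched on
ILLUMINATION_SECONDS = 30     # illuminate the culture chamber in short, defined bursts

def read_meditation_score() -> int:
    """Placeholder: would return the headset's current mental-state value via Bluetooth."""
    raise NotImplementedError

def set_field_generator(on: bool) -> None:
    """Placeholder: would switch the inductive transmitter that powers the implant LED."""
    raise NotImplementedError

def control_loop() -> None:
    while True:
        if read_meditation_score() >= MEDITATION_THRESHOLD:
            set_field_generator(True)          # LED on -> designer cells produce the protein
            time.sleep(ILLUMINATION_SECONDS)
            set_field_generator(False)
        else:
            time.sleep(1)                      # poll roughly once per second
```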
“Controlling genes in this way is completely new and is unique in its simplicity,” explains Martin Fussenegger, Professor of Biotechnology and Bioengineering in the ETH Department of Biosystems.
According to the researchers, far into the future, patients may learn to generate specific mental states (for pain relief or locked-in syndrome, for example) to drive therapeutic implants to produce relevant doses of protein pharmaceuticals; or, for neurological disorders such as epilepsy, the system could autonomously produce the chemicals, with closed-loop control.
* Near-infrared light was used because it is generally not harmful to human cells, can penetrate deep into the tissue, and enables the function of the implant to be visually tracked.
Abstract of Mind-controlled transgene expression by a wireless-powered optogenetic designer cell implant
Synthetic devices for traceless remote control of gene expression may provide new treatment opportunities in future gene- and cell-based therapies. Here we report the design of a synthetic mind-controlled gene switch that enables human brain activities and mental states to wirelessly programme the transgene expression in human cells. An electroencephalography (EEG)-based brain–computer interface (BCI) processing mental state-specific brain waves programs an inductively linked wireless-powered optogenetic implant containing designer cells engineered for near-infrared (NIR) light-adjustable expression of the human glycoprotein SEAP (secreted alkaline phosphatase). The synthetic optogenetic signalling pathway interfacing the BCI with target gene expression consists of an engineered NIR light-activated bacterial diguanylate cyclase (DGCL) producing the orthogonal second messenger cyclic diguanosine monophosphate (c-di-GMP), which triggers the stimulator of interferon genes (STING)-dependent induction of synthetic interferon-β promoters. Humans generating different mental states (biofeedback control, concentration, meditation) can differentially control SEAP production of the designer cells in culture and of subcutaneous wireless-powered optogenetic implants in mice.
A new study shows for the first time that playing action video games improves learning capabilities more generally, not just the skills taught in the game.
According to Daphne Bavelier, a research professor in brain and cognitive sciences at the University of Rochester, our brains keep predicting what will come next when listening to a conversation, driving, or even performing surgery. “To sharpen its prediction skills, our brains constantly build models, or ‘templates,’ of the world,” she explained. “The better the template, the better the performance. And now we know playing action video games actually fosters better templates.”
Action Players vs. Non-Action Players
In the current study, published in the Proceedings of the National Academy of Sciences, Bavelier and her team first used a pattern discrimination task to compare action video game players’ visual performance with that of individuals who do not play action video games.
The action gamers outperformed the non-action gamers. The key to the action gamers’ success, the researchers found, was that their brains used a better template for the task at hand.
So the question was: were habitual players of fast-paced, action-rich video games already endowed with better templates independently of their game play, or did the action game play itself lead them to have better templates?
To answer that, the researchers recruited individuals with little video game experience, and as part of the experiment, asked them to play video games for 50 hours over the course of nine weeks. One group played action video games, e.g., Call of Duty. The second group played 50 hours of non-action video games, such as The Sims.
The trainees were tested on a pattern discrimination task before and after the video game “training.” The test showed that the action video game players improved their templates, compared to the control group, who played the non-action video games.
Measuring Learning
The authors then turned to neural modeling to investigate how action video games may foster better templates. When the researchers gave action gamers a perceptual learning task, the team found that the action video game players were able to build and fine-tune templates quicker than non-action game control participants. And they did so on the fly as they engaged in the task.
Being a better learner means developing the right templates faster and thus better performance. And playing action video games, the research team found, boosts that process.
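To see why a better “template” translates into better performance, here is a toy signal-detection sketch: a matched-filter observer identifies which of two stimuli was shown in external noise, using internal templates of varying quality. It illustrates the general idea only, not the perceptual template model fits reported in the PNAS paper; all stimuli and parameters are invented.

```python
# Toy matched-filter observer: a template closer to the true signal yields higher accuracy.
import numpy as np

rng = np.random.default_rng(0)
dim = 200
s1, s2 = rng.normal(size=dim), rng.normal(size=dim)      # the two possible stimuli

def make_template(signal, quality):
    """quality = 1 gives an exact template; lower values blur it with internal error."""
    return quality * signal + (1 - quality) * rng.normal(size=dim)

def percent_correct(quality, external_noise=3.0, trials=5000):
    t1, t2 = make_template(s1, quality), make_template(s2, quality)
    correct = 0
    for _ in range(trials):
        truth = rng.integers(2)                                   # which stimulus is shown
        stimulus = (s1 if truth == 0 else s2) + external_noise * rng.normal(size=dim)
        choice = 0 if stimulus @ t1 > stimulus @ t2 else 1        # pick the better-matching template
        correct += int(choice == truth)
    return 100.0 * correct / trials

print(f"poor template: {percent_correct(quality=0.3):.1f}% correct")
print(f"good template: {percent_correct(quality=0.9):.1f}% correct")
```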
“When they began the perceptual learning task, action video gamers were indistinguishable from non-action gamers; they didn’t come to the task with a better template,” said Bavelier. “Instead, they developed better templates for the task much, much faster, showing an accelerated learning curve.”
The researchers also found that the action gamers’ improved performance is a lasting effect. When tested several months to a year later, the action-trained participants still outperformed the other participants, suggesting that they retained their ability to build better templates.
Bavelier’s team is currently investigating which characteristics in action video games are key to boost players’ learning. “Games other than action video games may be able to have the same effect,” she said. “They may need to be fast paced, and require the player to divide his or her attention, and make predictions at different time scales.”
Researchers from Princeton University, University of Geneva, University of Wisconsin-Madison, and Ohio State University also contributed to the study. The Office of Naval Research, the Swiss National Foundation, The Human Frontier Science Program, and the National Eye Institute supported the research.
Abstract of Action video game play facilitates the development of better perceptual templates
The field of perceptual learning has identified changes in perceptual templates as a powerful mechanism mediating the learning of statistical regularities in our environment. By measuring threshold-vs.-contrast curves using an orientation identification task under varying levels of external noise, the perceptual template model (PTM) allows one to disentangle various sources of signal-to-noise changes that can alter performance. We use the PTM approach to elucidate the mechanism that underlies the wide range of improvements noted after action video game play. We show that action video game players make use of improved perceptual templates compared with nonvideo game players, and we confirm a causal role for action video game play in inducing such improvements through a 50-h training study. Then, by adapting a recent neural model to this task, we demonstrate how such improved perceptual templates can arise from reweighting the connectivity between visual areas. Finally, we establish that action gamers do not enter the perceptual task with improved perceptual templates. Instead, although performance in action gamers is initially indistinguishable from that of nongamers, action gamers more rapidly learn the proper template as they experience the task. Taken together, our results establish for the first time to our knowledge the development of enhanced perceptual templates following action game play. Because such an improvement can facilitate the inference of the proper generative model for the task at hand, unlike perceptual learning that is quite specific, it thus elucidates a general learning mechanism that can account for the various behavioral benefits noted after action game play.
Is it possible to rapidly increase (or decrease) the amount of information the brain can store?
A new international study led by the Research Institute of the McGill University Health Centre (RI-MUHC) suggests it may be. The researchers have identified a molecule that limits memory formation; when it is suppressed, brain function and memory recall improve. Published in the latest issue of Cell Reports, the study has implications for neurodevelopmental and neurodegenerative diseases, such as autism spectrum disorders and Alzheimer’s disease.
“Our findings show that the brain has a key protein called FXR1P (Fragile X Related Protein 1) that limits the production of molecules necessary for memory formation,” says RI-MUHC neuroscientist Keith Murai, the study’s senior author and Associate Professor in the Department of Neurology and Neurosurgery at McGill University. “When this brake-protein is suppressed, the brain is able to store more information.”
Murai and his colleagues used a mouse model to study how changes in brain cell connections produce new memories. When FXR1P was selectively removed from certain parts of the brain, new molecules were produced. They strengthened connections between brain cells, which correlated with improved memory and recall in the mice.
Brain-disease link
“The role of FXR1P was a surprising result,” says Murai. “Previous to our work, no one had identified a role for this regulator in the brain. Our findings have provided fundamental knowledge about how the brain processes information. We’ve identified a new pathway that directly regulates how information is handled, and this could have relevance for understanding and treating brain diseases.
“If we can identify compounds that control the braking potential of FXR1P, we may be able to alter the amount of brain activity or plasticity. For example, in autism, one may want to decrease certain brain activity and in Alzheimer’s disease, we may want to enhance the activity. By manipulating FXR1P, we may eventually be able to adjust memory formation and retrieval, thus improving the quality of life of people suffering from brain diseases.”
The study is described in an open-access paper in Cell Reports. Funding was provided by the Canadian Institutes of Health Research (CIHR), the Natural Sciences and Engineering Research Council of Canada, and the U.S. National Institutes of Health.
Abstract of Cell Reports paper
Translational control of mRNAs allows for rapid and selective changes in synaptic protein expression that are required for long-lasting plasticity and memory formation in the brain. Fragile X Related Protein 1 (FXR1P) is an RNA-binding protein that controls mRNA translation in nonneuronal cells and colocalizes with translational machinery in neurons. However, its neuronal mRNA targets and role in the brain are unknown. Here, we demonstrate that removal of FXR1P from the forebrain of postnatal mice selectively enhances long-term storage of spatial memories, hippocampal late-phase long-term potentiation (L-LTP), and de novo GluA2 synthesis. Furthermore, FXR1P binds specifically to the 5′ UTR of GluA2 mRNA to repress translation and limit the amount of GluA2 that is incorporated at potentiated synapses. This study uncovers a mechanism for regulating long-lasting synaptic plasticity and spatial memory formation and reveals an unexpected divergent role of FXR1P among Fragile X proteins in brain plasticity.