17th Annual Swartz Foundation Mind/Brain Lecture
Michael Wigler, PhD
Professor of Genetics
Cold Spring Harbor Laboratory
If you’re a science fiction fan, you’re well familiar with holographic displays floating in midair. Maybe it’s Princess Leia materializing above R2-D2 or Tony Stark designing his Iron Man suit…
Purdue researchers have created a new implantable drug-delivery system using nanowires that can be wirelessly controlled. The nanowires respond to an electromagnetic field generated by a separate device, which can be used to control the release of a preloaded drug.
The system eliminates the tubes and wires required by other implantable devices that can lead to infection and other complications, said team leader Richard Borgens, Purdue University’s Mari Hulman George Professor of Applied Neuroscience and director of Purdue’s Center for Paralysis Research.
“This tool allows us to apply drugs as needed directly to the site of injury, which could have broad medical applications,” Borgens said. “The technology is in the early stages of testing, but it is our hope that this could one day be used to deliver drugs directly to spinal cord injuries, ulcerations, deep bone injuries or tumors, and avoid the terrible side effects of systemic treatment with steroids or chemotherapy.”
The team tested the drug-delivery system in mice with compression injuries to their spinal cords and administered the corticosteroid dexamethasone. The study measured a molecular marker of inflammation and scar formation in the central nervous system and found that it was reduced after one week of treatment.
Wen Gao, a postdoctoral researcher in the Center for Paralysis Research who worked on the project with Borgens, grew the nanowires vertically over a thin gold base, like tiny fibers making up a piece of shag carpet hundreds of times smaller than a human cell.
The nanowires are made of polypyrrole, a conductive polymer material that responds to electromagnetic fields. They were loaded with a drug and exposed at the injury sites, for two hours daily, to a pulsed magnetic field of approximately 25–40 gauss with a 3,000–5,000 V/m electric field, causing the nanowires to release small amounts of the payload. This process can be started and stopped at will, like flipping a switch, Borgens said.
As KurzweilAI reported earlier this month, polypyrrole nanowires were also used by ETH Zurich and Technion researchers in an elastic “nanoswimmer” that can move through biological fluid environments to deliver drugs, also controlled by a pulsed magnetic field.
The magnitude and wave form of the pulsed magnetic field must be tuned to obtain the optimum release of the drug, and the precise mechanisms that release the drug are not yet well understood, Borgens said.
“We think it is a combination of charge effects and the shape change of the polymer that allows it to store and release drugs,” he said. “It is a reversible process. Once the electromagnetic field is removed, the polymer snaps back to the initial architecture and retains the remaining drug molecules.” For each different drug the team would need to find the corresponding optimal electromagnetic field for its release.
Testing drug delivery in mice
The team used mice that had been genetically modified so that glial fibrillary acidic protein (GFAP) is luminescent. GFAP is expressed in cells called astrocytes, which gather in high numbers at central nervous system injuries. Astrocytes are part of the inflammatory process and form scar tissue, Borgens said.
A 1–2 millimeter patch of the nanowires doped with dexamethasone was placed onto spinal cord lesions that had been surgically exposed, Borgens said. The lesions were then closed and an electromagnetic field was applied for two hours a day for one week. By the end of the week the treated mice had a weaker GFAP signal than the control groups, which included mice that were not treated and those that received a nanowire patch but were not exposed to the electromagnetic field. In some cases, treated mice had no detectable GFAP signal.
Whether the reduction in astrocytes had any significant impact on spinal cord healing or functional outcomes was not studied. In addition, the concentration of drug maintained during treatment is not known because it is below the limits of systemic detection, Borgens said.
“This method allows a very, very small dose of a drug to effectively serve as a big dose right where you need it,” Borgens said. “By the time the drug diffuses from the site out into the rest of the body it is in amounts that are undetectable in the usual tests to monitor the concentration of drugs in the bloodstream.”
Polypyrrole is an inert and biocompatible material, but the team is working to create a biodegradable form that would dissolve after the treatment period ends. The team is also trying to increase the depth at which the drug-delivery device will work. The current system appears to be limited to a depth in tissue of less than 3 centimeters, Gao said.
The research is described in an online open-access paper in the Journal of Controlled Release.
The research was funded through the general funds of the Center for Paralysis Research and an endowment from Mrs. Mari Hulman George. Borgens has a dual appointment in Purdue’s College of Engineering and the College of Veterinary Medicine.
Abstract of Remote-controlled eradication of astrogliosis in spinal cord injury via electromagnetically-induced dexamethasone release from “smart” nanowires
We describe a system to deliver drugs to selected tissues continuously, if required, for weeks. Drugs can be released remotely inside the small animals using pre-implanted, novel vertically aligned electromagnetically-sensitive polypyrrole nanowires (PpyNWs). Approximately 1–2 mm² dexamethasone (DEX) doped PpyNWs was lifted on a single drop of sterile water by surface tension, and deposited onto a spinal cord lesion in glial fibrillary acidic protein-luc transgenic mice (GFAP-luc mice). Overexpression of GFAP is an indicator of astrogliosis/neuroinflammation in CNS injury. The corticosteroid DEX, a powerful ameliorator of inflammation, was released from the polymer by external application of an electromagnetic field for 2 h/day for a week. The GFAP signal, revealed by bioluminescent imaging in the living animal, was significantly reduced in treated animals. At 1 week, GFAP was at the edge of detection, and in some experimental animals, completely eradicated. We conclude that the administration of drugs can be controlled locally and non-invasively, opening the door to many other known therapies, such as the cases that dexamethasone cannot be safely applied systemically in large concentrations.
Scientists at the HZB Institute for Silicon Photovoltaics in Berlin have succeeded in precisely measuring and controlling the thickness of an organic compound that has been bound to a graphene layer. This could enable graphene to be used as a sensitive detector for biological molecules in the future.
It has long been known that graphene is useful for detecting traces of organic molecules, because the electrical conductivity of graphene drops as soon as foreign molecules bind to it. The problem: graphene is not very selective, making it difficult to differentiate molecules.
The scientists found a way to increase the selectivity by electrochemically connecting graphene to host molecules that act as detector molecules functioning as selective binding sites. To accomplish this, para-maleimidophenyl groups (maleimide) from an organic solution were grafted to the surface of the graphene. These organic molecules behave like mounting brackets to which the selective detector molecules can be attached in the next step.
“Thanks to these molecules, graphene can now be employed for detecting various substances, similar to how a key fits a lock,” explains researcher Marc Gluba. The “lock” molecules on the surface are highly selective and absorb only the matching “key” molecules, allowing for accurately measuring how many molecules actually were grafted to the surface of the graphene.
One use would be an inexpensive “lab-on-a-chip.” Using a single drop of blood could immediately provide data for medical diagnosis, says Prof. Norbert Nickel, head of the research team.
Abstract of Quantifying the electrochemical maleimidation of large area graphene
The covalent modification of large-area graphene sheets by p-(N-Maleimido)phenyl (p-MP) via electrochemical grafting of p-(N-Maleimido)benzenediazonium tetrafluoroborate (p-MBDT) is successfully demonstrated for the first time. The deposition process is monitored in-situ using the mass change of a graphene/SiNx:H/Au-coated quartz crystal microbalance (QCM) chip. The resulting mass increase correlates with a maleimide thickness of approximately 2.3 molecular layers. The presence of an infrared absorption band at 1726 cm⁻¹ shows that maleimide groups were deposited on the substrates. Raman backscattering spectra reveal the presence of D and D′ modes of the graphene layer, indicating that p-MP forms covalent bonds to graphene. Using the mass change and charge transfer during the potential cycling the faradaic efficiency of the functionalisation process was deduced, which amounts to η = 22%.
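For readers unfamiliar with the measurement, the bookkeeping behind those two numbers follows from the standard quartz-crystal-microbalance and Faraday relations. The generic form is sketched below; the paper's exact treatment may differ, and the symbols are the usual textbook ones rather than values taken from the study.

```latex
% Generic QCM grafting bookkeeping (standard relations, not the paper's exact math).
% Sauerbrey relation: the crystal's frequency shift gives the deposited mass,
% with C_f the crystal's mass-sensitivity constant:
\[ \Delta m = -C_f\,\Delta f \]
% Faradaic efficiency: the fraction of the charge Q passed during cycling that
% ends up as grafted groups of molar mass M, with n electrons per grafted group:
\[ \eta = \frac{n\,F\,(\Delta m / M)}{Q} \]
```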
Researchers at the University of Chicago’s Institute for Molecular Engineering have taken a crucial step toward nuclear spintronic technologies that use the “spin” — or magnetization — of atomic nuclei to store and process information. The new technologies could be used for ultra-sensitive magnetic resonance imaging, advanced gyroscopes, and quantum computers.
The researchers used infrared light to make nuclear spins line themselves up in a consistent, controllable way, using a high-performance semiconductor that is practical, convenient, and inexpensive.
The research was featured as the cover article of the June 17 issue of Physical Review Letters.
No cryogenic temperatures or high magnetic fields needed
Nuclear spins tend to be randomly oriented. Aligning them in a controllable fashion is usually a complicated and only marginally successful proposition. The reason, explains Paul Klimov, a co-author of the paper, is that “the magnetic moment of each nucleus is tiny, roughly 1,000 times smaller than that of an electron.”
This small magnetic moment means that little thermal kicks from surrounding atoms or electrons can easily randomize the direction of the nuclear spins. Extreme experimental conditions, such as high magnetic fields and cryogenic temperatures (-238 degrees Fahrenheit and below), are usually required to get even a small number of spins to line up. In magnetic resonance imaging, for example, only one to 10 out of a million nuclear spins can be aligned and seen in the image, even with a high magnetic field applied.
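The scale of the problem follows directly from Boltzmann statistics. As a rough worked example (my numbers, using the proton magnetic moment and a clinical-strength 3-tesla field; the paper itself concerns silicon-29 nuclei, but the order of magnitude is similar):

```latex
% Thermal-equilibrium polarization of spin-1/2 nuclei:
\[ P = \tanh\!\left(\frac{\mu B}{k_B T}\right) \approx \frac{\mu B}{k_B T} \qquad (\mu B \ll k_B T) \]
% Rough numbers for protons at B = 3 T and T = 300 K:
%   mu ~ 1.4e-26 J/T, so mu*B ~ 4.2e-26 J, while k_B*T ~ 4.1e-21 J,
%   giving P ~ 1e-5, roughly ten aligned spins per million,
% which is why strong magnets and cryogenic cooling are normally required.
```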
Using their new technique, David Awschalom, the Liew Family Professor in Spintronics and Quantum Information, and his associates aligned more than 99 percent of spins in certain nuclei in silicon carbide. Equally important, the technique works at room temperature — no cryogenics or intense magnetic fields needed. Instead, the research team used light to “cool” the nuclei.
While nuclei do not interact with light themselves, certain imperfections, or “color-centers,” in the silicon carbide crystals do. The electron spins in these color centers can be readily optically cooled and aligned, and this alignment can be transferred to nearby nuclei.
Getting spins to align in room-temperature silicon carbide brings practical spintronic devices a significant step closer, said Awschalom. The material is already an important semiconductor in the high-power electronics and opto-electronics industries. Sophisticated growth and processing capabilities are already mature. So prototypes of nuclear spintronic devices that exploit the IME researchers’ technique may be developed in the near future.
“Wafer-scale quantum technologies that harness nuclear spins as subatomic elements may appear more quickly than we anticipated,” Awschalom said.
Abstract of Optical Polarization of Nuclear Spins in Silicon Carbide
We demonstrate optically pumped dynamic nuclear polarization of Si29 nuclear spins that are strongly coupled to paramagnetic color centers in 4H- and 6H-SiC. The 99%±1% degree of polarization that we observe at room temperature corresponds to an effective nuclear temperature of 5 μK. By combining ab initio theory with the experimental identification of the color centers’ optically excited states, we quantitatively model how the polarization derives from hyperfine-mediated level anticrossings. These results lay a foundation for SiC-based quantum memories, nuclear gyroscopes, and hyperpolarized probes for magnetic resonance imaging.
LEDs made from nanowires with an inner core of gallium nitride (GaN) and an outer layer of indium-gallium-nitride (InGaN) — both semiconductors — use less energy and provide better light, according to Robert Feidenhans’l, professor and head of the Niels Bohr Institute at the University of Copenhagen.
The studies were performed using nanoscale X-ray microscopy in the electron synchrotron at DESY in Hamburg, Germany. The results are published in the journal ACS Nano.
The nanowires could also be used in displays for smartphones and TVs, and in other forms of lighting, within five years, according to the researchers.
Abstract of Fast Strain Mapping of Nanowire Light-Emitting Diodes Using Nanofocused X-ray Beams
X-ray nanobeams are unique nondestructive probes that allow direct measurements of the nanoscale strain distribution and composition inside the micrometer thick layered structures that are found in most electronic device architectures. However, the method is usually extremely time-consuming, and as a result, data sets are often constrained to a few or even single objects. Here we demonstrate that by special design of a nanofocused X-ray beam diffraction experiment we can (in a single 2D scan with no sample rotation) measure the individual strain and composition profiles of many structures in an array of upright standing nanowires. We make use of the observation that in the generic nanowire device configuration, which is found in high-speed transistors, solar cells, and light-emitting diodes, each wire exhibits very small degrees of random tilts and twists toward the substrate. Although the tilt and twist are very small, they give a new contrast mechanism between different wires. In the present case, we image complex nanowires for nanoLED fabrication and compare to theoretical simulations, demonstrating that this fast method is suitable for real nanostructured devices.
Those were the words I noticed when interviewing Augmented World Expo organizer Ori Inbar several days before AWE2015, the trade show of Augmented and Virtual Reality. “We’re not in beta anymore…” Inbar said, “We now have companies implementing enterprise-scale Augmented Reality solutions, and with coming products like the Meta One and Microsoft HoloLens, the consumer market is being lined up as well.” With the addition of the UploadVR summit to AWE2015 the event was a blitz of ideas, technologies and new hardware.
AWE/Upload is a trade and industry event that also includes coverage of the arts and related cultural effects, although that side is small compared to the industrial aspect of the show. In this way it is similar to SIGGRAPH, and this is much of my rationale for covering this event, and SIGGRAPH later this year. Doing so is as simple as McLuhan’s axiom that “the medium is the message” or, better yet, examining how developers and industry shape the technologies and cultural frameworks from which the artforms using these techniques emerge. The point is that in examining emerging technologies we can not only get an idea of near-future design fictions but also of the emerging culture embedded within them.
To put things in perspective, Augmented Reality art is not new, as groups like Manifest.AR have already nearly come and gone and my own group in Second Life, Second Front, is in its ninth year. Even though media artists are frequently early technology adopters, what appears to be happening at the larger scale is a critical mass that signals the acceptance of these new technologies by a larger audience. But with all emerging technologies there is drama driven by those industries’ growing pains. For AR & VR the last two years have certainly been tumultuous.
Last year’s acquisition of Oculus Rift by Facebook sent ripples through the technology community. Fortunately, unlike my upcoming example, the buyout did not eliminate the Rift from the landscape; instead the company gained capital, allowing for licensing of the technology for products like the Samsung Gear VR. Also, the current design fictions being distributed by Microsoft for its HoloLens give tantalizing glimpses of a future “Internet of No Things” full of virtual televisions and even ghostly laptops. This was suggested in a workshop by the company Meta and in the short film “Sight,” in which things like televisions, clocks, and objects of art might soon be functions of the visor.
However, disruptive events also happen in the evolution of technologies and their cultures. The news was that scant weeks before the conference a leading augmented reality platform, Metaio, was purchased by Apple. Unlike the transparency and expansion experienced by Oculus, the Metaio site merely said that no new products were being sold and that cloud support would cease by December 15th. In my conversation with conference organizer Ori Inbar we agreed that this was not unexpected, as Apple has been acquiring AR technologies, which has fed rumors of “the crazy thing Apple’s been working on…” But what was surprising was the almost immediate blackout, part of the subject of my concurrent article “Beware of the Stacks.” For entrepreneurs and cultural producers alike there is a message: be careful of the tools you use, or your artwork (or company) could suddenly falter within days, beyond your control. Imagine a painting suddenly disintegrating because a company bought out the technology of linseed oil. Although this is a poor metaphor, technological artists are dependent on technology, and one can see digital media art’s conservative reliance on Jurassic technologies like animated GIFs for its long-term viability; but to go further I risk digression.
Everyone in Headsets!
Another remarkable phenomenon this year was that the handheld is now nearly assumed as an experience device; its use seemed almost invisible. What was evident instead was a proliferation of largely untethered headsets, ranging from the phone-holding Google Cardboard to the Snapdragon-powered (and hot) ODG Android headset, boasting a 30-degree field of view and the elimination of visible pixels. In the middle is the tethered, powerful Meta One headset with robust hand-gesture recognition. Add in the conspicuously absent Microsoft HoloLens, and the popular design fictions of object and face recognition are emerging.
Like You’re Going to Have One Soon….
That is, unless you are a brave early adopter, developer, or enterprise client. The fact that there was an entire Enterprise track, and that Daqri released an AR-equipped construction/logistics helmet, made it clear that the consumer market, much more prominent last year, has been pushed to the long term. For now, consumer/artistic AR is largely confined to the handheld device, as experienced through Will Pappenheimer’s “Proxy” at the Whitney Museum of American Art or Crayola’s “4D coloring books,” in which certain colors serve as AR markers. This isn’t necessarily a bad thing, as an audience is likely to have a device that can run your app and through which they can experience the art. As an aside, this is the reason why I chose to use handhelds for my tapestry work – imagine trying to experience a 21’ tapestry with a desktop using a 6’ cord! At this point, clarity and function, both partially dependent on computing power, have created a continuum: from strapping your iPhone to your forehead like a jury-rigged Oculus for under $50, to potentially using a messenger bag with the Meta at $512, to the expensive ($2,750), hot, but elegant ODG glasses you might try on if you visit the International Space Station.
Where the Rubber Hits the Road
While discussing the general shape of a technology gives context for its content and application, a media tool is often only as good as its apps. Without meaning to show favoritism, Mark Skwarek’s NYU lab team has been doing outstanding work, from a visualization of upcoming architectural developments to a surprising proof of concept for a landmine detection system, which I thought was amazing. Equally innovative was the VA-ST structured-light headset for the visually impaired, which has several modes offering different kinds of contrast. These alternate approaches were not only surprising in terms of application and possible creative uses but also changed my perception of AR as necessarily consisting of photorealistic, stereoscopic overlays.
Other novel applications included National Geographic’s AR jigsaw puzzle sets, of which I saw the one outlining the history of Dynastic Egypt. I felt that if I were a kid, building the puzzle and then exploring it with AR would seem magical. There are other entertainment and experimentation platforms coming online, like Skwarek et al.’s “PlayAR” AR environmental gaming system. But one platform I want to hold accountable for still being in late beta is the “LyteShot” AR laser tag system, which got an Auggie Award this year. My pleasure in the system is that the “gun” per se is Arduino-based, meaning that it could be a maker’s heaven. It uses the excellent mid-priced Epson headset, but at this time the headset is used primarily for status updates, and there is a difference between AR and a heads-up display. So, from this perspective, there are some great platforms getting into the market that are highly entertaining and innovative, but there are a few bugs to work out.
Ideas vs. Product
For the past thousand words or so I have been talking about the industry and applications of AR, but for me, my “soul,” if you will, was set on fire during the “idea” panels and keynotes. For example, on the first day, Steve Mann, Ryan Janzen and the group at Meta held a workshop teaching attendees how to make “Veillometers” (pixel-stick-like devices to map out the infrared fields of view of surveillance cameras). Mann, famous for creating the Wearable Computing Lab at MIT and being Senior Researcher at Meta, still seemed five years ahead of the pack, which was refreshing. Another inspirational talk was given by one of the progenitors of the field and recipient of the inaugural Auggie Award for Lifetime Achievement, Tom Furness. His reflection on the history of extended reality, and his time in the US Air Force developing heads-up AR, was fascinating. But what was most inspirational is that he is now working on humane uses for augmentation systems, such as warping the viewfield to assist people with macular degeneration. This, in my opinion, is the real potential of these technologies. In fact, this array of keynotes was incredible: Mann, Furness, the iconic HITLab’s Mark Billinghurst, and science fiction writer David Brin (who comes off as near-Libertarian) gave vast food for thought.
Auggies: The Best of the Best
Every year, the Augmented World Expo gives out the “Auggie” awards for achievements in technology, art, and innovation in AR. It should be noted that the Auggie is probably the world’s most distinctive trophy, consisting of a bust that is half bare skull and half fleshed head, with a Borg-like lens and baleful eye wired into it. The Auggie is another aspect of AWE that signals that the world of reality media is still a bit Wild West.
There are several categories, from Enterprise Application to Game/Toy (LyteShot having won this year), and many of them are largely of interest strictly to developers. For example, the fact that Qualcomm’s Vuforia development environment has won three years in a row hints at its stability in the market, and Lowe’s HoloRoom is a wonderfully strange mix of Star Trek and Home Improvement. The headset winner was CastAR, a projective/reflective technology with polarized projectors in the headset instead of cameras, which worked amazingly well. The other winners were gratifyingly humane applications such as Child MRI Evaluation and Next for Nigeria (Best Campaign). The prizes impressed on me that the community, or part of it, “got it” in terms of the potential of AR to help the human condition, which is perhaps the “superpower” that the conference framed itself under.
So, Where’s the Art?
Since I am writing this for an art community, it is worth asking where the art was in all of this. The Auggies have an Art category, as well as a gala between the end of the trade show events and the Auggie Awards. The pleasant part about AWE’s nominations for the best in AR art is that those works have integrity. Manifest.AR regular Sander Veenhof was nominated for his “Autocue,” in which people with two mobile devices in a car can become the characters of famous driving dialogues (“Blues Brothers,” “Pulp Fiction,” “Harold and Kumar”). Octagon’s “History of London” is reminiscent of the National Geographic puzzles, except with far greater depth. Anita Yustisia’s beautiful “Circle of Life” paintings, which reacted to markers, were on display in the auditorium, but, besides a Twitter cloud and a Kinect-driven installation, the art was swamped by the size of the space.
The winner of the art Auggie, Heavy & Re+Public’s “Consumption Cycle” (which this writer saw at South by Southwest Interactive), was a baroquely detailed, building-sized mural of machinery and virtual television sets. I feel a bit of ambivalence about this work, as Heavy’s work tends to rely on spectacle. Still, of the lot I felt it deserved the Auggie, purely for its execution and its effective use of spectacle. But with the emerging abilities of menuing, gesture recognition, and so on, I felt that last year’s winner, Darf Designs’ “Hermaton,” employed the potential of AR as installation in a way that was more specific to the medium.
Wasn’t there VR as well?
Yes, but it occupied a much smaller area than the AR displays. There were standout technologies, like the Chinese Kickstarter-funded FOVE eye-tracking VR visor, a sensor to deliver directional sound, and Ricoh’s cute 360-degree immersive video camera. The Best in Show Auggie actually went to a VR installation, Mindride’s “Airflow,” in which you are literally in a flying sling wearing an Oculus Rift headset. Although a little cumbersome, it was as close as I have come to the flying game in the AR design-fiction short “Sight.” So, in a way, the ideas of near-future design and beta revision culture are still driving technology as surely as the PADD on Star Trek presaged the iPad.
This year’s AWE/UploadVR event showed that reality technology is emerging strongly at the enterprise level, and it’s merely a matter of time before it hits consumer culture; my contention is that we’re 2–4 years out unless there’s a game changer, an Oculus moment for AR, or unless the Meta or ODG gets a killer app, which is entirely possible. So, as the festival’s tagline suggests, are we ready for Superpowers for the People? It seems like we’re almost there but, like Tony Stark in the beginning, we’re still learning to operate the Iron Man suit, sort of banging around the lab.
Patrick Lichty is a technologically-based conceptual artist, writer, independent curator, animator for the activist group, The Yes Men, and Executive Editor of Intelligent Agent Magazine. He began showing technological media art in 1989, and deals with works and writing that explore the social relations between us and media. Venues in which Lichty has been involved with solo and collaborative works include the Whitney & Turin Biennials, Maribor Triennial, Performa Performance Biennial, Ars Electronica, and the International Symposium on the Electronic Arts (ISEA).
He also works extensively with virtual worlds, including Second Life, and his work, both solo and with his performance art group, Second Front, has been featured in Flash Art, Eikon Milan, and ArtNews.
He is an Assistant Professor of Interactive Arts & Media at Columbia College Chicago, and resides in Baton Rouge, LA.
This article originally appeared here, republished under creative commons license.
“Our ancestors didn’t eat like this, so we shouldn’t.” This is the main ethos of many modern diets, which advise us to exclude a number of recent additions to our plates because they were not part of our distant predecessors’ diets. There are many different variations on the theme – from all-encompassing “palaeolithic-style” diets to grain-free or gluten-free regimes – which are all generating a massive boom in specialised shops, products and even restaurants.
The general idea is that for most of our millions of years of evolution we were not exposed to grains, milk, yogurt or cheese, refined carbs, legumes, coffee or alcohol. As they only came into existence with farming around 10,000 years ago, our finely-tuned bodies have not been designed to deal with them efficiently.
The belief is that human evolution via survival of the fittest and natural selection is a very slow process, and that our genes classically take tens of thousands of years to change. This means that these “modern” foods cause various degrees of intolerance or allergic reaction, resulting not only in the modern epidemic of allergies but also in inflammation and obesity attributed to these supposed toxins. So follow our Palaeolithic ancestors, we are told, cut out these foods – and your problems are over.
This may sound eminently sensible but, as it turns out, the facts on which this idea is based are rubbish.
We have adapted
The latest research shows we are not robotic automatons fixed in time but flexible plastic beings adapting to our environments and diets much faster than anyone had realised. A study published in Nature showed clearly that major changes to our genes can occur in just a thousand years or a few hundred generations.
Bronze Age Yamnaya skull
The researchers looked at the DNA from 101 Bronze Age skeletons across Europe, from the Netherlands to Russia, for key mutations. These people lived around 3,000 years ago and were busy migrating and spreading their genes. The researchers looked in particular at one key gene variant (for lactase persistence) that keeps active the enzyme conferring the ability to digest milk after the age of three. Around three quarters of modern Europeans have this variant, allowing them to digest a glass of milk without feeling sick. Rates of the variant are higher in Northern Europe (up to 90%) and lower in Southern Europe (around 50%).
It was previously thought this variant started to dominate in Europeans around 7,000 to 10,000 years ago, at the onset of farming and the use of milk, so the finding that only one in 20 Bronze Age people had it 3,000 years ago was a major shock. It means the variant took hold later and has spread much faster than we imagined, and as a consequence we have adapted to our new food source much more rapidly than the lumbering robots we are portrayed as.
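To get a feel for how fast that spread must have been, here is a rough, illustrative calculation of the selection strength required to take a variant from about 5% to about 75% frequency in roughly 3,000 years. The model, the 25-year generation time, and the numbers are mine, not the researchers'; real estimates of selection on lactase persistence come from much more careful population-genetic work.

```python
# Rough illustration only: how strong would selection need to be to move an
# allele from ~5% to ~75% frequency in ~3,000 years?
# Assumptions (not from the article): 25-year generations and a simple genic
# (haploid) selection model, under which logit(p) grows by ln(1+s) per generation.
import math

p0, p1 = 0.05, 0.75            # Bronze Age vs. modern European frequency
generations = 3000 / 25        # assumed generation time of 25 years

def logit(p):
    return math.log(p / (1 - p))

s = math.exp((logit(p1) - logit(p0)) / generations) - 1
print(f"{generations:.0f} generations, implied selection coefficient s ~ {s:.3f}")
# Prints s of roughly 0.03-0.04, i.e. a few percent fitness advantage per
# generation, which is unusually strong selection for a human gene variant.
```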
Other genetic evidence of recent changes to our digestive genes comes from a worldwide study of the amylase gene which is key to breaking down starch in carbohydrates. People in areas with starch as a major part of the diet evolved to have multiple copies of the gene to help them digest it better. We found in a collaborative study using our twins that this mutation also strangely protected against obesity, and importantly we think this change only happened in the last few hundred generations.
Other genes key to how we digest food can change even more rapidly. These are the two million or so genes in the DNA of the trillions of microbes in our gut. Although they are not human genes, they are crucial to our health, as they control our microbiome, which digests our food and produces many of our vitamins and blood metabolites. These bacterial genes in our guts can respond rapidly to changes in our diet, and as the bacteria can produce a new generation every 30 minutes, they can evolve very fast indeed.
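A quick back-of-the-envelope comparison makes the speed difference concrete (the 30-minute figure is the article's; treating it as a continuous doubling rate is my simplification and gives an upper bound):

```latex
% Bacterial generations per year at one division every 30 minutes (upper bound):
\[ \frac{24 \times 60}{30} \times 365 = 48 \times 365 \approx 17{,}500 \ \text{generations per year} \]
% versus roughly 4 human generations per century, i.e. hundreds of thousands of
% times more rounds of reproduction per year for selection to act on.
```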
They also have a secret weapon called horizontal gene transfer which means they can rapidly swap genes between them to mutual advantage, without waiting for natural selection. They use this very effectively to become resistant to new antibiotics and the same process is likely for new foods.
So by all means enjoy going to trendy paleo steak restaurants, and decide to lose weight in the short term by going on a gluten-free diet, but don’t be fooled by the evolutionary scientific explanations, which are now out of date. Your genes and your microbes are evolving faster than you realise and can cope with the new additions to our diet over the last few thousand years. The caveat is that we need to keep our gut microbes as healthy as possible. But dietary diversity, not exclusions, is the key.
Scientists first sequenced the genome—or complete genetic code—of a free-living organism in 1995. Sequencing the bacterium H. influenzae took a little over a year, cost about $1 million, and…
Researchers at MIT and spinoff company 24M have developed an advanced manufacturing approach for rechargeable lithium-ion batteries. The researchers claim the new process could cut the manufacturing and materials cost in half compared to existing lithium-ion batteries, while also improving their performance, making them easier to recycle as well as flexible and resistant to damage.
“We’ve reinvented the process,” says Yet-Ming Chiang, the Kyocera Professor of Ceramics at MIT and a co-founder of 24M (and previously a co-founder of battery company A123). The existing process for manufacturing lithium-ion batteries, he says, has hardly changed in the two decades since the technology was invented, and is inefficient, with more steps and components than are really needed.
By 2020, Chiang estimates, 24M will be able to produce batteries for less than $100 per kilowatt-hour of capacity — considered the threshold for mass adoption of electric vehicles, according to most analysts within the EV industry, Clean Technica notes, adding that the planned Tesla Gigafactory 1 also hopes to hit that figure by 2017.
Today, estimates of battery costs range widely between $300 per kilowatt-hour and $500 per kilowatt-hour, notes The Wall Street Journal. Because the battery is the most expensive part of an electric car, like the Tesla Model S or the forthcoming Chevrolet Bolt, lowering the cost of the battery significantly could have a big impact.
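To put those per-kilowatt-hour figures in vehicle terms, consider an illustrative 60 kWh pack, roughly the size used in a long-range electric car (the pack size is my assumption, not a number from the article):

```latex
% Illustrative pack-level arithmetic for an assumed 60 kWh battery:
\[ 60\ \text{kWh} \times \$300\text{--}500\,/\,\text{kWh} = \$18{,}000\text{--}\$30{,}000 \quad \text{(today's cost estimates)} \]
\[ 60\ \text{kWh} \times \$100\,/\,\text{kWh} = \$6{,}000 \quad \text{(24M's target for 2020)} \]
```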
A “semisolid” battery
The new process is a hybrid between a conventional solid battery and a “flow battery” design, in which the electrodes are actually suspensions of tiny particles carried by a liquid and pumped through various compartments of the battery. The flow battery was developed five years ago by Chiang and colleagues including W. Craig Carter, the MIT POSCO Professor of Materials Science and Engineering.
In this new version, while the electrode material does not flow, it is composed of a similar semisolid, colloidal suspension of particles. Chiang and Carter refer to this as a “semisolid battery.”
This approach greatly simplifies manufacturing, and also makes batteries that are flexible and resistant to damage, says Chiang, who is senior author of a paper on the new battery design in the Journal of Power Sources. This analysis demonstrates that while a flow-battery system is appropriate for battery chemistries with a low energy density (those that can only store a limited amount of energy for a given weight), for high-energy-density devices such as lithium-ion batteries, the extra complexity and components of a flow system would add unnecessary extra cost.
Almost immediately after publishing the earlier research on the flow battery, Chiang says, “We realized that a better way to make use of this flowable electrode technology was to reinvent the [lithium ion] manufacturing process.”
Instead of the standard method of applying liquid coatings to a roll of backing material, and then having to wait for that material to dry before it can move to the next manufacturing step, the new process keeps the electrode material in a liquid state and requires no drying stage at all. Using fewer, thicker electrodes, the system reduces the conventional battery architecture’s number of distinct layers, as well as the amount of nonfunctional material in the structure, by 80 percent.
Having the electrode in the form of tiny suspended particles instead of consolidated slabs greatly reduces the tortuosity of the material — the winding path length charged particles must travel as they move through it. A less tortuous path makes it possible to use thicker electrodes, which, in turn, simplifies production and lowers cost.
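In porous-electrode terms, tortuosity enters transport through a textbook relation (this is the standard form, not a formula taken from the 24M paper):

```latex
% Effective ion transport in a porous electrode (standard porous-media relation):
\[ D_{\mathrm{eff}} = D\,\frac{\varepsilon}{\tau} \]
% D: intrinsic diffusivity, epsilon: porosity, tau: tortuosity.
% A lower tortuosity raises the effective diffusivity, so ions can cross a
% thicker electrode in the same time, which is what allows the fewer, thicker
% electrodes described above.
```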
Bendable and foldable
The new design also produces a battery that is more flexible and resilient. Conventional lithium-ion batteries are composed of brittle electrodes that can crack under stress; the new design produces battery cells that can be bent, folded, or even penetrated by bullets without failing. This should improve both safety and durability, he says.
The company has so far made about 10,000 batteries on its prototype assembly lines for testing. The process has received eight patents and has 75 additional patents under review; 24M has raised $50 million in financing from venture capital firms and a U.S. Department of Energy grant.
The company is initially focusing on grid-scale installations, used to help smooth out power loads and provide backup for renewable energy sources that produce intermittent output, such as wind and solar power. But Chiang says the technology is also well suited to applications where weight and volume are limited, such as in electric vehicles.
Another advantage of this approach, Chiang says, is that factories using the method can be scaled up by simply adding identical units. With traditional lithium-ion production, plants must be built at large scale from the beginning in order to keep down unit costs, so they require much larger initial capital expenditures.
Venkat Viswanathan, an assistant professor of mechanical engineering at Carnegie Mellon University who was not involved in this work, says that 24M’s new battery design “could do the same sort of disruption to [lithium ion] batteries manufacturing as what mini-mills did to the integrated steel mills.”
A University of Illinois at Urbana-Champaign researcher was also involved in the study. The work was supported by the U.S. Department of Energy’s Center for Energy Storage Research, based at Argonne National Laboratory in Illinois.
Abstract of Component-cost and performance based comparison of flow and static batteries
Flow batteries are a promising grid-storage technology that is scalable, inherently flexible in power/energy ratio, and potentially low cost in comparison to conventional or “static” battery architectures. Recent advances in flow chemistries are enabling significantly higher energy density flow electrodes. When the same battery chemistry can arguably be used in either a flow or static electrode design, the relative merits of either design choice become of interest. Here, we analyze the costs of the electrochemically active stack for both architectures under the constraint of constant energy efficiency and charge and discharge rates, using as case studies the aqueous vanadium-redox chemistry, widely used in conventional flow batteries, and aqueous lithium-iron-phosphate (LFP)/lithium-titanium-phosphate (LTP) suspensions, an example of a higher energy density suspension-based electrode. It is found that although flow batteries always have a cost advantage ($ kWh−1) at the stack level modeled, the advantage is a strong function of flow electrode energy density. For the LFP/LTP case, the cost advantage decreases from ∼50% to ∼10% over experimentally reasonable ranges of suspension loading. Such results are important input for design choices when both battery architectures are viable options.
Using a telepresence system developed at the École Polytechnique Fédérale de Lausanne (EPFL), 19 people — including nine quadriplegics — were able to remotely control a robot located in an EPFL university lab in Switzerland.
A team of researchers at the Defitech Foundation Chair in Brain-Machine Interface (CNBI), headed by professor José del R. Millán, developed a brain-computer interface (BCI) system, using electroencephalography (EEG) signals.
This multi-year research project was intended to give a measure of independence to paralyzed people. The research involved 19 subjects (nine disabled and ten healthy) located in Italy, Germany and Switzerland. For several weeks, each of the subjects put on a BCI helmet and instructed the robot to move, transmitting their instructions in real time via Internet from their home country.
The robot used a video camera, screen, and wheels, similar to the commercially available Beam system. The robot transmitted the image from its camera and displayed the face of the remote pilot, both via Skype.
Shared control between human and machine
“Each of the 9 subjects with disabilities managed to remotely control the robot with ease after less than 10 days of training,” said Millán. The robot was able to avoid obstacles by itself, even when not told to.
The tests revealed no difference in piloting ability between healthy and disabled subjects. In the second part of the tests, the disabled people with residual mobility were asked to pilot the robot with the movements they were still capable of doing, for example, by simply pressing the side of their head on buttons placed nearby. They piloted the robot just as if they were using their thoughts.
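The shared-control idea is easy to sketch: the robot blends the pilot's coarse, BCI-decoded steering intent with an obstacle-avoidance reflex computed from its own sensors, taking more authority as obstacles get closer. The snippet below is a minimal illustration of that blending only; it is not the EPFL/CNBI controller, and every name, weight, and threshold in it is invented for the example.

```python
# Minimal sketch of shared control: blend the pilot's BCI-decoded steering
# command with an obstacle-avoidance command from range sensors.
# NOT the EPFL/CNBI controller; names, weights, and thresholds are invented.

def shared_control(user_turn, ranges, min_safe=0.5):
    """user_turn: -1.0 (full left) to +1.0 (full right), decoded from EEG.
    ranges: distances in meters to the nearest obstacle on each side."""
    avoid_turn = 0.0
    if ranges["left"] < min_safe:
        avoid_turn += 1.0          # steer right, away from the left obstacle
    if ranges["right"] < min_safe:
        avoid_turn -= 1.0          # steer left, away from the right obstacle

    # The robot takes more authority the closer the nearest obstacle gets.
    robot_authority = 1.0 - min(min(ranges.values()) / min_safe, 1.0)

    turn = (1.0 - robot_authority) * user_turn + robot_authority * avoid_turn
    speed = 0.3 if ranges["front"] > min_safe else 0.0   # stop before colliding
    return turn, speed

# Example: the pilot drifts right while a wall sits 0.4 m away on the right;
# the blended command eases the turn back toward center.
print(shared_control(user_turn=0.8, ranges={"left": 2.0, "front": 1.5, "right": 0.4}))
```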
The positive results of this research bring to a close the European project called TOBI (Tools for Brain-Computer Interaction), which began in 2008. The research is discussed in the June special edition of Proceedings of the IEEE, dedicated to brain-machine interfaces.
École polytechnique fédérale de Lausanne (EPFL) | Telepresence robots can give people with disabilities the feeling of being home
Abstract of Towards Independence: A BCI Telepresence Robot for People With Severe Motor Disabilities
This paper presents an important step forward towards increasing the independence of people with severe motor disabilities, by using brain-computer interfaces to harness the power of the Internet of Things. We analyze the stability of brain signals as end-users with motor disabilities progress from performing simple standard on-screen training tasks to interacting with real devices in the real world. Furthermore, we demonstrate how the concept of shared control — which interprets the user’s commands in context — empowers users to perform rather complex tasks without a high workload. We present the results of nine end-users with motor disabilities who were able to complete navigation tasks with a telepresence robot successfully in a remote environment (in some cases in a different country) that they had never previously visited. Moreover, these end-users achieved similar levels of performance to a control group of 10 healthy users who were already familiar with the environment.
Three-year clinical trial results of the Argus II retinal implant (“bionic eye”) have found that the device restored some visual function and quality of life for 30 people blinded by retinitis pigmentosa, a rare degenerative eye disease. The findings, published in an open-access paper in the journal Ophthalmology, also showed long-term efficacy, safety and reliability for the device.
Retinitis pigmentosa is an incurable disease that affects about 1 in 4,000 Americans and causes slow vision loss that eventually leads to blindness.
Using the Argus II, patients are able to see patterns of light that the brain learns to interpret as an image. The system uses a miniature video camera mounted on glasses to send visual information to a small computerized video processing unit and battery that can be stored in a pocket. This computer turns the image into electronic signals that are sent wirelessly to an electronic device surgically implanted on the retina.
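Conceptually, the video processing step reduces each camera frame to a coarse pattern of stimulation levels matched to the implant's electrode grid (60 electrodes, as noted later in this article). The sketch below is a purely illustrative reduction of a frame to a 6 x 10 grid; Second Sight's actual processing pipeline is proprietary and considerably more sophisticated.

```python
# Purely illustrative sketch: downsample a grayscale camera frame to a coarse
# 6 x 10 pattern of stimulation levels for a 60-electrode array.
# NOT Second Sight's actual (proprietary) video processing.
import numpy as np

def frame_to_stimulation(frame, grid=(6, 10), max_level=1.0):
    """frame: 2-D array of pixel intensities in [0, 255]."""
    gh, gw = grid
    h, w = frame.shape
    # Crop to a multiple of the grid, then average-pool one value per electrode.
    cropped = frame[: h // gh * gh, : w // gw * gw]
    pooled = cropped.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Map brightness linearly to a stimulation level per electrode.
    return pooled / 255.0 * max_level

frame = np.random.randint(0, 256, size=(480, 640)).astype(float)
print(frame_to_stimulation(frame).shape)   # (6, 10): one level per electrode
```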
The Argus II received FDA approval as a Humanitarian Use Device (HUD) in 2013. In Europe, the Argus II received the CE Mark in 2011 and was launched commercially in Italy, Germany, France, Spain, the Netherlands, Switzerland and England.
The clinical trial was conducted in the United States and Europe. All of the study participants had little or no light perception in both eyes.
Test results: a viable treatment option
The researchers conducted two visual-function tests using both a computer screen and real-world conditions: finding and touching a door and identifying and following a line on the ground. A Functional Low-vision Observer Rated Assessment (FLORA) was also performed by independent visual rehabilitation experts at the request of the FDA to assess the impact of the Argus II system on the subjects’ everyday lives, including extensive interviews and tasks performed around the home.
The visual function results indicated that up to 89 percent of the subjects performed significantly better with the device. The FLORA found that among the subjects, 80 percent received benefit from the system when considering both functional vision and patient-reported quality of life, and no subjects were affected negatively.
After one year, two-thirds of the subjects had not experienced device- or surgery-related serious adverse events. After three years, there were no device failures. Throughout the three years, 11 subjects experienced serious adverse events, most of which occurred soon after implantation and were successfully treated. One of these treatments, however, was to remove the device due to recurring erosion after the suture tab on the device became damaged.
“This study shows that the Argus II system is a viable treatment option for people profoundly blind due to retinitis pigmentosa – one that can make a meaningful difference in their lives and provides a benefit that can last over time,” said Allen C. Ho, M.D., lead author of the study and director of the clinical retina research unit at Wills Eye Hospital.
The Argus II has 60 electrodes (pixels), which limits it to relatively large objects. There are higher-resolution devices in development. Retina Implant’s Alpha IMS subretinal microchip is a 1,500-pixel chip (no external video camera required), and a Stanford University photovoltaic subretinal prosthesis research project achieves 178 pixels per square millimeter, scalable to thousands of pixels. In the Stanford design, silicon photodiodes in each pixel directly convert pulsed near-infrared (NIR) images projected from video goggles, eliminating a hard-wired connection.
Abstract of Long-Term Results from an Epiretinal Prosthesis to Restore Sight to the Blind
Purpose: Retinitis pigmentosa (RP) is a group of inherited retinal degenerations leading to blindness due to photoreceptor loss. Retinitis pigmentosa is a rare disease, affecting only approximately 100 000 people in the United States. There is no cure and no approved medical therapy to slow or reverse RP. The purpose of this clinical trial was to evaluate the safety, reliability, and benefit of the Argus II Retinal Prosthesis System (Second Sight Medical Products, Inc, Sylmar, CA) in restoring some visual function to subjects completely blind from RP. We report clinical trial results at 1 and 3 years after implantation.
Design: The study is a multicenter, single-arm, prospective clinical trial.
Participants: There were 30 subjects in 10 centers in the United States and Europe. Subjects served as their own controls, that is, implanted eye versus fellow eye, and system on versus system off (native residual vision).
Methods: The Argus II System was implanted on and in a single eye (typically the worse-seeing eye) of blind subjects. Subjects wore glasses mounted with a small camera and a video processor that converted images into stimulation patterns sent to the electrode array on the retina.
Main Outcome Measures: The primary outcome measures were safety (the number, seriousness, and relatedness of adverse events) and visual function, as measured by 3 computer-based, objective tests.
Results: A total of 29 of 30 subjects had functioning Argus II Systems implants 3 years after implantation. Eleven subjects experienced a total of 23 serious device- or surgery-related adverse events. All were treated with standard ophthalmic care. As a group, subjects performed significantly better with the system on than off on all visual function tests and functional vision assessments.
Conclusions: The 3-year results of the Argus II trial support the long-term safety profile and benefit of the Argus II System for patients blind from RP. Earlier results from this trial were used to gain approval of the Argus II by the Food and Drug Administration and a CE mark in Europe. The Argus II System is the first and only retinal implant to have both approvals.
A global task force of 174 scientists from leading research centers in 28 countries has studied the link between mixtures of commonly encountered chemicals and the development of cancer. The open-access study selected 85 chemicals not considered carcinogenic to humans and found 50 of them actually supported key cancer-related mechanisms at exposures found in the environment today.
According to co-author Hemad Yasaei, a cancer biologist at Brunel University London, “This research backs up the idea that chemicals not considered harmful by themselves are combining and accumulating in our bodies to trigger cancer and might lie behind the global cancer epidemic we are witnessing. We urgently need to focus more resources to research the effect of low dose exposure to mixtures of chemicals in the food we eat, air we breathe, and water we drink.”
Professor Andrew Ward from the Department of Biology and Biochemistry at the University of Bath, who contributed in the area of cancer epigenetics and the environment, said: “A review on this scale, looking at environmental chemicals from the perspective of all the major hallmarks of cancer, is unprecedented”.
Professor Francis Martin from Lancaster University who contributed to an examination of how such typical environmental exposures influence dysfunctional metabolism, pointed out that despite a rising incidence of many cancers, “far too little research has been invested into examining the pivotal role of environmental causative agents. This worldwide team of researchers refocuses our attention on this under-researched area.”
In light of the compelling evidence, the task force is calling for an increased emphasis on, and support for, research into low-dose exposures to mixtures of environmental chemicals. Current research estimates chemicals could be responsible for as many as one in five cancers. With the human population routinely exposed to thousands of chemicals, the effects need to be better understood to reduce the incidence of cancer globally, the scientists say.
The research was published today (June 23) in the journal Carcinogenesis, from Oxford University Press.
William Goodson III, a senior scientist at the California Pacific Medical Center in San Francisco and lead author of the synthesis said: “Since so many chemicals that are unavoidable in the environment can produce low dose effects that are directly related to carcinogenesis, the way we’ve been testing chemicals (one at a time) is really quite out of date. Every day we are exposed to an environmental ‘chemical soup’, so we need testing that evaluates the effects of our ongoing exposure to these chemical mixtures.”
Abstract of Assessing the carcinogenic potential of low-dose exposures to chemical mixtures in the environment: the challenge ahead
Lifestyle factors are responsible for a considerable portion of cancer incidence worldwide, but credible estimates from the World Health Organization and the International Agency for Research on Cancer (IARC) suggest that the fraction of cancers attributable to toxic environmental exposures is between 7% and 19%. To explore the hypothesis that low-dose exposures to mixtures of chemicals in the environment may be combining to contribute to environmental carcinogenesis, we reviewed 11 hallmark phenotypes of cancer, multiple priority target sites for disruption in each area and prototypical chemical disruptors for all targets; this included dose-response characterizations, evidence of low-dose effects and cross-hallmark effects for all targets and chemicals. In total, 85 examples of chemicals were reviewed for actions on key pathways/mechanisms related to carcinogenesis. Only 15% (13/85) were found to have evidence of a dose-response threshold, whereas 59% (50/85) exerted low-dose effects. No dose-response information was found for the remaining 26% (22/85). Our analysis suggests that the cumulative effects of individual (non-carcinogenic) chemicals acting on different pathways, and a variety of related systems, organs, tissues and cells could plausibly conspire to produce carcinogenic synergies. Additional basic research on carcinogenesis and research focused on low-dose effects of chemical mixtures needs to be rigorously pursued before the merits of this hypothesis can be further advanced. However, the structure of the World Health Organization International Programme on Chemical Safety ‘Mode of Action’ framework should be revisited as it has inherent weaknesses that are not fully aligned with our current understanding of cancer biology.
In an engineering first, Stanford University scientists have invented a low-cost water splitter that uses a single catalyst to produce both hydrogen and oxygen gas 24 hours a day, seven days a week.
The researchers believe that the device, described in an open-access study published today (June 23) in Nature Communications, could provide a renewable source of clean-burning hydrogen fuel for transportation and industry.
“We have developed a low-voltage, single-catalyst water splitter that continuously generates hydrogen and oxygen for more than 200 hours, an exciting world-record performance,” said study co-author Yi Cui, an associate professor of materials science and engineering at Stanford and of photon science at the SLAC National Accelerator Laboratory.
The search for clean hydrogen
Hydrogen has long been promoted as an emissions-free alternative to gasoline. But most commercial-grade hydrogen is made from natural gas — a fossil fuel that contributes to global warming. So scientists have been trying to develop a cheap and efficient way to extract pure hydrogen from water.
A conventional water-splitting device consists of two electrodes submerged in a water-based electrolyte. A low-voltage current applied to the electrodes drives a catalytic reaction that separates molecules of H2O, releasing bubbles of hydrogen on one electrode and oxygen on the other.
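For reference, the chemistry at the two electrodes is standard. Written for alkaline electrolyte, the regime the Stanford device operates in (1 M KOH, per the abstract at the end of this article), the half-reactions are:

```latex
% Alkaline water-electrolysis half-reactions (standard textbook chemistry):
\[ \text{Cathode (hydrogen evolution):}\quad 4\,\mathrm{H_2O} + 4\,e^- \rightarrow 2\,\mathrm{H_2} + 4\,\mathrm{OH^-} \]
\[ \text{Anode (oxygen evolution):}\quad 4\,\mathrm{OH^-} \rightarrow \mathrm{O_2} + 2\,\mathrm{H_2O} + 4\,e^- \]
\[ \text{Overall:}\quad 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2} \]
```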
In these devices, each electrode is embedded with a different catalyst, typically platinum and iridium, two rare and costly metals. But in 2014, Stanford chemist Hongjie Dai developed a water splitter made of inexpensive nickel and iron that runs on an ordinary 1.5-volt battery.
In the new study, Cui and his colleagues advanced that technology further.
Stanford University | Stanford water splitter produces clean hydrogen 24/7
A single catalyst
In conventional water splitters, the hydrogen and oxygen catalysts often require different electrolytes with different pH — one acidic, one alkaline — to remain stable and active. “For practical water splitting, an expensive barrier is needed to separate the two electrolytes, adding to the cost of the device,” explained graduate student Haotian Wang, lead author of the study.
“Our water splitter is unique because we only use one catalyst, nickel-iron oxide, for both electrodes,” Wang said. “This bi-functional catalyst can split water continuously for more than a week with a steady input of just 1.5 volts of electricity. That’s an unprecedented water-splitting efficiency of 82 percent at room temperature.”
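The 82 percent figure is consistent with a simple voltage-efficiency reading against the 1.23 V thermodynamic minimum for splitting water (this interpretation is an inference from the quoted numbers; the paper may define its efficiency differently):

```latex
% Voltage efficiency relative to the thermodynamic water-splitting potential:
\[ \eta \approx \frac{E_{\text{thermo}}}{E_{\text{applied}}} = \frac{1.23\ \text{V}}{1.5\ \text{V}} \approx 82\% \]
```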
Wang and his colleagues discovered that nickel-iron oxide, which is cheap and easy to produce, is actually more stable than some commercial catalysts made of expensive precious metals.
The key to making a single catalyst possible was to use lithium ions to chemically break the metal oxide catalyst into smaller and smaller pieces. That “increases its surface area and exposes lots of ultra-small, interconnected grain boundaries that become active sites for the water-splitting catalytic reaction,” Cui said. “This process creates tiny particles that are strongly connected, so the catalyst has very good electrical conductivity and stability.”
Using one catalyst made of nickel and iron also has significant implications in terms of cost.
“Not only are the materials cheaper, but having a single catalyst also reduces two sets of capital investment to one,” Cui said. “We believe that electrochemical tuning can be used to find new catalysts for other chemical fuels beyond hydrogen. The technique has been used in battery research for many years, but it’s a new approach for catalysis. The marriage of these two fields is very powerful.”
“Our group has pioneered the idea of using lithium-ion batteries to search for catalysts,” Cui said. “Our hope is that this technique will lead to the discovery of new catalysts for other reactions beyond water splitting.”
Abstract of Bifunctional non-noble metal oxide nanoparticle electrocatalysts through lithium-induced conversion for overall water splitting
Developing earth-abundant, active and stable electrocatalysts which operate in the same electrolyte for water splitting, including oxygen evolution reaction and hydrogen evolution reaction, is important for many renewable energy conversion processes. Here we demonstrate the improvement of catalytic activity when transition metal oxide (iron, cobalt, nickel oxides and their mixed oxides) nanoparticles (~20 nm) are electrochemically transformed into ultra-small diameter (2–5 nm) nanoparticles through lithium-induced conversion reactions. Different from most traditional chemical syntheses, this method maintains excellent electrical interconnection among nanoparticles and results in large surface areas and many catalytically active sites. We demonstrate that lithium-induced ultra-small NiFeOx nanoparticles are active bifunctional catalysts exhibiting high activity and stability for overall water splitting in base. We achieve 10 mA cm⁻² water-splitting current at only 1.51 V for over 200 h without degradation in a two-electrode configuration and 1 M KOH, better than the combination of iridium and platinum as benchmark catalysts.
Chemists and biologists at UC San Diego have succeeded in designing and synthesizing an artificial cell membrane capable of sustaining continual growth, just like a living cell.
Their achievement will allow scientists to more accurately replicate the behavior of living cell membranes, which until now have been modeled only by synthetic cell membranes without the ability to add new phospholipids.
“The membranes we created, though completely synthetic, mimic several features of more complex living organisms, such as the ability to adapt their composition in response to environmental cues,” said Neal Devaraj, an assistant professor of chemistry and biochemistry at UC San Diego who headed the research team, which included scientists from the campus’ BioCircuits Institute.
“Many other scientists have exploited the ability of lipids to self-assemble into bilayer vesicles with properties reminiscent of cellular membranes, but until now no one has been able to mimic nature’s ability to support persistent phospholipid membrane formation,” he explained. “We developed an artificial cell membrane that continually synthesizes all of the components needed to form additional catalytic membranes.”
Michael Hardy | Autocatalyst Drives Vesicle Growth
A time-lapse video shows increase in vesicle volume and membrane surface area at 60 second intervals over a period of 12 hours (credit: Michael Hardy, UC San Diego)
The scientists said in their paper, published in the current issue of Proceedings of the National Academy of Sciences, that to develop the growing membrane they substituted a “complex network of biochemical pathways used in nature with a single autocatalyst that simultaneously drives membrane growth.” In this way, they added, “our system continually transforms simpler, higher-energy building blocks into new artificial membranes.”
“Our results demonstrate that complex lipid membranes capable of indefinite self-synthesis can emerge when supplied with simpler chemical building blocks,” said Devaraj. “Synthetic cell membranes that can grow like real membranes will be an important new tool for synthetic biology and origin-of-life studies.”
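The core idea, a catalyst that makes both new membrane and more of itself from simpler precursors, can be caricatured with a toy growth model. Everything in the sketch below (rate constant, concentrations, the fraction of product that becomes new catalyst) is an illustrative assumption rather than chemistry from the paper:

```python
# Toy simulation of autocatalytic membrane growth: the more catalyst the membrane
# carries, the faster it converts precursor into new membrane and new catalyst.
# All numbers are made up for illustration.

k = 0.5              # assumed rate constant, 1/(mM*h)
dt = 0.01            # time step, hours
precursor = 10.0     # mM of simpler, higher-energy building blocks
catalyst = 0.01      # mM of membrane-bound autocatalyst
membrane = 0.0       # arbitrary units of new bilayer formed

for _ in range(int(12 / dt)):            # roughly the 12-hour time-lapse window
    rate = k * catalyst * precursor      # autocatalytic: speeds up as catalyst grows
    precursor -= rate * dt
    catalyst += 0.1 * rate * dt          # assume 10% of product is fresh catalyst
    membrane += 0.9 * rate * dt          # the rest is new membrane

print(f"precursor left: {precursor:.2f} mM, membrane formed: {membrane:.2f} a.u.")
```

The qualitative point is the accelerating takeoff: as long as precursor is available, growth speeds up because the catalyst keeps replenishing itself.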
Support for the research was provided by UC San Diego, US Army Research Laboratory, US Army Research Office, and the National Science Foundation.
Abstract of Self-reproducing catalyst drives repeated phospholipid synthesis and membrane growth
Cell membranes are dynamic structures found in all living organisms. There have been numerous constructs that model phospholipid membranes. However, unlike natural membranes, these biomimetic systems cannot sustain growth owing to an inability to replenish phospholipid-synthesizing catalysts. Here we report on the design and synthesis of artificial membranes embedded with synthetic, self-reproducing catalysts capable of perpetuating phospholipid bilayer formation. Replacing the complex biochemical pathways used in nature with an autocatalyst that also drives lipid synthesis leads to the continual formation of triazole phospholipids and membrane-bound oligotriazole catalysts from simpler starting materials. In addition to continual phospholipid synthesis and vesicle growth, the synthetic membranes are capable of remodeling their physical composition in response to changes in the environment by preferentially incorporating specific precursors. These results demonstrate that complex membranes capable of indefinite self-synthesis can emerge when supplied with simpler chemical building blocks.
Iowa State University engineers have developed microrobotic tentacles that could allow small robots to safely handle delicate objects.
As described in an open-access research paper in the journal Scientific Reports, the tentacles are microtubes just a third of an inch long and less than a hundredth of an inch wide. They’re made from PDMS (polydimethylsiloxane), a transparent elastomer that can be a liquid or a soft, rubbery solid.
“Most robots use two fingers and to pick things up they have to squeeze,” said Jaeyoun (Jay) Kim, an Iowa State University associate professor of electrical and computer engineering and an associate of the U.S. Department of Energy’s Ames Laboratory. “But these tentacles wrap around very gently.”
The researchers sealed one end of the tube and pumped air in and out. The air pressure and the microtube’s asymmetrical wall thickness created a circular bend. They then added a small lump of PDMS to the base of the tube to amplify the bend and create a two-turn spiraling, coiling action. The resulting soft-robotic micro-tentacle can wind around and hold fragile micro-objects.
“Spiraling tentacles are widely utilized in nature for grabbing and squeezing objects,” the engineers wrote in the paper. “There have been continuous soft-robotic efforts to mimic them… but the life-like, multi-turn spiraling motion has been reproduced only by centimeter-scale tentacles so far. At millimeter and sub-millimeter scales, they could bend only up to a single turn.”
Extending the reach of surgical robots
The micro-tentacle’s final spiral radius is about 200 micrometers (millionths of a meter), with a grabbing force in the vicinity of 0.78 millinewtons at 9.8 psi pneumatic pressure — weaker than those of existing elastomer-based pneumatic micro-actuators.
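To put those figures in everyday terms, here is a quick unit conversion (the conversion factors are standard; the numbers being converted are the ones reported above):

```python
# Convert the reported actuation pressure and grabbing force into familiar units.

PSI_TO_KPA = 6.894757          # kilopascals per psi
G = 9.80665                    # m/s^2, standard gravity

pressure_kpa = 9.8 * PSI_TO_KPA            # ~68 kPa of pneumatic pressure
force_gram_force = 0.78e-3 / G * 1000      # 0.78 mN expressed in gram-force

print(f"Actuation pressure: {pressure_kpa:.1f} kPa")
print(f"Grabbing force: {force_gram_force:.2f} gram-force "
      f"(roughly the weight of an {force_gram_force * 1000:.0f} mg object)")
```

In other words, the grip is comparable to the weight of a few grains of rice resting on the object.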
Kim said that makes the micro-tentacles ideal for medical applications, because they are gentle enough not to damage tissues or even blood vessels.
The design also unites two current research areas, he said. “There’s microrobotics, where people want to make robots smaller and smaller. And there’s soft robotics, where people don’t want to make robots out of iron and steel. This project is an overlap of both of those fields. I want to pioneer new work in the field with both microscale and soft robotics.”
The study was supported by Kim’s six-year, $400,000 Faculty Early Career Development Award from the National Science Foundation.
No ants were harmed in testing the micro-tentacles, just really freaked out.
Abstract of Microrobotic tentacles with spiral bending capability based on shape-engineered elastomeric microtubes
Microscale soft-robots hold great promise as safe handlers of delicate micro-objects but their wider adoption requires micro-actuators with greater efficiency and ease-of-fabrication. Here we present an elastomeric microtube-based pneumatic actuator that can be extended into a microrobotic tentacle. We establish a new, direct peeling-based technique for building long and thin, highly deformable microtubes and a semi-analytical model for their shape-engineering. Using them in combination, we amplify the microtube’s pneumatically-driven bending into multi-turn inward spiraling. The resulting micro-tentacle exhibits spiraling with the final radius as small as ~185 μm and grabbing force of ~0.78 mN, rendering itself ideal for non-damaging manipulation of soft, fragile micro-objects. This spiraling tentacle-based grabbing modality, the direct peeling-enabled elastomeric microtube fabrication technique, and the concept of microtube shape-engineering are all unprecedented and will enrich the field of soft-robotics.
Three new significant developments in machine-learning were announced last week.
Reading and comprehending natural-language documents
Google DeepMind in London said it has developed a way to teach machines to read natural-language documents, comprehend them and, like Watson, answer complex questions with minimal prior knowledge of language structure — at least for CNN and Daily Mail websites.
As noted by the researchers in an arXiv paper (open access), these websites have summaries (such as bulleted lists) and paraphrase sentences. The researchers were able to use these for creating context–query–answer triples for each document. In the process, they generated two new corpora (collections of data) of roughly a million news stories with associated queries to serve as training sets.
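The construction is easiest to see with a tiny, made-up example: take an article, take one of its summary points, blank out an entity in the summary, and ask the machine to fill it back in from the article. The text and the naive entity handling below are illustrative assumptions (the paper additionally anonymizes entities so that general world knowledge cannot be used as a shortcut):

```python
# Minimal sketch of turning an article plus a summary bullet into a
# context-query-answer triple, in the spirit of the DeepMind corpora.
# The example text is invented; the real preprocessing is more involved.

import re

article = ("Acme Corp announced on Monday that its chief executive, Jane Doe, "
           "will step down at the end of the year.")
summary_bullet = "Jane Doe will step down as chief executive of Acme Corp"

def make_cloze_triple(context, bullet, entity):
    """Blank out one entity mention in the summary to form the query."""
    query = re.sub(re.escape(entity), "@placeholder", bullet, count=1)
    return {"context": context, "query": query, "answer": entity}

triple = make_cloze_triple(article, summary_bullet, "Jane Doe")
print(triple["query"])   # @placeholder will step down as chief executive of Acme Corp
print(triple["answer"])  # Jane Doe
```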
Facial recognition for sharing photos with friends
Facebook has launched Moments, an app that groups the photos on your phone based on when they were taken and, using facial recognition technology, which friends are in them. You can then privately sync those photos quickly and easily with specific friends, and they can choose to sync their photos with you as well.
The app and this technology are based in part on work conducted by the Facebook AI Research (FAIR) team, headed by AI research director Yann LeCun, as he explains in this video:
Facebook | Facebook AI research
But an experimental algorithm created by Facebook’s FAIR lab can recognize people in photographs even when it can’t see their faces. Instead it looks for other unique characteristics like your hairdo, clothing, body shape and pose, New Scientist notes.
“The research team pulled almost 40,000 public photos from Flickr — some of people with their full face clearly visible, and others where they were turned away — and ran them through a sophisticated neural network. The final algorithm was able to recognize individual people’s identities with 83 per cent accuracy. An algorithm like this could one day help power photo apps like Facebook’s Moments.
“LeCun also imagines such a tool would be useful for the privacy-conscious – alerting someone whenever a photo of themselves, however obscured, pops up on the internet. The flipside is also true: the ability to identify someone even when they are not looking at the camera raises some serious privacy implications.”
Amazon machine learning algorithm fights fake product reviews
Amazon has developed a machine learning algorithm that will “learn which reviews are most helpful to customers” — that is, which reviews are real and which ones are fake. (Amazon sued a number of websites that specialized in creating fake Amazon reviews in April.) Amazon will give greater weight to newer, more helpful and verified customer reviews and ratings (their 5-star system).
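Amazon has not published the details, but the general shape of such a system, weighting each review rather than averaging them all equally, is easy to sketch. The features and weights below are purely hypothetical, not Amazon’s algorithm:

```python
# Hypothetical sketch of a weighted star rating: newer, more helpful, and
# verified-purchase reviews count more than old, unverified ones.

from dataclasses import dataclass
from typing import List

@dataclass
class Review:
    stars: int              # 1-5 star rating
    helpful_votes: int      # "was this review helpful" votes
    verified_purchase: bool
    age_days: int           # how old the review is

def review_weight(r: Review) -> float:
    """Assumed weighting scheme, for illustration only."""
    recency = 1.0 / (1.0 + r.age_days / 365.0)
    helpfulness = 1.0 + min(r.helpful_votes, 50) / 50.0
    verified = 1.5 if r.verified_purchase else 1.0
    return recency * helpfulness * verified

def weighted_rating(reviews: List[Review]) -> float:
    total = sum(review_weight(r) for r in reviews)
    return sum(review_weight(r) * r.stars for r in reviews) / total

reviews = [
    Review(stars=5, helpful_votes=40, verified_purchase=True, age_days=10),
    Review(stars=1, helpful_votes=0, verified_purchase=False, age_days=900),
    Review(stars=4, helpful_votes=3, verified_purchase=True, age_days=60),
]
print(f"Weighted rating: {weighted_rating(reviews):.2f} stars; "
      f"plain average: {sum(r.stars for r in reviews) / len(reviews):.2f}")
```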
Amazon Web Services began offering its Amazon Machine Learning service in April to make “it easy for developers of all skill levels to use machine learning technology … without having to learn complex ML algorithms and technology.”
How would you like to produce carbon nanoparticles small enough to evade the body’s immune system, that reflect light in the near-infrared range for easy detection in the body, and even carry payloads of pharmaceutical drugs to targeted tissues — all in the privacy of your own home?
Unlike other methods of making carbon nanoparticles — which require expensive equipment and purification processes that can take days — the handy-dandy new approach lets you generate your own biomedical-class nanoparticles in a few hours, using store-bought molasses and honey. That and a pig.
The researchers report their big findings in the journal Small.
The DIY biomedical nanoparticles kitchen recipe
“If you have a microwave, honey, and molasses, you can pretty much make these particles at home,” Pan said. “You just mix these two together and cook it for a few minutes, and you get something that looks like char, but that is nanoparticles with high luminescence [they glow]. This is one of the simplest systems that we can think of. It is safe and highly scalable for eventual clinical use.”
These “next-generation” carbon nanospheres, or “luminescent carbon nanomaterials,” have several attractive properties, the researchers found:
- For imaging, they naturally scatter light in a manner that makes them easy to differentiate from human tissues, eliminating the need for added dyes or fluorescing molecules to help detect them in the body. The nanoparticles are coated with polymers that fine-tune their optical properties and their rate of degradation in the body.
- The polymers can also be loaded with drugs that are gradually released.
The nanoparticles also can be made less than eight nanometers in diameter (a human hair is 80,000 to 100,000 nanometers thick). “Our immune system fails to recognize anything under 10 nanometers,” Pan said. “So, these tiny particles are kind of camouflaged, I would say; they are hiding from the human immune system.”
Testing the nanoparticles: don’t try this at home
Unless you have a really unusual kitchen, this part will require a lab. Bhargava’s laboratory used vibrational spectroscopic techniques to identify the molecular structure of the nanoparticles and their cargo.
“Raman and infrared spectroscopy are the two tools that one uses to see molecular structure,” Bhargava said. “We think we coated this particle with a specific polymer and with specific drug-loading — but did we really? We use spectroscopy to confirm the formulation as well as visualize the delivery of the particles and drug molecules.”
The team found that the nanoparticles did not release the drug payload at room temperature, but at body temperature began to release the anti-cancer drug.
Here’s where the pig comes in. The team tested the therapeutic potential of the nanoparticles by loading them with an anti-melanoma drug and mixing them in a topical solution that was applied to pig skin. The researchers also determined which topical applications penetrated the skin to a desired depth.
In further experiments, the researchers found they could alter the infusion of the particles into melanoma cells by adjusting the polymer coatings. Imaging confirmed that the infused cells began to swell, a sign of impending cell death.
“This is a versatile platform to carry a multitude of drugs — for melanoma, for other kinds of cancers and for other diseases,” Bhargava said. “You can coat it with different polymers to give it a different optical response. You can load it with two drugs, or three, or four, so you can do multidrug therapy with the same particles.”
“By using defined surface chemistry, we can change the properties of these particles,” Pan said. “We can make them glow at a certain wavelength and also we can tune them to release the drugs in the presence of the cellular environment. That is, I think, the beauty of the work.”
Abstract of Tunable luminescent carbon nanospheres with well-defined nanoscale chemistry for synchronized imaging and therapy
In this work, we demonstrate the significance of defined surface chemistry in synthesizing luminescent carbon nanomaterials (LCN) with the capability to perform dual functions (i.e., diagnostic imaging and therapy). The surface chemistry of LCN has been tailored to achieve two different varieties: one that has a thermoresponsive polymer and aids in the controlled delivery of drugs, and the other that has fluorescence emission both in the visible and near-infrared (NIR) region and can be explored for advanced diagnostic modes. Although these particles are synthesized using simple, yet scalable hydrothermal methods, they exhibit remarkable stability, photoluminescence and biocompatibility. The photoluminescence properties of these materials are tunable through careful choice of surface-passivating agents and can be exploited for both visible and NIR imaging. Here the synthetic strategy demonstrates the possibility to incorporate a potent antimetastatic agent for inhibiting melanomas in vitro. Since both particles are Raman active, their dispersion on skin surface is reported with Raman imaging and utilizing photoluminescence, their depth penetration is analysed using fluorescence 3D imaging. Our results indicate a new generation of tunable carbon-based probes for diagnosis, therapy or both.
Using factual information from summary infoboxes on Wikipedia* as a source, Indiana University (IU) scientists built a “knowledge graph” with 3 million concepts and 23 million links between them. A link between two concepts in the graph can be read as a simple factual statement, such as “Socrates is a person” or “Paris is the capital of France.”
In the first use of this method, the IU scientists created a simple computational fact-checker that assigns “truth scores” to statements concerning history, geography and entertainment, as well as random statements drawn from the text of Wikipedia. In multiple experiments, the automated system consistently matched the assessment of human fact-checkers in terms of the humans’ certitude about the accuracy of these statements.
Dealing with misinformation and disinformation
In what the IU scientists describe as an “automatic game of trivia,” the team applied their algorithm to answer simple questions related to geography, history, and entertainment, including statements that matched states or nations with their capitals, presidents with their spouses, and Oscar-winning film directors with the movie for which they won the Best Picture award. The majority of tests returned highly accurate truth scores.
Lastly, the scientists used the algorithm to fact-check excerpts from the main text of Wikipedia, which were previously labeled by human fact-checkers as true or false, and found a positive correlation between the truth scores produced by the algorithm and the answers provided by the fact-checkers.
Significantly, the IU team found their computational method could even assess the truthfulness of statements about information not directly contained in the infoboxes. For example, it could verify that Steve Tesich — the Serbian-American screenwriter of the classic Hoosier film “Breaking Away” — graduated from IU, even though that information is not specifically addressed in the infobox about him.
Using multiple sources to improve accuracy and richness of data
“The measurement of the truthfulness of statements appears to rely strongly on indirect connections, or ‘paths,’ between concepts,” said Giovanni Luca Ciampaglia, a postdoctoral fellow at the Center for Complex Networks and Systems Research in the IU Bloomington School of Informatics and Computing, who led the study.
“If we prevented our fact-checker from traversing multiple nodes on the graph, it performed poorly since it could not discover relevant indirect connections,” said Ciampaglia. “But because it’s free to explore beyond the information provided in one infobox, our method leverages the power of the full knowledge graph.
“These results are encouraging and exciting. We live in an age of information overload, including abundant misinformation, unsubstantiated rumors and conspiracy theories whose volume threatens to overwhelm journalists and the public. Our experiments point to methods to abstract the vital and complex human task of fact-checking into a network analysis problem, which is easy to solve computationally.”
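A minimal sketch of the path-based scoring idea Ciampaglia describes, assuming (as one plausible reading of the paper) that a statement scores higher when its subject and object are linked by a short path through specific, low-degree concepts. The miniature graph and the exact scoring function are illustrative assumptions, not the published method in detail:

```python
# Toy fact-checker: score a "subject -- object" statement by looking for paths
# in a knowledge graph, penalizing paths that route through generic hub concepts.

import math
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Socrates", "person"), ("Plato", "person"), ("Paris", "France"),
    ("France", "country"), ("Paris", "city"), ("Rome", "Italy"),
    ("Italy", "country"), ("Rome", "city"),
])

def truth_score(subject, obj, max_len=4):
    """Score = 1 / (1 + sum of log-degrees of intermediate nodes), best path wins."""
    if not (G.has_node(subject) and G.has_node(obj)):
        return 0.0
    best = 0.0
    for path in nx.all_simple_paths(G, subject, obj, cutoff=max_len):
        penalty = sum(math.log(G.degree(v)) for v in path[1:-1])
        best = max(best, 1.0 / (1.0 + penalty))
    return best

print(truth_score("Paris", "France"))     # directly linked -> 1.0
print(truth_score("Paris", "Italy"))      # only via generic hubs -> lower score
print(truth_score("Socrates", "France"))  # no short path -> 0.0
```

Because “country” and “city” are generic hubs, a statement that can only be supported by routing through them earns a lower score than one backed by a direct, specific link, which is the intuition behind penalizing high-degree intermediate nodes.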
Expanding the knowledge base
Although the experiments were conducted using Wikipedia, the IU team’s method does not assume any particular source of knowledge. The scientists aim to conduct additional experiments using knowledge graphs built from other sources of human knowledge, such as Freebase, the open-knowledge base built by Google, and note that multiple information sources could be used together to account for different belief systems.
The team has also incorporated a significant amount of natural language processing research, but notes that additional work remains before these methods could be made available to the public as a software tool.
The work was supported in part by the Swiss National Science Foundation, the Lilly Endowment, the James S. McDonnell Foundation, the National Science Foundation, and the Department of Defense.
* The team selected Wikipedia as the information source for their experiment due to its breadth and open nature. Although Wikipedia is not 100 percent accurate, previous studies estimate the online encyclopedia is nearly as reliable as traditional encyclopedias, but also covers many more subjects, the researchers note.
Abstract of Computational Fact Checking from Knowledge Networks
Traditional fact checking by expert journalists cannot keep up with the enormous volume of information that is now generated online. Computational fact checking may significantly enhance our ability to evaluate the veracity of dubious information. Here we show that the complexities of human fact checking can be approximated quite well by finding the shortest path between concept nodes under properly defined semantic proximity metrics on knowledge graphs. Framed as a network problem this approach is feasible with efficient computational techniques. We evaluate this approach by examining tens of thousands of claims related to history, entertainment, geography, and biographical information using a public knowledge graph extracted from Wikipedia. Statements independently known to be true consistently receive higher support via our method than do false ones. These findings represent a significant step toward scalable computational fact-checking methods that may one day mitigate the spread of harmful misinformation.
There is no longer any doubt: we are entering a mass extinction that threatens humanity’s existence.
That’s the conclusion of a new study by a group of scientists including Paul Ehrlich, the Bing Professor of Population Studies in biology and a senior fellow at the Stanford Woods Institute for the Environment. Ehrlich and his co-authors call for fast action to conserve threatened species, populations and habitat, but warn that the window of opportunity is rapidly closing.
“[The study] shows without any significant doubt that we are now entering the sixth great mass extinction event,” Ehrlich said.
Although most well known for his positions on human population, Ehrlich has done extensive work on extinctions going back to his 1981 book, Extinction: The Causes and Consequences of the Disappearance of Species. He has long tied his work on coevolution, on racial, gender and economic justice, and on nuclear winter to the issue of wildlife populations and species loss.
According to the study, there is general agreement among scientists that extinction rates have reached levels unparalleled since the dinosaurs died out 66 million years ago. However, some have challenged the theory, believing earlier estimates rested on assumptions that overestimated the crisis.
The new study, published (open-access) in the journal Science Advances, shows that even with extremely conservative estimates, species are disappearing up to about 100 times faster than the normal rate between mass extinctions, known as the background rate.
“If it is allowed to continue, life would take many millions of years to recover, and our species itself would likely disappear early on,” said lead author Gerardo Ceballos of the Universidad Autónoma de México.
Stanford | Stanford researcher warns sixth mass extinction is here
“The walking dead”
Using fossil records and extinction counts from a range of sources, the researchers compared a highly conservative estimate of current extinctions with a background rate estimate twice as high as those widely used in previous analyses. This way, they brought the two estimates — the current extinction rate and the average background (or going-on-all-the-time) extinction rate — as close to each other as possible.
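As a back-of-the-envelope illustration of that comparison, using the 2 E/MSY background rate quoted in the abstract below; the species count, time window, and observed extinction count here are round, invented numbers, not the study’s data:

```python
# Compare an observed extinction count against what the background rate predicts.
# BACKGROUND_RATE comes from the abstract (2 extinctions per 10,000 species per
# 100 years); the other three numbers are illustrative assumptions only.

BACKGROUND_RATE = 2 / 10_000 / 100       # extinctions per species per year

n_species = 40_000                       # assumed size of a vertebrate group
years = 115                              # assumed observation window (since 1900)
observed_extinctions = 400               # assumed observed count, for illustration

expected_at_background = BACKGROUND_RATE * n_species * years
ratio = observed_extinctions / expected_at_background

print(f"Expected extinctions at background rate: {expected_at_background:.0f}")
print(f"Observed / expected: {ratio:.0f}x the background rate")
```

With the study’s actual data, the same kind of comparison yields vertebrate loss rates up to 114 times the background rate, as the abstract below reports.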
Focusing on vertebrates, the group for which the most reliable modern and fossil data exist, the researchers asked whether even the lowest estimates of the difference between background and contemporary extinction rates still justify the conclusion that people are precipitating “a global spasm of biodiversity loss.” The answer: a definitive yes.
“We emphasize that our calculations very likely underestimate the severity of the extinction crisis, because our aim was to place a realistic lower bound on humanity’s impact on biodiversity,” the researchers write.
Their list of human impacts includes:
- land clearing for farming, logging and settlement
- introduction of invasive species
- carbon emissions that drive climate change and ocean acidification
- toxins that alter and poison ecosystems
Now, the specter of extinction hangs over about 41 percent of all amphibian species and 26 percent of all mammals, according to the International Union for Conservation of Nature, which maintains an authoritative list of threatened and extinct species.
“There are examples of species all over the world that are essentially the walking dead,” Ehrlich said.
As species disappear, so do crucial ecosystem services such as honeybees’ crop pollination and wetlands’ water purification. At the current rate of species loss, people will lose many biodiversity benefits within three generations, the study’s authors write. “We are sawing off the limb that we are sitting on,” Ehrlich said.
Hope for the future
Despite the gloomy outlook, there is a meaningful way forward, according to Ehrlich and his colleagues. “Avoiding a true sixth mass extinction will require rapid, greatly intensified efforts to conserve already threatened species, and to alleviate pressures on their populations — notably habitat loss, over-exploitation for economic gain, and climate change,” the study’s authors write.
In the meantime, the researchers hope their work will inform conservation efforts, the maintenance of ecosystem services, and public policy.
Abstract of Accelerated modern human–induced species losses: Entering the sixth mass extinction
The oft-repeated claim that Earth’s biota is entering a sixth “mass extinction” depends on clearly demonstrating that current extinction rates are far above the “background” rates prevailing in the five previous mass extinctions. Earlier estimates of extinction rates have been criticized for using assumptions that might overestimate the severity of the extinction crisis. We assess, using extremely conservative assumptions, whether human activities are causing a mass extinction. First, we use a recent estimate of a background rate of 2 mammal extinctions per 10,000 species per 100 years (that is, 2 E/MSY), which is twice as high as widely used previous estimates. We then compare this rate with the current rate of mammal and vertebrate extinctions. The latter is conservatively low because listing a species as extinct requires meeting stringent criteria. Even under our assumptions, which would tend to minimize evidence of a starting mass extinction, the average rate of vertebrate species loss over the last century is up to 114 times higher than the background rate. Under the 2 E/MSY background rate, the number of species that have gone extinct in the last century would have taken, depending on the vertebrate taxon, between 500 and 11,400 years to disappear. These estimates reveal an exceptionally rapid loss of biodiversity over the last few centuries, indicating that a sixth mass extinction is already under way. Averting a dramatic decay of biodiversity and the subsequent loss of ecosystem services is still possible through intensified conservation efforts, but that window of opportunity is rapidly closing.