A computer model of how bees use vision to avoid hitting walls could be a breakthrough in the development of autonomous drones.
Bees control their flight using the speed of motion (optic flow) of the visual world around them. A study by scientists in the University of Sheffield’s Department of Computer Science suggests how motion-direction-detecting circuits could be wired together to also detect motion speed, which is crucial for controlling bees’ flight.
“Honeybees are excellent navigators and explorers, using vision extensively in these tasks, despite having a brain of only one million neurons,” said Alex Cope, PhD, lead researcher on the paper. “Understanding how bees avoid walls, and what information they can use to navigate, moves us closer to the development of efficient algorithms for navigation and routing, which would greatly enhance the performance of autonomous flying robotics,” he added.
“Experimental evidence shows that they use an estimate of the speed that patterns move across their compound eyes (angular velocity) to control their behavior and avoid obstacles; however, the brain circuitry used to extract this information is not understood,” the researchers note. “We have created a model that uses a small number of assumptions to demonstrate a plausible set of circuitry. Since bees only extract an estimate of angular velocity, they show differences from the expected behavior for perfect angular velocity detection, and our model reproduces these differences.”
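The model extends classic correlation-based motion detection. As a minimal, hypothetical sketch (not the authors’ circuitry), a Hassenstein–Reichardt elementary motion detector correlates each photoreceptor signal with a delayed copy of its neighbor’s; its output is direction-selective, but its magnitude confounds speed with contrast and spatial frequency, which is the gap an angular-velocity-tuned detector has to close:

```python
import numpy as np

def reichardt_emd(left, right, delay):
    """Hassenstein-Reichardt elementary motion detector: correlate each
    photoreceptor signal with a delayed copy of its neighbor's. The sign
    of the output encodes direction; the magnitude varies with speed but
    also with contrast and spatial frequency (the confound noted above)."""
    return np.roll(left, delay) * right - np.roll(right, delay) * left

# Toy stimulus: a grating drifting past two neighboring photoreceptors
t = np.linspace(0.0, 1.0, 1000)          # 1 s sampled at 1 kHz
drift_hz = 8.0                           # temporal frequency of the drift
phase = 0.3                              # angular separation of the receptors
left = np.sin(2 * np.pi * drift_hz * t)
right = np.sin(2 * np.pi * drift_hz * t - phase)

# Positive mean output: motion from the left receptor toward the right one
print(reichardt_emd(left, right, delay=5).mean())
```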
Their open-access paper is published in PLOS Computational Biology.

Abstract of A Model for an Angular Velocity-Tuned Motion Detector Accounting for Deviations in the Corridor-Centering Response of the Bee
We present a novel neurally based model for estimating angular velocity (AV) in the bee brain, capable of quantitatively reproducing experimental observations of visual odometry and corridor-centering in free-flying honeybees, including previously unaccounted for manipulations of behaviour. The model is fitted using electrophysiological data, and tested using behavioural data. Based on our model we suggest that the AV response can be considered as an evolutionary extension to the optomotor response. The detector is tested behaviourally in silico with the corridor-centering paradigm, where bees navigate down a corridor with gratings (square wave or sinusoidal) on the walls. When combined with an existing flight control algorithm the detector reproduces the invariance of the average flight path to the spatial frequency and contrast of the gratings, including deviations from perfect centering behaviour as found in the real bee’s behaviour. In addition, the summed response of the detector to a unit distance movement along the corridor is constant for a large range of grating spatial frequencies, demonstrating that the detector can be used as a visual odometer.
Columbia University engineering researchers have developed a new “circulator” technology that can double WiFi speed while reducing the size of wireless devices. It does this by requiring only one antenna (instead of two, for transmitter and receiver) and by using conventional CMOS chips instead of resorting to large, expensive magnetic components.
Columbia engineers previously invented a “full-duplex” radio integrated circuit on a conventional CMOS chip. “Full duplex” means simultaneous transmission and reception at the same frequency in a wireless radio, unlike “half-duplex” (transmitting and receiving at different times, used by current cell phones and other wireless devices). Full duplex also allows for faster transmission speeds.
The new circulator technology further miniaturizes future WiFi and other wireless devices (see Lighter, cheaper radio-wave device could double the useful bandwidth in wireless communications — an earlier circulator device developed by The University of Texas at Austin engineers that was not integrated on a CMOS chip).
“Full-duplex communications, where the transmitter and the receiver operate at the same time and at the same frequency, has become a critical research area and now we’ve shown that WiFi capacity can be doubled on a nanoscale silicon chip with a single antenna,” said Electrical Engineering Associate Professor Harish Krishnaswamy, director of the Columbia High-Speed and Mm-wave IC (CoSMIC) Lab. “This has enormous implications for devices like smartphones and tablets.”
By combining circulator and full-duplex technologies, “this technology could revolutionize the field of telecommunications,” he said. “Our circulator is the first to be put on a silicon chip, and we get literally orders of magnitude better performance than prior work.”
How to embed circulator technology on a CMOS chip
A circulator allows for using only one antenna to both transmit and receive. To do that, it has to break “Lorentz reciprocity” — a fundamental physical characteristic of most electronic structures that requires electromagnetic waves to travel in the same manner in both forward and reverse directions.
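In scattering-parameter terms, a reciprocal network has a symmetric scattering matrix (S equals its transpose). A quick conceptual illustration (not a model of the Columbia device) is the ideal three-port circulator, which routes power one way around the ring, port 1 → 2 → 3 → 1, and is manifestly non-symmetric:

```python
import numpy as np

# Ideal 3-port circulator: a signal entering port 1 exits port 2,
# port 2 exits port 3, and port 3 exits port 1 (a one-way rotation).
S = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])

# Lorentz reciprocity would require S to equal its transpose.
print("reciprocal:", np.array_equal(S, S.T))   # False: non-reciprocal
```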
The traditional way of breaking Lorentz reciprocity and building radio-frequency circulators has been to use magnetic materials such as ferrites, which lose reciprocity when an external magnetic field is applied. But these materials are not compatible with silicon chip technology, and ferrite circulators are bulky and expensive.
Krishnaswamy and his team were able to design a highly miniaturized circulator that uses switches to rotate the signal across a set of capacitors to emulate the non-reciprocal “twist” of the signal that is seen in ferrite materials.
“Being able to put the circulator on the same chip as the rest of the radio has the potential to significantly reduce the size of the system, enhance its performance, and introduce new functionalities critical to full duplex,” says PhD student Jin Zhou, who integrated the circulator with a full-duplex receiver.
Circulator circuits and components have applications in many different scenarios, from radio-frequency full-duplex communications and radar to building isolators that prevent high-power transmitters from being damaged by back-reflections from the antenna. The ability to break reciprocity also opens up new possibilities in radio-frequency signal processing that are yet to be discovered.
The circulator research was published April 15 in an open-access paper in Nature Communications. A paper detailing the single-chip full-duplex radio with the circulator and additional echo cancellation was presented at the 2016 IEEE International Solid-State Circuits Conference on February 2.
The work has been funded by the DARPA Microsystems Technology Office and the National Science Foundation.
Abstract of Magnetic-free non-reciprocity based on staggered commutation
Lorentz reciprocity is a fundamental characteristic of the vast majority of electronic and photonic structures. However, non-reciprocal components such as isolators, circulators and gyrators enable new applications ranging from radio frequencies to optical frequencies, including full-duplex wireless communication and on-chip all-optical information processing. Such components today dominantly rely on the phenomenon of Faraday rotation in magneto-optic materials. However, they are typically bulky, expensive and not suitable for insertion in a conventional integrated circuit. Here we demonstrate magnetic-free linear passive non-reciprocity based on the concept of staggered commutation. Commutation is a form of parametric modulation with very high modulation ratio. We observe that staggered commutation enables time-reversal symmetry breaking within very small dimensions (λ/1,250 × λ/1,250 in our device), resulting in a miniature radio-frequency circulator that exhibits reduced implementation complexity, very low loss, strong non-reciprocity, significantly enhanced linearity and real-time reconfigurability, and is integrated in a conventional complementary metal–oxide–semiconductor integrated circuit for the first time.
Abstract of Receiver with integrated magnetic-free N-path-filter-based non-reciprocal circulator and baseband self-interference cancellation for full-duplex wireless
Full-duplex (FD) is an emergent wireless communication paradigm where the transmitter (TX) and the receiver (RX) operate at the same time and at the same frequency. The fundamental challenge with FD is the tremendous amount of TX self-interference (SI) at the RX. Low-power applications relax FD system requirements, but an FD system with -6dBm transmit power, 10MHz signal bandwidth and 12dB NF budget still requires 86dB of SI suppression to reach the -92dBm noise floor. Recent research has focused on techniques for integrated self-interference cancellation (SIC) in FD receivers [1-3]. Open challenges include achieving the challenging levels of SIC through multi-domain cancellation, and low-loss shared-antenna (ANT) interfaces with high TX-to-RX isolation. Shared-antenna interfaces enable compact form factor, translate easily to MIMO, and ease system design through channel reciprocity.
Can a robot handle the slippery stuff of soft tissues that can move and change shape in complex ways as stitching goes on, normally requiring a surgeon’s skill to respond to these changes to keep suturing as tightly and evenly as possible?
A Johns Hopkins University and Children’s National Health System research team decided to find out by using their “Smart Tissue Autonomous Robot” (STAR) to perform a procedure called anastomosis* (joining two tubular structures such as blood vessels together), using pig intestinal tissue.
The researchers published the results today in an open-access paper in the journal Science Translational Medicine. The robot surgeon took longer (up to 57 minutes vs. 8 minutes for human surgeons) but “the machine does it better,” according to Peter Kim, M.D., Professor of Surgery at the Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Health System in Washington D.C. Kim said the procedure was about 60 percent fully autonomous and 40 percent supervised (“we made some minor adjustments”), but that it can be made fully autonomous.
“The equivalent of a fancy sewing machine”
STAR was developed by Azad Shademan and associates at the Sheikh Zayed Institute. It features a 3D imaging system and a near-infrared sensor to spot fluorescent markers along the edges of the tissue to keep the robotic suture needle on track. Unlike most other robot-assisted surgical systems, such as the Da Vinci Si, it operates without human hands-on guidance (but under the surgeon’s supervision).
In the research, the STAR robotic sutures were compared with the work of five surgeons completing the same procedure using three methods: open surgery, laparoscopic, and robot assisted surgery. Researchers compared consistency of suture spacing, pressure at which the seam leaked, mistakes that required removing the needle from the tissue or restarting the robot, and completion time.
The system promises to improve results for patients and make the best surgical techniques more widely available, according to the researchers. Putting a robot to work in this form of surgery “really levels the playing field,” said Simon Leonard, a computer scientist and assistant research professor in the Johns Hopkins Whiting School of Engineering, who worked for four years to program the robotic arm to precisely stitch together pieces of soft tissue.
As Leonard put it, they’re designing an advanced surgical tool, “the equivalent of a fancy sewing machine.”
* Anastomosis is performed more than a million times a year in the U.S.; more than 44.5 million such soft-tissue surgeries are performed in the U.S. each year. According to the researchers, complications such as leakage along the seams occur nearly 20 percent of the time in colorectal surgery and 25 to 30 percent of the time in abdominal surgery.
Carla Schaffer/AAAS | Robotic Surgery Just Got More Autonomous

Abstract of Supervised autonomous robotic soft tissue surgery
The current paradigm of robot-assisted surgeries (RASs) depends entirely on an individual surgeon’s manual capability. Autonomous robotic surgery—removing the surgeon’s hands—promises enhanced efficacy, safety, and improved access to optimized surgical techniques. Surgeries involving soft tissue have not been performed autonomously because of technological limitations, including lack of vision systems that can distinguish and track the target tissues in dynamic surgical environments and lack of intelligent algorithms that can execute complex surgical tasks. We demonstrate in vivo supervised autonomous soft tissue surgery in an open surgical setting, enabled by a plenoptic three-dimensional and near-infrared fluorescent (NIRF) imaging system and an autonomous suturing algorithm. Inspired by the best human surgical practices, a computer program generates a plan to complete complex surgical tasks on deformable soft tissue, such as suturing and intestinal anastomosis. We compared metrics of anastomosis—including the consistency of suturing informed by the average suture spacing, the pressure at which the anastomosis leaked, the number of mistakes that required removing the needle from the tissue, completion time, and lumen reduction in intestinal anastomoses—between our supervised autonomous system, manual laparoscopic surgery, and clinically used RAS approaches. Despite dynamic scene changes and tissue movement during surgery, we demonstrate that the outcome of supervised autonomous procedures is superior to surgery performed by expert surgeons and RAS techniques in ex vivo porcine tissues and in living pigs. These results demonstrate the potential for autonomous robots to improve the efficacy, consistency, functional outcome, and accessibility of surgical techniques.
The results of two Yale University psychology experiments suggest that what we believe to be a conscious choice may actually be constructed, or confabulated, unconsciously after we act — to rationalize our decisions. A trick of the mind.
Tricks of the mind
Adam Bear and Paul Bloom performed two simple experiments to test how we experience choices. In one experiment, participants were told that five white circles would appear on the computer screen in front of them and, in rapid-fire sequence, one would turn red. They were asked to predict which one would turn red and mentally note this. After a circle turned red, participants then recorded by keystroke whether they had chosen correctly, had chosen incorrectly, or had not had time to complete their choice.
The circle that turned red was always selected by the system randomly, so probability dictates that participants should predict the correct circle 20% of the time. But when they only had a fraction of a second to make a prediction, these participants were likely to report that they correctly predicted which circle would change color more than 20% of the time.
In contrast, when participants had more time to make their guess — approaching a full second — the reported number of accurate predictions dropped back to expected levels of 20% success, suggesting that participants were not simply lying about their accuracy to impress the experimenters.
(In a second experiment to eliminate artifacts, participants chose one of two different-colored circles, with similar results.)
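If guesses really were fixed before the circle changed color, accuracy would be binomial at the 20% chance level. A small simulation with a hypothetical postdiction rate (all numbers illustrative, not from the study) shows how unconsciously registering the outcome on even a modest fraction of fast trials pushes reported accuracy above chance:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000
p_chance = 0.2        # five circles: 20% chance of a correct guess
postdiction = 0.15    # hypothetical: fraction of trials where the outcome
                      # is unconsciously seen before the "choice" is fixed

genuine_hit = rng.random(n_trials) < p_chance
postdicted = rng.random(n_trials) < postdiction

# A trial is reported correct if the guess was right, or if the outcome
# leaked into the choice without the participant's awareness
reported = genuine_hit | postdicted

print(f"chance level:      {p_chance:.0%}")
print(f"reported accuracy: {reported.mean():.1%}")   # ~32%, well above 20%
```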
What happened, Bear suggests, is that events were rearranged in subjects’ minds: People unconsciously perceived the color red from the screen image before they predicted it would appear, but then right after that, consciously experienced these two things in the opposite order.
Bear said it is unknown whether this “postdictive” illusion is caused by a quirk in perceptual processing that can only be reproduced in the lab, or whether it might have “far more pervasive effects on our everyday lives and sense of free will.”
Previous research at Charité–Universitätsmedizin Berlin suggests the latter, and includes volition. That research involved a “duel” game between a human and a brain-computer interface (see Do we have free will?). It showed that there’s a “point of no return” in the decision-making process (at about 200 milliseconds before actual movement onset), after which cancellation of a person’s movement is no longer possible.

Abstract of A Simple Task Uncovers a Postdictive Illusion of Choice
Do people know when, or whether, they have made a conscious choice? Here, we explore the possibility that choices can seem to occur before they are actually made. In two studies, participants were asked to quickly choose from a set of options before a randomly selected option was made salient. Even when they believed that they had made their decision prior to this event, participants were significantly more likely than chance to report choosing the salient option when this option was made salient soon after the perceived time of choice. Thus, without participants’ awareness, a seemingly later event influenced choices that were experienced as occurring at an earlier time. These findings suggest that, like certain low-level perceptual experiences, the experience of choice is susceptible to “postdictive” influence and that people may systematically overestimate the role that consciousness plays in their chosen behavior.
Serial entrepreneur Peter Diamandis is optimistic and ambitious, even by the standards of tech-can-fix-everything Silicon Valley. He considers it foolish to believe the world is headed anywhere other than an era of abundance, in which everyone has access to first-world-grade resources. Technological advances (artificial intelligence, developments in diagnostic technology, access to previously inaccessible resources) mean people can accomplish more than ever before, with less than was ever needed.
Astronomers have detected three exoplanets just 40 light years from Earth whose sizes and temperatures are comparable to those of Earth. The planets may be the best targets found so far for the search for life outside the solar system.
The results were published Monday (May 2) in the journal Nature.
Because the system is relatively close to Earth, co-author Julien de Wit, a postdoc at MIT, says scientists will soon be able to study the planets’ atmospheric compositions, as well as assess their habitability and whether life actually exists within this planetary system.
The scientists discovered the planets using TRAPPIST (TRAnsiting Planets and PlanetesImals Small Telescope), a 60-centimeter telescope in Chile operated by the University of Liège. Built by lead authors Michael Gillon and Emmanuel Jehin of the University of Liège, TRAPPIST is designed to focus on 60 nearby small, “ultracool” dwarf stars (those with effective temperatures of less than 2,700 kelvin) — stars that are so faint they are invisible to optical telescopes and are monitored at infrared wavelengths.
The team focused the telescope on the dwarf star, which they named TRAPPIST-1 — a Jupiter-sized star that is one-eighth the size of our sun and significantly cooler. Over several months, the scientists observed the star’s infrared signal fade slightly at regular intervals, suggesting that several objects were passing in front of the star.
Most exoplanetary missions have been focused on finding systems around bright, solar-like stars. These stars emit radiation in the visible band and can be seen with optical telescopes. However, because these stars are so bright, their light can overpower any signal coming from a planet. Ultracool stars emit radiation in the infrared band. Because they are so faint, these tiny red stars would not drown out the image of a planet crossing the star, giving scientists a better chance of detecting orbiting planets.
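The size advantage is easy to quantify: a transit dims the star by roughly the square of the planet-to-star radius ratio. The sketch below compares an Earth-sized planet crossing a Sun-like star with one crossing a TRAPPIST-1-like dwarf (the 0.117-solar-radius figure is an assumption for illustration):

```python
# Transit depth scales as (R_planet / R_star)^2, so the same Earth-sized
# planet produces a far deeper dip in front of a small star.
R_EARTH = 1 / 109.2           # Earth's radius in solar radii (approx.)

def transit_depth(r_planet, r_star):
    """Fractional dimming during transit; both radii in the same units."""
    return (r_planet / r_star) ** 2

print(f"Earth vs. Sun-like star: {transit_depth(R_EARTH, 1.0):.4%}")
print(f"Earth vs. TRAPPIST-1:    {transit_depth(R_EARTH, 0.117):.3%}")
```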
May be in the habitable zone
From their observations, the scientists determined that all three planets are likely tidally locked, with permanent day and night sides.
The two innermost planets orbit the star in 1.5 and 2.4 days and receive only four and two times, respectively, the amount of radiation the Earth receives from the sun. The third planet may orbit the star in anywhere from four to 73 days, and may receive even less radiation than Earth. But given their size and proximity to their star, all three planets may have regions with temperatures well below 127 degrees C (260 degrees F), within a range that is suitable for sustaining liquid water and life.
The two planets closest to the star may have day sides that are too hot, and night sides too cold, to host any life forms. However, there may be a “sweet spot” — a region that still receives daylight, but with relatively cool temperatures — on the western side of both planets that may be temperate enough to sustain conditions suitable for life. The third planet, furthest from its star, may be entirely within the habitable zone.
“Now we have to investigate if they’re habitable,” de Wit says. “We will investigate what kind of atmosphere they have, and then will search for biomarkers and signs of life. We have facilities all over the globe and in space that are helping us, working from UV to radio, in all different wavelengths to tell us everything we want to know about this system.”
This research was funded, in part, by the Belgian Fund for Scientific Research, the European Research Council, and NASA.
Abstract of Temperate Earth-sized planets transiting a nearby ultracool dwarf star
Star-like objects with effective temperatures of less than 2,700 kelvin are referred to as ‘ultracool dwarfs’. This heterogeneous group includes stars of extremely low mass as well as brown dwarfs (substellar objects not massive enough to sustain hydrogen fusion), and represents about 15 per cent of the population of astronomical objects near the Sun. Core-accretion theory predicts that, given the small masses of these ultracool dwarfs, and the small sizes of their protoplanetary disks, there should be a large but hitherto undetected population of terrestrial planets orbiting them—ranging from metal-rich Mercury-sized planets to more hospitable volatile-rich Earth-sized planets. Here we report observations of three short-period Earth-sized planets transiting an ultracool dwarf star only 12 parsecs away. The inner two planets receive four times and two times the irradiation of Earth, respectively, placing them close to the inner edge of the habitable zone of the star. Our data suggest that 11 orbits remain possible for the third planet, the most likely resulting in irradiation significantly less than that received by Earth. The infrared brightness of the host star, combined with its Jupiter-like size, offers the possibility of thoroughly characterizing the components of this nearby planetary system.
IBM Research has announced that effective Wednesday May 4, it is making quantum computing available free to members of the public, who can access and run experiments on IBM’s quantum processor, via the IBM Cloud, from any desktop or mobile device.
IBM believes quantum computing is the future of computing and has the potential to solve certain problems that are impossible to solve on today’s supercomputers.
The cloud-enabled quantum computing platform, called IBM Quantum Experience, will allow users to run algorithms and experiments on IBM’s quantum processor, work with the individual quantum bits (qubits), and explore tutorials and simulations around what might be possible with quantum computing.
The quantum processor is composed of five superconducting qubits and is housed at the IBM T.J. Watson Research Center in New York. IBM’s quantum architecture can scale to larger quantum systems. It is aimed at building a universal quantum computer that can be programmed to perform any computing task and will be exponentially faster than classical computers for a number of important applications for science and business, IBM says.
IBM | Explore our 360 Video of the IBM Research Quantum Lab
IBM envisions medium-sized quantum processors of 50–100 qubits becoming possible in the next decade. No supercomputer in today’s TOP500 could successfully emulate a quantum computer built of just 50 qubits, reflecting the tremendous potential of this technology.
“Quantum computing is becoming a reality and it will extend computation far beyond what is imaginable with today’s computers,” said Arvind Krishna, senior vice president and director, IBM Research. “This moment represents the birth of quantum cloud computing. By giving hands-on access to IBM’s experimental quantum systems, the IBM Quantum Experience will make it easier for researchers and the scientific community to accelerate innovations in the quantum field, and help discover new applications for this technology.”
This leap forward in computing could lead to the discovery of new pharmaceutical drugs and completely safeguard cloud computing systems, IBM believes. It could also unlock new facets of artificial intelligence (which could lead to future, more powerful Watson technologies), develop new materials science to transform industries, and search large volumes of big data.
The IBM Quantum Experience
IBM | Running an experiment in the IBM Quantum Experience
Coupled with software expertise from the IBM Research ecosystem, the team has built a dynamic user interface on the IBM Cloud platform that allows users to easily connect to the quantum hardware via the cloud.
In the future, users will have the opportunity to contribute and review their results in the community hosted on the IBM Quantum Experience and IBM scientists will be directly engaged to offer more research and insights on new advances. IBM plans to add more qubits and different processor arrangements to the IBM Quantum Experience over time, so users can expand their experiments and help uncover new applications for the technology.
IBM employs superconducting qubits, which are made with superconducting metals on a silicon chip and can be designed and manufactured using standard silicon fabrication techniques. Last year, IBM scientists demonstrated critical breakthroughs in detecting quantum errors by combining superconducting qubits in lattice arrangements, a quantum circuit design that IBM says is the only physical architecture able to scale to larger dimensions.
IBM | IBM Brings Quantum Computing to the Cloud
Now, IBM scientists have achieved a further advance by combining five qubits in the lattice architecture, which demonstrates a key operation known as a parity measurement — the basis of many quantum error correction protocols.
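What a parity measurement does can be seen in a few lines of simulation. The sketch below (illustrative state-vector math only, not IBM’s hardware implementation or programming interface) uses two CNOT gates to copy the joint Z-parity of two data qubits onto an ancilla, which is then read out:

```python
import numpy as np

def cnot(n, control, target):
    """Matrix for a CNOT on an n-qubit register (qubit 0 = leftmost)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1.0
    return U

# Register |q0 q1 a>: two data qubits plus an ancilla, starting in |1 0 0>
state = np.zeros(8)
state[0b100] = 1.0

# Two CNOTs XOR both data qubits onto the ancilla: a <- q0 XOR q1
state = cnot(3, 0, 2) @ state
state = cnot(3, 1, 2) @ state

# Measuring the ancilla now reveals the data qubits' joint Z-parity
p_odd = sum(abs(a) ** 2 for i, a in enumerate(state) if i & 1)
print("P(ancilla reads 1):", p_odd)   # 1.0 -> |10> has odd parity
```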
By giving users access to IBM’s experimental quantum systems, IBM believes it will help businesses and organizations begin to understand the technology’s potential, help universities grow their teaching programs in quantum computing and related subjects, and make students (IBM’s potential future customers) aware of promising new career paths. And, of course, it will raise IBM’s marketing profile in this emerging field.
University of Cambridge researchers have developed the world’s tiniest engine, capable of a force per unit-weight nearly 100 times higher* than any motor or muscle.
The new nano-engines could lead to nanorobots small enough to enter living cells to fight disease, the researchers say.
Professor Jeremy Baumberg from the Cavendish Laboratory, who led the research, has named the devices “actuating nanotransducers” (ANTs). “Like real ants, they produce large forces for their weight,” he quipped.
As reported in the journal PNAS, the prototype ANT device — just a few billionths of a meter in size — is made of gold nanoparticles bound together with temperature-responsive gel polymers. It can function as a piston or spring and works in a reversible cycle. Loose nanoparticles in water are first heated. When the temperature reaches 32 degrees C, they suddenly aggregate into a tight ball. Cooling causes the nanoparticles to rapidly take on water and expand in a sudden explosion.
“It’s like an explosion,” said Tao Ding, PhD, from Cambridge’s Cavendish Laboratory, and the paper’s first author. “We have hundreds of gold balls flying apart in a millionth of a second when water molecules inflate the polymers around them.”
This “explosion” process converts van der Waals energy — the attraction or repulsion between atoms or molecules — into the elastic energy of polymer molecules and releases it very quickly. “The whole process is like a nano-spring,” explained Baumberg. “The smart part here is we make use of van der Waals attraction of heavy metal particles to set the springs (polymers) and water molecules to release them, which is very reversible and reproducible.”
Biological, other applications
KurzweilAI has covered a number of kinetic nanorobotic and microrobotic devices, including 3D-motion nanomachines from DNA, a magnetically controlled “nanoswimmer” for delivering drugs, sperm-inspired microrobots controlled by magnetic fields, and bacteria-powered microrobots. However, the forces exerted by ANT devices are several orders of magnitude larger* than those for any other previously produced device, according to the researchers. ANT devices are bio-compatible, cost-effective to manufacture, fast to respond, and energy-efficient, according to the researchers.
Possible applications include microrobotics, sensing, storage devices, smart windows and walls, and especially biomedical uses, since the spring process occurs at biological temperatures (32 degrees C or 90 degrees F). The team plans to initially commercialize this technology for optically controlled biological microfluidic pumps and valves.
The research is funded as part of a UK Engineering and Physical Sciences Research Council (EPSRC) investment in the Cambridge NanoPhotonics Centre, and by the European Research Council (ERC).
* On the order of nN, compared to typically 10 fN/nm², with up to GHz switching speeds.
Abstract of Light-induced actuating nanotransducers
Nanoactuators and nanomachines have long been sought after, but key bottlenecks remain. Forces at submicrometer scales are weak and slow, control is hard to achieve, and power cannot be reliably supplied. Despite the increasing complexity of nanodevices such as DNA origami and molecular machines, rapid mechanical operations are not yet possible. Here, we bind temperature-responsive polymers to charged Au nanoparticles, storing elastic energy that can be rapidly released under light control for repeatable isotropic nanoactuation. Optically heating above a critical temperature Tc = 32 °C using plasmonic absorption of an incident laser causes the coatings to expel water and collapse within a microsecond to the nanoscale, millions of times faster than the base polymer. This triggers a controllable number of nanoparticles to tightly bind in clusters. Surprisingly, by cooling below Tc their strong van der Waals attraction is overcome as the polymer expands, exerting nanoscale forces of several nN. This large force depends on van der Waals attractions between Au cores being very large in the collapsed polymer state, setting up a tightly compressed polymer spring which can be triggered into the inflated state. Our insights lead toward rational design of diverse colloidal nanomachines.
By Paul Cohen
Harold Cohen, artist and pioneer in the field of computer-generated art, died on April 27, 2016 at the age of 87. Cohen is the author of AARON, perhaps the longest-lived and certainly the most creative artificial intelligence program in daily use.
Cohen viewed AARON as his collaborator. At times during their decades-long relationship, AARON was quite autonomous, responsible for the composition, coloring and other aspects of a work; more recently, AARON served Cohen by making drawings that Cohen would develop into paintings. Cohen’s death is the end of a lengthy partnership between an artist and an artificial intelligence.
Cohen grew up in England. He studied painting at the Slade School of Fine Arts in London, and later taught at the Slade as well as Camberwell, Nottingham and other arts schools. He represented Great Britain at major international festivals during the ’60s, including the Venice Biennale, Documenta 3, and the Paris Biennale. He showed widely and successfully at the Robert Fraser Gallery, the Alan Stone Gallery, the Whitechapel Gallery, the Arnolfini Gallery, the Victoria and Albert Museum, and many other notable venues in England and Europe.
Then, in 1968, he left London for a one-year visiting faculty appointment in the Art Department at the University of California, San Diego. One year became many, Cohen became Department Chair, then Director of the Center for Research in Computing and the Arts at UCSD, and eventually retired emeritus in 1994.
A scientist and engineer of art
Leaving the familiar, rewarding London scene presaged a career of restless invention. By 1971, Cohen had taught himself to program a computer and exhibited computer-generated art at the Fall Joint Computer Conference. The following year, he exhibited not only a program but also a drawing machine at the Los Angeles County Museum. A skilled engineer, Cohen built many display devices: flatbed plotters, a robotic “turtle” that roamed and drew on huge sheets of paper, even a painting robot that mixed its own colors.
These machines and the museum-goers’ experiences were always important to Cohen, whose fundamental question was, “What makes images evocative?” The distinguished computer scientist and engineer Gordon Bell notes that “Harold was really a scientist and engineer of art.”
Indeed, AARON was a thoroughly empirical project: Cohen studied how children draw, he tracked down the petroglyphs of California’s Native Americans, he interviewed viewers and he experimented with algorithms to discover the characteristics of images that make them seem to stand for something. Although AARON went through an overtly representational phase, in which images were recognizably of people or potted plants, Cohen and AARON returned to abstraction and evocation and methods for making images that produce cascades of almost-recognition and associations in the minds of viewers.
“Harold Cohen is one of those rare individuals in the Arts who performs at the highest levels both in the art world and the scientific world,” said Professor Edward Feigenbaum of Stanford University’s Artificial Intelligence Laboratory, where Cohen was exposed to the ideas and techniques of artificial intelligence. “All discussions of creativity by computer invariably cite Cohen’s work,” said Feigenbaum.
Cohen had no patience for the “is it art?” question. He showed AARON’s work in the world’s galleries, museums and science centers — the Tate, the Stedelijk, the San Francisco Museum of Art, Documenta, the Boston Computer Museum, the Ontario Science Center, and many others. His audiences might have been drawn in by curiosity and the novelty of computer-generated art, but they would soon ask, “how can a machine make such marvelous pictures? How does it work?” The very questions that Cohen asked himself throughout his career.
AARON’s images and Cohen’s essays and videos can be viewed at www.aaronshome.com.
Cohen is survived by his partner Hiromi Ito; by his brother Bernard Cohen; by Paul Cohen, Jenny Foord and Zana Itoh Cohen; by Sara Nishi, Kanoko Nishi-Smith, and Uta and Oscar Nishi-Smith; by Becky Cohen; and by Allegra Cohen, Jacob and Abigail Foord, and Harley and Naomi Kuych-Cohen.
ACM SIGGRAPH Awards | Harold Cohen, Distinguished Artist Award for Lifetime Achievement
German researchers have developed a biometric system called SkullConduct that uses bone conduction of sound through the user’s skull for secure user identification and authentication on augmented-reality glasses, such as Google Glass, Meta 2, and HoloLens.
SkullConduct uses the microphone already built into many of these devices and adds electronics (such as a chip) to analyze the frequency response of sound after it travels through the user’s skull. The researchers, at the University of Stuttgart, Saarland University, and Max Planck Institute for Informatics, found that individual differences in skull anatomy result in highly person-specific frequency responses that can be used as a biometric system.
The system combines Mel Frequency Cepstral Coefficient (MFCC) features, a feature extraction method used in automatic speech recognition, with a computationally lightweight one-nearest-neighbor (1NN) classifier that can run directly on the augmented-reality device.
The researchers also conducted a controlled experiment with ten participants showing that skull-based frequency response serves as a robust biometric, even when taking off and putting on the device multiple times. The experiments showed that the system could identify users with 97.0% accuracy and authenticate them with an equal error rate of 6.9%.
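The pipeline is simple enough to sketch. Below is a hypothetical illustration of the MFCC-plus-1NN idea, not the authors’ code; the librosa library is assumed for MFCC extraction, and the enrollment data are placeholders:

```python
import numpy as np
import librosa  # assumed available for MFCC extraction

def skull_fingerprint(audio, sr, n_mfcc=13):
    """Summarize a recording of the through-skull frequency response
    as the mean MFCC vector over time."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def identify(probe, enrolled):
    """1NN classification: return the enrolled user whose fingerprint
    is closest (Euclidean distance) to the probe."""
    return min(enrolled, key=lambda u: np.linalg.norm(probe - enrolled[u]))

# Hypothetical usage: 'enrolled' maps user ids to fingerprints captured at
# setup time; 'wearer_audio' is the microphone recording made after a known
# audio cue is played through the device's bone-conduction speaker.
# enrolled = {"alice": fp_a, "bob": fp_b}
# user = identify(skull_fingerprint(wearer_audio, sr=16000), enrolled)
```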
It’s not as accurate as the CEREBRE biometric system (see You can now be identified by your ‘brainprint’ with 100% accuracy), but it’s low-cost, portable, and doesn’t require a complex system and extensive user testing.
Abstract of SkullConduct: Biometric User Identification on Eyewear Computers Using Bone Conduction Through the Skull
Secure user identification is important for the increasing number of eyewear computers but limited input capabilities pose significant usability challenges for established knowledge-based schemes, such as passwords or PINs. We present SkullConduct, a biometric system that uses bone conduction of sound through the user’s skull as well as a microphone readily integrated into many of these devices, such as Google Glass. At the core of SkullConduct is a method to analyze the characteristic frequency response created by the user’s skull using a combination of Mel Frequency Cepstral Coefficient (MFCC) features as well as a computationally light-weight 1NN classifier. We report on a controlled experiment with 10 participants that shows that this frequency response is person-specific and stable – even when taking off and putting on the device multiple times – and thus serves as a robust biometric. We show that our method can identify users with 97.0% accuracy and authenticate them with an equal error rate of 6.9%, thereby bringing biometric user identification to eyewear computers equipped with bone conduction technology.
Deep neural networks (DNNs) are capable of learning to identify shapes, so “we’re on the right track in developing machines with a visual system and vocabulary as flexible and versatile as ours,” say KU Leuven researchers.
“For the first time, a dramatic increase in performance has been observed on object and scene categorization tasks, quickly reaching performance levels rivaling humans,” they note in an open-access paper in PLOS Computational Biology.
The researchers found that when trained for generic object recognition from natural photographs, several different DNNs developed visual representations that relate closely to human perceptual shape judgments, even though they were never explicitly trained for shape processing.
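One way to probe such a relationship, sketched below in the spirit of representational similarity analysis (not the authors’ exact method; the stimuli and human ratings are random placeholders), is to correlate pairwise similarities in a pretrained network’s feature space with human shape-similarity judgments for the same images:

```python
import torch
import torchvision
from scipy.stats import spearmanr

# Pretrained object-recognition network, used only as a feature extractor
model = torchvision.models.alexnet(pretrained=True).eval()

# Placeholder stimuli: N images of shape (N, 3, 224, 224); in a real test
# these would be the behavioral benchmark stimuli
images = torch.randn(8, 3, 224, 224)

with torch.no_grad():
    feats = model.features(images).flatten(1)   # convolutional features

# Model representational similarity: cosine similarity for every image pair
sim = torch.nn.functional.cosine_similarity(
    feats.unsqueeze(1), feats.unsqueeze(0), dim=-1)

# Placeholder human shape-similarity ratings for the same pairs
human = torch.rand(8, 8)

# Correlate the upper triangles (unique pairs) of the two matrices
iu = torch.triu_indices(8, 8, offset=1)
rho, _ = spearmanr(sim[iu[0], iu[1]].numpy(), human[iu[0], iu[1]].numpy())
print("model-human rank correlation:", rho)
```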
However, “We’re not there just yet,” say the researchers. “Even if machines will at some point be equipped with a visual system as powerful as ours, self-driving cars would still make occasional mistakes — although, unlike human drivers, they wouldn’t be distracted because they’re tired or busy texting. However, even in those rare instances when self-driving cars would err, their decisions would be at least as reasonable as ours.”

Abstract of Deep Neural Networks as a Computational Model for Human Shape Sensitivity
Theories of object recognition agree that shape is of primordial importance, but there is no consensus about how shape might be represented, and so far attempts to implement a model of shape perception that would work with realistic stimuli have largely failed. Recent studies suggest that state-of-the-art convolutional ‘deep’ neural networks (DNNs) capture important aspects of human object perception. We hypothesized that these successes might be partially related to a human-like representation of object shape. Here we demonstrate that sensitivity for shape features, characteristic to human and primate vision, emerges in DNNs when trained for generic object recognition from natural photographs. We show that these models explain human shape judgments for several benchmark behavioral and neural stimulus sets on which earlier models mostly failed. In particular, although never explicitly trained for such stimuli, DNNs develop acute sensitivity to minute variations in shape and to non-accidental properties that have long been implicated to form the basis for object recognition. Even more strikingly, when tested with a challenging stimulus set in which shape and category membership are dissociated, the most complex model architectures capture human shape sensitivity as well as some aspects of the category structure that emerges from human judgments. As a whole, these results indicate that convolutional neural networks not only learn physically correct representations of object categories but also develop perceptually accurate representational spaces of shapes. An even more complete model of human object representations might be in sight by training deep architectures for multiple tasks, which is so characteristic in human development.
Scientists at the Gladstone Institutes have used chemicals to transform skin cells into heart cells and brain cells, instead of adding external genes — making this accomplishment a breakthrough, according to the scientists.
The research lays the groundwork for one day being able to regenerate lost or damaged cells directly with pharmaceutical drugs — a more efficient and reliable method to reprogram cells and one that avoids medical concerns surrounding genetic engineering.
Instead, in two studies, one published in an open-access paper in Science and the other in Cell Stem Cell, the team of scientists at the Roddenberry Center for Stem Cell Biology and Medicine at Gladstone used chemical cocktails to gradually coax skin cells to change into organ-specific stem-cell-like cells and ultimately into heart or brain cells.
“This method brings us closer to being able to generate new cells at the site of injury in patients,” said Gladstone senior investigator Sheng Ding, PhD, the senior author on both studies. “Our hope is to one day treat diseases like heart failure or Parkinson’s disease with drugs that help the heart and brain regenerate damaged areas from their own existing tissue cells. This process is much closer to the natural regeneration that happens in animals like newts and salamanders, which has long fascinated us.”
Chemically Repaired Hearts
Transplanted adult heart cells do not survive or integrate properly into the heart, and few stem cells can be coaxed into becoming heart cells.
Instead, in the Science study, the researchers used a cocktail of nine chemicals to change human skin cells into beating heart cells. By trial and error, they found the best combination of chemicals to begin the process by changing the cells into a state resembling multipotent stem cells (cells that can turn into many different types of cells in a particular organ). A second cocktail of chemicals and growth factors then helped transition the cells to become heart muscle cells.
With this method, more than 97% of the cells began beating, a characteristic of fully developed, healthy heart cells. The cells also responded appropriately to hormones, and molecularly, they resembled heart muscle cells, not skin cells. What’s more, when the cells were transplanted into a mouse heart early in the process, they developed into healthy-looking heart muscle cells within the organ.
“The ultimate goal in treating heart failure is a robust, reliable way for the heart to create new muscle cells,” said Deepak Srivastava, co-senior author on the Science paper. “Reprogramming a patient’s own cells could provide the safest and most efficient way to regenerate dying or diseased heart muscle.”
Rejuvenating the brain with neural stem cell-like cells
In the second study, authored by Gladstone postdoctoral scholar Mingliang Zhang, PhD, and published in Cell Stem Cell, the scientists created neural stem-cell-like cells from mouse skin cells using a similar approach.
The chemical cocktail again consisted of nine molecules, some of which overlapped with those used in the first study. Over ten days, the cocktail changed the identity of the cells, until all of the skin-cell genes were turned off and the genes of the neural stem-cell-like cells were gradually turned on.
When transplanted into mice, the neural stem-cell-like cells spontaneously developed into the three basic types of brain cells: neurons, oligodendrocytes, and astrocytes. The neural stem-cell-like cells were also able to self-replicate, making them ideal for treating neurodegenerative diseases or brain injury.
With their improved safety, these neural stem-cell-like cells could one day be used for cell replacement therapy in neurodegenerative diseases like Parkinson’s disease and Alzheimer’s disease, according to co-senior author Yadong Huang, MD, PhD, a senior investigator at Gladstone. “In the future, we could even imagine treating patients with a drug cocktail that acts on the brain or spinal cord, rejuvenating cells in the brain in real time.”
Gladstone Institutes | Chemically Reprogrammed Beating Heart Cell
Abstract of Conversion of human fibroblasts into functional cardiomyocytes by small molecules
Reprogramming somatic fibroblasts into alternative lineages would provide a promising source of cells for regenerative therapy. However, transdifferentiating human cells to specific homogeneous, functional cell types is challenging. Here we show that cardiomyocyte-like cells can be generated by treating human fibroblasts with a combination of nine compounds (9C). The chemically induced cardiomyocyte-like cells (ciCMs) uniformly contracted and resembled human cardiomyocytes in their transcriptome, epigenetic, and electrophysiological properties. 9C treatment of human fibroblasts resulted in a more open-chromatin conformation at key heart developmental genes, enabling their promoters/enhancers to bind effectors of major cardiogenic signals. When transplanted into infarcted mouse hearts, 9C-treated fibroblasts were efficiently converted to ciCMs. This pharmacological approach for lineage-specific reprogramming may have many important therapeutic implications after further optimization to generate mature cardiac cells.
Abstract of Pharmacological Reprogramming of Fibroblasts into Neural Stem Cells by Signaling-Directed Transcriptional Activation
Cellular reprogramming using chemically defined conditions, without genetic manipulation, is a promising approach for generating clinically relevant cell types for regenerative medicine and drug discovery. However, small-molecule approaches for inducing lineage-specific stem cells from somatic cells across lineage boundaries have been challenging. Here, we report highly efficient reprogramming of mouse fibroblasts into induced neural stem cell-like cells (ciNSLCs) using a cocktail of nine components (M9). The resulting ciNSLCs closely resemble primary neural stem cells molecularly and functionally. Transcriptome analysis revealed that M9 induces a gradual and specific conversion of fibroblasts toward a neural fate. During reprogramming specific transcription factors such as Elk1 and Gli2 that are downstream of M9-induced signaling pathways bind and activate endogenous master neural genes to specify neural identity. Our study provides an effective chemical approach for generating neural stem cells from mouse fibroblasts and reveals mechanistic insights into underlying reprogramming processes.
University of Illinois at Urbana-Champaign engineers have demonstrated real-time video-rate (>30 Mb/s) “meat comm” data transmission through tissue, suggesting that in-body ultrasonic communications, including high-definition video, may be possible for implanted medical devices.
For example, a patient could swallow a miniaturized HD video camera that could stream live to an external screen, with the orientation of the device controlled wirelessly and externally by a physician, according to Andrew Singer, the Fox Family Professor in the Department of Electrical and Computer Engineering at Illinois.
“To our knowledge, this is the first time anyone has ever sent such high data rates through animal tissue,” Singer added. “These data rates are sufficient to allow real-time streaming of high definition video, enough to watch Netflix, for example, and to operate and control small devices within the body.”
Ingestible cameras and other devices
Potential biomedical uses include ingestible cameras for imaging the digestive tract, as well as lower-bandwidth devices such as implanted pacemakers and defibrillators, glucose monitors and insulin pumps, intracranial pressure sensors, and epilepsy control.
Currently, most implanted medical devices use RF electromagnetic waves to communicate through the body. The Federal Communications Commission (FCC) regulates the bandwidths available to implanted medical devices for RF electromagnetic wave propagation. For example, the Medical Device Radiocommunication Service (MDRS) designates frequencies of operation ranging from 401–406 MHz (where there is high absorption). The corresponding maximum bandwidth allowed is 300 kHz, supporting a maximum data rate of 50 kb/s.
The main limitation for using RF electromagnetic waves in the body is loss of signal that occurs because of attenuation in the body. That requires higher power, which can cause tissue damage from heating due to absorption.
“For underwater applications, radio-frequency (RF) electromagnetic communications has long since been supplanted by acoustic communication,” Singer noted. “Acoustic or ultrasonic communication is the preferred communication means underwater because sound (pressure) waves exhibit dramatically lower losses than RF and can propagate tremendous distances for signals of modest bandwidth.”
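The bandwidth argument can be made concrete with the Shannon capacity bound, C = B·log2(1 + SNR). At the same assumed 20 dB SNR (a hypothetical figure for illustration), the 300 kHz MDRS ceiling and a multi-megahertz ultrasound transducer fall on opposite sides of the ~30 Mb/s video-rate threshold:

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Channel capacity upper bound: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

# Hypothetical 20 dB SNR for both channels, for comparison only
print(f"RF, 300 kHz (MDRS cap): {shannon_capacity(300e3, 20) / 1e6:.1f} Mb/s")
print(f"Ultrasound, 5 MHz:      {shannon_capacity(5e6, 20) / 1e6:.1f} Mb/s")
```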
The study was reported in an open-access paper on arXiv.org. The researchers have received a provisional patent application on the high-definition ultrasonic technology. They will be presenting their findings at the 17th IEEE International Workshop on Signal Processing Advances in Wireless Communications, this July in Edinburgh, UK.
Abstract of Mbps Experimental Acoustic Through-Tissue Communications: MEAT-COMMS
Methods for digital, phase-coherent acoustic communication date to at least the work of Stojanovic et al., and the added robustness afforded by the improved phase tracking and compensation of Johnson et al. This work explores the use of such methods for communications through tissue for potential biomedical applications, using the tremendous bandwidth available in commercial medical ultrasound transducers. While long-range ocean acoustic experiments have been at rates of under 100 kb/s, typically on the order of 1–10 kb/s, data rates in excess of 120 Mb/s have been achieved over cm-scale distances in ultrasonic testbeds. This paper describes experimental transmission of digital communication signals through samples of real pork tissue and beef liver, achieving data rates of 20–30 Mb/s, demonstrating the possibility of real-time video-rate data transmission through tissue for in-body ultrasonic communications with implanted medical devices.
Just 1 minute of intense exercise produces health benefits similar to 50 minutes of moderate exercise
Researchers at McMaster University have found that a single minute of very intense exercise within a 10-minute session produces health benefits similar to those from 50 minutes of moderate-intensity continuous exercise.
Brief bursts of intense exercise are a remarkably effective and time-efficient workout strategy, according to Martin Gibala, a professor of kinesiology at McMaster and lead author on the study, published online in an open-access paper in the journal PLOS ONE.
Gibala and associates compared their “sprint interval training” (SIT) protocol to moderate-intensity continuous training (MICT), which is recommended in current public-health guidelines. They examined key health indicators, including insulin sensitivity (a measure of how the body regulates blood sugar) and cardiorespiratory fitness.
Quick intense vs. longer moderate
The “sprint interval training” (SIT) protocol in the experiment involved three intermittent 20-second “all-out” cycle sprints interspersed with two minutes of continuous low-intensity exercise for recovery. MICT (the current exercise guideline) involves 45 minutes of continuous cycling at ~70% maximal heart rate. Both protocols involve a two-minute warm-up and three-minute cool-down.
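The arithmetic behind “one minute of intense exercise within a 10-minute session” follows directly from the protocol above (assuming the two recovery periods fall between the three sprints):

```python
# SIT session arithmetic (minutes), per the protocol described above
warmup, cooldown = 2.0, 3.0
sprints = 3 * (20 / 60)        # three 20-second "all-out" sprints
recovery = 2 * 2.0             # two 2-minute low-intensity recovery periods

total = warmup + sprints + recovery + cooldown
print(f"total: {total:.0f} min, all-out work: {sprints * 60:.0f} s")
# -> total: 10 min, all-out work: 60 s
```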
In the experiment, a total of 27 sedentary men were recruited and assigned to perform three weekly sessions of either intense or moderate training for 12 weeks, or to a control group that did not exercise.
After 12 weeks of training, the results were remarkably similar, even though the MICT protocol involved five times as much exercise and a five-fold greater time commitment. Specifically, the researchers found a strikingly similar 19% improvement in cardiorespiratory fitness as determined by peak oxygen uptake (VO2 peak), which compares favorably with the typical change reported after several months of traditional endurance training (MICT).
“Most people cite ‘lack of time’ as the main reason for not being active,” says Gibala. “Our study shows that an interval-based approach can be more efficient — you can get health and fitness benefits comparable to the traditional approach, in less time. The basic principles apply to many forms of exercise. Climbing a few flights of stairs on your lunch hour can provide a quick and effective workout. The health benefits are significant.”
This project was supported by an operating grant from the Natural Sciences and Engineering Research Council, and an internally-sponsored research grant from McMaster University to MJG.
McMaster | Gibala on HIIT

Abstract of Twelve Weeks of Sprint Interval Training Improves Indices of Cardiometabolic Health Similar to Traditional Endurance Training despite a Five-Fold Lower Exercise Volume and Time Commitment
Aims: We investigated whether sprint interval training (SIT) was a time-efficient exercise strategy to improve insulin sensitivity and other indices of cardiometabolic health to the same extent as traditional moderate-intensity continuous training (MICT). SIT involved 1 minute of intense exercise within a 10-minute time commitment, whereas MICT involved 50 minutes of continuous exercise per session.
Methods: Sedentary men (27±8y; BMI = 26±6kg/m2) performed three weekly sessions of SIT (n = 9) or MICT (n = 10) for 12 weeks or served as non-training controls (n = 6). SIT involved 3×20-second ‘all-out’ cycle sprints (~500W) interspersed with 2 minutes of cycling at 50W, whereas MICT involved 45 minutes of continuous cycling at ~70% maximal heart rate (~110W). Both protocols involved a 2-minute warm-up and 3-minute cool-down at 50W.
Results: Peak oxygen uptake increased after training by 19% in both groups (SIT: 32±7 to 38±8; MICT: 34±6 to 40±8 ml/kg/min; p<0.001 for both). Insulin sensitivity index (CSI), determined by intravenous glucose tolerance tests performed before and 72 hours after training, increased similarly after SIT (4.9±2.5 to 7.5±4.7, p = 0.002) and MICT (5.0±3.3 to 6.7±5.0 × 10−4 min−1 [μU/mL]−1, p = 0.013) (p<0.05). Skeletal muscle mitochondrial content also increased similarly after SIT and MICT, as primarily reflected by the maximal activity of citrate synthase (CS; p<0.001). The corresponding changes in the control group were small for VO2peak (p = 0.99), CSI (p = 0.63), and CS (p = 0.97).
Conclusions: Twelve weeks of brief intense interval exercise improved indices of cardiometabolic health to the same extent as traditional endurance training in sedentary men, despite a five-fold lower exercise volume and time commitment.
OpenAI (a non-profit AI research company sponsored by Elon Musk and others) has released the public beta of OpenAI Gym, a toolkit for developing and comparing algorithms for reinforcement learning (RL), a type of machine learning.
OpenAI Gym consists of a growing suite of environments (from simulated robots to Atari games) and a site for comparing and reproducing results. OpenAI Gym is compatible with algorithms written in any framework, such as TensorFlow and Theano. The environments are initially written in Python (other languages are planned).
What is reinforcement learning?
Reinforcement learning (RL) is the subfield of machine learning concerned with decision making and motor control. It studies how an agent can learn how to achieve goals in a complex, uncertain environment. It’s exciting for two reasons, according to OpenAI’s Greg Brockman and John Schulman:
- RL is very general, encompassing all problems that involve making a sequence of decisions: for example, controlling a robot’s motors so that it’s able to run and jump, making business decisions like pricing and inventory management, or playing video games and board games. RL can even be applied to supervised learning problems with sequential or structured outputs.
- RL algorithms have started to achieve good results in many difficult environments. RL has a long history, but until recent advances in deep learning, it required lots of problem-specific engineering. DeepMind’s Atari results, BRETT from Pieter Abbeel’s group, and AlphaGo all used deep RL algorithms, which make few assumptions about their environment and can thus be applied in other settings.
However, RL research is also slowed down by two factors:
- The need for better benchmarks. In supervised learning (learning from labeled examples), progress has been driven by large labeled datasets like ImageNet. In RL, the closest equivalent would be a large and diverse collection of environments. However, the existing open-source collections of RL environments don’t have enough variety, and they are often difficult even to set up and use.
- Lack of standardization of environments used in publications. Subtle differences in the problem definition, such as the reward function or the set of actions, can drastically alter a task’s difficulty. This issue makes it difficult to reproduce published research and compare results from different papers.
OpenAI Gym is an attempt to fix both problems.
Several partners are supporting the beta:
- NVIDIA: Technical Q&A with John Schulman.
- Nervana: Implementation of a DQN OpenAI Gym agent.
- Amazon Web Services (AWS): A limited number of $250 credit vouchers for select OpenAI Gym users.
More information, including environments (Atari games, 2D and 3D robots, and toy text, for example), is available here.
“During the public beta, we’re looking for feedback on how to make this into an even better tool for research,” says the OpenAI team. “If you’d like to help, you can try your hand at improving the state-of-the-art on each environment, reproducing other people’s results, or even implementing your own environments. Also please join us in the community chat!”
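As a rough illustration of what “implementing your own environments” involves, the sketch below defines a toy guess-the-number task. The class name, task, and spaces are invented for this example; it assumes the public reset/step interface shown in the earlier snippet:

```python
# A hypothetical custom environment: guess a hidden integer in [0, 9].
# Everything here (class name, task, spaces) is illustrative.
import gym
from gym import spaces
import numpy as np

class GuessTheNumberEnv(gym.Env):
    """The agent guesses a hidden integer; reward 1.0 on a correct guess."""

    def __init__(self):
        self.action_space = spaces.Discrete(10)       # guesses 0..9
        self.observation_space = spaces.Discrete(4)   # 0: start, 1: too low, 2: correct, 3: too high
        self._target = None

    def reset(self):
        self._target = np.random.randint(10)          # pick a new hidden number
        return 0                                      # uninformative starting observation

    def step(self, action):
        if action == self._target:
            return 2, 1.0, True, {}                   # observation, reward, done, info
        return (1 if action < self._target else 3), 0.0, False, {}
```

An environment like this plugs into the same agent loop shown earlier, which is the point of the shared interface.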
A Dartmouth College scientist and his collaborators* have created the first high-resolution co-assembly between a protein and buckminsterfullerene (C60), a sphere-like fullerene molecule composed of 60 carbon atoms and shaped like a soccer ball, better known as a buckyball.
“This is a proof-of-principle study demonstrating that proteins can be used as effective vehicles for organizing nanomaterials by design,” says Gevorg Grigoryan, an assistant professor of computer science at Dartmouth and senior author of the study, described in an open-access paper in the journal Nature Communications.
Proteins organize and orchestrate essentially all molecular processes in our cells. The goal of the study was to create an artificial protein that can direct the self-assembly of fullerene into ordered superstructures.
Grigoryan and his colleagues show that their artificial protein organizes fullerene into a lattice called C60Sol–COP. COP, a protein that forms a stable tetramer (a complex of four identical protein subunits), interacted with fullerene molecules via a surface-binding site and further self-assembled into an ordered crystalline superstructure. Interestingly, the superstructure exhibits high charge conductance, whereas both the protein-alone crystal and amorphous C60 are electrically insulating.
Grigoryan says that if programmable self-assembly of precisely organized molecular building blocks can be achieved more generally, it could lead to a range of new materials with properties such as higher strength, lighter weight, and greater chemical reactivity, with applications ranging from medicine to energy and electronics.
Fullerenes are currently used in nanotechnology because of their high heat resistance and electrical superconductivity (when doped), but the molecule has been difficult to organize in useful ways.
* The study also included researchers from Dartmouth College, Sungkyunkwan University, the New Jersey Institute of Technology, the National Institute of Science Education and Research, the University of California-San Francisco, the University of Pennsylvania, and the Institute for Basic Science.
Abstract of Protein-directed self-assembly of a fullerene crystal
Learning to engineer self-assembly would enable the precise organization of molecules by design to create matter with tailored properties. Here we demonstrate that proteins can direct the self-assembly of buckminsterfullerene (C60) into ordered superstructures. A previously engineered tetrameric helical bundle binds C60 in solution, rendering it water soluble. Two tetramers associate with one C60, promoting further organization revealed in a 1.67-Å crystal structure. Fullerene groups occupy periodic lattice sites, sandwiched between two Tyr residues from adjacent tetramers. Strikingly, the assembly exhibits high charge conductance, whereas both the protein-alone crystal and amorphous C60 are electrically insulating. The affinity of C60 for its crystal-binding site is estimated to be in the nanomolar range, with lattices of known protein crystals geometrically compatible with incorporating the motif. Taken together, these findings suggest a new means of organizing fullerene molecules into a rich variety of lattices to generate new properties by design.
Trust in robots is a critical component of safety that requires careful study, says MIT Professor Emeritus Thomas B. Sheridan in an open-access review published in the journal Human Factors.
Sheridan has studied humans and automation for decades, and in each application area he examined he noted significant human factors challenges, particularly concerning safety. He looked at self-driving cars and highly automated transit systems; routine tasks, such as the delivery of packages in Amazon warehouses; devices that handle tasks in hazardous or inaccessible environments, such as the Fukushima nuclear plant; and robots that engage in social interaction (such as interactive Barbie dolls).
For example, no human driver, he claims, will stay alert enough to take over control of a self-driving car quickly should the automation fail. Nor does self-driving car technology consider the value of social interaction between drivers, such as eye contact and hand signals. And would airline passengers be happy if computerized monitoring replaced the second pilot?
Designing a robot to move an elderly person in and out of bed would potentially reduce back injuries among human caregivers, but questions abound as to what physical form that robot should take, and hospital patients may be alienated by robots delivering their food trays. The ability of robots to learn from human feedback is an area that demands human factors research, as is understanding how people of different ages and abilities best learn from robots.
Sheridan also challenges the human factors community to address the inevitable trade-offs: the possibility of robots providing jobs rather than taking them away, robots as assistants that can enhance human self-worth instead of diminishing it, and the role of robots in improving rather than jeopardizing security.
Abstract of Human–Robot Interaction: Status and Challenges
Objective: The current status of human–robot interaction (HRI) is reviewed, and key current research challenges for the human factors community are described.
Background: Robots have evolved from continuous human-controlled master–slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence for many applications and under human supervisory control.
Methods: This mini-review describes HRI developments in four application areas and the challenges they pose for human factors research.
Results: In addition to a plethora of research papers, evidence of success is manifest in live demonstrations of robot capability under various forms of human control.
Conclusions: HRI is a rapidly evolving field. Specialized robots under human teleoperation have proven successful in hazardous environments and medical application, as have specialized telerobots under human supervisory control for space and repetitive industrial tasks. Research in areas of self-driving cars, intimate collaboration with humans in manipulation tasks, human control of humanoid robots for hazardous environments, and social interaction with robots is at initial stages. The efficacy of humanoid general-purpose robots has yet to be proven.
Applications: HRI is now applied in almost all robot tasks, including manufacturing, space, aviation, undersea, surgery, rehabilitation, agriculture, education, package fetch and delivery, policing, and military operations.
Sticking a needle into the hippocampus of mice modeling Alzheimer’s disease (AD) improved performance on memory tasks, stimulated regenerative activity, and reduced β-amyloid plaques (a hallmark of AD). This area was chosen because the early and primary damage caused by AD appears to take place in the hippocampus.
Until recently, many diseases of the central nervous system could not be treated this way because the brain was inaccessible to micro-needles, the researchers said.
“Because Alzheimer’s disease is increasing in prevalence, new intervention strategies are becoming invaluable,” said Dr. Shinn-Zong Lin, professor of neurosurgery at China Medical University Hospital in Taichung, Taiwan, and co-editor-in-chief of Cell Transplantation. “Since the host’s microenvironment can be inhospitable to transplanted cells and pharmacological interventions in diseased conditions, strategies to increase the regenerative capacity of the patient’s own body may be another viable option. Future studies should strive to include a larger sample size in order to validate this approach.”
The study will be published in a future issue of Cell Transplantation and is currently available open-access as an unedited early e-pub.
Abstract of Transient Micro-needle Insertion into Hippocampus Triggers Neurogenesis and Decreases Amyloid Burden in a Mouse Model of Alzheimer’s Disease
Targeted micro-lesions of the hippocampus have been reported to enhance neurogenesis in the sub-granular zone (SGZ). The potential therapeutic impact of transient insertion of a micro-needle was investigated in a mouse model of Alzheimer’s disease (AD). Here we tested the hypothesis that transient micro-injury to the brain elicits cellular responses that mediate beneficial regenerative processes. Brief stereotaxic insertion and removal of a micro-needle into the right hippocampus of 14-month-old APP/PS1 mice resulted in (a) stimulation of hippocampal neurogenesis and (b) reduction of beta-amyloid plaque number in the CA1 region. This treatment also resulted in a trend toward improved performance in the radial arm water maze (RAWM). Further studies of the fundamental cellular mechanisms of the brain’s response to micro-injury will be useful for investigating potential neuro-protective and deleterious effects of targeted micro-lesions and deep brain stimulation in Alzheimer’s disease (AD).