Microsoft on Tuesday (Jan. 21) introduced HoloLens, an immersive augmented-reality headset based on the forthcoming Windows 10 operating system, which was announced at the same event. No release date or price has been disclosed.
HoloLens allows users to interact with 3D objects, which are displayed as floating images emulating holographic projections. A built-in CPU, graphics core, and “Holographic Processing Unit” (HPU) replace the need for a phone or external computer. HoloLens recognizes gestures, gaze, and voice.
Microsoft also announced that Windows 10 will include a set of APIs for Windows Holographic that enable developers to create “holographic” (augmented-reality) experiences. Microsoft’s Cortana personal digital assistant will be available on Windows 10 PCs and tablets, in addition to Windows Phone. The included HoloStudio app allows users to design (and later print) 3D objects, Iron Man style.
Microsoft | Microsoft HoloLens – Transform your world with holograms
Microsoft | Microsoft’s HoloLens Live Demonstration
Microsoft | Microsoft HoloLens – Possibilities
By zapping ordinary metals with femtosecond laser pulses, researchers from the University of Rochester in New York have created extraordinary new surfaces that efficiently absorb light, repel water and clean themselves for use in durable, low-maintenance solar collectors and sensors, for example.
“This is the first multifunctional metal surface created by lasers that is superhydrophobic (water repelling), self-cleaning, and highly absorptive,” said Chunlei Guo, a physicist at the Institute of Optics at the University of Rochester who made the new surfaces with University of Rochester researcher Anatoliy Vorobyev.
The researchers describe the laser-patterned surfaces in an open-access article published in the Journal of Applied Physics, from AIP Publishing.
Enhanced light absorption improves light collection in solar sensors, while superhydrophobicity makes a surface rust-resistant, anti-icing, and anti-biofouling; the self-cleaning effect makes solar collectors robust and easier to maintain, Guo said.
The researchers created the surfaces by zapping platinum, titanium and brass samples with extremely short femtosecond laser pulses that lasted on the order of a millionth of a billionth of a second. “During its short burst the peak power of the laser pulse is equivalent to that of the entire power grid of North America,” Guo said.
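The scale of that peak-power claim comes from dividing a modest pulse energy by an extremely short duration. A back-of-envelope sketch, using assumed round numbers for a typical amplified femtosecond pulse rather than the actual parameters from the study:

```python
# Peak power of a short laser pulse: roughly pulse energy / pulse duration.
# Both values below are illustrative assumptions, not figures from the paper.
pulse_energy_j = 1e-3      # 1 mJ, a common amplified femtosecond pulse energy
pulse_duration_s = 65e-15  # 65 fs, i.e. 65 millionths of a billionth of a second

peak_power_w = pulse_energy_j / pulse_duration_s
print(f"{peak_power_w:.1e} W")  # ~1.5e+10 W, tens of gigawatts from a 1 mJ pulse
```

Even a millijoule of energy, compressed into tens of femtoseconds, yields gigawatt-scale instantaneous power, which is why such pulses can restructure metal surfaces.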
These extra-powerful laser pulses produced microgrooves, on top of which densely populated, lumpy nanostructures were formed. The structures alter the optical and wetting properties of the surfaces of the three metals, turning the normally shiny surfaces velvet black (very optically absorptive) and also making them water repellent.
Most commercially used hydrophobic and high optical absorption materials rely on chemical coatings that can degrade and peel off over time, said Guo. Because the nano- and microstructures created by the lasers are intrinsic to the metal, the properties they confer should not deteriorate, he said.
The hydrophobic properties of the laser-patterned metals also compare favorably with Teflon. “Many people think of Teflon as a hydrophobic surface, but if you want to get rid of water from a Teflon surface, you will have to tilt the surface to nearly 70 degrees before the water can slide off,” Guo said. “Our surface has a much stronger hydrophobicity and requires only a couple of degrees of tilt for water to slide off.”
The team plans to create similar multifunctional effects on other materials, such as semiconductors and dielectrics. These effects could find a wide range of applications, such as more efficient solar energy collectors.
University of Rochester | Using Lasers to Create Super-hydrophobic Materials
Reducing energy lost in reflections
Another way to improve solar collectors is by reducing the amount of sunlight that bounces off the surface of solar cells.
That helps maximize the conversion of the sun’s rays to electricity, so manufacturers use coatings to cut down on reflections. Scientists at the U.S. Department of Energy’s Brookhaven National Laboratory have recently developed a method for etching a nanoscale texture onto the silicon material itself. That can create an antireflective surface that works as well as expensive state-of-the-art thin-film multilayer coatings.
Their method, described in the journal Nature Communications and submitted for patent protection, has potential for streamlining silicon solar cell production and reducing manufacturing costs. The approach may find additional applications in reducing glare from windows, providing radar camouflage for military equipment, and increasing the brightness of light-emitting diodes.
“For antireflection applications, the idea is to prevent light or radio waves from bouncing at interfaces between materials,” said physicist Charles Black, who led the research at Brookhaven Lab’s Center for Functional Nanomaterials (CFN), a DOE Office of Science User Facility.
Preventing reflections requires controlling an abrupt change in “refractive index,” a property that affects how waves such as light propagate through a material. This occurs at the interface where two materials with very different refractive indices meet, for example at the interface between air and silicon. Adding a coating with an intermediate refractive index at the interface eases the transition between materials and reduces the reflection, Black explained.
“The issue with using such coatings for solar cells,” he said, “is that we’d prefer to fully capture every color of the light spectrum within the device, and we’d like to capture the light irrespective of the direction it comes from. But each color of light couples best with a different antireflection coating, and each coating is optimized for light coming from a particular direction. So you deal with these issues by using multiple antireflection layers. We were interested in looking for a better way.”
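The reflection penalty Black describes can be sketched with the normal-incidence Fresnel reflectance formula. The index values below are rough textbook numbers for air and silicon in the visible band, not figures from the study, and the two-interface estimate deliberately ignores thin-film interference:

```python
def fresnel_r(n1, n2):
    """Normal-incidence reflectance at an interface between refractive indices n1 and n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air, n_si = 1.0, 3.9          # rough visible-band values (assumed)
bare = fresnel_r(n_air, n_si)   # ~0.35: about a third of the light bounces off bare silicon

# A coating with an intermediate index eases the transition between materials.
n_coat = (n_air * n_si) ** 0.5  # geometric mean, ~1.97
# Incoherent two-interface estimate (ignores interference in the thin film):
coated = (fresnel_r(n_air, n_coat)
          + (1 - fresnel_r(n_air, n_coat)) ** 2 * fresnel_r(n_coat, n_si))

print(round(bare, 2), round(coated, 2))  # the coated stack reflects far less
```

Splitting one large index step into two smaller ones roughly halves the reflection in this crude estimate; a graded nanotexture takes the idea to its limit by removing the abrupt step entirely, for all colors and angles at once.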
For inspiration, the scientists turned to a well-known example of an antireflective surface in nature, the eyes of common moths. The surfaces of their compound eyes have textured patterns made of many tiny “posts,” each smaller than the wavelengths of light. This textured surface improves moths’ nighttime vision, and also prevents the “deer in the headlights” reflecting glow that might allow predators to detect them.
“We set out to recreate moth eye patterns in silicon at even smaller sizes using methods of nanotechnology,” said Atikur Rahman, a postdoctoral fellow working with Black at the CFN and first author of the study.
The scientists started by coating the top surface of a silicon solar cell with a polymer material called a “block copolymer,” which can be made to self-organize into an ordered surface pattern with dimensions measuring only tens of nanometers.
The self-assembled pattern served as a template for forming posts in the solar cell like those in the moth eye using a plasma of reactive gases—a technique commonly used in the manufacture of semiconductor electronic circuits.
The resulting surface nanotexture gradually changes the refractive index, drastically cutting down reflection of many wavelengths of light simultaneously, regardless of the direction of light impinging on the solar cell.
“Adding these nanotextures turned the normally shiny silicon surface absolutely black,” Rahman said.
Solar cells textured in this way outperform those coated with a single antireflective film by about 20 percent, and couple light into the device as effectively as the best multilayer coatings used in the industry.
“We are working to understand whether there are economic advantages to assembling silicon solar cells using our method, compared to other, established processes in the industry,” Black said.
One intriguing aspect of the study was that the scientists achieved the antireflective performance by creating nanoposts only half as tall as the height predicted by a mathematical model of the effect. Using a combination of computational modeling, electron microscopy, and surface science, the team deduced that a thin layer of silicon oxide, similar to what typically forms when silicon is exposed to air, was having an outsized effect.
“On a flat surface, this layer is so thin that its effect is minimal,” explained Matt Eisaman of Brookhaven’s Sustainable Energy Technologies Department and a professor at Stony Brook University. “But on the nanopatterned surface, with the thin oxide layer surrounding all sides of the nanotexture, the oxide can have a larger effect because it makes up a significant portion of the nanotextured material.”
Said Black, “This ‘hidden’ layer was the key to the extra boost in performance.”
The scientists are now interested in developing their self-assembly based method of nanotexture patterning for other materials, including glass and plastic, for antiglare windows and coatings for solar panels.
This research was supported by the DOE Office of Science.
Abstract of Multifunctional surfaces produced by femtosecond laser pulses
In this study, we create a multifunctional metal surface by producing a hierarchical nano/microstructure with femtosecond laser pulses. The multifunctional surface exhibits combined effects of dramatically enhanced broadband absorption, superhydrophobicity, and self-cleaning. The superhydrophobic effect is demonstrated by a falling water droplet repelled away from a structured surface with 30% of the droplet kinetic energy conserved, while the self-cleaning effect is shown by each water droplet taking away a significant amount of dust particles on the altered surface. The multifunctional surface is useful for light collection and water/dust repelling.
Abstract of Sub-50-nm self-assembled nanotextures for enhanced broadband antireflection in silicon solar cells
Materials providing broadband light antireflection have applications as highly transparent window coatings, military camouflage and coatings for efficiently coupling light into solar cells and out of light-emitting diodes. In this work, densely packed silicon nanotextures with feature sizes smaller than 50 nm enhance the broadband antireflection compared with that predicted by their geometry alone. A significant fraction of the nanotexture volume comprises a surface layer whose optical properties differ substantially from those of the bulk, providing the key to improved performance. The nanotexture reflectivity is quantitatively well modelled after accounting for both its profile and changes in refractive index at the surface. We employ block copolymer self-assembly for precise and tunable nanotexture design in the range of ~10–70 nm across macroscopic solar cell areas. Implementing this efficient antireflection approach in crystalline silicon solar cells significantly betters the performance gain compared with an optimized, planar antireflection coating.
A Columbia University scientist has developed a new microscope that can image freely moving living things in 3D at very high speeds — up to 100 times faster than laser-scanning confocal, two-photon, and light-sheet microscopy.
Developed by Elizabeth Hillman, associate professor of biomedical engineering at Columbia Engineering, SCAPE (swept confocally aligned planar excitation microscopy) uses a simple, single-objective lens imaging geometry that requires no sample mounting or translation (movement).
The issue Hillman and other researchers are dealing with here is that while current confocal and two-photon microscopes do a good job of imaging a single plane within a living sample, it’s difficult to acquire enough of these layers to form a 3D image at fast enough rates to capture events like neurons actually firing.
“With SCAPE, we can now image complex, living things, such as neurons firing in the rodent brain, crawling fruit fly larvae, and single cells in the zebrafish heart while the heart is actually beating spontaneously — this has not been possible until now,” she says.
SCAPE is actually a variation on light-sheet imaging, but it “breaks all the rules,” says Hillman. While conventional light-sheet microscopes use two awkwardly positioned objective lenses, Hillman realized that she could use a single-objective lens, and then that she could sweep the light sheet to generate 3D images without moving the objective or the sample.
SCAPE can’t yet compete with the penetration depth of conventional two-photon microscopy, but Hillman and her collaborators have already used the system to observe firing in 3D neuronal dendritic trees in superficial layers of the mouse brain, formerly impossible.
In small organisms, including zebrafish larvae, SCAPE can see through the entire organism. By tracking these tiny, unrestrained creatures in 3D at high speeds, SCAPE can capture cellular structure, function, and behavior, because it doesn’t require time-consuming movement of the imaging objective lens or the sample to create a 3D image. SCAPE can also be combined with optogenetics and other tissue manipulations during imaging.
The study was published as an Advance Online Publication (AOP) on the Nature Photonics website on Monday, January 19.
How to make a high-speed microscope
Hillman and her students built their first SCAPE system using inexpensive off-the-shelf components. Her “aha” moment came when, looking at an old polygonal mirror in the lab, she realized how it could be used to generate SCAPE’s unusual scanning geometry.
After several years of trial and error, Hillman and graduate student Matthew Bouchard came up with a configuration that worked, and beautiful images started to flow out. “It wasn’t until we built it that we realized it was a light-sheet microscope!” says Hillman.
“It took us a while to realize how versatile the imaging geometry was, how simple and inexpensive the layout was—and just how many problems we had overcome.”
Extensive applications in biology and medicine
Beyond neuroscience, Hillman sees many future applications of SCAPE including imaging cellular replication, function, and motion in intact tissues, 3D cell cultures, and engineered tissue constructs, as well as imaging 3D dynamics in microfluidics, and flow-cell cytometry systems. These are all applications where molecular biology is delivering tools and techniques, but imaging methods have struggled to keep up.
Hillman also plans to explore clinical applications of SCAPE such as video-rate 3D microendoscopy and intrasurgical imaging.
Next-generation versions of SCAPE are in development that will deliver even better speed, resolution, sensitivity, and penetration depth. Hillman’s technology is available for licensing from Columbia Technology Ventures and has already attracted interest from multiple companies.
This research was supported by grants from NIH (NINDS), the Human Frontier Science Program, the Wallace H. Coulter Foundation (E.M.C.H), the Dana Foundation (R.M.B.), and DoD.
A patent related to this technique was issued on December 31, 2013 (inventors Hillman and Bouchard). The authors are currently in licensing discussions.
Hillman is also associate professor of radiology at Columbia University Medical Center and a member of Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute.
Hillman Lab, Columbia University | SCAPE real-time 3D microscopy — Intact brain
Hillman Lab, Columbia University | SCAPE real-time 3D microscopy — Beating zebrafish heart
Abstract of Swept confocally-aligned planar excitation (SCAPE) microscopy for high-speed volumetric imaging of behaving organisms
We report a three-dimensional microscopy technique—swept, confocally-aligned planar excitation (SCAPE) microscopy—that allows volumetric imaging of living samples at ultrahigh speeds. Although confocal and two-photon microscopy have revolutionized biomedical research, current implementations are costly, complex and limited in their ability to image three-dimensional volumes at high speeds. Light-sheet microscopy techniques using two-objective, orthogonal illumination and detection require a highly constrained sample geometry and either physical sample translation or complex synchronization of illumination and detection planes. In contrast, SCAPE microscopy acquires images using an angled, swept light sheet in a single-objective, en face geometry. Unique confocal descanning and image rotation optics map this moving plane onto a stationary high-speed camera, permitting completely translationless three-dimensional imaging of intact samples at rates exceeding 20 volumes per second. We demonstrate SCAPE microscopy by imaging spontaneous neuronal firing in the intact brain of awake behaving mice, as well as freely moving transgenic Drosophila larvae.
Singularity University and Yunus Social Business Partner To Impact Global Development In Some Of The Most Vulnerable Areas Of The World
DAVOS, SWITZERLAND (January 21, 2015) – Singularity University (SU) and Yunus Social Business (YSB) have announced a new partnership within the Singularity University Impact Partnership program. The Impact Partnership was announced at the World Economic Forum by Dr. Peter H. Diamandis, co-founder of Singularity University, and Dr. Muhammad Yunus, founder of Yunus Social Business, to concentrate on the use of accelerating technologies and social entrepreneurship for global development in some of the most vulnerable areas of the world where YSB is active.
Dr. Muhammad Yunus and Dr. Peter Diamandis, at the World Economic Forum
“Singularity University’s mission is to educate and empower the brightest people on the planet with the tools of exponential technology and entrepreneurship to solve humanity’s greatest challenges,” said Dr. Diamandis. “Our new partnership with Yunus Social Business will bring to SU a valuable new network of experienced professionals in the field who have intricate knowledge of local community needs and access to capital. This is essential for our SU Lab accelerator community and global network of entrepreneurial alumni to understand these challenges first-hand from the user standpoint and interact with the local populations. The field network will also provide opportunities for beta testing innovations.”
Yunus Social Business applies business approaches to the world of social development by using an ‘incubate and finance’ methodology that bridges the gap between social businesses and philanthropic lenders and donors, with local acceleration programs and/or funds currently in Brazil, Colombia, Mexico, Haiti, Albania, Uganda, Tunisia and India.
Dr. Yunus distinguished the partnership “as a means by which the Yunus Social Business worldwide network can gain exposure to the latest thinking on accelerating technologies, Silicon Valley-style entrepreneurial thinking, networks and capital. We will work with SU to focus particularly in the areas of security, public health, food, energy, water, education and environment.”
The partnership is intended to facilitate a greater sharing of resources between both organizations. This will include, but not be limited to, speaker exchanges in Silicon Valley and around the world; Field Innovation Exchanges to share ideas and test related innovations in the field; Innovation Challenges, in line with SU’s Global Impact Competitions; and the ability for the YSB network to attend SU’s 10-week summer Graduate Studies Program as fellows and advisors. Dr. Yunus will deliver the keynote address at the 2015 Opening Ceremonies for the Singularity University Graduate Studies Program on June 15 in Mountain View, California.
About SU Impact Partners
Singularity University forms Impact Partnerships with global and regional organizations that are actively working to solve humanity’s grand challenges. Field Innovation Exchanges (FIX) are a central component of each partnership and provide a critical link between cutting-edge technological innovation and the field realities of humanity’s grand challenges. FIX is an opportunity for long-serving field staff of international organizations to step out of that environment, come to SU to learn about exponential technology and entrepreneurship, and take those lessons back to the field projects of partner organizations. Through the SU Impact Partnerships, SU is building a global ecosystem of innovators and organizations working to solve humanity’s grand challenges.
For additional information, please contact:
Diane Murphy, Singularity University, firstname.lastname@example.org
For more information on Singularity University programs: www.singularityu.org
For information on Yunus Social Business: www.yunussb.com
As regular readers know, I advocate for the development of treatments for aging based on periodic repair of the low-level cellular and molecular damage that causes aging. There is at least one detailed plan of action on how to produce the necessary treatments, the Strategies for Engineered Negligible Senescence (SENS) research proposals. Enough is known to work on this with a good expectation of success. Outside of factions within the stem cell research community this is still at this time a minority path in the scientific community, however. Most research groups are much more interested in developing a greater understanding of the fine details of metabolism so as to alter it in order to slow down aging. Unfortunately this latter path is nowhere near the point of producing a working plan, and it has proven to be enormously expensive and time consuming to investigate even tiny slices of the necessary reach of knowledge. See the much hyped past decade of research on sirtuins, for example, that has consumed the cost of implementing SENS in the laboratory several times over without producing any meaningful treatment.
In this piece, the author chooses the hard, slow, expensive, largely unknown path of altering the fundamental operation of metabolism as the better way forward for egalitarian reasons - that a one-time alteration that slows aging is better than a frequent treatment to repair aging because it is somehow more equal, or less prone to ongoing costs. This seems silly. For one, even setting aside the much greater difficulty and time required to develop means of altering metabolism, that approach cannot produce rejuvenation as it only slows down the pace of damage accumulation. Thus it cannot help the old, and it cannot extend healthy life indefinitely. Repair therapies can in principle achieve these goals, it's just a matter of how well they repair the damage. When it comes to costs, the mature evolution of SENS-like repair treatments would be a mass-produced infusion given by a bored clinician once every twenty years or so. Mass produced infusions such as TNF inhibitors today cost less than $10,000, even in the dysfunctional US medical system. So this seems like another example of death for everyone before even the vague possibility of inequality for someone, a position sadly prevalent in many areas of our society:

Let me give you my nightmare scenario for a world of superlongevity. It's a world largely bereft of children where our relationship to our bodies has become something like the one we have with our smart phones, where we are constantly faced with the obsolescence of the hardware and the chemicals, nano-machines and genetically engineered organisms under our own skins and in near continuous need of upgrades to keep us alive. It is a world where those too poor to be in the throes of this cycle of upgrades followed by obsolescence followed by further upgrades are considered a burden and disposable. It's a world where the rich have brought capitalism into the body itself, an individual life preserved because it serves as a perpetual "profit center".
The other path would be for superlongevity to be pursued along my first model of healthcare focusing its efforts on understanding the genetic underpinnings of aging through looking at miracles such as the bowhead whale which can live for two centuries and gets cancer no more often than we do even though it has trillions more cells than us. It would focus on interventions that were cheap, one time or periodic, and could be spread quickly through populations. This would be a progressive superlongevity. If successful, rather than bolster, it would bankrupt much of the system built around the second model of healthcare for it would represent a true cure rather than a treatment of many of the diseases that ail us.
Yet even superlongevity pursued to reflect the demands for justice seems to confront a moral dilemma that lies at the heart of any superlongevity project. The morally problematic feature of superlongevity pursued along the second model of healthcare is that it risks giving long life only to the few. Troublingly, even superlongevity pursued along the first model of healthcare ends up in a similar place, robbing future generations of both human beings and other lifeforms of the possibility of existing, for it is very difficult to see, if a near-future generation gains the ability to live indefinitely, how this new state could exist side-by-side with the birth of new people, or how such a world of many "immortals" of the type of highly consuming creatures we are is compatible with the survival of the diversity of the natural world.
I see no real solution to this dilemma, though perhaps, as elsewhere, the limits of nature will provide one for us: we will discover some bound to the length of human life which is compatible with new people being given the opportunity to be born and experience the sheer joy and wonder of being alive, a bound that would also allow the other creatures with whom we share our planet to continue to experience these joys and wonders as well. Thankfully, there is probably some distance between current human lifespans and such a bound, and thus the most important thing we can do for now is try to ensure that research into superlongevity has the question of sustainable equity serve as its ethical lodestar.
Aneuploidy is the state in which a cell has an abnormal number of chromosomes and is dysfunctional as a result. Like all forms of cellular malfunction, there is more of it in old tissues. But is it significant in aging? In recent years researchers demonstrated that one way of reducing aneuploidy is to boost levels of BubR1, which normally declines with age. As a genetic alteration this extends life in mice, but of course has a range of other effects beyond influencing aneuploidy, so the meaningful mechanism in this extension of healthy life isn't clearly defined. This is the case for many ways to slow aging in mice. Here is a piece on another group studying aneuploidy in aging:

Dr. Dunham has recently focused her efforts on the role of aneuploidy in aging. In the last few years, her lab has generated disomic yeast strains, in which each individual chromosome is duplicated, for all the yeast chromosomes (yeast are haploid organisms and normally only have one set of chromosomes). Interestingly, she found that strains with individually duplicated chromosomes had a dramatic decrease in replicative lifespan. Furthermore, her lab identified a suppressor mutation that rescued lifespan decline in these strains. The suppressor mutation was a missense mutation in Bul1, which is part of the Rsp5 E3-ubiquitin ligase complex and is involved in protein quality control. This finding supports a potential mechanism by which aneuploidy affects aging via perturbing protein quality control.
"My lab has already developed tools for studying aneuploidy using genomics and genetics, and the aging phenotype is just another interesting phenotype that we could apply our suite of existing tools to. I've always been interested in aging. I did a rotation in an aging genetics lab in graduate school. What I like about the aging field also is that so much fundamental biology is touched on by aging. And I really like studying metabolism. If you ask who is still interested in studying metabolism...the answer is the aging people! They get that metabolism is really cool and fundamental! I am interested in what happens in general when you have the wrong number of chromosomes: what things go right, and what things go wrong? Can cells tolerate it, and how do they do so if they can? I think that aging is a good phenotype because it's another aspect of what the cell has to do. Being able to look at a cell from birth to death and across environments and phenotypes and determine where aneuploidy and DNA copy number variation can have an effect, this is just one piece of that."
At the annual meeting of the Association for the Advancement of Artificial Intelligence (AAAI) this month, MIT computer scientists will present smart algorithms that function as “a better Siri,” optimizing planning for lower risk, such as scheduling flights or bus routes.
They offer this example:
Imagine that you could tell your phone that you want to drive from your house in Boston to a hotel in upstate New York, that you want to stop for lunch at an Applebee’s at about 12:30, and that you don’t want the trip to take more than four hours.
Then imagine that your phone tells you that you have only a 66 percent chance of meeting those criteria — but that if you can wait until 1:00 for lunch, or if you’re willing to eat at TGI Friday’s instead, it can get that probability up to 99 percent.
The new software allows a planner to specify constraints — say, buses along a certain route should reach their destination at 10-minute intervals — and reliability thresholds, such as that the buses should be on time at least 90 percent of the time.
Then, on the basis of probabilistic models that reveal, for example, that travel time along a given mile of road fluctuates between two and 10 minutes, the system determines whether a solution exists: perhaps the buses’ departures should be staggered by six minutes at some times of day and 12 minutes at others.
If, however, a solution doesn’t exist, the software doesn’t give up. Instead, it suggests ways in which the planner might relax the problem constraints: Could the buses reach their destinations at 12-minute intervals? If the planner rejects the proposed amendment, the software offers an alternative: Could you add a bus to the route?
Their algorithms are rooted in graph theory. A graph is a data representation that consists of nodes, usually depicted as circles, and edges, usually depicted as line segments connecting the nodes. Any scheduling problem can be represented as a graph.
Nodes represent events, and the edges indicate the sequence in which events must occur. Each edge also has an associated weight, indicating the cost of progressing from one event to the next — the time it takes a bus to travel between stops, for instance.
The algorithm first represents a problem as a graph, then begins adding edges that represent the constraints imposed by the planner. If the problem is soluble, the weights of the edges representing constraints will everywhere be greater than the weights representing the costs of transitions between events.
If it is not soluble, existing algorithms can quickly home in on loops in the graph where the weights are imbalanced. The new algorithm then calculates the lowest-cost way of rebalancing the loop, which it presents to the planner as a modification of the problem’s initial constraints.
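The graph formulation described above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual code: it encodes a scheduling constraint as a pair of weighted edges and uses Bellman-Ford negative-cycle detection, the classic consistency check for such temporal graphs; the bus-route numbers are invented for the example.

```python
def has_negative_cycle(n, edges):
    """Bellman-Ford negative-cycle detection on a graph with n nodes.
    edges is a list of (u, v, weight) triples."""
    dist = [0] * n  # start all-zero so every cycle is reachable
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # If any edge can still be relaxed, a negative-weight cycle exists.
    return any(dist[u] + w < dist[v] for u, v, w in edges)

# Two events: node 0 = bus departs, node 1 = bus arrives.
# Constraint "arrive within 10 minutes" -> edge 0->1 with weight +10.
# Reality "travel takes at least 12 minutes" -> edge 1->0 with weight -12.
tight = [(0, 1, 10), (1, 0, -12)]
relaxed = [(0, 1, 12), (1, 0, -12)]  # planner accepts 12-minute intervals

print(has_negative_cycle(2, tight))    # True: cycle weight 10 - 12 < 0, infeasible
print(has_negative_cycle(2, relaxed))  # False: cycle weight 0, feasible
```

The imbalanced loop in the first case (total weight -2) is exactly the kind of structure the MIT system would detect, and raising the 10-minute bound to 12 is the lowest-cost rebalancing it might propose to the planner.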
The open-access papers by the researchers in Brian Williams’ group at MIT’s Computer Science and Artificial Intelligence Laboratory are linked below.

Abstract of Resolving over-constrained probabilistic temporal problems through chance constraint relaxation
When scheduling tasks for field-deployable systems, our solutions must be robust to the uncertainty inherent in the real world. Although human intuition is trusted to balance reward and risk, humans perform poorly in risk assessment at the scale and complexity of real world problems. In this paper, we present a decision aid system that helps human operators diagnose the source of risk and manage uncertainty in temporal problems. The core of the system is a conflict-directed relaxation algorithm, called Conflict-Directed Chance-constraint Relaxation (CDCR), which specializes in resolving overconstrained temporal problems with probabilistic durations and a chance constraint bounding the risk of failure. Given a temporal problem with uncertain duration, CDCR proposes execution strategies that operate at acceptable risk levels and pinpoints the source of risk. If no such strategy can be found that meets the chance constraint, it can help humans to repair the overconstrained problem by trading off between desirability of solution and acceptable risk levels. The decision aid has been incorporated in a mission advisory system for assisting oceanographers to schedule activities in deepsea expeditions, and demonstrated its effectiveness in scenarios with realistic uncertainty.
Every culture throughout recorded history had its seekers after agelessness, all of whom were deluding themselves. As science replaced alchemy the seekers remained just as prevalent, but adopted the superficial trappings of science in their futile quest. A few even adopted the scientific method, or emerged from the scientific community of the time, and were thus much more rapidly and reliably disappointed by the results of their experiments. The power of the scientific method lies as much in its ability to close off potential paths ahead as to open up new ones: it clears out wishful thinking and delusion for those willing to adopt its rigors.
Technology and other applications of scientific knowledge have steadily lengthened healthy life spans since the late 1700s, once the positive feedback loop of growth in wealth and knowledge really kicked in. For most of the past few hundred years much of that growth has stemmed from reducing the burden of infectious disease, not just a matter of reducing death rates in the young, but also lowering the damage load carried by those reaching middle age and older. Nowadays the continued growth in life span in the wealthier regions of the world is largely achieved through improvements in treating and preventing age-related disease. As before this is a very incremental process, however, with trends adding a year of life expectancy at 60 in every decade.
In the 1970s futurists were very enthused about the prospects for medicine, and especially about the prospects for their own personal longevity. They have all aged to death, or near enough to it, by now. They were absolutely wrong about how much could be achieved with the then new and exciting applications of biotechnology. Yet so very much has been achieved. In comparison to the tools of today, 1970s biotechnology was clunky and expensive: halls of manually tended machinery have now shrunk to a single chip, and a graduate student today can accomplish in a few weekends tasks that would have strained the largest laboratory in the country for years back then.
So we're all pretty excited about what can be done today in medicine, and the prospects for our own personal longevity. When it comes to our understanding of biochemistry and ability to manipulate our cells, we are as far beyond the 1970s as the 1970s were beyond the gentlemen-scientists working at the end of the 19th century. Why, however, is it different this time? Why are the seekers after agelessness now rational scientists rather than another crop of self-deluded fools? This is a question that crops up. I can recall numerous conversations over the years in which I was informed that someone knew an older fellow who was, back in the day, quite confident in the forthcoming existence of longevity-enhancing therapies, and yet where are those treatments decades later? Nowhere in evidence, but here I stand telling you that now is the time, that the Strategies for Engineered Negligible Senescence (SENS) are a viable, plausible road to rejuvenation treatments that could indefinitely extend human life, and that given sufficient funding we could make enough progress in the next 20 years to hit actuarial escape velocity, the point at which medicine adds more healthy life faster than aging takes it away.
As an aside, there is an unfortunate tendency for successful futurists to be those who predict useful and interesting things to happen soon enough to catch the interest of the audience, regardless of the merits of that claim. Most of the really good communicators have also convinced themselves of their message. It is somewhat challenging for a non-technical person to tell the difference between the self-convinced fraud versus someone who happens to be right about an opportunity for development that happens to be in the near future. Many of these opportunities are in the range of 20-30 years distant, assuming funding goes well at each stage, far beyond the point at which you'll see a lot of corroboration in the form of investment in companies trying to achieve these goals directly.
So why is it different this time? For one there is SENS, a detailed plan of development leading to rejuvenation treatments that could be prototyped in mice given a billion dollars and ten years, give or take. No such plan could have been formed a century ago, and while much of the basic knowledge that informs the SENS viewpoint of aging as an accumulation of cellular and molecular damage existed in the 1970s, SENS could not have been proposed as a serious project at that time even had someone had the realization. There was simply no way to even guess at how much time and money it would have required to build the tools to build the tools to develop the validation of the theories so as to build the tools to build the tools to develop the therapies, and so forth: it would have been a project on the scale of going to the moon, and with far less certainty of success.
More importantly none of the proposed paths to add decades or more of healthy life put forward in past generations, now obviously naive and wrong, were in any way rigorous or supported by large fractions of the scientific community. Only now do we have that, built on the vast body of knowledge of biology accumulated over the last century, and on the new tools of biotechnology of the past few decades. Only now are large numbers of scientists putting their careers and their reputations into the extension of healthy life.
Why is it different this time? Because funding for various scientific establishment efforts to extend life is growing rapidly. Most of these are in fact not going to move the needle all that much, but that isn't the point. The point is that the consensus in a significant fraction of the scientific community and its surrounding institutions of funding and review is that the time has come. Investment and interest in any given field are cyclic, and this present cycle will see billions poured into this field, and old narrow views of the implausibility of life extension swept away. Scientists are the arbiters of truth in our culture, though this is sometimes hard to see, and the rest of the world will follow their lead when deciding whether to take something seriously. That will create a feedback loop of funding and progress in which, yes, a lot of less useful work will thrive, but so will significant approaches such as SENS.
None of this was the case for past generations of what turned out to be deluded optimism. It is the case now. The times have changed, and it is different this time around.
On May 14, 2014, astronomers at Parkes Radio Telescope led by Emily Petroff at Swinburne University of Technology observed live an extremely short, sharp “fast radio burst” for 2.8 milliseconds at a microwave frequency of 1.4 GHz from an unknown source at an estimated distance of up to 5.5 billion light years from Earth. 24 seconds later, an email alert went out to astronomers at 12 telescopes around the world to make follow-up observations on other frequencies, ranging from infrared and visible light to ultraviolet light and X-ray waves.
However, no further signals were found, the astronomers reported Monday (Jan. 19) in an open-access paper in Monthly Notices of the Royal Astronomical Society. But when astronomers went through archival data from the Parkes Radio Telescope (famous for its role in the movie The Dish) in Eastern Australia, they found that such a pulse had already been detected by chance in 2007.
Astronomers also found six more such unnoticed bursts in the Parkes telescope’s data and a seventh in Arecibo telescope data.
So what did they discover? “The burst could have hurled out as much energy in a few milliseconds as the Sun does in an entire day,” explains paper co-author Daniele Malesani, an astrophysicist at the Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen.
“But the fact that we did not see light in other wavelengths eliminates a number of astronomical phenomena that are associated with violent events, such as gamma-ray bursts from exploding stars and supernovae, which were otherwise candidates for the burst.”
But the burst did leave another clue. The Parkes detection system captured the polarization of the light. Polarization is the direction in which electromagnetic waves oscillate; waves can be linearly or circularly polarized. The signal from the radio-wave burst was about 21 percent circularly polarized, which suggests a magnetic field in the vicinity of the source, the astronomers say.
“The theories are now that the radio wave burst might be linked to a very compact type of object — such as neutron stars or black holes — and the bursts could be connected to collisions or ‘star quakes.’ Now we know more about what we should be looking for,” says Malesani.

Abstract of A real-time fast radio burst: polarization detection and multiwavelength follow-up
Fast radio bursts (FRBs) are one of the most tantalizing mysteries of the radio sky; their progenitors and origins remain unknown and until now no rapid multiwavelength follow-up of an FRB has been possible. New instrumentation has decreased the time between observation and discovery from years to seconds, and enables polarimetry to be performed on FRBs for the first time. We have discovered an FRB (FRB 140514) in real-time on 2014 May 14 at 17:14:11.06 UTC at the Parkes radio telescope and triggered follow-up at other wavelengths within hours of the event. FRB 140514 was found with a dispersion measure (DM) of 562.7(6) cm−3 pc, giving an upper limit on source redshift of z ≲ 0.5. FRB 140514 was found to be 21 ± 7 per cent (3σ) circularly polarized on the leading edge with a 1σ upper limit on linear polarization <10 per cent. We conclude that this polarization is intrinsic to the FRB. If there was any intrinsic linear polarization, as might be expected from coherent emission, then it may have been depolarized by Faraday rotation caused by passing through strong magnetic fields and/or high-density environments. FRB 140514 was discovered during a campaign to re-observe known FRB fields, and lies close to a previous discovery, FRB 110220; based on the difference in DMs of these bursts and time-on-sky arguments, we attribute the proximity to sampling bias and conclude that they are distinct objects. Follow-up conducted by 12 telescopes observing from X-ray to radio wavelengths was unable to identify a variable multiwavelength counterpart, allowing us to rule out models in which FRBs originate from nearby (z < 0.3) supernovae and long duration gamma-ray bursts.
The main goal of this paper is to design nanorobotic agent communication mechanisms that yield coordinated swarm behavior. Specifically, we propose a bee-inspired swarm control algorithm that allows nanorobotic agents to communicate in order to converge at a specific target. We present experiments that test convergence speed and quality in a simulated multi-agent deployment in an environment with a single target, measuring whether our algorithm improves efficiency, in terms of convergence and quality, over random guessing. The results obtained from the experiments indicate that our algorithm enhances the coordinated movement of agents towards the target compared to random guessing.
The problem of controlling large groups of agents towards a specific target is not new. Genetic algorithms (GA) and other evolutionary programming techniques have dominated past approaches to this problem. In this manuscript, our main goal is to design nanorobotic agent communication mechanisms that yield coordinated swarm behavior. The target is taken to represent a foreign body, in a human-body-like environment, which the swarm aims to locate and destroy. We specifically investigate communication control issues in swarms of bee-like nanorobotic agents. Medical nanotechnology has promised a great future, including improved medical sensors for diagnostics, augmentation of the immune system with medical nano-machines, rebuilding tissues from the bottom up, and tackling the aging problem. Proponents claim that applying nanotechnology to nano-medicine will offer the ultimate benefit to human life and society, eliminating common diseases and much medical suffering. This work is primarily motivated by the need to contribute to that common goal. Most research completed so far focuses on the control of a single agent. Increased efforts have begun towards addressing systems composed of multiple autonomous mobile robots. In some cases local interaction rules may be sensor-based, as with flocking birds; in other cases the local interactions may be stigmergic. In this work, the bee agents we propose have no leader to influence other bees in the swarm to fulfill a low-level agenda. Each bee agent responds to local information made available through the local environment and direct message passing.
Communication is a necessity in multi-agent emergent systems, as it increases the agents’ performance. Many agent communication techniques assume an external communication method by which agents may share information with one another. Direct agent communication includes waggle dancing, a technique bees use to communicate the source of food, and signaling. In our context, a swarm is a collection of interacting nanorobotic agents. The agents are deployed in an environment, a substrate that facilitates an agent’s functions through both observable and unobservable properties. Given this view, swarm intelligence is the self-organizing behavior of cellular robotic systems. A swarm has no centralized control system; each individual responds to simple, local information, which allows the whole system to function. Bee agents in this research are represented by nanorobotic agents, defined as artificial or biological nanoscale devices that can perform simple computation, sensing, or actuation. We use the terms nanorobots, nanites, and nanorobotic agents interchangeably to refer to tiny, autonomous devices built to work together towards a collaborative solution, the same way natural bee swarms do.
Foraging is the task of locating and acquiring resources, and it must be performed in unknown and possibly dynamic environments. When foraging bees discover a nectar source, they commit to memory the direction in which the nectar is found, its distance from the hive, and its quality rating. On returning to the hive they perform a waggle dance on the dance floor. The dance expresses the direction, distance, and quality of the nectar source. Onlooker bees assess the information delivered and decide whether to follow, remain in the hive, or wander randomly. Recruitment among bees is associated with the quality of the nectar source: a source with plentiful nectar near the hive is recognized as more promising and attracts more followers. When the bees want to relocate for some reason, the queen bee and some of the bees in a colony leave their hive and form a cluster on a nearby branch. Upon returning to the cluster, the searching bees perform a waggle dance. In this dance, agents communicate the distance and direction of the target to observing novice agents. Dance communication in bee agents has been extended to applications in database querying, particularly in evaluating the similarity between recommender systems and a user query. One related approach addressed the travelling salesman problem: a number of nodes were dotted over the network, with the hive located at one of them; to collect as much nectar as possible, the artificial bee agents had to fly along the links and locate the shortest path to the source. Bee Colony Optimization (BCO) was proposed to solve combinatorial optimization problems. The two main elements of BCO are the forward pass and the backward pass. A partial solution is generated when the bees perform a forward pass, accomplished by combining individual exploration with collective experience from the past.
A backward pass was performed when they returned to the hive, followed by a decision-making process. The Virtual Bee Algorithm (VBA) was inspired by a swarm of virtual bees wandering randomly in the search environment. VBA initially creates a population of virtual bees, each associated with a memory bank. The functions to be optimized are then converted into virtual food sources, with the direction and distance of the virtual food defined. The population is updated through the waggle dance.
This strategy combines recruitment and navigation. Recruitment strategies are employed to communicate search experience to the novice bees in a colony; they include dancing processes that communicate the distance and direction of the nectar source. The other half of the algorithm, navigation, is mainly concerned with discovering unexplored areas. The non-pheromone algorithm proposed by Lammens in his recruitment and navigation model has three functions: ManageBeesActivities, CalculateVector, and DaemonAction.
These models use a mathematical model, adopted from earlier work, that is inspired by the way bees forage for food. In addition to the algorithms already discussed, we reviewed minimal communication, knowledge-based multi-agent communication, sensing communication, and flocking. As flocking agents move towards the target they communicate their proximity, direction, and velocity. The communication employed in these scenarios is direct: each agent communicates with its neighboring agents to maintain the same speed and spacing. Packer describes experiments with a group of simulated robots required to keep a side-by-side line formation while moving towards a goal; this algorithm provides one-to-group communication. The idea proposed in a minimal communication algorithm has been improved in this algorithm, and this goes a long way towards solving our multi-nanorobotic-agent problem. That algorithm also raises the issue of proximity, noting that communication is more successful within the sense of sight.
The agents must be able to assemble themselves to repair damaged vessels, and they must be able to scan the environment in search of the target. In addition to what an agent can do, we also need to consider the makeup of an agent that enables it to execute these tasks.
The structure can be represented as a class with the following attributes.
3.1. Agent Class
int state; // flagging state: 1, 2, …
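Only the state field appears in the fragment above; a fuller sketch of the implied agent structure, with the remaining field names as our assumptions rather than the paper's, might look like:

```python
# Hypothetical sketch of the agent structure; only `state` comes from
# the paper, the other fields are assumptions drawn from the text.
from dataclasses import dataclass

@dataclass
class Agent:
    state: int = 0          # flagging state: 0 = novice, 1 = informed, ...
    x: float = 0.0          # position in the environment (x axis)
    y: float = 0.0          # position in the environment (y axis)
    speed: float = 0.4      # default speed used in the experiments
    informed: bool = False  # has observed a waggle / target location
```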
In this paper, the agents are deployed in the environment through a single entry point, much as an injection is administered. The point of entry is defined in terms of the x and y axes. As soon as our nanorobotic agents are deployed into the environment, they start communicating among themselves and with their environment. The goal of dispersion in nanorobotic agents is to achieve positional configurations that satisfy some user-defined criteria: this behavior steers a nanorobotic agent to move so as to avoid crowding its local flocking mates.
Algorithm:
    If evaluate(pos) == unsatisfied then
        Move
    Else
        Do not move
    End if
This algorithm can be represented diagrammatically as in Figure 1.
From Figure 1, an unsatisfied situation means that nanites are too close to or too far from each other, while a satisfied situation is the normal flocking position.
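A minimal sketch of this dispersion rule (the separation threshold, step size, and function names below are our own, not the paper's):

```python
# Hypothetical dispersion step: an agent moves only when its position is
# "unsatisfied", i.e. a neighbour is closer than a minimum separation.
import math

MIN_SEP = 1.0  # illustrative minimum separation

def unsatisfied(pos, neighbours):
    return any(math.dist(pos, n) < MIN_SEP for n in neighbours)

def disperse_step(pos, neighbours, step=0.1):
    if not unsatisfied(pos, neighbours):
        return pos                      # satisfied: do not move
    # move directly away from the nearest crowding neighbour
    nearest = min(neighbours, key=lambda n: math.dist(pos, n))
    dx, dy = pos[0] - nearest[0], pos[1] - nearest[1]
    d = math.hypot(dx, dy) or 1.0       # avoid division by zero
    return (pos[0] + step * dx / d, pos[1] + step * dy / d)
```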
The goal of each nanorobotic agent is to achieve and maintain a constant minimum and maximum distance from its neighboring nanorobotic agents. To enhance swarming, we provide an algorithm that makes all nanites come closer to each other to maintain a swarm. We use the intuitive idea that each nanite moves towards the centre of mass (COM) of all other nanorobotic agents, where the COM of n points p1, …, pn is defined by COM = (p1 + … + pn)/n.
If the nanite is already within the allowed minimum and maximum distance of the COM then
    Do not move
Else
    Choose a dimension and
    Move one step towards the COM
End if
This is represented diagrammatically in Figure 2.
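The cohesion step can be sketched as follows (the distance band and step size are illustrative assumptions, not values from the paper):

```python
# Hypothetical cohesion step: each agent takes one step toward the
# centre of mass (COM) of the other agents unless it is already within
# an allowed band of distances.
import math

def com(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def cohesion_step(i, agents, step=0.1, d_min=0.5, d_max=3.0):
    others = [p for j, p in enumerate(agents) if j != i]
    c = com(others)
    pos = agents[i]
    d = math.dist(pos, c)
    if d_min <= d <= d_max:
        return pos                      # inside the band: do not move
    dx, dy = c[0] - pos[0], c[1] - pos[1]
    return (pos[0] + step * dx / d, pos[1] + step * dy / d)
```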
In Figure 2, the black dots are nanorobotic agents, the small circle is the COM, and the arrows signify possible movement choices for each moving nanorobotic agent. Our nanorobotic agents move towards the average destination of their neighbors, keeping the swarm in alignment and moving together towards the same general heading. To align, a nanorobotic agent needs to communicate with neighboring agents about their velocity and adjust its own speed to suit the rest of the swarm. The main and final aspect of this research, as mentioned earlier, is communicating the location of the target. As soon as one of the nanorobotic agents comes into contact with the target in the environment, it changes color and makes some movements (a waggle) observed by those within the same proximity. Agents within the proximity also change color to red to show that they have received the message, and they move towards the target. In our simulator all user options are dynamic and may be switched during execution. Target deployment: the user can deploy a single target anywhere in the environment by assigning values for the x and y axes in the target information panel. Agent deployment: agents enter the environment through a single entry point defined by the user, who enters the x and y values of the point of entry; agents can be deployed in batches of 2, 5, 10, 20, up to 50. Agent speed control: the speed of the nanorobotic agents can be decreased or increased through a button under the agent information panel.
Figure 3 shows the nanorobots swarming towards the target in close formation. Here, the screen is the environment, while the control panel holds the agent and target information panels. In our approach we kept a close watch on, and guarded against, the following failure modes as a nanorobot sends messages to its neighbors: sending a wrong message, or failing to send a message. Our nanorobotic agents are capable of observing and decoding the message being sent by the waggling nanorobotic agent. After observation, our agents select the best waggle dance, considering distance, direction, and target location. We logically present our algorithm as follows:
a) Deploy target
b) Deploy n agents
c) Explore environment
d) Communicate swarm direction
e) Cohesion
f) Agent collision avoidance
g) If found == null: go back to c)
h) Communicate target location
i) Coordinate movement towards the communicated location
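Steps a) through i) can be rendered as a toy simulation loop (a sketch only; the movement rules, radii, and speeds below are our assumptions, not the authors' simulator):

```python
# Hypothetical rendering of the algorithm: uninformed agents random-walk,
# informed agents head for the target, and information spreads to nearby
# agents, mimicking the colour-change "waggle" communication.
import math
import random

def run(n_agents=10, target=(10.0, 5.0), steps=5000, hit_radius=1.0, seed=0):
    random.seed(seed)
    agents = [(0.0, 0.0)] * n_agents            # b) deploy at single entry point
    informed = [False] * n_agents
    for _ in range(steps):
        moved = []
        for i, (x, y) in enumerate(agents):
            if informed[i]:                     # i) move towards communicated location
                dx, dy = target[0] - x, target[1] - y
            else:                               # c) explore: random walk
                dx, dy = random.uniform(-1, 1), random.uniform(-1, 1)
            d = math.hypot(dx, dy) or 1.0
            nx, ny = x + 0.5 * dx / d, y + 0.5 * dy / d
            if math.dist((nx, ny), target) < hit_radius:
                informed[i] = True              # touched the target: start "waggling"
            moved.append((nx, ny))
        agents = moved
        # h) agents near an informed (red) agent become informed themselves
        for i, p in enumerate(agents):
            if not informed[i] and any(
                    informed[j] and math.dist(p, agents[j]) < 2.0
                    for j in range(n_agents)):
                informed[i] = True
        if sum(informed) >= n_agents / 2:       # an effective "hit": 50% found it
            return True
    return False
```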
The simulation environment should provide an accurate estimate of nanorobotic agent performance in a real, human-body-like environment. The potential metrics for nanorobotic agents’ communication and coordination are convergence speed and quality. Convergence speed was measured in terms of time and distance, with time measured in iterations. Quality was measured in terms of frequency and time. To examine the performance of the proposed communication control algorithm, we first consider the efficiency of the following experiments: dispersion, direction coordination, cohesion control, and target location. If agents collide, they may take a certain delay time to reassemble, which affects the system’s performance; in the event that agents are about to collide, the separation algorithm is called to disperse them. We started by deploying 2 agents in the environment running at speed 0.4. In the screenshot given above, 10 nanorobotic agents have been deployed at position (0, 0) in terms of the x and y axes. We observed the agents and noted that they move in a single coordinated direction. As the agents scan the environment, each nanorobotic agent needs information about the speed of its neighboring agents.
The results obtained from the simulation are presented below.
Convergence speed: In this experiment, two simulation setups were chosen to compare and study the performance and scalability of the proposed algorithm in terms of convergence speed. The setups were carried out in the same environment, with the number of agents varied and the target remaining at the same position. The first simulation setup was for the reference case model, followed by the coordinated simulation model.
5.1. Parameter Justifications
As shown in Table 1, we used three batches of 5, 10, and 20 agents for both the reference case model and the coordinated agents model, in order to ascertain the results as agent deployments doubled. The simulation time has to be set to a maximum limit to avoid running the simulation indefinitely. The environment was set to 884 × 418, following the measure of the canvas space available when we developed the simulator. The target was deployed at the midpoint, neither too far from nor too close to the boundary, to allow more observation time. The agent deployment point was kept constant at (0, 0) to enable consistent results. Figure 4 shows summarized results obtained after running experimental simulations in which the agents are not equipped with any control algorithm. The results show that, using random guessing, the agents were able to locate the target only after many iterations. For testing purposes we ran the simulation three times with the same agent density in order to ascertain how varied the results would be. The reference case model consistently produced high iteration counts, implying low convergence speed.
An effective “hit” occurs when 50% of the nanorobotic agents find the target. The convergence properties were measured by running the model three times on groups of 5, 10, and 20 agents respectively. In all runs, the reference model reached the target only after many iterations. To evaluate the effectiveness of our algorithm in terms of convergence speed, we made several simulation runs, fixing the target at the midpoint as justified above and varying the number of agents over 5, 10, and 20. The results indicate that the agents were able to locate the target in far fewer iterations than in the reference case model. Figure 5 is the graphical representation of the results obtained.
5.2. Performance Measures of Our Agent Control Algorithm
We successfully deployed the agents in the environment in groups of 5, 10, and 20 respectively. We observed that the agents within proximity change color to red and move to the target instantly, decreasing the iterations needed to reach the target. Agents controlled by our algorithm reached the target in fewer iterations. We also observed that the more agents there are, the higher the convergence speed, though the difference is not large. In this research we examined the relationship between agent density and convergence speed, and observed that the two are closely related: an increase in the number of agents causes an increase in convergence speed. Another observation is that as soon as one agent becomes informed, all agents within proximity receive the information about the target location and immediately move towards the target; as the number of agents increases, so do confidence and convergence. During the simulation, some initially uninformed agents had the opportunity to make new observations that converted them into informed agents. We recommend deploying many coordinated agents to achieve high convergence speed with our algorithm. Setting aside other factors, the basic mechanism that enables high convergence speed is that we allow uninformed agents to acquire the informed state through interactions with neighbors. This results in an increasing number of informed agents and hence high convergence speed. The random guess results showed no consistency in the number of iterations. We can therefore safely say that convergence speed increases when agents are coordinated, compared with random guessing, and that the number of agents plays a pivotal role in convergence speed. Our algorithm allows all agents to follow dispersion, alignment, and cohesion rules to reach a common decision in search of a target without a complex coordination mechanism.
Figure 6 shows coordinated agents in action searching for a target. The screenshot shows that as soon as one of the agents comes into contact with the target, it communicates the target’s location, and the others move directly towards the target, increasing the frequency of “hitting” it. Ultimately, this increases the quality of emergence when using our algorithm. To support this argument, Table 2 shows the results obtained from a number of simulations testing quality under our control algorithm.
We observed that agents within proximity change color to red and move to the target instantly. Agents controlled by our algorithm achieved high quality due to target-location communication.
The bee agent control algorithm generally achieves better convergence speeds and qualities of emergence than the reference case model. Performance in both cases improves with an increase in agent density. The frequency of hitting the target increases as the number of informed agents increases in our algorithm compared to the reference case model. Fast and coordinated swarm decision making is prominent in the bee agent control algorithm compared to the reference case model.
The results presented are a proof of concept that the bee agent algorithm we propose can successfully coordinate agents towards desired targets in specific environments. As such, we contribute evidence of the potential of similar algorithms to enhance search in human-body-like environments, supporting the case for using the algorithm to control nanorobotic agents for health purposes. A number of conclusions emanate from the results reported in this work. Among them: swarms of bee-like agents deployed and controlled using the proposed algorithm significantly out-performed swarms of randomly wandering agents deployed for the same purpose. We attribute these good results to a number of factors addressed by our algorithm. Most importantly, our algorithm incorporates mechanisms by which the agents of the deployed swarm use local interactions and information to decide the direction to follow at each step. This alone fosters the speed of convergence, and hence the quality of the emergent behavior that arises.
Wireless Sensor Network
Vol. 5, No. 10 (2013), Article ID 38789, 7 pages. DOI: 10.4236/wsn.2013.510024
Copyright © 2013 Rodney Mushining, Francis Joseph Ogwu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The post Nanorobotic Agent Communication Using Bee-Inspired Swarm Intelligence appeared first on h+ Magazine.
Car hacking is a reality. We have discussed the topic several times, learning that modern vehicles have complex internal networking infrastructures that can be targeted by cyber attacks.
Devices used by a popular car insurance company to track vehicles could be exploited by hackers to take control of a car. The discovery was made by Cory Thuen, a security researcher at Digital Bond Labs. Thuen shared the results of his study, "Remote Control Automobiles," during the latest S4x15 conference, held each January in Miami.
Devices of this kind are used by car insurance companies to evaluate customers' driving habits and tailor offers to them. Progressive is the insurance company behind a dongle called Snapshot, which plugs into the OBD-II diagnostic port present on almost every modern car. But as I explained in a past post on car hacking, this port can also be an entry point for an attacker.
Thuen discovered several issues by reverse engineering the device's firmware and testing the hardware on his Toyota Tundra. The dongle fails to authenticate to the cellular network and does not encrypt its traffic, but the most concerning finding is that its firmware updates are not signed, allowing an ill-intentioned party to modify or replace the code.
To run a successful attack, a bad actor would also need to compromise the u-blox modem, which establishes the connection between the Progressive servers and the dongle, but Thuen explained that this is not an obstacle because such systems have already been exploited in the past.
“The firmware running on the dongle is minimal and insecure. It does no validation or signing of firmware updates, no secure boot, no cellular authentication, no secure communications or encryption, no data execution prevention or attack mitigation technologies… basically it uses no security technologies whatsoever.” said Thuen.
The device sits on the CAN bus, the same "digital highway" used by many components to talk to each other, including the brake system, airbags, and power steering. Direct access to the CAN bus could allow an attacker to control various components of a car by sending specific commands.
The circumstance is alarming because potentially every car is exposed to the risk of a cyber attack. Thuen also explained that hackers could gain control over vehicles by hacking Progressive's servers.
“I suspected that these dongles were built insecurely, and I was correct. The technology being used in them is outdated and vulnerable to attack which is highly troubling considering it is being used to remotely access insecure by design vehicle computers,” he said. “A skilled attacker could almost certainly compromise such dongles to gain remote control of a vehicle, or even an entire fleet of vehicles. Once compromised, the consequences range from privacy data loss to life and limb.”
“Also, there is the attack vector of Progressive backend infrastructure. If those systems are compromised, an attacker would have control over the devices that make it out to the field. In simple terms, we have seen that cars can be hacked and we have seen that cell comms can be hacked,” Thuen told Forbes.
Although Thuen tried to disclose his research to Xirgo Technologies, the company that manufactures the tracking devices, he has not received any response.
Progressive released the following statement to Forbes, last week:
“The safety of our customers is paramount to us. We are confident in the performance of our Snapshot device – used in more than two million vehicles since 2008 – and routinely monitor the security of our device to help ensure customer safety.”
“However, if an individual has credible evidence of a potential vulnerability related to our device, we would prefer that the person would first disclose that potential vulnerability to us so that we could evaluate it and, if necessary, correct it before the vulnerability could be exploited. While it’s unfortunate that Mr. Thuen didn’t share his findings with us privately in advance, we would welcome his confidential and detailed input so that we can properly evaluate his claims.”
If you are interested in the car hacking topic, I suggest reading the post titled “Car Hacking: You Cannot Have Safety without Security,” in which I have collected the state of the art in this category of attacks, including the research by the well-known hackers Chris Valasek and Charlie Miller.
This article previously appeared here.
The post Car Hacking – Progressive Dongle Exposes Vehicles To Attacks appeared first on h+ Magazine.
Do you eat only when you’re actually hungry? Many of us eat even when our bodies don’t need food. Just the thought of food entices us to eat. We think about food when we see other people eating, when we pass a favorite fast-food restaurant, when we see a scrumptious snack near the check-out at a convenience store. In addition, we’re the targets of sophisticated advertising techniques designed to keep thoughts of food and the pleasures of eating almost constantly in our minds.
Obviously, overeating unhealthy foods can lead to overweight. But looking beyond direct effects on expanding waistlines, our lab studies how mental functioning is related to diet. We’ve found a troubling link between a fat-rich diet common in the West and brain-related ailments that can actually impair our ability to avoid overeating.

Fatter and fatter
Many scientists believe that societal factors, such as advertising, have combined to create an environment in which the temptations to eat have overwhelmed our body’s natural biological ability to control what and how much we consume. The result is that in the United States, two-thirds of adults, and more than one third of children and adolescents, are now overweight or obese. This trend is spreading to other countries all over the world. Even worse, diseases that are associated with excess body weight – such as diabetes, high blood pressure and heart problems – are also becoming more prevalent.
At the core of the problem is the fact that many of the foods we can’t seem to resist are unhealthy. Some of the most attractive and popular foods in our current environment contain high amounts of saturated fats – high levels are found in red meats and dairy products like ice cream and butter. This type of diet is consumed by so many people in the US and other western societies that it is often called the “western diet.” No wonder obesity has become such a problem.

Beyond bellies to brains
Over the past several years, many scientists have reported that consuming a western diet and gaining excess body weight may have harmful effects on the brains of both human and nonhuman animals. For example, some research suggests that middle-aged adults who are overweight and obese are at greater risk for developing Alzheimer’s disease and other types of late-life cognitive dementias compared to people of normal weight. The results of other studies suggest that even children as young as seven years of age may suffer certain types of memory impairments as a consequence of consuming too much of a western diet and accumulating too much body fat.
Much information about the nature of the effects of western diets on the brain comes from studies with rats and mice. Research in our lab and elsewhere has repeatedly shown that feeding rats a diet with levels of saturated fat and sugar much like those in the human western diet weakens the blood-brain barrier (BBB). The BBB is a system of cells and membranes that form tight junctions to prevent harmful agents that circulate in the bloodstream from entering the brain. Feeding rats a western-style diet weakens those tight junctions and thereby allows potentially harmful substances to pass into the brain.
To determine which areas of the brain are most vulnerable to the ill-effects of a leaky BBB, we infuse a small amount of dye into the bloodstream of a rat and measure areas of the brain where the dye accumulates. In overweight rats fed a western-style diet, the dye appears to collect preferentially in the hippocampus, a brain structure involved with important learning and memory functions. As an apparent response to the accumulation of such intruding substances, the hippocampus becomes inflamed and its electrochemical activity changes. Rats that suffer these consequences also show deficits in their ability to use certain types of information processed by the hippocampus.

A vicious cycle
Do these deficits have anything to do with our ability to resist eating high-fat and sugary foods? We think they do. One type of information that is processed by the hippocampus takes the form of internal physiological signals about one’s need for food. Rats and people who have sustained damage to their hippocampus appear to have difficulty using those internal signals to tell whether or not they’ve had enough to eat or drink. In the presence of powerful cues in the environment that entice you to eat, a reduced ability to use information from your body that tells you that you don’t need food can lead to overeating.
The result could be a vicious cycle in which eating a western diet produces hippocampal dysfunction which weakens the ability to use internal cues to counter eating elicited by cues in the environment. This could lead to progressively more eating of western diet based on progressively greater deterioration of hippocampal function. As the hippocampus becomes more and more impaired, the severity and scope of learning and memory deficits would also increase. The result could be not only obesity but also more serious cognitive decline.
How to break this feedback loop is an important research question. Maybe the answer will be to find ways to protect and strengthen the BBB against the bad effects of western diet. Maybe it will be in finding ways to make the western diet less damaging. But until other answers are found, the only protection we have is knowing that an excessive intake of a western diet may harm both our physical and mental well-being.
Terry Davidson receives funding from the National Institutes of Child Health and Development (NIH)
Camille Sample does not work for, consult for, own shares in, or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

This article previously appeared here. Republished under a Creative Commons license.
The post Fat and Sugar-heavy Diet Harms Your Brain – And Makes You Keep On Eating appeared first on h+ Magazine.
Most people misunderstand what aging is. It’s not just the public who have been deceived; most scientists and medical researchers who study aging are on the wrong track as well.
The culprit is the “natural medicine” movement that has dominated thinking about our bodies for the last 50 years. “Respect the body’s wisdom. Work with the body to fix what has gone wrong.” This approach has worked so well with injuries and many diseases that it is understandable that people want to extend it to aging as well.
Diseases of aging have been treated as if they were something that goes wrong, something we have to help the body fix. But in fact, the evidence accumulating in recent decades is that aging is not something that goes wrong, and the body is not trying to fix it. Aging is natural. It is the body shutting itself down, putting itself out of the way once it has done its job and finished reproducing.
How do we know that aging is an active process of self-destruction, and not just the body “wearing out”? There are a number of indications, becoming clearer all the time.
- For one thing, if the body were trying its best to keep in good shape, but couldn’t help wearing out over time, we would expect damage to the body to make aging happen faster. On the contrary, most kinds of external damage actually make us live longer. The best example is exercise, which generates free radicals like crazy, tears muscles and puts little cracks in our bones. And yet, people who exercise tend to be healthier and live longer than those who don’t. Starvation is also a way to live longer: animals in the lab that are kept on very low calorie diets live much longer than those that have enough to eat. This is a clear indication that bodies that get plenty to eat aren’t really trying to live a long time.
- If the body were doing its best to forestall aging, but succumbing eventually to wear-and-tear, we would expect the repair functions to be going full-tilt as we get older. But in fact, all our repair and protection systems gradually shut down as we age. Stem cells, which produce new body tissues, gradually stop working. And the anti-oxidant defenses that protect us from chemical damage are dialed down in old age, so we run short of protective molecules such as CoQ10, SOD and glutathione.
- Clearest of all: there are actual self-destruction mechanisms that we can see in action. One of them is inflammation. When we are young, inflammation protects us from invading microbes and kills diseased cells; but when we get old, inflammation is dialed up much too high: it kills healthy cells, inflames our arteries (leading to heart disease), and causes cancer as well. Another mechanism we can see in action is called apoptosis, or cell suicide. When we are young, only cells that are diseased or defective remove themselves via apoptosis; but when we are old, healthy muscle and nerve cells simply fall on their swords and die, leading to weakness of muscle, weakness of mind and Parkinson’s disease.
This explains why “natural medicine” has been so helpful for infectious diseases, immune function and response to trauma, but in stark contrast natural medicine has failed to make headway against cancer and Alzheimer’s disease, and has made only marginal progress against heart disease and stroke.
For these diseases of old age, we need to abandon the natural approach, and instead simply trick the body into thinking that it is younger. Then it won’t try to shut itself down.
In fact there are some intriguing indications that this might work. There are researchers working with this approach and they have produced some dramatic successes just in the last few years:
- Every chromosome in every cell contains a time-keeper, tacked onto the tail end of the DNA. This is the “telomere”. Simply by resetting the telomere clock, scientists have produced dramatic results in lab animals, reversing aging and making animals younger.
- When the telomere clock signals a critical age, the cell becomes “senescent”. It goes on strike and refuses to do its job. Worse yet, it sends signals to nearby cells that cause those cells to become inflamed and cancerous. Recently, scientists have had remarkable success making mice live longer simply by removing the small number of senescent cells.
- As we get older, the hormones circulating in our blood gradually change. This is the principal way that the body knows how old it is. There are youth hormones that promote rebuilding and high-efficiency energy output; and there are old-age hormones that turn up inflammation and cell suicide and signal the body to gradually destroy itself. Scientists have begun to have success by increasing the former and decreasing the latter, resetting the hormone profile of an old animal to match that of a young animal.
These approaches have not yet made front page news, but scientists in the field already recognize their dramatic promise. If all goes well, we should expect breakthrough treatments that extend life and prevent the debilitating diseases of old age, coming on-line in the next few years.
Disclaimer: This is my own perspective, shared by a handful of world-class aging scientists, but it is not yet mainstream. In addition to the two views described here (programmed aging and wear-and-tear theories), there is another class of theories, favored by mainstream evolutionary scientists, based on compromises that evolution has been forced to make. These compromises have been made up ad hoc to avoid the inference that aging evolved to benefit the community, not the individual.
A great deal of genetic phenomena, as well as hormesis, can only be explained by programmed theories.
This article originally appeared here on Josh’s blog.
In recent years, data gathered from large epidemiological studies have suggested that more time spent sitting correlates with higher mortality, independently of the level of exercise undertaken by an individual. This association seems fairly robust, as it has been replicated in a number of different data sets and by different research groups. Here is a survey of these results:

The amount of time a person sits during the day is associated with a higher risk of heart disease, diabetes, cancer, and death, regardless of regular exercise. "More than one half of an average person's day is spent being sedentary - sitting, watching television, or working at a computer. Our study finds that despite the health-enhancing benefits of physical activity, this alone may not be enough to reduce the risk for disease." The meta-analysis reviewed studies focused on sedentary behaviour. The authors found, however, that the negative effects of sitting time on health are more pronounced among those who do little or no exercise than among those who participate in higher amounts of exercise.
"The findings suggest that the health risk of sitting too much is less pronounced when physical activity is increased. We need further research to better understand how much physical activity is needed to offset the health risks associated with long sedentary time and optimize our health." Future research will help determine what interventions, in addition to physical activity, are effective against the health risk of sedentary time. "Avoiding sedentary time and getting regular exercise are both important for improving your health and survival. It is not good enough to exercise for 30 minutes a day and be sedentary for 23 and half hours."
These results suggest that a modest reduction in heart rate leads to a modest increase in life span. The researchers here at least monitored body weight; I would otherwise immediately suspect inadvertent calorie restriction as a more likely cause of life extension than the proposed mechanisms related to heart rate:

Heart rate correlates inversely with life span across all species, including humans. In patients with cardiovascular disease, higher heart rate is associated with increased mortality, and such patients benefit from pharmacological heart rate reduction. However, cause-and-effect relationships between heart rate and longevity, notably in healthy individuals, are not established. We therefore prospectively studied the effects of a life-long pharmacological heart rate reduction on longevity in mice. We hypothesized that the total number of cardiac cycles is constant, and that a 15% heart rate reduction might translate into a 15% increase in life span.
C57BL6/J mice received either placebo or ivabradine at a dose of 50 mg/kg/day in drinking water from 12 weeks of age until death. Heart rate and body weight were monitored. Autopsy was performed on all non-autolytic cadavers, and parenchymal organs were evaluated macroscopically. Ivabradine reduced heart rate by 14% throughout life, and median life span was increased by 6.2%. Body weight and macroscopic findings were not different between placebo and ivabradine. Life span was not increased to the same extent as heart rate was reduced, but was nevertheless significantly prolonged by 6.2%.
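The "constant number of cardiac cycles" hypothesis can be checked with one line of arithmetic: if total lifetime heartbeats were fixed, lifespan would scale as the inverse of heart rate, so the observed 14% rate reduction predicts roughly a 16% lifespan gain, well above the 6.2% actually measured. A quick sketch:

```python
# Constant-total-heartbeats model: lifespan ~ 1 / heart rate.
hr_reduction = 0.14                          # ivabradine cut heart rate by 14%
predicted_gain = 1 / (1 - hr_reduction) - 1  # ~16.3% predicted lifespan gain
observed_gain = 0.062                        # study reported 6.2% median gain

print(f"predicted: {predicted_gain:.1%}, observed: {observed_gain:.1%}")
```

The gap between prediction and observation is consistent with the authors' own conclusion that heart rate reduction prolongs life but not in strict proportion to the number of beats saved.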
A see-through zebrafish and enhanced imaging provide the first direct glimpse of how blood stem cells take root in the body to generate blood.
Reporting in the journal Cell, researchers in Boston Children’s Hospital’s Stem Cell Research Program describe a surprisingly dynamic system that offers several clues for improving bone marrow transplants in patients with cancer, severe immune deficiencies, and blood disorders, and for helping those transplants “take.”
“The same process occurs during a bone marrow transplant as occurs in the body naturally” in humans, says Leonard Zon, MD, director of the Stem Cell Research Program. “Our direct visualization gives us a series of steps to target, and in theory we can look for drugs that affect every step of that process.”
“Stem cell and bone marrow transplants are still very much a black box — cells are introduced into a patient and later on we can measure recovery of their blood system, but what happens in between can’t be seen,” says Owen Tamplin, PhD, the paper’s co-first author. “Now we have a system where we can actually watch that middle step.”

The blood system’s origins revealed
It had already been known that blood stem cells bud off from cells in the aorta, then circulate in the body until they find a “niche” where they’re prepped for their future job creating blood for the body.
For the first time, the researchers reveal how this niche forms, using time-lapse imaging of naturally transparent zebrafish embryos and a genetic trick that tagged the stem cells green.
On arrival in its niche (in the zebrafish, this is in the tail), the newborn blood stem cell attaches itself to the blood vessel wall. There, chemical signals prompt it to squeeze itself through the wall and into a space just outside the blood vessel.
“In that space, a lot of cells begin to interact with it,” says Zon. Nearby endothelial (blood-vessel) cells wrap themselves around it: “We think that is the beginning of making a stem cell happy in its niche, like a mother cuddling a baby.”
As the stem cell is being “cuddled,” it’s brought into contact with a nearby stromal or “nurse” cell that helps keep it attached. The stem cell hooks onto the nurse cell tightly, in a process Zon likens to early “attachment” of an infant to its mother.
The “cuddling” was reconstructed from confocal and electron microscopy images of the zebrafish taken during this stage. Through a series of image slices, the researchers were able to reassemble the whole 3D structure — stem cell, cuddling endothelial cells, and stromal cells.
“Nobody’s ever visualized live how a stem cell interacts with its niche,” says Zon. “This is the first time we get a very high-resolution view of the process.”
Eventually, the cuddled stem cell begins dividing. One daughter cell leaves the niche while the other stays. In time, all the stem cells leave and begin colonizing their future site of blood production (in fish, this is the kidney).
Further imaging done in mice found evidence that blood stem cells go through much the same process in mammals, which makes it likely in humans too. In humans, blood stem cells set up permanent residence in the bone marrow.
These detailed observations are already informing the Zon Lab’s attempt to improve bone marrow transplantation. By doing a chemical screen in large numbers of zebrafish embryos, the researchers found that the compound lycorine promotes interaction between the blood stem cell and its niche, leading to greater numbers of blood stem cells in the adult fish.
Boston Children’s Hospital | The birth and engraftment of a blood stem cell
Abstract of Hematopoietic Stem Cell Arrival Triggers Dynamic Remodeling of the Perivascular Niche
Hematopoietic stem and progenitor cells (HSPCs) can reconstitute and sustain the entire blood system. We generated a highly specific transgenic reporter of HSPCs in zebrafish. This allowed us to perform high-resolution live imaging on endogenous HSPCs not currently possible in mammalian bone marrow. Using this system, we have uncovered distinct interactions between single HSPCs and their niche. When an HSPC arrives in the perivascular niche, a group of endothelial cells remodel to form a surrounding pocket. This structure appears conserved in mouse fetal liver. Correlative light and electron microscopy revealed that endothelial cells surround a single HSPC attached to a single mesenchymal stromal cell. Live imaging showed that mesenchymal stromal cells anchor HSPCs and orient their divisions. A chemical genetic screen found that the compound lycorine promotes HSPC-niche interactions during development and ultimately expands the stem cell pool into adulthood. Our studies provide evidence for dynamic niche interactions upon stem cell colonization.