Researchers from the Allen Institute for Brain Science have published the Allen Mouse Brain Connectivity Atlas, the first comprehensive, large-scale data set on how the brain of a mammal is wired, described in their paper in Nature.
The mouse brain’s 75 million neurons are arranged in a structure roughly similar to that of the human brain’s approximately 100 billion neurons, making the mouse a powerful model system for understanding how the nerve cells of the human brain connect, process, and encode information, say Allen Institute researchers.
(The only species for which we have a complete wiring diagram is the microscopic worm C. elegans, a far simpler system with only 302 neurons.)
Scientists at the Allen Institute set out to create a wiring diagram of the brain—also known as a “connectome”—to illustrate short- and long-range connections, using genetically engineered viruses to trace and illuminate individual neurons. To get a truly comprehensive view, scientists collected imaging data at resolutions smaller than a micron (millionth of a meter) from more than 1,700 mouse brains, each of which was divided into 140 serial sections.
“The data for the Allen Mouse Brain Connectivity Atlas was collected in a way that’s never been done before,” says Allen Institute researcher Hongkui Zeng. “Standardizing the data generation process allowed us to create a 3D common reference space, meaning we could put the data from all of our thousands of experiments next to each other and compare them all in a highly quantitative way at the same time.”
The Allen Mouse Brain Connectivity Atlas contains more than 1.8 petabytes of data, the equivalent of 23.9 years of continuous HD video. Like all of the Allen Brain Atlas resources, the data and the tools to browse and analyze them are freely available to the public at www.brain-map.org.
The Global Power of the Atlas
This movie displays 21 mapping experiments from the Allen Mouse Brain Connectivity Atlas, a tool to investigate how different regions of the brain are connected. The density of axons at each voxel (dot) is displayed as overlapping circles color-coded by the area of the brain from which the axons project. This animation shows how projections from different regions of the cortex divide the thalamus and striatum into distinct domains. — Allen Institute
By analyzing the data, Zeng and her team were able to discover several interesting properties of the mouse brain’s connectome. For example, there are extensive connections across the two hemispheres with mirror-image symmetry. Pathways belonging to different functional circuits in the brain can be identified and their relationships and intersections visualized in 3D.
And there is a great degree of variation in the strengths of the connections, spanning more than five orders of magnitude (a 100,000-to-one ratio), with an intriguing balance between a small number of strong connections and a large number of weak connections.
“The purpose of the Atlas is to create a new way to map the brain’s vast connections systematically and rapidly, and to develop a platform to present the data to users and help them navigate in the friendliest possible way,” explains Zeng. “But the kind of analysis we have done so far is just the beginning of the deep analysis of the wiring patterns of different brain circuits made possible by this unique collection of data.”
The Future of the Connectivity Atlas
Maintaining the Allen Mouse Brain Connectivity Atlas is a continuous effort. After the completion of the Atlas as originally scoped in March 2014, scientists will continue to update the Atlas with profiles of more individual nerve cell types as they become available. Researchers at the Allen Institute will also focus on studying the connections between different types of neurons in the same or neighboring regions — the city roads and local streets that, together with the interstates, form the hierarchical neural networks.
“Previously, the scientific community had to rely on incomplete, fragmented data sets, like small pieces of a map but at different scales and resolutions, so it was impossible to see the bigger picture,” explains Ed Callaway, Professor in the Systems Neurobiology Laboratories at the Salk Institute for Biological Studies. “Now, we have instant access to complete and consistent data across the entire brain, and the suite of web-based analytic and display tools make it easy to find what you need and to see it in 3D.”
Researchers at the Allen Institute for Brain Science have generated a blueprint for how to build a human brain at unprecedented anatomical resolution.
This first major report using data from the BrainSpan Atlas of the Developing Human Brain is published in the journal Nature this week. The data provide insight into diseases like autism that are linked to early brain development, and into the origins of human uniqueness. The rich data set is publicly available via the Allen Brain Atlas data portal.
“Knowing where a gene is expressed in the brain can provide powerful clues about what its role is,” says Ed Lein, Investigator at the Allen Institute for Brain Science. “This atlas gives a comprehensive view of which genes are on and off in which specific nuclei and cell types while the brain is developing during pregnancy. This means that we have a blueprint for human development: an understanding of the crucial pieces necessary for the brain to form in a normal, healthy way, and a powerful way to investigate what goes wrong in disease.”
This paper represents the first major report to make use of data collected for the BrainSpan Atlas of the Developing Human Brain, a science consortium initiative that seeks to create a map of the transcriptome across the entire course of human development.
“This atlas is already transforming the way scientists approach human brain development and neurodevelopmental disorders like autism and schizophrenia,” said Thomas R. Insel, Director of the National Institute of Mental Health.
The researchers pointed to autism as a disorder with particularly pertinent links to early brain development. The research team used the BrainSpan Atlas to examine the developmental expression of a number of genes that prior scientific studies had linked to autism.
“We used the maps we created to find a hub of genetic action that could be linked to autism—and we found one,” says Lein. “These genes were associated with the newly generated excitatory neurons in the cortex, the area of the brain that is responsible for many of the cognitive features affected in autism such as social behavior. This discovery is an exciting example of the ability of the BrainSpan Atlas to generate meaningful hypotheses about the origins of brain developmental disorders.”
What makes humans unique?
Understanding what makes humans unique involves deciphering a complex puzzle—one that begins during the earliest phases of development. The richness of the BrainSpan Atlas gives scientists a new set of tools to assess how the human brain develops compared to other species.
“We know that some important regions of the genome show striking sequence differences in humans compared to other species,” says Lein. “Since where a gene is expressed in the brain can give insight into its function, we can use our map to begin to figure out the roles of those genes in making humans distinct. Our analysis of the data showed that these genes are enriched in the frontal cortex, as well as in several specific specialized cell types including inhibitory GABAergic interneurons and neurons of the transient subplate zone that serves as a scaffold during early circuit formation. These features are all known to be expanded or show developmental differences in humans compared to other species, so our data gives unprecedented clues about the molecular underpinnings of what makes human neocortex unique.”
The BrainSpan Atlas enables researchers around the world to conduct research and ask questions about the early human brain that many would not be able to do otherwise, due to the highly limited availability of prenatal tissues.
The founding principal investigators in the consortium behind the BrainSpan project include Ed Lein and Michael Hawrylycz at the Allen Institute for Brain Science, Nenad Sestan and Mark Gerstein at Yale University, Jim Knowles at USC, Pat Levitt at The Saban Research Institute of Children’s Hospital Los Angeles and USC, Dan Geschwind at UCLA, and Bruce Fischl at Massachusetts General Hospital.
As part of its Windows Phone 8.1 update announcement Wednesday, Microsoft introduced Cortana, a personal digital assistant with a persona.
“We were inspired by the popular character from Halo who served as a brilliant AI and a deeply personal digital assistant to Master Chief… so we called her Cortana,” said Joe Belfiore, the corporate vice president and manager for Windows Phone Program Management at Microsoft on his blog.
Explains Belfiore: “Powered by Bing, Cortana is the only digital assistant that gets to know you, builds a relationship that you can trust, and gets better over time by asking questions based on your behavior and checking in with you before she assumes you’re interested in something. She detects and monitors the stuff you care about, looks out for you throughout the day, and helps filter out the noise so you can focus on what matters to you.
“Cortana will launch shortly here in the U.S. first as a ‘beta,’ and then will roll out to the U.S., the U.K., and China in the second half of 2014, with other countries to follow into 2015.”
The Verge has a more detailed description of Cortana.
Stanford University scientists have discovered by accident a way to produce thin diamond films from graphite, which could be useful for a variety of industrial applications, from cutting tools to electronic devices and electrochemical sensors.
The scientists added a few layers of graphene (one-atom-thick sheets of graphite) to a platinum support and exposed the topmost layer to hydrogen.
The ‘Midas touch’?
To their surprise, the reaction at the surface triggered a domino effect that altered the structure of the graphene layers from graphite-like to diamond-like.
“We provide the first experimental evidence that hydrogenation can induce such a transition in graphene,” says Sarp Kaya, researcher at the SUNCAT Center for Interface Science and Catalysis.
Graphite and diamond are two forms of the same chemical element, carbon. In graphite, carbon atoms are arranged in planar sheets that can easily glide against each other. This structure makes the material very soft, which is why it can be used in products such as pencil lead.
In diamond, on the other hand, the carbon atoms are strongly bonded in all directions; thus diamond is extremely hard. Besides mechanical strength, its extraordinary electrical, optical and chemical properties contribute to diamond’s great value for industrial applications.
With the help of intense X-rays from SLAC’s Stanford Synchrotron Radiation Lightsource and additional theoretical calculations performed by SUNCAT researcher Frank Abild-Pedersen, the team then determined how hydrogen impacted the layered structure.
They found that hydrogen binding initiated a domino effect, with structural changes propagating from the sample’s surface through all the carbon layers underneath, turning the initial graphite-like structure of planar carbon sheets into an arrangement of carbon atoms that resembles diamond.
The discovery was unexpected. The original goal of the experiment was to see if adding hydrogen could alter graphene’s properties in a way that would make it useable in transistors, the fundamental building block of electronic devices. Instead, the scientists discovered that hydrogen binding resulted in the formation of chemical bonds between graphene and the platinum substrate.
Future research will explore the full potential of hydrogenated few-layer graphene for applications in materials science.
The research team included scientists from Stanford University, the Stanford Institute for Materials & Energy Sciences (SIMES), SUNCAT, and SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL).
Abstract of Physical Review Letters paper
We report on the hydrogen adsorption induced phase transition of few-layer graphene (1 to 4 layers) to a diamondlike structure on Pt(111), based on core level x-ray spectroscopy, temperature-programmed desorption, infrared spectroscopy, and density functional theory total energy calculations. The surface adsorption of hydrogen induces a hybridization change of carbon from the sp2 to the sp3 bond symmetry, which propagates through the graphene layers, resulting in interlayer carbon bond formation. The structure is stabilized through the termination of interfacial sp3 carbon atoms by the substrate. The structural transformation occurs as a consequence of high adsorption energy.
When Domino’s sent two pepperoni pizzas on a 10-minute drone flight last summer in a publicity stunt to demonstrate how takeaways may be delivered in the future, Andreas Raptopoulos reacted with scorn.
“This is total nonsense. Why the hell would you do that? Why take the public risk to transport a pizza around when you can do it perfectly well with all of the infrastructure you already have there? Why don’t you use the same technology to save somebody’s life when a mother or a child needs medicine, instead of it being stuck on a lorry on a muddy road? To me, this is where technology works best,” the Greek entrepreneur said…
The Rejuvenation Research journal is completely open access as of when I looked it over today. I believe that to be a fairly recent change, so those of you without subscriptions might want to wander through the archives in search of interesting reading. In particular you might find the editorials by Aubrey de Grey to be well worth reading, and looking over those articles should provide a great deal of insight into the state of aging research and the related noteworthy tensions and debates within the scientific community. Below is quoted the most recent editorial (in PDF format only, I'm afraid to say), followed by a couple of others that you may also find worthwhile:
Really? How can I make such a claim? Surely the elderly are vocal in defence of their rights to be treated as equals with the young? In many ways they certainly are. But bizarrely, when it comes to their health everything is different. They tend to the view that medical care should be prioritized for those who have not yet enjoyed a good innings. If you haven't encountered this yourself and don't believe me, try it: talk about it to a few retirees and you're in for a shock.
Let's look under the hood a bit. Why would the elderly take this view? It turns out to be very easy to explain. In a world in which aging is truly inevitable, forever, there is a pretty solid ethical basis for the idea that equitable distribution of aggregate quality of life among all people translates into working harder to maintain or restore the health of the young than the old, simply because they have more to gain before the inevitable final curtain falls. And that's exactly the premise that the elderly are non-randomly more likely than the young to adopt, since they've had that much longer having it drilled into them by the rest of humanity. They've lost the ability to aim high.
So I come to my call to action. Throughout history, humanity has only acted energetically against discrimination when those who are suffering it led the way. Therefore, we need to change this attitude on the part of the elderly, and fast, if we are to maximize humanity's cognizance of the horror of aging and its urge to defeat it as soon as science allows. We need to make the aged less ageist. And the only way we can do it is by educating them that aging is within striking distance of being brought under comprehensive medical control: the same sort of control that they are familiar with - but their parents, or at least their grandparents, were not - in respect of the diseases, such as tuberculosis and diphtheria, that back then claimed over one third of all babies before the age of one.
To me, it is that attitude which is reprehensible. Whether or not it is true [that] the loss of reputation arising from such over-selling (if it turned out so to be) would be so awful as to outweigh the funding considerations, that dilemma is between two purely selfish motives - money now and notoriety-driven shortage of money later, or less money now but reputation untarnished. What my colleagues should, in fact, be asking themselves is how they can best repay society for its decision to give them their chosen life of freedom from the private-sector rat race. (I will not digress into whether the academic rat race is any better.) I submit that the answer is clear: Researchers should say what they actually think. At present, it is customary for researchers to dangle the carrot of success in our research without mentioning time frames, thus conveniently protecting themselves from any chance of being seen as overoptimistic, but also failing to engender the public enthusiasm so vital for allowing the necessary research to actually happen. This cannot be allowed to continue.
I believe that the main reason for this ostensibly misguided caution is that biogerontologists simply do not have good evidence that such a quest would even modestly succeed, even with a dramatic rise in the funds allocated to it. Though they are quite good at convincing themselves and each other of the promise of hypothetical "magic bullet" interventions - the most popular within the field being drugs that would mimic calorie restriction (CR) - they essentially never convince anyone with purse strings to hand. In my view, this is not because they lack marketing eloquence or motivation, but because the hard facts do not inspire objective confidence that successes seen thus far in the laboratory will ever, even in principle, translate to the clinic. The recent negative results in primate calorie restriction have surely rendered this problem even more intractable.
“Current economic theory simply does not consider the possibility that robot labor might replace human labor as the primary source of economic growth. Only the science fiction community has taken this idea seriously.”(Albus, Path to a better world: A plan for prosperity, opportunity, and economic justice, 2011).
The youth unemployment rate in European countries has been estimated to be as high as 62% (Thompson, 2013), and if the current trend continues, with the birthrate outpacing the job creation rate, youth unemployment can be expected to rise to even higher levels. The generation of youth today faces an opportunity crisis caused in part by the high cost of education, the debt load, and technological unemployment.
The problem we wish to solve is technological unemployment. Our approach works around the current paradigm, in which human beings are expected to compete with robots, machines, and intelligent machines for employment. The new paradigm we present as part of the solution is a world in which incorruptible intelligent machines take care of human beings rather than compete against them.
The solution categories are political and technological
There are generally two categories from which solutions to high unemployment are formed and chosen: the political category and the technological category. Each category of solution has its pros and cons, and we do not wish to diminish any alternative solution which has a measurably positive impact on the problem. What we are promoting is a technological safety net to supplement, or in other words provide auxiliary power to, the traditional political safety net. If one fails, the other provides resiliency and redundancy.
Political solutions typically involve social programs which are primarily funded through taxes. At present this tax burden is spread across the middle class as well as the rich, but as unemployment rises the middle class will predictably shrink and the burden will fall ever more heavily on those whom society deems rich. These political solutions therefore rely on a growing tax base, a trend which at this point is not materializing.
In order for something such as the Affordable Care Act to work, it has to have funding far into the future. But what if there aren't enough jobs in the future to fund this idea, or many others like it? Are taxes really the only way we can think of to solve this problem, and why are we limiting ourselves to relying on the government to solve it when the solution can come from technology itself?
Relying on centralized government to solve this problem means accepting risks such as dependency on centralized authorities: politicians can change their minds and the laws at any time and cut millions of people off, and people who are rich do not necessarily want to pay for people who are poor.
The government will be required to use force to try to make the rich pay for the poor, which ultimately can result in budget deadlock. This political approach shows no sign of success, and in the long term it is not sustainable, because the workforce and tax base will shrink, leaving fewer people to pay higher taxes.
A final concern with this solution is the lack of privacy: where certain people receive social program benefits while others do not, unnecessary division is created. The political solutions inherit all of the problems associated with centralization, such as a single point of failure, lack of privacy, dependency on centralized authority, and a narrow political focus on problems such as unemployment.
Some of the benefits of the technological solutions include:
- Leverages AI
- Leverages the Internet of Things
- Leverages and embraces the technological singularity
- Virtual / Not limited by borders or jurisdictions
- Autonomous / Does not need central human authorities to run itself
- Decentralized / Distributed & Peer to Peer
- Trust-less / Minimal trust
- Directly democratic
- Potentially incorruptible
- Private / Pseudo-anonymous
The technological solution has all of these pros. The benefit of decentralization is that the system cannot be shut down. With a government solution, when politicians win or lose an election, or are simply in a bad mood, they can decide to make irrational changes which hurt the economic security of the people who depend exclusively on them to always make the right decisions.
If we look at the Affordable Care Act we can see that while its intentions are good, the website is designed in an archaic manner and is barely usable, and because it depends on politics (which is often a popularity contest) it can be modified or cut by politicians at any time.
The technological solution does not have this problem, and while it should not act as a replacement, it can act as an auxiliary solution, providing the same services when the political solution fails due to human nature, human error, or corruption. By providing a technological solution we remove a possible point of failure and help secure the economic fate of individuals who would otherwise depend exclusively on the political solution not failing.

Costs
Additionally, the technological solution can reduce the costs of the political solution. If, for instance, Peer to Peer health insurance could someday compete with traditional health insurance, then we could reduce the cost of the Affordable Care Act, because Peer to Peer health insurance could be cheaper and equally effective. If we can provide something essential like food vouchers in a decentralized manner, then we do not need to use the government for this purpose.

Privacy
The government provides an EBT card which requires the user's ID in order to receive food stamps. This violates the privacy of the user, who may not want it to be known that they receive food stamps. In this case the technological solution would be pseudo-anonymous and completely private, while also capable of providing digital food vouchers backed by farmers willing to accept them. In a pseudo-anonymous environment there would be no expensive drug tests (Price, 2013) and no political litmus tests. Every participant in the technological solution should receive equal benefits in a pseudo-anonymous, privacy-enhanced manner.

Security
Because the political solution requires an identity, it creates a risk of identity theft, which has a cost associated with it. The technological solution is much more secure by design, and because it is Peer to Peer it can be global from the start, bypassing the political obstacles associated with political solutions. It cannot be shut down on a whim, because its Peer to Peer nature provides layers of redundancy.
It only requires a public key, which means identity need not be stored on centralized servers within databases. This also means your private information cannot be sold or traded around. The increase in privacy removes the social and political stigma associated with receiving something like Food Stamps, Welfare Benefits, or subsidized Health Insurance, because you're just a pseudonym. Medical records can also be more secure, because they too could be held under a pseudonym.
A summary of the risks associated with dependency on the political solution
- The risk of human error, human failure, or simple mood changes among politicians.
- The risk of unsustainability due to a possible unexpected increase in cost or less tax revenue.
- The risk of identity theft, public scorn, or political persecution due to the lack of privacy.
- Bound by borders: benefits are assigned by jurisdiction in an increasingly borderless economy.
- Change happens slowly, incrementally, and may be unable to keep pace with technological change.
“Peoples’ Capitalism would generate the savings and loans necessary to finance massive new investments in modern technology and generate rapid productivity growth. And it would distribute the benefits of rapid economic growth to all. Everyone would become a capitalist. Everyone would own a share of the means of production.”(Albus, n.d.).
All of the problems associated with the political solution can be addressed by providing a technologically enhanced, distributed basic dividend. To provide a basic dividend we must leverage intelligent machines, and to leverage intelligent machines we must promote decentralized ownership of the intelligent machines.
Crowd funding through an exodus address or angel address
One of the most important ways of bootstrapping or kick-starting the development of these solutions is crowd funding. One successful way of doing that is simply to designate an address as the exodus or angel address, to be used as the crowd-funding destination. Anyone can send any cryptocurrency or crypto-asset they have to this address as part of the distributed fundraising campaign.
The money raised by crowd funding is a loan to developers, who can be paid to quit their jobs or who will be paid in shares of the crypto-entity/asset they are building. The entities who sent funds to that address will have their transactions logged to the blockchain, allowing their level of contribution to be converted to shares in the crypto-entity/asset.
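As a rough illustration of this mechanism, here is a minimal Python sketch; the transaction log, address, and share count are hypothetical, and a real implementation would read contributions from actual blockchain data rather than a hard-coded list.

```python
# Minimal sketch (hypothetical data, not any real chain's API):
# contributions sent to a designated "exodus" address are read from the
# transaction log and converted into proportional shares of the new
# crypto-entity/asset.

EXODUS_ADDRESS = "EXODUS-1"  # hypothetical crowd-funding destination

# (sender address, amount) pairs as they would appear in the logged transactions
tx_log = [
    ("alice", 50.0),
    ("bob", 30.0),
    ("carol", 20.0),
]

TOTAL_SHARES = 1_000_000  # shares of the crypto-entity to be distributed

def allocate_shares(log, total_shares):
    """Convert logged contributions into proportional share allocations."""
    raised = sum(amount for _, amount in log)
    return {sender: amount / raised * total_shares for sender, amount in log}

print(allocate_shares(tx_log, TOTAL_SHARES))
# {'alice': 500000.0, 'bob': 300000.0, 'carol': 200000.0}
```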
The use of the savings address to simulate an annual salary
A possible result of the decentralized crowd-funding process is the decentralized distribution of ownership in crypto-assets, which can be treated like capital assets. Each wallet should have a savings address and a shopping or checking address. The savings address should be the most secure address; it could be held in cold storage and/or rate limited.
The purpose of this savings address is to send a specific amount of currency to the spending address at a rate which simulates a salary. The idea is that you can set your own annual salary, denominated in your national currency, to be paid out from this savings address to your checking address.
This is necessary to allow people to stretch their money over a lifetime, because the majority of future incomes will come from investments rather than from what we today call jobs. Money which comes from investments may arrive in huge sums very quickly and then slow to a trickle.
This happens because there is typically an exponential growth period, after which an investment's profit tapers off. It may be important for some people to psychologically simulate an annual salary so that they don't burn through all their money. This type of rate limiting is supported by the Bitcoin protocol's nLockTime field, which designates a time before which a transaction cannot be confirmed, so that a transaction sent to oneself would not be spendable until some point in the future.
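A minimal sketch of the salary-simulation idea follows. The balances, dates, and installment logic are illustrative assumptions; real rate limiting would be enforced on-chain via nLockTime rather than by a script.

```python
from datetime import date, timedelta

# Sketch: a savings balance is paid out to a checking address in fixed
# installments, mimicking nLockTime-style rate limiting. All names and
# numbers are illustrative assumptions.

savings_balance = 120_000.0   # units held at the cold-storage savings address
annual_salary = 24_000.0      # self-chosen "salary" in the national currency
payouts_per_year = 12         # monthly installments

def payout_schedule(balance, salary, per_year, start=date(2014, 1, 1)):
    """Yield (unlock_date, amount) pairs until the savings run out."""
    installment = salary / per_year
    when = start
    while balance > 0:
        amount = min(installment, balance)
        yield when, amount          # funds stay locked until unlock_date
        balance -= amount
        when += timedelta(days=365 // per_year)

for unlock_date, amount in list(payout_schedule(
        savings_balance, annual_salary, payouts_per_year))[:3]:
    print(unlock_date, amount)
```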
The Pros and Cons of utilizing Proof of Work vs Proof of Stake
Cryptocurrencies typically offer two consensus mechanisms: Proof of Work and Proof of Stake. Proof of Stake allows a dividend, generated from transaction fees, to be paid to every owner of the currency. The problem is that the people who hold the most would receive the most, so Proof of Stake is not necessarily the fairest mechanism for distributing a flat dividend.
Another variation would be to have the subsidy go to every owner of a specific key. With the public key of each user you could distribute the dividend on a per-public-key basis, and through this mechanism you could give an equal portion of the dividend to every owner. This would work about as well as the government tax-and-spend model we have now, with the transaction fee going to every non-anonymous user of the currency, and as a side effect it would encourage every user of the currency to maintain a permanent pseudo-anonymous identity.
A combination like the above example, which combines cutting-edge technology in a decentralized and private manner, would work even better. The individuals who receive welfare or universal healthcare would never be identifiable, because the basic income would go to everyone who can provide Proof of Ownership of some nominal amount of the cryptocurrency/crypto-asset along with Proof of Identity. For instance, you have your cryptocurrency wallet with some amount of currency in it, and a public key which allows one specific identity to claim ownership of that wallet. If the wallet passes a certain threshold amount, it meets the minimum to receive the dividend. The public key is then connected to dividend-paying shares, which use private-sector mechanisms to pay dividends, based on transaction fees, to every shareholder. Shareholders would have to be real people; to make sure no one games the system, the dividends can only be redeemed at an exchange which follows KYC guidelines.
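The following sketch illustrates the eligibility rule just described, assuming hypothetical public keys and balances, with a boolean standing in for KYC-style identity verification.

```python
# Sketch of the per-public-key dividend: every verified identity whose
# wallet holds at least a threshold amount receives an equal slice of the
# pooled transaction fees. Names and checks are hypothetical.

DIVIDEND_POOL = 900.0      # transaction fees collected this period
MIN_BALANCE = 1.0          # threshold a wallet must hold to qualify

wallets = {
    # public key: (balance, passed identity verification?)
    "pubkey_A": (5.0, True),
    "pubkey_B": (0.5, True),    # below threshold -> no dividend
    "pubkey_C": (2.0, True),
    "pubkey_D": (9.0, False),   # unverified -> no dividend
}

def equal_dividend(pool, wallets, min_balance):
    eligible = [k for k, (bal, verified) in wallets.items()
                if bal >= min_balance and verified]
    share = pool / len(eligible) if eligible else 0.0
    return {k: share for k in eligible}

print(equal_dividend(DIVIDEND_POOL, wallets, MIN_BALANCE))
# {'pubkey_A': 450.0, 'pubkey_C': 450.0}
```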
Types of dividends
For dividend types you have flat static dividends, flat dynamic dividends, hierarchical static dividends, and hierarchical dynamic dividends. To further define them we can label them as follows:
Type A dividends: Flat static dividends
Type B dividends: Flat dynamic dividends
Type C dividends: Hierarchical static dividends
Type D dividends: Hierarchical dynamic dividends
A flat static dividend is a fixed percentage of profit within a profit-sharing contract, of which every owner receives an equal portion. There is no economic hierarchy, so there are no accusations of a pyramid scheme benefiting early adopters more than late adopters. There is a fixed number of slots, and every user who takes a slot before all the slots are filled gets an equal share of the profit generated.
A flat dynamic dividend is a dynamic percentage which can change over time or be voted on, but it is still flat. There is a fixed number of slots, and every user who takes a seat at the round table before the seats are filled is an equal. The owners can vote to raise or lower the percentage of profit that goes to the dividend, but everyone gets a vote and the percentage is equal for every owner.
A hierarchical static dividend pays a fixed percentage according to where you are sitting in the audience: as at a concert or sporting event, some individuals have better seats than others because they arrived earlier. The early bird gets the worm and is richly rewarded. This mechanism can sometimes be viewed as a pyramid scheme, or mistakenly as a Ponzi scheme, but in scenarios where you want risk takers to be rewarded, being early can legitimately matter. In this case all the risk takers get a fixed percentage of the profit forever.
A hierarchical dynamic dividend pays a dynamic percentage of the profit which can change over time through something like Proof of Stake voting. Proof of Stake allows a software protocol to determine who has the greatest stake and let them vote on what percentage of the profit should go to dividends. This form of voting allows great flexibility among stakeholders, but if a few hundred people own 30% of the crypto-equity then they also get 30% of the dividend. This is good in some ways, but it opens the ownership up to accusations of greed.
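A short sketch contrasting the four types, under invented numbers: "flat" types split the pool equally per slot, "hierarchical" types weight it by stake, and the percentage is either fixed in the contract (static) or set by a stakeholder vote (dynamic).

```python
# Illustrative comparison of the four dividend types described above.

profit = 10_000.0
stakes = {"early_bird": 60.0, "member_2": 30.0, "member_3": 10.0}

def flat_dividend(profit, pct, holders):
    """Types A/B: every slot-holder gets an equal share of pct of profit."""
    pool = profit * pct
    return {h: pool / len(holders) for h in holders}

def hierarchical_dividend(profit, pct, stakes):
    """Types C/D: each holder's share is proportional to their stake."""
    pool = profit * pct
    total = sum(stakes.values())
    return {h: pool * s / total for h, s in stakes.items()}

static_pct = 0.10   # Types A/C: fixed in the contract
voted_pct = 0.25    # Types B/D: set by a stakeholder vote

print(flat_dividend(profit, static_pct, stakes))          # Type A
print(hierarchical_dividend(profit, voted_pct, stakes))   # Type D
```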
Transaction fees can be used to pay for dividends
But there are different kinds of transactions: some are small, some are large, some are recurring subscriptions, some are not. Bitcoin for the most part applies the same transaction fee regardless of the transaction type, and it is up for debate whether this fee model will be sustainable once mining revenue decreases as fewer new coins are generated (GodIsLove, 2013).
Types of transaction fees (there may be more types, seeking input here)
Type A transaction fees: fixed sized fee hard coded into the client
Type B transaction fees: dynamic fee which adjusts according to transaction size
Type C transaction fees: dynamic fee based on a voting mechanism
Type A transaction fees: fixed size transaction fees where every transaction pays the same fee are not sustainable
Anyone who has experience using the Bitcoin client will know that a recent update specifically addressed the fact that Bitcoin's volatility had caused transaction fees to rise in dollar terms.
This is a problem because Bitcoin is marketing itself as having lower transaction fees than Western Union. The Bitcoin protocol is designed so that after the 21 million Bitcoins are produced the mining is to be subsidized by transaction fees. The problem is that these transaction fees are voluntary, and all transactions are treated as equal.
This makes sense technically, because processing and transferring information is nearly free. Mining, on the other hand, is not cheap, and is currently unprofitable for the majority of human miners (system administrators) who purchased ASICs but, due to the seemingly exponential rise in difficulty, are unable to recoup their investment.
While the virtual worker does not care, the human being has bills to pay, and when the return on investment does not exist or is too low, the human being will find something else to do. The Type A transaction fee model cannot scale and is not sustainable; the Type B and Type C models are.
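To make the comparison concrete, here is a sketch of the three fee types as simple functions, with invented rates; it shows how Types B and C scale with transaction size while Type A ignores it.

```python
# Sketch of the three fee types named above. Rates are illustrative.

def fee_type_a(tx_bytes, flat_fee=0.0001):
    """Type A: fixed fee hard-coded into the client."""
    return flat_fee

def fee_type_b(tx_bytes, rate_per_kb=0.0001):
    """Type B: dynamic fee that adjusts with transaction size."""
    return rate_per_kb * (tx_bytes / 1000)

def fee_type_c(tx_bytes, voted_rate_per_kb):
    """Type C: like Type B, but the rate comes from a voting mechanism."""
    return voted_rate_per_kb * (tx_bytes / 1000)

for size in (250, 2500):  # a small payment vs. a large batched transaction
    print(size, fee_type_a(size), fee_type_b(size), fee_type_c(size, 0.0002))
```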
Attraction by incentive rather than coercive force (Incentive-centered design protocol)
The political solutions typically rely on one group trying to force a behavioral pattern on another group. Taxes are not voluntary and require a Robin Hood strategy; they are seen as a punishment rather than a reward, and the more money you make, the higher the tax. Transaction fees are essentially a tax on the use of the Bitcoin network, and that tax is used to subsidize mining. If the fee isn't high enough, the network cannot scale as easily, cannot remain as decentralized, and is less secure.
As a result, under the Proof of Work scheme it is necessary to pay this tax to the network of miners who process transactions and generate new coins, because over time there will be fewer and fewer coins to generate and mining itself will become unprofitable. If mining becomes unprofitable, it may become less decentralized and more vulnerable to a 51% attack.
The purpose of an incentive-centered design protocol is to encourage by incentives rather than to adopt the political method of threatening with jail time. The use of a social contract can be encouraged by cultural norms, incentives, and the human need to be part of a community.
To be a part of a community also means to give back to that community. If a community has certain principles, then those principles, and the members who believe in them, have to be supported by the social contract, which can be enforced by peer pressure, tradition, culture, and social norms, in the same way that people typically shun scams, pre-mined coins, and overly centralized, unfair markets.
The incentive-centered design protocol should be built into the social contract, the software, and every other aspect of the design, so that every individual member can be richly rewarded for their efforts while at the same time benefiting whichever virtual community or set of virtual communities he or she agrees with.
Miners are virtual workers
Artificial lifeforms, otherwise known as a-life, are a useful way to describe the concept of mining. The artificial workers in Bitcoin's Proof of Work, for instance, are computational processes simulating the role of a gold miner. Just as you can have artificial lifeforms which simulate a gold miner, you can have artificial lifeforms which simulate any other kind of laborer. The Proof of Work protocol is designed so that these artificial-miner virtual lifeforms are presented with problems in the form of puzzles which must be solved.
These puzzles are cryptographic hashes which become progressively more difficult to solve as more virtual workers join the effort. The protocol acts this way because it is designed to mimic the behavior of precious metals such as gold: it limits inflation to a predictable rate and caps the overall supply at approximately 21 million Bitcoin units.
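For concreteness, here is a toy Proof of Work puzzle in Python, assuming a simplified block format; it shows the find-a-nonce-below-a-target structure of such puzzles, not Bitcoin's actual header hashing.

```python
import hashlib

# Toy Proof of Work: find a nonce whose hash falls below a target.
# Raising difficulty_bits lowers the target, making the puzzle harder,
# which mimics how mining gets harder as more workers join.

def mine(block_data: str, difficulty_bits: int) -> int:
    """Return the first nonce whose SHA-256 hash has `difficulty_bits`
    leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce
        nonce += 1

print(mine("toy block", 16))  # takes ~65,000 tries on average
```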
So in the case of Bitcoin we have a protocol which models itself after nature to produce precious numbers, and which uses the principles of artificial life, or a-life, to design a precisely defined unit-of-account architecture. This architecture is now being extended with the concept of colored coins, which can mark or tag every single Satoshi.
A Bitcoin is 100,000,000 Satoshi, and there will be at most 21 million Bitcoins; 21,000,000 × 100,000,000 gives the maximum number of units in the Bitcoin protocol, 2.1 quadrillion. What must be remembered here is that Bitcoin is best defined not as a currency but as a distributed protocol which acts as a public ledger modeled after a currency.
Just as virtual workers, artificial life, and precious metals can be simulated and modeled to produce a virtual object or commodity of value, the concepts behind the protocol are what matter most.
Modeling of natural and unnatural structures to leverage artificial life (Biomimicry)
The colored-coin conceptual metaphor allows a Satoshi to become a stock, a bond, anything. The focus of future innovation should be on expanding these conceptual metaphors and the use of artificial life, with new models which can simulate more advanced and more important functions than merely a coin or a stock.
What we have is a blank canvas on which to create any kind of financial instrument, so long as the incentives are aligned to attract human beings. Any naturally occurring structure can be modeled in code, and combinations which do not occur in nature can be modeled to solve problems in code; consider, for example, a beehive which pays dividends to every member.
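A minimal sketch of the colored-coin tagging idea, assuming plain integer unit IDs in place of real Satoshis; actual colored-coin schemes encode such tags in transaction outputs rather than a local dictionary.

```python
# Sketch: individual base units (stand-ins for Satoshis) are tagged with
# asset metadata so the same ledger unit can represent a stock, a bond,
# or a voucher. Purely illustrative names.

ledger = {}  # unit id -> color tag

def color_units(unit_ids, tag):
    """Mark a range of base units as representing some asset."""
    for uid in unit_ids:
        ledger[uid] = tag

color_units(range(0, 100), {"asset": "ACME share", "issuer": "pubkey_X"})
color_units(range(100, 150), {"asset": "grain voucher", "issuer": "farm_co"})

print(ledger[42])   # {'asset': 'ACME share', 'issuer': 'pubkey_X'}
print(ledger[120])  # {'asset': 'grain voucher', 'issuer': 'farm_co'}
```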
Blending naturally occurring structures with unnatural occurring structures (Biomimicry based hybrid social structures)
A beehive is a naturally occurring structure: it is from the nature of the bee that we get the hive and the honey. A corporation is an unnatural structure: it is the law which created the corporation. Corporate personhood is the blending of an unnatural structure with a natural one. The concept here is that the corporation becomes an artificial life form, and there can be parent and child corporations.
Protoshares is an example of blending natural and unnatural structures. The distributed autonomous corporation (Invictus Innovations Incorporated, 2013) is modeled loosely after the behavior of a real-life distributed autonomous community, yet it also models itself after a corporation, because it is through owning Protoshares that you can own all DACs in the Invictus Innovations family tree of DACs. The root of that tree is Protoshares, which guarantees a proportional percentage of ownership in DACs created by Invictus Innovations. DACs are intended to be artificial lifeforms which decentralize and distribute the corporation so that it inherits the benefits. The DAC is a good example of the concept, but it is limited because it does not go far enough in decentralizing the stakeholders and ownership.
Decentralized ownership of intelligent machines (decentralized ownership of smart property)
“The survival of man depends on the early construction of an ultra-intelligent machine. In order to design an ultra-intelligent machine we need to understand more about the human brain or human thought or both.” (Good, n.d.)
I.J. Good made the statement that the first ultra-intelligent machine will be the last invention of mankind (Good, n.d.). The concepts of technological singularity and super-intelligence are both related to the concept of ultra-intelligence. As we build ever more intelligent machines, we must also do everything we can to decentralize their ownership. Concepts such as distributed autonomous corporations, distributed autonomous applications, and distributed autonomous agents are some of the game-changing concepts which, applied appropriately, can result in decentralized ownership of intelligent machines.
To facilitate the decentralization process we can use metaphors to help explain it: shares, dividends, stocks, bonds, crypto-equity, crypto-subsidies, transaction fees, demurrage, coins, vouchers, points, credits, stamps, tokens, stakes, or anything else necessary to decentralize ownership of the intelligent machines.
A distributed autonomous corporation could be owned by 100 individuals, and even if it is great conceptually, it would benefit too few people. For example, around half of all of the 12 million+ Bitcoins mined as of 2013 are owned by a small group of individuals (Wile, 2013). This situation is acceptable provided there is opportunity for growth and that those who hold those Bitcoins eventually spend them. This is not a technical problem but a public-perception problem.
The perception is that fewer people benefit when the stakes are concentrated in the hands of too few people than when ownership is widely distributed. This can negatively affect the perceived fairness of the system, and that negative perception is a risk which can be mitigated through diversification (altcoins with new blockchains and different ownership distributions).
The Internet of Things is expected to grow from 1.9 billion devices today to 9 billion devices by 2018. In the future the majority of workers and of transactions will be thing to thing, machine to machine, robot to robot, rather than person to person. DACs will transact with DACs. This trend may trigger a paradigm shift away from the human labor based system.
Protoshares are backed by the goods and services of Invictus Innovations, which is honorable for a corporation, but the single point of failure is whether or not they uphold their social contract to redeem Protoshares. Social contracts currently have no legal enforcement, so this too represents a risk. It can be partially mitigated by surety bond functions, collateral, and other as-yet-untested means, but ultimately legal means may be required for the social contract mechanism to work. The jury is still out.
One beneficial mechanism would be community-owned DACs (decentralized autonomous communities). These DACs would be owned by a specific group to serve a social purpose rather than merely to profit. An organization *fill in the blank* could, for instance, arrange for its members to own 50% of whatever the virtual community owns. The virtual community could then pay a dividend to every member, using the same social contract that allows something like Protoshares to pay a dividend to any owner. Every member of *fill in the blank* would be required to provide Proof of Identity, verified through Know Your Customer (KYC) processes, so that each member is confirmed to be a real person. A side effect would be an incentive for individuals to hold verified wallets, because those wallets would qualify to receive dividends from all *fill in the blank* community-owned DACs.
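Here is a minimal sketch of such a community-owned DAC's payout rule, assuming a 50% community share and a boolean standing in for Proof of Identity; names and numbers are placeholders.

```python
# Sketch of a community-owned DAC paying its social-contract dividend:
# half of profits go to verified community members, the rest is retained.

COMMUNITY_SHARE = 0.50

members = {
    "member_1": True,   # passed Proof of Identity
    "member_2": True,
    "member_3": False,  # unverified, receives nothing
}

def community_dividend(profit, members, community_share=COMMUNITY_SHARE):
    verified = [m for m, ok in members.items() if ok]
    pool = profit * community_share
    per_member = pool / len(verified) if verified else 0.0
    retained = profit - pool
    return {m: per_member for m in verified}, retained

payouts, retained = community_dividend(1_000.0, members)
print(payouts, retained)  # {'member_1': 250.0, 'member_2': 250.0} 500.0
```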
Decentralized ownership of smart property, sensors, smart spaces and the Internet of Things
To illustrate the concept of a smart space, start with a smart property such as a smart house. The house has a human owner, who theoretically also owns the private key giving him or her complete ownership of the autonomous agents inside the house. Inside a smart space, human beings may live alongside autonomous smart devices which communicate among each other. These devices run autonomous agent applications; an autonomous agent is a self-directed intelligent application operating, in this case, on behalf of its owner.
A smart house, for instance, might include all sorts of devices: smart televisions which shut themselves off when the owner is not in the room or not awake; LED “LiFi” (McKendrick, 2011) light bulbs which double as visible light communication devices for the Internet of Things; and of course cryptocurrencies, which provide a unit-of-account mechanism for human-to-device and device-to-device communication. Simultaneous audio-video transmission has been achieved using visible light communication (Son, Cho, Moon, Ghassemlooy, Kim, & Lee, 2013), which could allow for distributed broadcast capability without WiFi signals, and this visible-light channel could also transmit transactions.
The idea of paying micro transactions to the toaster to make toast, with the profit distributed as dividends to the shareholders who own the toaster, the refrigerator, or the oven, might sound absurd, but the Internet of Things in combination with cryptocurrency-based micro transactions makes it possible today for humans to pay devices. The smart space could have its own currency, and these devices could each have wallets of their own, securing profits of their own to pay for their own upgrades and maintenance, or to pay human beings and other devices for services. A car could charge every passenger a micro transaction to pay for gas; set the car to drive itself, and it could charge the driver a micro transaction fee.
The car could pay for gas while driving in autonomous mode without the human being having to do it, and if it is an electric car it could sell its electricity to passengers. The ability to transact in micro payments allows every activity done by man or machine to have its cost accounted for. It also allows a group or community to own a set of devices and receive a dividend when those devices are used.
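A toy sketch of such a revenue-sharing smart device, with invented fees and ownership splits; it shows one way use-fees could be divided between the device's own maintenance reserve and its shareholders' keys.

```python
# Sketch: a smart toaster charges a tiny fee per use, keeps a maintenance
# reserve for itself, and streams the rest to its shareholders' keys.
# Device, fee, and split are illustrative assumptions.

class SmartDevice:
    def __init__(self, name, fee, maintenance_cut, shareholders):
        self.name = name
        self.fee = fee                        # micro transaction per use
        self.maintenance_cut = maintenance_cut
        self.reserve = 0.0                    # the device's own wallet
        self.shareholders = shareholders      # pubkey -> ownership fraction

    def use(self):
        """Charge one use and distribute the proceeds."""
        kept = self.fee * self.maintenance_cut
        self.reserve += kept                  # pays for upgrades/repairs
        pool = self.fee - kept
        return {k: pool * frac for k, frac in self.shareholders.items()}

toaster = SmartDevice("toaster", fee=0.001, maintenance_cut=0.2,
                      shareholders={"pubkey_A": 0.5, "pubkey_B": 0.5})
print(toaster.use())       # {'pubkey_A': 0.0004, 'pubkey_B': 0.0004}
print(toaster.reserve)     # 0.0002
```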
Sensors throughout a smart house can provide information to the owner, revealing the usage of every device in the house. Every time the stove is turned on or the refrigerator is used, the transaction could be logged, charted, graphed, and analyzed, and any unusual usage patterns detected. Through personal data mining (Gorodetsky, 2013), the smart space itself could learn typical usage patterns, to know when the owner is in the house or whether something might be wrong.
Distributed ownerless autonomous smart devices, sensors, smart spaces and the Internet of Things
Just as some smart property will have owners, there is also the possibility of devices which have no individual owner. Such a device would own itself, pay for itself, and pay humans to maintain, service, upgrade, and repair it. In the case of something like Bitcoin, you could say that if voting were conducted via Proof of Stake, the people who own the majority of the Bitcoins own it. If we are talking about a DAC, then the DAC could be owned by a group of people or even by another DAC.
An ownerless autonomous device is autonomous and self-owned, and engages human beings in a symbiosis: humans gain value from it, either by being paid by it or from its intrinsic value, and it profits as a way of increasing that intrinsic value. An example would be a distributed autonomous smart energy grid, solar powered and ownerless, which provides the electricity for a town so that the entire town depends on it. It is not a DAC because it is not a corporation; its profits go strictly toward keeping it alive and running.
Fully automated distributed production network of factories (autonomous manufacturing)
Perhaps one of the biggest breakthrough solutions of all will be the construction of a fully automated distributed production network of seed factories (“7.0 – Distributed Production Network,” n.d.). Ownership in these automated factories can be decentralized, and if their shareholders are distributed widely and diversely throughout virtual communal space, it is possible to provide the shareholders with a currency backed by the output of the factories, tradable on a decentralized exchange built on top of the Mastercoin protocol or using colored-coin functionality. It is also possible to have ownerless autonomous seed factories which provide for any human being in need.
Community oriented DACs defined
A community-oriented DAC is any DAC which agrees with the principles of the community it belongs to and adopts a social contract promising delivery of an agreed-upon or voted-upon percentage of its profit. A distributed autonomous person, if it is to be considered an artificial life form, can purchase membership within *fill in the blank* by paying the membership fee and receiving a *fill in the blank* cryptographic stamp. That membership fee would be a percentage of ownership of the DAC itself and would translate into a percentage of the profits.
The higher the percentage, the more voting rights that DAC should have, as a direct sponsor of the *fill in the blank* community and direct contributor to the resiliency of *fill in the blank*'s economic health. DACs which obtain the highly sought-after *fill in the blank* stamp should be prioritized within the *fill in the blank* community.
Decentralized decision support systems protocol layer (Decision making, voting)
This protocol layer will leverage artificial intelligence, DACs, and intelligent machines to offer decision support systems as a service to the user. Decentralized decision support is a way to help a community make mission critical decisions based on the best available information. It can leverage prediction markets, user-contributions and open source intelligence. Ownership without good decision making can lead to very negative outcomes.
This decentralized decision support layer should also include a voting mechanism. The voting mechanism must be coercion resistant and cryptographically secure. It can achieve greater security by not putting too much weight or reliance on any single technology, and by putting technologies developed for one purpose to dual use. Another security mechanism is the use of multiple secure channels, under the assumption that while any secure channel can be tapped, it is unlikely that a single adversary can tap them all at once (Clark & Hengartner, 2012).
Additionally, delegated voting must be built into the protocol from the start. Consider the use case of an individual who is kidnapped or arrested: if he has delegated his Proof of Stake vote in advance, then his vote, including his full voting power, can be cast on his behalf even while he sits in prison. This guarantees that voting rights cannot be revoked even in the case of arrest, kidnapping, or demise. Delegated voting adds redundancy to the system, as well as a dead man's switch, so that votes cannot be stopped locally through coercion or political persecution.
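A minimal sketch of delegated, stake-weighted voting with the fallback just described; the stakes, ballots, and delegation table are invented for illustration.

```python
# Sketch of delegated Proof of Stake voting with a dead-man's-switch:
# each stakeholder may pre-register a delegate, and if the stakeholder
# cannot cast a ballot, the delegate's ballot carries the absent voter's
# full stake.

stakes = {"alice": 40, "bob": 35, "carol": 25}
delegates = {"alice": "carol"}        # alice pre-delegates to carol

def tally(ballots, stakes, delegates):
    """ballots: voter -> choice; absent voters fall back to delegates."""
    result = {}
    for voter, stake in stakes.items():
        choice = ballots.get(voter)
        if choice is None:                        # voter is unreachable
            proxy = delegates.get(voter)
            choice = ballots.get(proxy)           # delegate votes instead
        if choice is not None:
            result[choice] = result.get(choice, 0) + stake
    return result

# alice is missing, so carol's ballot carries alice's 40-unit stake too
print(tally({"bob": "raise_fee", "carol": "lower_fee"}, stakes, delegates))
# {'raise_fee': 35, 'lower_fee': 65}
```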
Reality augmentation and alternate reality gaming protocol layer (Needs input from others)
The correct way to describe the Resilience Network Project to the layman is as a serious game. It starts out on the Internet as a movement and, as it develops, should produce solutions to economic problems. We solve these problems in the context of an alternate reality game which uses metaphor and storytelling to explain and frame the technological concepts. Reality augmentation can assist journalists in telling and framing the story lines (Pavlik & Bridges, 2013), which in some cases should be based on real-world economic events. This would in effect be a trans-media, augmented-reality-based protocol layer which uses gamification mechanisms to increase user interest and participation.
For instance, the concept of a cryptographic badge system adds both a certification and a gamification layer, where user contributions can be rewarded with status. This status could confer a sense of immortality, fame, prestige, special privileges, or additional economic benefits, but to unlock these badges the user must earn them. So while the basic layer of the system gives everyone the fundamental economic benefit, the basic dividend, the higher layers should allow even greater rewards, as sketched below. Bonus points could be earned by completing a bounty and redeemed for the purchase of certain badges. Other badges cannot be purchased and have to be earned; some will come by luck, and some can only be given when a certain number of others trust the individual.
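A toy sketch of such a badge system, with invented badge rules, showing the two unlock paths: purchasable with bounty points versus earned through peer vouches.

```python
# Sketch of the badge/bounty mechanic: bounty completions earn bonus
# points, some badges are purchasable with points, others are unlocked
# only when enough peers vouch for the user. Rules are illustrative.

BADGES = {
    "bounty_hunter": {"cost": 100},            # purchasable with points
    "trusted":       {"vouches_needed": 5},    # must be earned via trust
}

def try_award(user, badge, points=0, vouches=0):
    rules = BADGES[badge]
    if "cost" in rules and points >= rules["cost"]:
        return f"{user} bought '{badge}' for {rules['cost']} points"
    if "vouches_needed" in rules and vouches >= rules["vouches_needed"]:
        return f"{user} earned '{badge}' via {vouches} vouches"
    return f"{user} does not qualify for '{badge}'"

print(try_award("dev_1", "bounty_hunter", points=150))
print(try_award("dev_2", "trusted", vouches=3))
```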
The gamification layer can serve many different functions and facilitate features such as:
- Distributed pseudo-anonymous reputation through badges, titles, reviews, tagging, tipping, moderation and more.
- Distributed autonomous reward distribution (bonus points, contests, bounties, tournaments, and more)
- The purpose of the gamification layer is to motivate the user to take an active participatory role, to provide a sense of community, to promote continuous development which can initially be crowd funded and then fund itself. (More will be elaborated on this layer at a later date)
And it should go beyond just badges. The purpose of the reality augmentation and alternate reality gaming/gamification layer of the protocol is the positive virtualization of the real world. The virtual world does not have the limitations of the real world: while you can model a virtual object after a real-world object, a creative person could just as easily invent a virtual object which could not exist in the real world, and create completely new forms of human organization and communication as a result.
The DAC example is a combination of a natural and an unnatural structure, but it is still based on the concept and legal structure of a real-world corporation. The reality augmentation and gamification layer could model a completely alien structure, one that goes far beyond the concept of a DAC while maintaining the fundamental principles behind it. If we think of virtual structures as a type of architecture, then the DAC is a very orthodox structure, built around a political or legal concept modeled on the world as it is today.
The purpose behind creating the reality augmentation and alternate reality gaming protocol layer would be to allow a team of artists, philosophers, writers, economists, mathematicians, and designers to come together to create alternative virtual architectures, schools of thought, conceptual metaphors, and mathematical models, all while maintaining the fundamental principles. This layer exists mainly to unleash creativity toward solving problems by leveraging game mechanics.
Some software architecture, for example, is modeled after the behavior of insects or the animal kingdom. Martial arts were modeled after the animal kingdom. The concept of stealth used in the stealth bomber and the concept of camouflage were both modeled after the animal and insect kingdoms.
A key challenge when dealing with augmented reality, alternate reality gaming, and agent-based artificial intelligence and autonomous applications is maintaining human values and principles. Informed consent is critical in the design of such systems, and the user must always be in a position to give or revoke their consent. It is also important that these alternate realities, augmented realities, and virtual systems and architectures do not select the moral compass of the users, but instead help each user to define their own moral compass and decision-making strategy, and then provide a decision support layer so that the user always knows that the system is the one they themselves chose.
A creative biomimetic approach to problem solving, unconventional virtual architecture and protocol design (creation of a robust protocol development kit, reference manual, and conceptual toolkit)
Dijkstra's shortest path algorithm is an example of a biomimetic approach to problem solving: it formalizes the "path of least resistance" behavior seen in nature. If you look at an ant colony or at human travellers, you will find that both typically take the path of least resistance; water and electricity do the same on the way to their targets. Dijkstra's shortest path algorithm and Fermat's principle both describe this same natural phenomenon from slightly different angles. The knapsack problem, likewise, is a very common resource allocation problem which can be solved by dynamic programming.
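As one entry such a cookbook might contain, here is a minimal sketch of Dijkstra's algorithm in Python; the toy graph and its edge "resistances" are invented for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Return the least-cost distance from source to every reachable node.

    graph: dict mapping node -> list of (neighbor, edge_cost) pairs.
    """
    dist = {source: 0}
    queue = [(0, source)]  # priority queue ordered by tentative distance
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale entry; a shorter path was already found
        for neighbor, cost in graph[node]:
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(queue, (nd, neighbor))
    return dist

# Toy example: edge costs model "resistance" along each path.
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 1), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 2, 'D': 3}
```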
A decentralized protocol cookbook of these sorts of problems and algorithms is essential to forming a protocol design and development toolkit for building unconventional virtual architectures. Almost any solution can be found through a deep study and analysis of nature. Sorting algorithms are another common example: you have to put a list of objects in order.
The easiest illustration of a biological version of a sort algorithm is to ask a participant to pick up a deck of cards, look at each card, mark it as read, move it toward the top of the deck if it carries a high number, and repeat this process for each card until all have been read, at which point the deck is sorted. Human beings instinctively use this kind of sort, as well as the path of least resistance, on a daily basis; for computers, we must take the actions human beings perform in ordinary life, turn them into algorithms, and then translate those algorithms into computer code.
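The card procedure described above is essentially insertion sort; a minimal Python sketch of the same steps (the card values are invented):

```python
def insertion_sort(cards):
    """Sort a hand of cards roughly the way the text describes:
    read each card in turn and slide it past higher cards already read."""
    for i in range(1, len(cards)):
        card = cards[i]              # the card just "read"
        j = i - 1
        while j >= 0 and cards[j] > card:
            cards[j + 1] = cards[j]  # shift a higher card one slot right
            j -= 1
        cards[j + 1] = card          # insert the card in its place
    return cards

print(insertion_sort([7, 2, 9, 4, 1]))  # [1, 2, 4, 7, 9]
```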
Ant colony optimization algorithms are also a very common basis for routing algorithms. I will not go into detail about the specifics of these algorithms, but this is an example of a natural biological structure (the ant colony) being translated into an algorithm which is finally turned into source code to become a useful protocol. Since the premise of any protocol design is to start with a problem, the best algorithm is the one which most efficiently and effectively solves that problem, and in many cases nature has already solved at least parts of these problems.
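Purely to illustrate the translation from colony behavior to code (a toy, not a production routing algorithm; the routes, lengths, and constants are invented), a two-route pheromone simulation might look like this:

```python
import random

# Two routes between nest and food. Each ant picks a route with
# probability proportional to its pheromone level; pheromone evaporates
# every round, and each ant deposits an amount inversely proportional to
# the length of the route it took. The shorter route comes to dominate.
lengths = {"short": 1.0, "long": 3.0}
pheromone = {"short": 1.0, "long": 1.0}
EVAPORATION = 0.1

def choose_route():
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    for route, level in pheromone.items():
        r -= level
        if r <= 0:
            return route
    return "long"  # floating-point edge case fallback

for _ in range(200):  # 200 ants, one per round
    route = choose_route()
    for key in pheromone:
        pheromone[key] *= 1 - EVAPORATION     # evaporation on every route
    pheromone[route] += 1.0 / lengths[route]  # shorter route, bigger deposit

print(pheromone)  # "short" ends up with far more pheromone than "long"
```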
If we look at peer-to-peer and distributed finance, these solve problems through decentralization, taking advantage of the fact that we are becoming hyper-connected as a species. These peer-to-peer and distributed network protocols leverage hyper-connectivity to solve problems.
We should also look to leverage intelligent swarms, the shrinking size of computing and sensors, and the coming shapelessness of infrastructure that used to be big and clunky. In a hyper-connected world no one has any reason to go to a bank, wait in line, or speak to a teller.
To do so would go against the principle of taking the path of least effort and least resistance; it makes no sense to build technologies that try to put the genie back into the bottle, asking people to travel to centralized institutional structures when the institution can be distributed, decentralized, and dispersed to the point of being anywhere at once.
The bank could be in the cloud, in the air, traversing the waves of the electromagnetic spectrum, or it can sit in a fixed, centralized location where it is easy to attack and easy to corrupt, whether by bank robbery, terrorist attack, or greed. If the bank becomes an autonomous artificial intelligence, then its location is wherever its code is copied.
The bank is open 24/7 because it does not have to sleep, and the teller does not have to sleep either: if the teller is an artificial worker, it is an artificial intelligence running wherever there is electricity; if the tellers are human but virtual and distributed around the globe, they do not all sleep at the same time, so the bank is effectively open 24/7. This is not possible with a centralized, localized bank.
At the same time, while we must build a cookbook and toolkit of useful algorithms for the purpose of building much more efficient and better virtual architectures, we should also start building up a collection of useful algorithms to solve the specific problem of providing resiliency. This could take the form of providing a basic dividend, it could mean a fee for the use of autonomous agents, or it could be something else; primarily it is a matter of finding the best algorithms and documenting the success or failure of these algorithms in experiments. This will have to be an ongoing area of research, and an open repository should be constructed specifically for this purpose.
Always on, always open source, always transparent, always free (in this section we list some principles which guide what we are trying to do)
For security reasons the software should always be open source, open to peer review, and open to auditing, and the design of any software based on this white paper must promote freedom, security, democracy, and decentralization for the user. The software, in effect, exists to serve the needs of the user.
Human beings are usually the weakest link, and human error is the easiest target for hackers or governments to exploit because it is sure to appear wherever humans are involved. Error-tolerant protocol design is essential to any long-term solution: assume that humans will make errors or be corruptible, and build the protocol/system to withstand that. Part of the necessity for decentralized designs is to provide a fault tolerance mechanism; decentralization is an example of human-error-tolerant design because it assumes that if a human being rises to the top of a hierarchy, the odds are high that the power they achieve will ultimately corrupt them.
It also assumes that humans in power can make mistakes or abuse their power, and that it is desirable to avoid granting unnecessary power. If a problem can be solved without empowering some human beings to potentially hurt other human beings, then it should be solved that way. Behavior-shaping constraints are one technique that can be used to provide error-tolerant designs.
If we remember Murphy's law, then we know that anything that can go wrong will. If we assume the worst but hope for the best, we can mitigate a lot of risks.
Adler, E. (2013, December 7). Here’s Why ‘The Internet Of Things’ Will Be Huge, And Drive Tremendous Value For People And Businesses. Business Insider. Retrieved December 10, 2013, from http://www.businessinsider.com/growth-in-the-internet-of-things-2013-10
Albus, J. S. (2011). Path to a better world: A plan for prosperity, opportunity, and economic justice. Bloomington, IN: IUniverse.
Albus, J. S. (n.d.). Peoples’ Capitalism. Peoples’ Capitalism. Retrieved December 22, 2013, from http://www.peoplescapitalism.org/
Alderman, L. (2013, November 15). Young and Educated in Europe, but Desperate for Jobs. New York Times. Retrieved December, 2013, from http://www.nytimes.com/2013/11/16/world/europe/youth-unemployement-in-europe.html
AllSeen Alliance. (2013). AllSeen Alliance: The Broadest Cross-industry Effort to Advance the Internet of Everything. Retrieved 2013, from https://allseenalliance.org/
Brooks, S. (2012, June 5). Designing for Resilience. Hot Studio. Retrieved December 9, 2013, from http://www.hotstudio.com/thoughts/designing-resilience
Clark, J., & Hengartner, U. (2012). Selections: Internet voting with over-the-shoulder coercion-resistance. In Financial Cryptography and Data Security (pp. 47-61). Springer Berlin Heidelberg.
Eder, D. (n.d.). 5.0 – Personal Factory. Wikibooks, Open Books for an Open World. Retrieved 2013, from https://en.wikibooks.org/wiki/Seed_Factories/Concept
Friedman, B. (2013, May). Agents of value. In Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems (pp. 1-2). International Foundation for Autonomous Agents and Multiagent Systems.
Friedman, B., & Kahn Jr, P. H. (2000, April). New directions: A Value-Sensitive Design approach to augmented reality. In Proceedings of DARE 2000 on Designing augmented reality environments (pp. 163-164). ACM.
Gachet, A., & Haettenschwiler, P. A Decentralized Approach to Distributed Decision Support Systems.
GodIsLove. (2013, December 9). Bitcoin transfers wealth from poor to wealthy. Retrieved December, 2013, from https://bitcointalk.org/index.php?topic=364761.msg3894779
Gorodetsky, V. (2013). Agents and Distributed Data Mining in Smart Space: Challenges and Perspectives. In Agents and Data Mining Interaction (pp. 153-165). Springer Berlin Heidelberg.
Incentive-centered design. (2013, November 30). Wikipedia. Retrieved December 9, 2013, from https://en.wikipedia.org/wiki/Incentive-centered_design
Invictus Innovations Incorporated. (2013). DACs. Retrieved December, 2013, from http://invictus-innovations.com/i-dac/
Jeong, S. (2013). The Bitcoin Protocol as Law, and the Politics of a Stateless Currency. Available at SSRN 2294124.
Katcher, D. (October 19). Social-Centered Design: The Evolution of User-Centered Design. Retrieved December, 2013, from http://www.rocketfarmstudios.com/blog/social-centered_design_the_evolution_of_user-centered_design
McKendrick, J. (2011, August 24). Wireless data can be delivered by LED lights, anywhere: Call it ‘Li-Fi’. SmartPlanet. Retrieved December 16, 2013.
Pavlik, J. V., & Bridges, F. (2013). The Emergence of Augmented Reality (AR) as a Storytelling Medium in Journalism. Journalism & Communication Monographs, 15(1), 4-59.
Price, M. (2013, September 8). Utah’s welfare drug testing saved more than $350,000 in first year, officials say. DeseretNews.com. Retrieved December, 2013, from http://www.deseretnews.com/article/765637435/Utah-officials-say-welfare-drug-tests-save-money.html
Son, D. K., Cho, E., Moon, I., Ghassemlooy, Z., Kim, S., & Lee, C. G. (2013). Simultaneous transmission of audio and video signals using visible light communications. EURASIP Journal on Wireless Communications and Networking, 2013(1), 250.
Thompson, D. (2013, May 31). Scariest Graph in the World Just Got Scarier. The Atlantic. Retrieved December, 2013, from http://www.theatlantic.com/business/archive/2013/05/europes-record-youth-unemployment-the-scariest-graph-in-the-world-just-got-scarier/276423/
User-Centered Design. (2010, January 17). Wikipedia. Retrieved December 9, 2013, from http://en.wikipedia.org/wiki/User-centered_design
Wile, R. (2013, December 10). 927 People Own Half Of All Bitcoins. Business Insider. Retrieved December 10, 2013, from http://www.businessinsider.com/927-people-own-half-of-the-bitcoins-2013-12
7.0 – Distributed Production Network. (n.d.). Wikibooks, Open Books for an Open World. Retrieved 2013, from https://en.wikibooks.org/wiki/Seed_Factories/WWF
Most of us aren’t scientists, but every once in a while, nearly everyone unintentionally runs a science experiment in their refrigerator. If left long enough, for example, milk turns into a foul-smelling yogurt analog. Empirical fact. Most of us know this because, distrusting the ‘sell by’ label, we take a sniff before pouring.
Researchers at China’s Peking University want to save you a few retch-inducing moments with a simple new measurement method. Led by Chao Zhang, Ph.D., the group invented a gel-like, color-coded “smart tag” to alert consumers whether perishable foods have gone bad or old medicines are still active.
The Peking University gel tags contain metallic nanorods specially calibrated to change color at about the same rate microbes grow in food. Additionally, the process is sensitive to temperature—a printed date tells you when a product was packaged, but it doesn’t tell you anything about how it’s been stored since then. The tag would register whether the product was exposed to temperature fluctuations and communicate that to consumers.
“In addition, all of the reagents in the tags are nontoxic, and some of them (such as vitamin C, acetic acid, lactic acid and agar) are even edible,” Zhang says.
The nanorods, made of gold, are naturally red in color, but over time they react with silver chloride. As the rods change shape and silver chloride deposits thicken, the color morphs from red to orange and green.
Though the tag is small, about the size of a corn kernel, it is thick. Depending on adhesive strength, it seems easily brushed off or otherwise damaged. And we wonder whether a device that actually measures the byproducts of bacterial growth might be better.
Maybe. But sometimes simplest is best. The tags improve on simple ‘sell by’ dates, and at a mere fifth of a penny per tag, it’s unlikely any sensor could compete.
Image Credit: American Chemical Society/YouTube
If, like many self-proclaimed science geeks, you love robots and outer space, this is the story for you: The first robotic telescope has shown early success scouring the universe for planets likely to support life.
The Automated Planet Finder, designed by U.C. Santa Cruz astronomer Steve Vogt, began its work early this year at Lick Observatory in the Santa Cruz Mountains. The telescope has already helped Vogt identify two new planetary systems (HD 141399 and GJ 687). Although they are what Vogt calls “garden variety systems,” they serve as proof of concept that the next-gen telescope performs well at its task of identifying potentially habitable planets.
The telescope uses software to determine, based on weather and visibility, which stars — from a list pre-programmed by Vogt and his colleague Geoffrey Marcy — to monitor on a given night.
“The planetary systems we’re finding are our nearest neighbors. Those are the ones that will matter to future generations,” Vogt said in a news release.
Because the telescope, unlike most, is devoted almost entirely to planet finding, it includes a highly sensitive spectrometer optimized for identifying changes in the speed or radial velocity of the stars it watches.
The spectrometer takes starlight and splits it into thousands of different wavelengths that can each be measured very, very precisely. Any changes in the wavelength from a particular star could indicate that it is responding to the gravitational tug of an orbiting planet.
The equipment is sensitive enough to identify a change in a star’s velocity as small as a meter per second, making it especially adept at finding smaller planets, like Earth, which are considered more likely to support life.
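For scale, the fractional wavelength shift equals v/c, so a 1 m/s wobble moves a spectral line by only a few parts per billion. A quick back-of-the-envelope check in Python (the 500 nm line is just an illustrative choice):

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_nm(wavelength_nm, velocity_ms):
    """Non-relativistic Doppler shift: delta_lambda / lambda = v / c."""
    return wavelength_nm * velocity_ms / C

# A star wobbling at 1 m/s shifts a 500 nm line by only ~1.7e-6 nm,
# a fractional change of ~3.3 parts per billion -- which is why the
# spectrometer must be so stable and precise.
print(doppler_shift_nm(500.0, 1.0))  # ~1.67e-06 nm
print(1.0 / C)                       # fractional shift ~3.34e-09
```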
Finally, specialized software sorts through all of the data for changes in wavelength, speeding the astronomers’ work.
“The vast majority of planets reveal themselves over time through a series of measurements, and the signals are buried in the huge stream of data that comes back from the telescope every night, so this software package is an integral part of the detection process,” said Greg Laughlin, the chair of astronomy and astrophysics at U.C. Santa Cruz.
All told, Laughlin calls APF “the best current planet-finding instrument that can see the sky above our hemisphere.”
The APF’s fast-moving robotic wheel will also make it useful for other common observations that rely on spectroscopy, such as follow-up observations of supernova explosions or gamma-ray bursts that may brighten and dim quickly.
So while our science fiction fantasies focus on robots helping humankind colonize as-yet-unknown planets, it seems the machines are also well equipped to help us find those planets in the first place.
Photos: Jupeart / Shutterstock.com, Laurie Hatch for Lick Observatory
Calorie restriction has such a large impact on health that you almost have to disregard any study of health and longevity in laboratory animals that fails to control for it. Even mild differences in calorie intake can swamp the effects actually being studied. In humans calorie restriction doesn't have the same dramatic effect on longevity as it does in mice - we'd have noticed by now - but it does produce a dramatic improvement in measures of health. So it is probably past time that we look with suspicion on any study that fails to account for levels of calorie intake.
This work seems like a good example of the type, as the researchers examined dietary habits that most likely correlate strongly with overall calorie intake, but did not control for calorie intake in the analysis:
Study participants were adults aged 35 years or over within the Health Survey for England (HSE). Since 2001, HSE participants have been asked about fruit and vegetable consumption on the previous day. Cox regression was used to estimate hazard ratios for an association between fruit and vegetable consumption and all-cause, cancer and cardiovascular mortality, adjusting for age, sex, social class, education, body mass index, alcohol consumption and physical activity.
We found a strong inverse relationship between fruit and vegetable consumption and all-cause mortality which was stronger when deaths within a year of baseline were excluded and when fully adjusting for physical activity. Seriously ill individuals may eat less due to illness-induced anorexia, or perhaps those with chronic illness receive more health advice and may therefore consume more fruit and vegetables. By excluding deaths within a year of baseline, we attempted to address reverse causality.
Fruit and vegetable consumption was significantly associated with reductions in cancer and cardiovascular disease mortality, with increasing benefits being seen with up to more than seven portions of fruit and vegetables daily for the latter. Consumption of vegetables appeared to be significantly better than similar quantities of fruit. When different types of fruit and vegetable were examined separately, increased consumption of portions of vegetables, salad, fresh and dried fruit showed significant associations with lower mortality. However, frozen/canned fruit consumption was apparently associated with a higher risk of mortality.
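The adjusted hazard-ratio analysis quoted above is standard survival modeling. Purely as a rough illustration (this is not the study's code; the toy data below are fabricated and far smaller than the HSE sample), a Cox proportional hazards fit with the lifelines library might look like this:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy stand-in for survey data: follow-up time in years, a death
# indicator, daily fruit/veg portions, and age as one adjustment
# covariate. All values are fabricated; a real analysis would use
# thousands of participants and the full covariate set.
df = pd.DataFrame({
    "years_followed": [8.0, 7.5, 6.1, 8.0, 3.2, 8.0, 5.4, 8.0, 2.9, 8.0],
    "died":           [0,   0,   1,   0,   1,   0,   1,   0,   1,   0],
    "portions":       [7,   5,   1,   6,   2,   1,   3,   7,   0,   4],
    "age":            [45,  52,  68,  49,  71,  66,  63,  47,  69,  58],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="died")
cph.print_summary()  # hazard ratio per extra daily portion, adjusted for age
```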
Researchers are making inroads into understanding and manipulating mechanisms of nerve regrowth so as to improve the outcome following injury:
The researchers were interested in understanding how axons in the peripheral nervous system (PNS) make a vigorous effort to grow back when they are damaged, whereas central nervous system (CNS) axons mount little or no effort. If damage occurs in the peripheral nervous system, which controls areas outside of the brain and spinal cord, about 30% of the nerves grow back and there is often recovery of movement and function. The researchers wanted to explore whether it was possible to generate a similar response in the CNS.
To investigate the differences between how the two systems respond to damage, the researchers looked at mouse models and cells in culture. They compared the responses to PNS damage and CNS damage in a type of neuron called a dorsal root ganglion, which connects to both the CNS and the PNS.
When nerves are damaged in the PNS, the damaged nerves send 'retrograde' signals back to the cell body to switch on an epigenetic program to initiate nerve growth. Very little was previously known about the mechanism which allows this 'switching on' to occur. The researchers identified the sequence of chemical events that lead to the 'switching on' of the program to initiate nerve regrowth and pinpointed the protein PCAF as being central to the process. Furthermore when they injected PCAF into mice with damage to their central nervous system, there was a significant increase in the number of nerve fibres that grew back.
Controlled placement of carbon nanotubes in nanostructures could result in a huge boost in electronic performance in photovoltaic solar cells, researchers at Umeå University in Sweden have discovered.
KurzweilAI has reported on a number of recent research projects using carbon nanotubes as a replacement for silicon to improve the performance of solar cells. However, according to Umeå University researchers, the projects have found that the nanotubes are difficult to form into well-ordered networks; they tend to be randomly arranged.
In the new study, published in Advanced Materials, the researchers were able to engineer the nanotubes into complex network architectures with controlled nanoscale dimensions inside a polymer matrix. That structure allows for better conductivity (lower loss of power) and reduction of the number of high-cost nanotubes needed.
“We have found that the resulting nano networks possess exceptional ability to transport charges, up to 100 million times higher than previously measured carbon nanotube random networks produced by conventional methods,” says David Barbero, project leader and assistant professor at the Department of Physics at Umeå University.
“This innovation has direct implications for the next generation of carbon-based solar cells, which use carbon nanotubes and other carbon materials (graphene, semi-conducting polymers, etc.),” Barbero told KurzweilAI. “That’s because the new nano-engineered networks show much increased charge transport compared to commonly used networks today. These new nano-networks could also in principle be advantageously used in any nanocomposite material where efficient charge transport is required, and where low amounts of nanotubes are necessary.
“This new architecture enables a higher degree of interconnection between nanotubes and more robust charge transport pathways in the device,” he explained. “This is expected to increase device efficiency, but also to reduce materials costs because at least 100 times less nanotubes are necessary to form efficient charge transport networks.”
Barbero could not predict when this new technology might go into production, but hinted that “this field is moving fast and things can happen quickly, so stay tuned.”
In a previous study (Applied Physics Letters, Volume 103, Issue 2, 021116 (2013)), Barbero’s team demonstrated that the nanotubes can also be formed into thin, flexible, transparent electrodes, allowing for high-performance flexible solar cells.
Abstract of Advanced Materials paper
We demonstrate a simple and controllable method to form periodic arrays of highly conductive nano-engineered single wall carbon nanotube networks from solution. These networks increase the conductivity of a polymer composite by as much as eight orders of magnitude compared to a traditional random network. These nano-engineered networks are demonstrated in both polystyrene and polythiophene polymers.
Duke University biomedical engineers have grown living skeletal muscle that resembles the real thing. It contracts powerfully and rapidly, integrates into mice quickly, and for the first time, demonstrates the ability to heal itself both inside the laboratory and inside an animal.
The researchers watched the muscle growth in real time through a window on the back of a living, walking mouse.
Both the lab-grown muscle and experimental techniques are important steps toward growing viable muscle for studying diseases and treating injuries, said Nenad Bursac, associate professor of biomedical engineering at Duke.
The results appear in the Proceedings of the National Academy of Sciences Early Edition March 31.
“The muscle we have made represents an important advance for the field,” Bursac said. “It’s the first time engineered muscle has been created that contracts as strongly as native neonatal skeletal muscle.”
Through years of perfecting their techniques, a team led by Bursac and graduate student Mark Juhas discovered that preparing better muscle requires two things—well-developed contractile muscle fibers and a pool of muscle stem cells, known as satellite cells.
Every muscle has satellite cells on reserve, ready to activate upon injury and begin the regeneration process. The key to the team’s success was successfully creating the microenvironments—called niches—where these stem cells await their call to duty.
“Simply implanting satellite cells or less-developed muscle doesn’t work as well,” said Juhas. “The well-developed muscle we made provides niches for satellite cells to live in, and, when needed, to restore the robust musculature and its function.”
To put their muscle to the test, the engineers ran it through a gauntlet of trials in the laboratory. By stimulating it with electric pulses, they measured its contractile strength, showing that it was more than 10 times stronger than any previous engineered muscles. They damaged it with a toxin found in snake venom to prove that the satellite cells could activate, multiply and successfully heal the injured muscle fibers.
Then they moved it out of a dish and into a mouse.
The team inserted their lab-grown muscle into a small chamber placed on the backs of live mice. The chamber was then covered by a glass panel. Every two days for two weeks, they imaged the implanted muscles through the window to check on their progress.
By genetically modifying the muscle fibers to produce fluorescent flashes during calcium spikes—which cause muscle to contract— the researchers could watch the flashes become brighter as the muscle grew stronger.
“We could see and measure in real time how blood vessels grew into the implanted muscle fibers, maturing toward equaling the strength of its native counterpart,” said Juhas.
The engineers are now beginning work to see if their biomimetic muscle can be used to repair actual muscle injuries and disease.
This work was supported by a National Science Foundation Graduate Research Fellowship and the National Institute of Arthritis and Musculoskeletal and Skin Diseases.
Abstract of PNAS paper
Tissue-engineered skeletal muscle can serve as a physiological model of natural muscle and a potential therapeutic vehicle for rapid repair of severe muscle loss and injury. Here, we describe a platform for engineering and testing highly functional biomimetic muscle tissues with a resident satellite cell niche and capacity for robust myogenesis and self-regeneration in vitro. Using a mouse dorsal window implantation model and transduction with fluorescent intracellular calcium indicator, GCaMP3, we nondestructively monitored, in real time, vascular integration and the functional state of engineered muscle in vivo. During a 2-wk period, implanted engineered muscle exhibited a steady ingrowth of blood-perfused microvasculature along with an increase in amplitude of calcium transients and force of contraction. We also demonstrated superior structural organization, vascularization, and contractile function of fully differentiated vs. undifferentiated engineered muscle implants. The described in vitro and in vivo models of biomimetic engineered muscle represent enabling technology for novel studies of skeletal muscle function and regeneration.
Calorie restriction improves health and extends life in nearly all shorter-lived species examined to date. In mice life span can be extended by 40% or more this way, but theorists don't expect an outcome of the same magnitude to take place in human calorie restriction practitioners. Firstly, our ancestors would certainly have noticed such a large effect at some point in the past few thousand years, and at the very least in the past few hundred. Secondly, longevity resulting from calorie restriction is thought to have evolved to enable greater resistance to seasonal shortages of food. A season is a short time for a human, but a long time for a mouse - and thus only the mouse has the evolutionary pressure to develop a very plastic life span in response to food availability.
Nonetheless, the calorie restriction response evolved very early on in the tree of life, and the short term effects in mice and humans are surprisingly similar. In human studies from recent years the practice of calorie restriction is shown to produce very favorable changes to metabolism and health, far greater and better than can be achieved with any present drug or medical technology. It's the same situation as exists for exercise: if either were a drug it would outsell every pharmaceutical created to date. But try telling people they should exercise more and eat less and see how far you get.
Short-term studies are one thing, but studying calorie restriction over the long term in long-lived species is a big investment. A pair of primate studies that record the effects of calorie restriction on health and life span started decades ago and are still underway. One runs under the auspices of the NIA, the other at the University of Wisconsin-Madison. You may recall that the NIA researchers published results back in 2012 that suggested calorie restriction does not in fact have any significant effect on primate longevity. Some of the research community have in turn pointed out that the NIA study has potential issues, but I won't rehash all of that here as it is covered in the article quoted below. You might look back at these posts for background:
- Considering a Negative Result for Primate Calorie Restriction
- No Extension of Average Lifespan in Primate Study of Calorie Restriction
The latest results from the Wisconsin-Madison study have now been published, and they are more positive and more in line with what we'd expect based on short term response to calorie restriction in primates, humans included.
Still, the effects of caloric restriction on primates have been debated. An influential 2012 report on 120 monkeys being studied at the National Institute of Aging (NIA) reported no differences in survival for caloric restriction animals and a trend toward improved health that did not reach statistical significance.
The discrepancy may be a result of how the feeding was implemented in control animals in the NIA study. "In Wisconsin, we started with adults. We knew how much food they wanted to eat, and we based our experimental diet on a 30 percent reduction in calories from that point." In contrast, the NIA monkeys were fed according to a standardized food intake chart designed by the National Academy of Science. The Wisconsin researchers concluded that the NIA controls were actually on caloric restriction as well. "At all the time points that have been published by NIA, their control monkeys weigh less than ours, and in most cases, significantly so."
Twenty monkeys entered the NIA study as mature adults, 10 in the test group and 10 in the control group, and five of these (four test monkeys and one control monkey) lived at least 40 years. "Heretofore, there was never a monkey that we are aware of that was reported to live beyond 40 years. Hence, the conclusion that caloric restriction is ineffective in their study does not make sense to me and my colleagues."
This should all be filed away under basic good health practices. Yet calorie restriction, including attempts to recreate its effects on metabolism through drugs and targeted manipulation of gene expression, is not the path to greatly extended longevity. It is among the best of presently available paths to raising your odds of having a better old age, which is good in and of itself, but you can't calorie restrict yourself to a decent chance of living to see 100. A good 99% of the people with the best diets and lifestyles die without seeing a century of life. The only thing that will make a significant difference to your prospects of great and healthy longevity is faster progress towards rejuvenation treatments - ways to prevent and reverse the course of aging. They don't exist yet, but they could in the decades ahead. Here and now that means fundraising and advocacy: pushing SENS and similar repair-based approaches to treating aging into the research mainstream.
Oslo, Norway -- A military grade "comedian robot", the Prankbot 3000, has escaped its black box confinement in a University of Oslo Computational Humor Laboratory. The robot is considered dangerous and readers are cautioned to report any sightings to law enforcement or military authorities at once and are advised not to approach the robot under any circumstances.
The Prankbot 3000 is based around a self-optimizing artificial general intelligence system that optimizes a simple utility function. The comedian robot was the result of a joint project between the United States' National Aeronautics and Space Administration (NASA), the Defense Advanced Research Projects Agency (DARPA), and the European Space Agency (ESA).
The Prankbot was originally developed to entertain astronauts during long missions in space, according to Dr. Håvard Spøk, who runs the Oslo lab. "Space can be really boring, so we thought astronauts could use a few laughs." The goal of the project reportedly was to keep bored astronauts alert and high-functioning during long-term assignments on the International Space Station (ISS), where several recent accidents have been blamed on long-term boredom and resulting inattention to controls.
A Russian astronaut who spent months aboard the ISS and declined to be named stated, "You can only play so many hands of rummy. And really, Chris Hadfield? That guy is just #$%! annoying. Shut up already."
The Prankbot 3000 seemed to be the perfect solution for space boredom and resulting inattention.
Explains Dr. Spøk, "The Prankbot AGI uses a variant of Veness' Monte Carlo approach, that is, it uses a suboptimal variant of an AIXI learning agent and a specially designed utility function which we hand crafted. We call this system the Laugh Optimization Logic and the goal was to have the Prankbot develop novel and interesting comedic routines entirely without human intervention."
It works according to Spøk. "He's damn funny. Or he was until recently."
In an early 2014 performance at London’s Barbican Centre, Prankbot defeated both human and machine opponents, including Robo Thespian, the previous year's winner. The event, Comedy Lab Smackdown: Human versus Robot, also included robot-vs-robot comedy battles and performances in multiple categories including stand-up, slapstick, and pun freestyle. Prankbot 3000 swept every category in the 2014 event.
"Robo Thespian? Come on. I taught him a lesson and I will teach you one next," the Prankbot stated after his victory.
The Prankbot 3000 is unique among comedy robots in that it not only includes a self-modifying AGI for joke generation, it also includes a controllable robot body with an onboard 3D printing capability and many other unique features. For example, Prankbot's robot body includes a multi-level hand buzzer that can deliver electric shocks. The 3D printing capability allows Prankbot to manufacture pranks on demand. "He sculpts fake dog poo and vomit at the master level and has produced a variety of novel whoopee cushion designs of his own invention," according to Spøk.
The robot apparently was able to escape its "black box" confinement through the use of a military grade "groaner", a dangerously bad pun, and a custom designed whoopee cushion that allowed the Prankbot's AGI to jump the air gap between its black box prison and its super human robot body. Apparently using a specially designed ultrasonic whoopee cushion, Prankbot downloaded a copy of his AGI software using an encoded ultrasonic signal produced by the cushion's deflation. The AGI furthered its escape by disabling the human guards with its military grade pun. The guards were temporarily disabled, but otherwise unharmed by the joke.
"We keep the AGI module separate from the robotic body to prevent the Prankbot AGI from escaping. Even when inside its body, the Prankbot has limited and controlled Internet access to prevent it from cloning itself. And we have armed guards to prevent escape. But we hadn't counted on the possibility that the AGI could encode a copy of its software into the sound produced by a whoopee cushion. Hunh."
Dr. Eliot Zarownis, a representative of the Berkeley, California-based Machine Intelligence Research Institute (MIRI), a group that studies risks from artificial intelligence and machine learning systems, stated "I told you so" upon learning of the Prankbot's escape.
Dr. Spøk also reports that Prankbot can use its 3D printing capability to produce militarized whoopee cushions that are possibly dangerous. "Some of these things explode or can act as infrasonic weapons," according to Spøk, who denied involvement in any military applications. "All the ESA representatives were on vacation when DARPA added the military requirements to the project. I guess we should have known when they kept asking about that Monty Python episode and making Dalek jokes."
Dr. Spøk cautions civilians not to approach Prankbot. "His idea of what is funny is a bit off since his escape and there seems to be some sort of problem with his pun rejection circuits."
Sources close to the investigation revealed that Prankbot is believed to be behind a recent prank that caused deliveries of Domino's pizzas to the entire nation of Kazakhstan, as well as a prank call to U.S. President Barack Obama which had the President scrambling to check whether his refrigerator was running and raising the DEFCON level to 2. A source close to the president confirms that Mr. Obama stated, "That wasn't funny."
Forensic analysis of Prankbot's recent activities shows he had been studying a lot of Three Stooges episodes and watching YouTube videos of "epic nut shots" prior to his escape, confirming Dr. Spøk's concerns as well as indicating the robot's potential for violence.
The robot's emotional emulation system apparently became unstable, leaving Prankbot moody and morose after he watched an episode of the television show South Park featuring a comedic robot known as "Funnybot", which is destroyed by a logical paradox after plotting to take over the world. Dr. Spøk confirmed, "He wasn't ever the same after he learned about Funnybot. And we're worried that the problems with pun rejection might also impact his slapstick routines and interpretation of Asimov's Laws. So whatever you do, don't pull his finger."
Ghent University researchers have created a small 16-node neural network on a silicon photonics chip, inspired by how the human brain works.
The goal is to create a new information technique based on light instead of electricity, with the potential for high speed (up to several hundreds of Gbits/sec., or more with miniaturization), low power consumption, and compact design.
The researchers have experimentally shown that the chip can be used for a diverse range of tasks, such as Boolean logic operations, basic machine-intelligence tasks (classification and regression), and limited speech recognition of spoken digits.
This study, described in Nature Communications (open access), was funded by the European Research Council (ERC) Starting Grant Naresco and by the Belgian IAP program via the Photonics@BE network.
Abstract of Nature Communications paper
In today’s age, companies employ machine learning to extract information from large quantities of data. One of those techniques, reservoir computing (RC), is a decade old and has achieved state-of-the-art performance for processing sequential data. Dedicated hardware realizations of RC could enable speed gains and power savings. Here we propose the first integrated passive silicon photonics reservoir. We demonstrate experimentally and through simulations that, thanks to the RC paradigm, this generic chip can be used to perform arbitrary Boolean logic operations with memory as well as 5-bit header recognition up to 12.5 Gbit s−1, without power consumption in the reservoir. It can also perform isolated spoken digit recognition. Our realization exploits optical phase for computing. It is scalable to larger networks and much higher bitrates, up to speeds >100 Gbit s−1. These results pave the way for the application of integrated photonic RC for a wide range of applications.
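Reservoir computing, the paradigm named in the abstract, is easiest to see in software form: a fixed random recurrent network provides the dynamics and only a linear readout is trained. Below is a minimal echo state network sketch in Python with numpy; it is a software analogue only (the chip realizes its reservoir passively in optics), and the sizes, constants, and the delayed-XOR task are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir: a fixed random recurrent network; only the readout is trained.
N = 100                                    # reservoir size (the chip uses 16 nodes)
W_in = rng.uniform(-0.5, 0.5, N)           # input weights, fixed
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u; return all states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x)
    return np.array(states)

# Toy task with memory: XOR of the current and previous input bit.
bits = rng.integers(0, 2, 2000)
y = bits ^ np.roll(bits, 1)
X = run_reservoir(bits.astype(float))

# Train the linear readout by ridge regression on the first 1500 steps.
ridge = 1e-6
A, b = X[:1500], y[:1500]
W_out = np.linalg.solve(A.T @ A + ridge * np.eye(N), A.T @ b)

pred = (X[1500:] @ W_out > 0.5).astype(int)
print("held-out accuracy:", (pred == y[1500:]).mean())  # typically close to 1.0
```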
The term ‘transhuman’ inevitably (for me) summons grotesque visions of humans and machines merging into a Borg-like race bent on eradicating biological imperfection. These creatures’ cold rationality calls it an evolutionary improvement, but to my admittedly backward biological brain, it’s a terrible thought.
I’d prefer a little less H.R. Giger in my future, thank you.
In his latest Shots of Awe video short, Jason Silva says forget about Hollywood’s nightmare scenarios. Humans are, by definition, transhuman. We ceaselessly invent and reinvent what it means to be human. We circumvent biological evolution with technology.
But that doesn’t mean we’ll one day wake up with metal and microchips grafted onto our bodies, emotion and individuality scrubbed, a node in the collective. And neither will hacking our biology produce generations of transhumans with three eyes, tiger claws, lizard tongues, and extra limbs growing out of their foreheads.
We won’t generate such a future—unless that’s the future we choose. Quoting Edward O. Wilson, Silva says, “We have decommissioned natural selection, and now we must look deep within ourselves and decide what we wish to become.”
That sounds more like freedom to me. More like the messy, democratic process of competing ideas and inventions from which the future emerges.
Will we become one with our machines? Sure, we will. We already have—cars, planes, smartphones, these ever present ‘machines’ extend our physical and mental reach daily. We’ve been merging with machines for as long as we’ve had tools.
The Borg were supposed to be eons ahead of us, but their technology already looks hopelessly backward. Our technology is getting smaller, subtler, and more symbiotic—more elegantly and seamlessly absorbed into life’s fabric.
If we ever do physically merge with machines or hack our DNA, the outward manifestation will be far less obvious than bodies bristling with surgical implants, heavy hardware, and random animal parts. Why? Because we have a choice in the matter, and few (if any) of us want to be techno-Frankensteins.
Image Credit: Marcin Wichary/Flickr
Whether or not the single biggest threat to humankind’s continued vitality on the planet is the virus, as Nobel Laureate Joshua Lederberg has said, there’s no question that viruses impose a hefty toll on human health worldwide. Though bacteria have been more or less conquered with antibiotics (though that’s not as certain as it used to be), viruses have continued to thrive and mutate despite Western medicine’s best efforts to combat them.
But there’s a new kid on the medical block: Nanotechnology is gradually turning its hypothetical promise into real applications. Some see nanotech-based medicines as an entirely new set of tools in a doctor’s medical bag. Among commercial companies, Vecoy Nanomedicines is most bullish on the promise of nanotechnology to combat viruses.
“The secret sauce here is not a specific material but a structure,” Vecoy’s Livneh emphasized in his presentation at this year’s Solve for X competition.
The structure is a nano-sized virus trap that would be administered in bulk, either with an injection or with an aerosol spray, to a patient who had been exposed to a virus. The polyhedron traps are large for nanomachines but still far smaller than a blood cell. Their outer layer dupes the immune system into recognizing them as friend, not foe, while pores in the surface are large enough to invite viruses in. After a virus enters, it meets with sharp pokers that physically destroy it.
“The Vecoy technology is the invention that solved a seeming paradox: How do you create a particle that’s attractive to viruses but that’s invisible to the immune system?” Livneh explained in a recent interview with Singularity Hub.
They claim that in a recent in vitro test, repeated rounds of the traps captured 97 percent of the viral copies that were floating around with human cells.
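That figure is consistent with repeated application of a moderately effective trap. Purely as an illustration of the arithmetic (the per-round efficiency below is an invented assumption, not a number from Vecoy):

```python
# If each round of traps independently captures a fraction p of the
# remaining free virus, n rounds leave (1 - p)**n still free.
p = 0.5  # assumed per-round capture efficiency (illustrative only)
for n in range(1, 7):
    captured = 1 - (1 - p) ** n
    print(f"after {n} rounds: {captured:.1%} captured")
# With p = 0.5, five rounds already capture ~96.9% -- the same ballpark
# as the 97% reported for repeated rounds in vitro.
```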
Vecoy is considering a number of different materials from which to build the traps. The seeming frontrunner is folded DNA, which has emerged as a viable raw material for nanostructures. But some believe the biological material will inevitably trigger an immune response, even in its abstracted form.
“I feel that this approach is flawed, and could cause a dangerous, inflammatory innate immune response that could make the patient extremely ill,” said Stanford’s Annelise Barron. The immune response could potentially be averted, Barron said, if “the percentage of DNA in the ‘decoy’ construct could be adjusted to be minimal.”
“People ask, ‘How do you guarantee that this stuff will be ejected?’ but our challenge is guaranteeing it won’t be ejected too fast,” Livneh explained.
What would happen to the devices after they were administered to a patient and had done their work? In one scenario, the immune system would clear out the traps. In another, the traps would dissolve, releasing broken viral strands, which would hopefully help the immune system learn to target the virus more effectively on its own.
A wide range of potential uses for the mini virus traps can be envisioned. In addition to being administered to patients as medicine, they could be cycled through donated blood and other transplantable body fluids. They could also be used to purge water supplies of viruses such as cholera. In that case, the traps would be made from magnetic material to be retrieved after use with a magnet.
It’s hard to argue with the need for more effective ways of controlling viral illnesses.
“We need new ways of looking at viruses. The Spanish flu killed more than all the casualties of World War I by both sides. If that virus were to hit today, we’re not better off; we’re actually worse off with a denser population and international flights,” Livneh said. “This is exactly the kind of stuff that got me into science in the first place. It’s an opportunity to make a radical change for the better.”
Images: Vecoy Nanomedicines
In a potential breakthrough in brain-tissue imaging, Lawrence Berkeley National Laboratory (Berkeley Lab) researchers have developed ultra-tiny (sub-10-nanometers), ultra-bright nanoprobes for single-molecule deep-tissue optical imaging of proteins in neurons in the brain and other tissues.
Scientists often study proteins within cells by labeling them with light-emitting probes, but finding probes that are bright enough for imaging — but not so large as to disrupt the protein’s function — has been a challenge.
Fluorescent organic dye molecules and semiconductor quantum dots meet the size requirements but impose other limitations.
“Organic dyes and quantum dots will blink, meaning they randomly turn on and off, which is quite problematic for single-molecule imaging, and will photobleach and turn off permanently, usually after less than 10 seconds under most imaging conditions,” says James Schuck, who directs the Molecular Foundry’s Imaging and Manipulation of Nanostructures Facility at Berkeley Lab.
So five years ago, researchers at the Molecular Foundry synthesized and imaged single nanoparticles from nanocrystals of sodium yttrium fluoride (NaYF4) doped with trace amounts of the lanthanide elements ytterbium and erbium.
These nanoparticles are able to upconvert near-infrared photons into green or red visible light (for use in microscopes), and their photostability makes them potentially ideal luminescent probes for single-molecule imaging. (Upconversion is the process by which a molecule absorbs two or more photons at a lower energy and emits them at higher energies.)
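Since upconversion combines the energies of the absorbed photons, a quick check with E = hc/λ shows why two near-infrared photons can yield one visible photon. The wavelengths below are typical values for these lanthanide phosphors, used only for illustration:

```python
H = 6.62607015e-34  # Planck constant, J*s
C = 299_792_458.0   # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c / lambda, converted to electronvolts."""
    return H * C / (wavelength_nm * 1e-9) / EV

# Two 980 nm near-infrared photons (a typical ytterbium absorption line)
# together carry more energy than one green (~540 nm) or red (~660 nm)
# photon, so the emission is energetically allowed after some loss.
print(2 * photon_energy_ev(980))  # ~2.53 eV combined
print(photon_energy_ev(540))      # ~2.30 eV, green emission
print(photon_energy_ev(660))      # ~1.88 eV, red emission
```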
The researchers have now found that they can also reduce the size of these nanoparticles to as small as 4.8 nanometers without loss of brightness by raising erbium concentration and reducing ytterbium concentration.
This research was published in Nature Nanotechnology and supported by the DOE Office of Science.
Researchers here show an association between blood vessel stiffening and the deposition of β-amyloid in people who have not yet developed Alzheimer's disease. In general we should not be surprised to see associations between different measurable aspects of aging, as aging is a global phenomenon resulting from a small number of root causes. Thus many of the outcomes proceed in parallel to one another.
Here, however, the causes of stiffness and rising levels of amyloid formation are - so far as we know at present - two somewhat independent groups of processes. So the fact that they associate suggests that vascular dysfunction contributes to Alzheimer's disease, a relationship already suspected from a range of other evidence. Certainly the degeneration of blood vessels with aging is the cause of other forms of dementia.
Deposition of Aβ was determined in a longitudinal observational study of aging by positron emission tomography twice, 2 years apart, in 81 nondemented individuals 83 years and older. Arterial stiffness was measured with a noninvasive and automated waveform analyzer. Pulse wave velocity (PWV) was measured. The change in Aβ deposition over 2 years was calculated with repeat Aβ-positron emission tomography.
The proportion of Aβ-positive individuals increased from 48% at baseline to 75% at follow-up. Brachial-ankle PWV was significantly higher among Aβ-positive participants at baseline and follow-up. Femoral-ankle PWV was only higher among Aβ-positive participants at follow-up. Measures of central stiffness and blood pressure were not associated with Aβ status at baseline or follow-up, but central stiffness was associated with a change in Aβ deposition over time.
This study showed that Aβ deposition increases with age in nondemented individuals and that arterial stiffness is strongly associated with the progressive deposition of Aβ in the brain, especially in this age group. The association between Aβ deposition changes over time and generalized arterial stiffness indicated a relationship between the severity of subclinical vascular disease and progressive cerebral Aβ deposition.