• Before posting an article from a specific source, check this list here to see how much the Orange Room trusts it. You can also vote, or change your vote, based on the source's track record.

Live scientific and technological news



Active Member
This HD Video From Space Is Going To Change The World

This imagery is from the world's first commercially available, daily HD video satellite system.

The footage, from Skybox Imaging's SkySat-1 micro-satellite, is as mesmerizing as the implications are powerful.

In a blog post this week, John Clark of Skybox wrote:

Businesses can, for the first time, monitor a network of globally distributed assets with full-motion snapshots without needing to deploy an aircraft or field team. The movement captured in these short video windows, up to 90 seconds in length, yields unique insights that improve operational decisions.

The U.S. company's vision is to "leverage timely satellite data to provide insight into daily global activity."

Digital cartographers are figuring out ways to use the new source of geospatial data, but simply being able to watch part of the world several times a day from space is profound.

What those insights will be, and who will benefit from them, remains to be seen. No one outside the military has ever had access to data like this: theoretically, one could follow individual people from space.

Nevertheless, the potential in areas such as agriculture, airports, asset monitoring, security, supply chain management and nuclear plants is vast.

The technology is cutting edge. The company asked: “What’s the smallest box I can fit something of real commercial value into?”

The challenge is that space assets are traditionally extremely valuable, extremely expensive, and extremely risky.

So the next technological frontier for quality imaging from space involves systems that can both capture data of high enough quality (resolution) to show economic activity and be cost-effective enough to deploy in large numbers (timeliness).

And Skybox thinks it has nailed it, given that the circuitry of the SkySat is about the size of a phone book and consumes less power than a 100-watt light bulb.

Skybox is currently taking off as it sells its full-motion video and imagery systems and builds SkyNode ground stations in various countries. The company is now planning a constellation of 24 satellites that can cover almost the entire expanse of the Earth.

Welcome to the future.

Here's another video of the real-time HD imagery:

source Business Insider
EuroMode


    Active Member
    Stevia wonder: The plant that's a super sugar alternative – and free from calories and carbs

    It can be hard to keep up with all the bad news on sugar – or the smoking of our time as it's rapidly becoming known. It has become this generation's ticking time-bomb, leaving a trail of diabetes and obesity in its wake. Last week, the World Health Organisation added its voice to the fray, warning that sugar should make up just 5 per cent of our daily calorie intake, half what it had previously advised.
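    As a back-of-envelope illustration of what that 5 per cent guideline means (assuming a typical 2,000 kcal daily intake and the usual 4 kcal per gram of sugar; neither figure is from the article):

```python
# Rough illustration of the WHO guideline: sugar limited to 5% of daily calories.
# Assumes a typical 2,000 kcal/day intake and 4 kcal per gram of sugar.
DAILY_KCAL = 2000
SUGAR_KCAL_PER_GRAM = 4

limit_kcal = 0.05 * DAILY_KCAL                   # kcal allowed from sugar
limit_grams = limit_kcal / SUGAR_KCAL_PER_GRAM   # about six teaspoons

print(f"Sugar limit: {limit_kcal:.0f} kcal, roughly {limit_grams:.0f} g per day")
```

    On those assumptions the new advice works out to about 25 g of sugar a day, half the previous 50 g.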

    But help for the sweet-toothed – which, given that manufacturers spike even the most wholesome-sounding cereals with sugar, means practically everyone – is at hand.

    From their cupboard of substitutes, food science analysts report that salvation lies in a naturally sourced substance called stevia, which has no calories, no carbohydrates, and does not raise blood sugar levels. It comes from a plant that has been used as a sweetener for centuries in Paraguay and Brazil, and has been sold in Japan for about 40 years, yet the West has been slow to wake up to its virtues. Stevia-based products have only been approved as food additives since 2008 in the US, and since 2011 in the EU.

    A recent report by Mintel and Leatherhead Food Research predicted that the value of such products, which are mainly manufactured by the food giant Cargill, would soar to $275m by 2017 from $110m in 2013.

    One drawback is that despite being between 250 and 300 times sweeter than sugar, some people find it has a slightly bitter, liquorice-like aftertaste. But companies are getting round this by blending it with – sugar. Tropicana recently launched a juice made with 50 per cent stevia and 50 per cent sugar, halving the number of calories per serving. And Coca-Cola is poised to launch its stevia-sweetened alternative to Coke across the world. It already sells a version of Sprite that includes stevia.

    Laura Jones, a food science analyst at Mintel, said: "Stevia is the one to watch. It's still early in the innovation process, but it will become more appealing as new variants are released. Consumers want to cut sugar in their diets but not compromise on taste, plus they want to move away from anything artificial, so the appeal of plant-derived products is much stronger."

    People are increasingly avoiding artificial sweeteners such as aspartame and acesulfame K. But dieticians warn this is a mistake. "There are some misconceptions that they're dangerous but there is no evidence that any are harmful," said Cara Sloss, a spokeswoman for the British Dietetic Association.

    Some consumers may dislike their taste, but they don't pack anything like the calorific punch of sugar, which has 400 kilocalories in every 100 grams. The use of intense sweeteners in food and drink product launches has grown from 3.5 per cent in 2009 to 5.5 per cent in 2012, the same report found. The global market for all sweeteners as additives in food manufacture was worth more than $2bn in 2012.
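    Putting the numbers above together (sugar at 400 kcal per 100 g, and stevia taken at 275 times sweeter, the midpoint of the 250-300 range), a quick sketch of the trade-off:

```python
# Sketch of the sweetness arithmetic above: how little stevia matches the
# sweetness of a given amount of sugar, and the calories saved by the swap.
SUGAR_KCAL_PER_100G = 400
SWEETNESS_FACTOR = 275  # midpoint of the 250-300x range cited above

sugar_grams = 100
stevia_grams = sugar_grams / SWEETNESS_FACTOR            # grams for equal sweetness
calories_saved = sugar_grams * SUGAR_KCAL_PER_100G / 100  # stevia itself has none

print(f"{stevia_grams:.2f} g of stevia is as sweet as {sugar_grams} g of sugar, "
      f"saving {calories_saved:.0f} kcal")
```

    This is also why blends like Tropicana's 50/50 juice halve the calories: replacing half the sugar removes half its 400 kcal per 100 g while a fraction of a gram of stevia restores the sweetness.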

    Other natural alternatives include the fuzzy, green, melon-like monk fruit, once cultivated by Buddhist monks in China. It is already used in the US, where analysts believe it could help to revive the flagging diet soda sector.

    The "main message", though, says Ms Sloss, is that we need to cut down. "It's about re-educating your tastebuds, because we know sugar is addictive."

    Grow your own 'sugar'

    Stevia may sound like it's made in a laboratory, but it is in fact a plant that anyone can grow at home. Yet strict EU rules mean that it can't be grown for domestic human consumption in the UK – even though gardeners in the US can do so – and can only be cultivated as an ornamental herb. But there are other options for people who want to grow their own "sugar". Sweet cicely (Myrrhis odorata) can be used as a sugar substitute – the seeds and dried leaves can be added to fruit pies and crumbles, while the flowers, and even the roots, are also good for salads or cooking. Gardener Sarah Raven says the plant adds a "gentle aniseed flavour" to dishes.

    source Independent


    Active Member
    Girls' brains are more 'resilient' than boys' to disorders such as autism and ADHD

    New research suggests that girls’ brains are more “resilient” than boys’ to neurodevelopmental conditions such as autism.

    The cohort study found evidence supporting the “female protective model”, a theory that suggests that females require more extreme genetic mutations than males before they develop certain sorts of disorders.

    This would account for the gender difference for conditions such as autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD), both of which are diagnosed more frequently in males than females.

    An alternative theory accounting for this difference suggests that social bias is responsible, although this research – which looked at genetic data from more than 15,000 individuals – found that females diagnosed with ASD had a greater number of harmful genetic mutations than males.

    "The data suggests - and it would require additional experiments to really prove this - but it looks like there is a resilience in brain development that is much higher in females than in males," lead author Sebastien Jacquemont of the University Hospital of Lausanne, Switzerland, told the BBC.

    "You can 'break' neurodevelopment in males much easier than you can in females."

    Jacquemont said he hoped that the research might lead to the development of "more sensitive, gender-specific approaches for the diagnostic screening of neurodevelopmental disorders."

    One possible explanation of the imbalance is that females’ extra X chromosome helps them compensate for any extreme genetic damage.

    Evan Eichler, a co-author of the study from the University of Washington, said that the results of the study could be interpreted in two ways: either girls are more protected from bad mutations or boys are more susceptible.

    "It takes more insult in the genome of a girl to push them over a threshold to develop autism or to develop developmental delay compared to a boy,” said Eichler.

    source independent


    Active Member
    Stephen Hawking claims victory in gravitational wave bet

    Stephen Hawking has claimed victory in a bet with a fellow scientist over the discovery of primordial gravitational waves, ripples in the structure of space-time from the birth of the universe.

    The Cambridge cosmologist bet Neil Turok, director of the Perimeter Institute in Canada, that gravitational waves from the first fleeting moments after the big bang would be detected.

    Speaking on BBC Radio 4's Today programme, Hawking said the discovery of gravitational waves, announced on Monday by researchers at the Harvard-Smithsonian Center for Astrophysics, disproves Turok's theory that the universe cycles endlessly from one big bang to another.

    If confirmed by other groups, the discovery would count as the strongest evidence yet for cosmic inflation, a theory which says that the universe went through a period of extremely rapid expansion soon after the big bang. The theory explains why the universe looks almost the same in every direction.

    "It is another confirmation of inflation," Hawking told the Today programme. "It also means I win a bet with Neil Turok, director of the Perimeter Institute in Canada, for cyclic universe theory predicts no gravitational waves from the early universe."

    But Turok was not ready to concede just yet. He told the programme that the bet rested on results from the European Space Agency's Planck space telescope, which last year failed to spot any signs of gravitational waves.

    "In 2001, I gave a talk proposing a new theory of the big bang according to which the big bang was just the latest in an infinite series of big bangs, and the universe would be a cyclic universe," Turok said. "Stephen, in typical fashion, at the end of a talk, said 'I bet you that the Planck satellite will discover the gravitational wave signal of inflation, which would immediately disprove your theory', because our prediction from our theory was that there would be no gravitational wave signal."

    "So, of course, the Planck satellite flew, and last year announced its results, and there was no gravitational wave signal, so thus far, I'm winning the bet," he added.

    The idea of cosmic inflation came to Alan Guth, a physicist at MIT, by chance one evening in 1979. He was up late in his apartment, working with pen and notebook, hoping to understand why the universe was not filled with strange particles called magnetic monopoles. He worked out that the universe would have far fewer of the particles if it went through a rapid period of supercooling. As he worked through the equations, one step stood out. It suggested that the expansion of the early universe would be exponential.
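    The exponential expansion Guth found can be illustrated with a toy calculation. The scale factor grows as a(t) = a0·e^(Ht), so equal intervals of time multiply the size of the universe by the same factor; the 60 "e-folds" used below is a commonly quoted rough figure for inflation, not one taken from the article:

```python
import math

# Toy illustration of inflation's exponential expansion: the scale factor
# grows as a(t) = a0 * exp(H * t), so each "e-fold" (H*t increasing by 1)
# multiplies cosmic distances by e. 60 e-folds is an illustrative figure.
def scale_factor(a0: float, efolds: float) -> float:
    return a0 * math.exp(efolds)

growth = scale_factor(1.0, 60) / scale_factor(1.0, 0)
print(f"After 60 e-folds, distances are about {growth:.1e} times larger")
```

    That factor of roughly 10^26 in a tiny fraction of a second is what stretches any initial lumpiness smooth, which is why the universe looks so uniform in every direction.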

    Over the next three decades, scientists, including Andrei Linde at Stanford University, developed the theory into its modern form. In 1982, Hawking added to the work with a paper that suggested galaxies arose from tiny irregularities in the early universe.

    "This paper aroused interest among other scientists who had been thinking on similar lines, so I invited them all to a workshop in Cambridge in June 1982 supported by the Nuffield Foundation. At the workshop we established the now accepted picture of inflation in the very early universe, although it was not confirmed by observation until 10 years later," Hawking told Today.

    Turok urged caution over the latest claims. "First of all, I should say this is just a spectacular result, and right or wrong, it actually indicates we are right on the threshold of a completely new window into the big bang and what happened at the big bang, so it's tremendously exciting," he said.

    But he added: "I have reasons for doubts about the new experiment and its results. It's not entirely convincing to me, but they have clearly seen what they claim to have seen. Verification is very important and it's wise to be a little bit sceptical at the moment when there is no confirmation. The experiment was extremely difficult, and they don't entirely explain why they are so convinced of what they claim … The problem with the inflationary theory is that it really doesn't explain the beginning. Stephen has postulated a way of starting the universe off, but it doesn't seem to work."

    Hawking is well known for making bets with other scientists. He recently lost $100 to Gordon Kane at the University of Michigan after betting that scientists at Cern, home of the Large Hadron Collider near Geneva, would not find the Higgs boson. They discovered the particle in July 2012.

    Turok said he needed to see more evidence for gravitational waves from the big bang before conceding the bet to Hawking. "The great thing about science is that it doesn't matter how many [scientists] you are up against. Ultimately the right ideas win out. Science is not a popularity contest. Galileo was right, but his ideas weren't popular at the time. The bet is still open," he said.

    source Guardian


    Active Member
    Meet the 'chicken from hell': Scientists discover fearsome new species of birdlike dinosaur named 'Anzu wyliei'

    A two-legged dinosaur with a fearsome beak and a set of sharp claws on each of its feathered fore-limbs has been described as the “chicken from hell” by the scientists who discovered it among a collection of fossilised bones unearthed from a dried-up mud-plain in the American mid-west.

    The bird-like dinosaur stood about 10 feet tall on its hind legs, weighed around 500 pounds and roamed the earth at the same time as the largest ever land predator, Tyrannosaurus rex, about 66 million years ago – just a million or so years before all the dinosaurs went extinct.

    The dinosaur was decorated with a crest-like ornament on its head and with its long tail it looked like a cross between a modern emu or cassowary and a reptile, the scientists said. “We jokingly call this thing ‘the chicken from hell’, and I think that’s pretty appropriate,” said Matt Lamanna of the Carnegie Museum of Natural History in Pittsburgh, Pennsylvania.

    Bones from three specimens were put together to construct the entire skeleton of the creature, named Anzu wyliei, which showed that it belonged to the oviraptorosaurs, a group of bird-like dinosaurs that have been discovered as far afield as the fossil beds of China and North America.

    “It was a giant raptor, but with a chicken-like head and presumably feathers. The animal stood about 10 feet tall, so it would be scary as well as absurd to encounter,” said Emma Schachner of the University of Utah, co-author of the study published online in the journal Plos One.

    “I am really excited about this discovery because Anzu is the largest oviraptorosaur found in North America. Oviraptorosaurs are a group of dinosaurs that are closely related to birds and often have strange, cassowary-like crests on their heads,” Dr Schachner said. “Two of the specimens display evidence of pathology. One appears to have a broken and healed rib, and the other has evidence of some sort of trauma to a toe,” she said.

    source Independent


    Active Member
    Plus Fours Routefinder - World's First Navigation System

    Invented in the 1920s, this could be the world's first navigation system. No satellites or digital screens were used in the making of this portable device. Called the Plus Fours Routefinder, this little invention was designed to be worn on the wrist, and the “maps” were printed on little wooden rollers which you would turn manually as you drove along.

    Dynamite Joe

    Well-Known Member
    Space Sunflower May Help Snap Pictures of Planets

    This animation shows the prototype starshade, a giant structure designed to block the glare of stars so that future space telescopes can take pictures of planets.

    March 20, 2014

    A spacecraft that looks like a giant sunflower might one day be used to acquire images of Earth-like rocky planets around nearby stars. The prototype deployable structure, called a starshade, is being developed by NASA's Jet Propulsion Laboratory in Pasadena, Calif.

    The hunt is on for planets that resemble Earth in size, composition and temperature. Rocky planets with just the right temperature for liquid water -- not too hot, not too cold -- could be possible abodes for life outside our solar system. NASA's Kepler mission has discovered hundreds of planets orbiting other stars, called exoplanets, some of which are a bit larger than Earth and lie in this comfortable "Goldilocks" zone.

    Researchers generally think it's only a matter of time before we find perfect twins of Earth. The next step would be to image and characterize their spectra, or chemical signatures, which provide clear clues about whether those worlds could support life. The starshade is designed to help take those pictures of planets by blocking out the overwhelmingly bright light of their stars. Simply put, the starshade is analogous to holding your hand up to the sun to block it while taking a picture of somebody.
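    The geometry behind that hand analogy is simple enough to sketch. With purely illustrative numbers (the shade diameter and telescope-shade separation below are hypothetical, not mission specifications), the shade hides everything within its angular radius as seen from the telescope:

```python
import math

# Back-of-envelope starshade geometry with made-up but plausible numbers:
# a shade of diameter D flying at distance L ahead of the telescope hides
# everything within an angular radius of roughly (D/2) / L.
D = 34.0            # shade diameter in metres (hypothetical)
L = 37_000_000.0    # telescope-shade separation in metres (hypothetical)

blocked_rad = (D / 2) / L                        # angular radius hidden
arcsec = blocked_rad * 180 / math.pi * 3600      # convert radians to arcsec

# For comparison, a planet 1 AU from a star 10 parsecs away sits about
# 0.1 arcsec from the star on the sky (by the definition of the parsec).
print(f"Shade hides roughly {arcsec:.2f} arcsec around the star")
```

    The point of the sketch: with a shade only tens of metres across but flown tens of thousands of kilometres ahead, the star can be hidden while an Earth-like planet just outside that angle remains visible.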

    The proposed starshade could launch together with a telescope. Once in space, it would separate from the rocket and telescope, unfurl its petals, then move into position to block the light of stars.

    The project is led by Jeremy Kasdin, a professor at Princeton University, N.J., in conjunction with JPL and support from Northrop Grumman of Redondo Beach, Calif.

    Kasdin gave a TED talk about the project on March 19. More information is at:


    Read more about the Starshade at:


    JPL manages NASA's Exoplanet Exploration program office.

    Source: http://www.jpl.nasa.gov/news/news.php?release=2014-089


    Active Member
    Shovel face

    A new fossil reptile is unlike anything previously found

    YUNNAN province, in China, is home to the Luoping formation, a trove of spectacularly preserved fossils of creatures that roamed the seas 240m years ago, during the Triassic period. The latest—and arguably most spectacular yet—is Atopodentatus unicus, described this week in Naturwissenschaften by Long Cheng, of the Wuhan Institute of Geology and Mineral Resources, and his team.

    People have been digging up and classifying prehistoric reptiles for more than two centuries, so it might be reasonable to suppose that all the main groups would by now have been identified. Atopodentatus unicus suggests this is not true, for it resembles no other known fossil. Its limbs seem to have evolved into paddles, suggesting it was indeed aquatic, but its toe bones look adapted for walking, as they resemble those in hoofed animals. Also, its pelvis is unusually solid and well-structured for a creature which could rely on the water’s buoyancy to counterbalance the force of gravity. Then there is its head, which is tiny, shovel-shaped and armed with more than 175 teeth, outwardly needlelike and inwardly bladelike, arranged in a way reminiscent of a comb.

    And a comb is just what Dr Long thinks they were. But not a comb for grooming. He believes Atopodentatus unicus combed the seabed, and probably also beaches and mudflats exposed at low tide, for buried creatures such as worms. It would have taken in mouthfuls of sand or mud and squeezed them back out through its teeth, trapping its prey in the comb as it did so in the way that a baleen whale traps krill.

    The creature’s shovel-shaped head supports this idea, for it would have been easy to push through the sediment. Its need to walk along the bottom while doing so explains the toe bones. And emergence from the water for a bit of beachcombing explains the strong pelvis. Where Atopodentatus unicus fits into the tree of life, then, is a mystery—and a reminder of how little-understood the history of life still is.

    source economist


    Active Member
    Why Facebook and Google are buying into drones

    The profit motive is behind both firms' investment in unmanned aircraft, whatever terms they might couch it in

    Back in the bad old days of the cold war, one of the most revered branches of the inexact sciences was Kremlinology. In the west, newspapers, thinktanks and governments retained specialists whose job was to scrutinise every scrap of evidence, gossip and rumour emanating from Moscow in the hope that it would provide some inkling of what the Soviet leadership was up to. Until recently, this particular specialism had apparently gone into terminal decline, but events in Ukraine have led to its urgent reinstatement.

    The commercial equivalent of Kremlinology is Google- and Facebook-watching. Although superficially more open than the Putin regime, both organisations are pathologically secretive about their long-term aspirations and strategies. So those of us engaged in this strange spectator-sport are driven to reading stock-market analysts' reports and other ephemera, which is the technological equivalent of consulting the entrails of recently beheaded chickens.

    It's grisly work but someone has to do it, so let us examine what little we know and see if we can make any sense of it. First of all, what do we know for sure? We know first of all that these two companies are run by smart people who have a deep understanding of the capabilities and potential of computing technology. We also know that these folks have: total control of their companies on account of a cunning two-tier shareholding structure, which effectively liberates them from stock market control; megalomaniacal ambitions; and – for the time being at least – money-pumps, which provide limitless resources and enable their founders to indulge their ambitions and visions.

    After that, all is speculation. The only thing we have to go on is what Google and Facebook have been up to in the public marketplace. And what they have been doing is acquiring companies in the way that, pace PG Wodehouse, ostriches go for brass doorknobs.

    In the last 18 months, for example, Google has bought at least eight significant robotics companies, and laid out £400m to buy the London-based artificial intelligence firm Deepmind. Facebook, for its part, bought Instagram, a photo-sharing network, for $1bn and paid an eye-watering $19bn in cash and shares for WhatsApp, a messaging company. More puzzling was its decision to buy Oculus VR, a virtual reality company, for $2bn. And in the last few weeks, both companies have got into the pilotless-drones business. Google acquired Titan Aerospace, a US-based startup that makes high-altitude drones, which cruise near the edge of the Earth's atmosphere, while Facebook bought a UK-based company, Ascenta, which is designing high-altitude, solar-powered drones that can fly for weeks – or perhaps longer – at a time.

    In trying to make sense of these activities, we need to separate out short-term panic from long-term strategy. Facebook's acquisition of Instagram and WhatsApp was the product of two things: naked fear and the ability to mint a particular form of Monopoly money known as Facebook shares. Users' photographs are Facebook's lifeblood, and Instagram's meteoric growth suggested that it, rather than Facebook, might ultimately become the place where people shared their pictures. Much the same applies to WhatsApp: it was growing much faster than Facebook had at a comparable stage in its corporate development, and looked like eventually becoming a threat; besides, most of the $19bn price was paid in Monopoly money rather than in hard cash. As for the Oculus VR acquisition? Well, like the peace of God, it passeth all understanding.

    Which leaves us with the strategic stuff. Here we see clear long-term thinking at work. The Google boys have decided that advanced robotics, machine-learning, distributed sensors and digital mapping are going to be the essential ingredients of a combinatorial future, and they are determined to be the dominant force in that.

    As far as the high-altitude drones are concerned, Google and Facebook are on exactly the same wavelength. Since internet access in the industrialised world is now effectively a done deal, all of the future growth is going to come from the remaining 5 billion people on the planet who do not yet have a proper internet connection. Both companies have a vital interest in speeding up the process of getting those 5 billion souls online, for the simple reason that the more people who use the internet the greater their revenues will be. And they see high-altitude drones as the means to that profitable end. They piously insist, of course, that this new connectivity will be good for humanity, and perhaps indeed it will. But ultimately profitability, like charity, begins at home.

    source Guardian


    Active Member
    Scientists solve mystery of Southern Ocean 'quacking' sound

    Noise heard in the Southern Ocean has been attributed to the underwater chatter of the Antarctic minke whale

    The mystery source of a strange quacking sound coming from the ocean has been discovered.

    The so-called "bio-duck" noise, which occurs in the winter and spring in the Southern Ocean, had confused researchers for over 50 years.

    Scientists have now attributed the sound to underwater chatter of the Antarctic minke whale.

    Submarine crews first heard the quacking sound – a series of repetitive, low-pitched pulsing sounds – in the 1960s.

    Lead researcher Denise Risch, from the US National Oceanic and Atmospheric Administration north-east fisheries science centre in Massachusetts, told the BBC: "Over the years there have been several suggestions, but no one was able to really show this species was producing the sound until now."

    The research team attached suction-cup sensor tags equipped with underwater microphones to a pair of minke whales off the western Antarctic peninsula in February last year, with the aim of monitoring their feeding behaviour and movements.

    These were the first acoustic tags deployed on Antarctic minke whales, and the team compared their recordings with years' worth of collected audio recordings to match the sounds. Researchers were able to identify the quacking noise, as well as downward-sweeping sounds previously linked to minke whales.

    The sounds "can now be attributed unequivocally to the Antarctic minke whale," Risch and her team wrote in a study published in the Royal Society journal Biology Letters.

    Researchers are hoping to retrospectively analyse previous recordings to investigate "seasonal occurrence and migration patterns" of the whales.

    Scientists remain puzzled as to why the whales produce the sound, but it is thought that the animals make the noise close to the surface before making deep dives to feed.

    Risch added: "Identifying their sounds will allow us to use passive acoustic monitoring to study this species. That can give us the timing of their migration – the exact timing of when the animals appear in Antarctic waters and when they leave again – so we can learn about migratory patterns, about their relative abundance in different areas and their movement patterns between the areas."

    source Guardian
    J. Abizeid

    Well-Known Member

    Illustris Simulation: Most detailed simulation of our Universe


    Published on May 6, 2014

    The Illustris simulation is the most ambitious computer simulation of our Universe yet performed. The calculation tracks the expansion of the universe, the gravitational pull of matter onto itself, the motion of cosmic gas, as well as the formation of stars and black holes. These physical components and processes are all modeled starting from initial conditions resembling the very young universe 300,000 years after the Big Bang and until the present day, spanning over 13.8 billion years of cosmic evolution. The simulated volume contains tens of thousands of galaxies captured in high detail, covering a wide range of masses, rates of star formation, shapes, and sizes, with properties that agree well with the galaxy population observed in the real universe. The simulations were run on supercomputers in France, Germany, and the US. The largest was run on 8,192 compute cores, and took 19 million CPU hours. A single state-of-the-art desktop computer would require more than 2000 years to perform this calculation.
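    A quick sanity check of the figures quoted above (19 million CPU-hours spread over 8,192 cores, and the "more than 2000 years" desktop claim, which lines up with a single serial core):

```python
# Sanity check of the Illustris figures quoted above: 19 million CPU-hours
# on 8,192 cores, versus the same work done serially on one core.
cpu_hours = 19_000_000
cores = 8_192
hours_per_year = 24 * 365.25

wall_clock_days = cpu_hours / cores / 24     # actual supercomputer runtime
serial_years = cpu_hours / hours_per_year    # one core, naively serial

print(f"Supercomputer wall-clock time: about {wall_clock_days:.0f} days")
print(f"Single core, run serially: about {serial_years:.0f} years")
```

    A modern desktop has a handful of cores rather than one, so the true figure would be somewhat lower, but the order of magnitude matches the claim.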


    Active Member
    'Antivirus is dead' says maker of Norton software suite

    Symantec, maker of the widely used Norton Antivirus software suite, has declared that antivirus technology “is dead”.

    The company’s senior vice president of information security Brian Dye told the Wall Street Journal that hackers were not only finding new ways to break into computers but that antivirus wasn’t “a moneymaker in any way."

    Mr. Dye said that the company’s antivirus software catches just 45 per cent of cyberattacks – an admission that sounds surprising but that reflects a broader shift in the cybersecurity industry as experts are forced to adapt to new methods employed by hackers.

    When Symantec’s antivirus software was first introduced in the late 1980s it worked as an immune system for computers, with experts maintaining a database of malicious code and blocking any attacks on a given system.
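    That immune-system approach, matching file contents against a database of known-bad byte patterns, can be sketched in a few lines (a toy illustration only, not Symantec's actual engine; the signature names and inputs are invented):

```python
# Toy sketch of classic signature-based scanning as described above:
# a database of known-bad byte patterns, checked against file contents.
# Signature names and patterns here are invented for illustration.
SIGNATURES = {
    "toy-test-string": b"SUSPICIOUS-PATTERN-1",
    "toy-trojan":      b"OPEN-BACKDOOR",
}

def scan(data: bytes) -> list[str]:
    """Return the names of any known signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

print(scan(b"harmless text"))                      # []
print(scan(b"payload: OPEN-BACKDOOR goes here"))   # ['toy-trojan']
```

    A scanner like this can only flag what is already in its database, which is why novel or repackaged malware slips past and why a detection rate well below 100 per cent is unsurprising.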

    Categories of cyberattacks have since multiplied and now include everything from malware (Trojan horse-like programs that open backdoors in systems) and spyware (software that monitors a user's keyboard to record passwords) to more sophisticated breaches aimed at large businesses.

    Symantec has said that it was now looking to move from a "protect" model to one of "detect and respond,” offering businesses bespoke packages that track hacks and leaks to prevent any damages beyond the initial infiltration.

    Mr Dye also stressed that despite the growing redundancy of antivirus products, security packages for consumers still offer a range of useful services including blocking spam, managing passwords, and even scanning users' Facebook feeds for malicious links.

    Symantec, which has an 8 per cent share of the global antivirus market and forecast quarterly revenue of $1.62-1.66 billion in the months through March, will be following the lead of a number of smaller cybersecurity companies that are finding innovative ways to deal with new types of threats.

    One company, Juniper Networks, has launched products that place “ghost armies” of fake data on systems in order to distract and misdirect hackers from important information like customer data and intellectual property.

    Another firm called Mandiant, which was founded by an ex-US Air Force officer and purchased in December 2013 by FireEye for $1 billion, offers its own ‘emergency response’ services with the strapline “Security breaches are inevitable – being a headline is not."

    source independent


    Active Member
    New life: Scientists create first semi-synthetic organism with 'alien' DNA

    Scientists have created the first “semi-synthetic” micro-organism with a radically different genetic code from the rest of life on Earth.

    The researchers believe the breakthrough is the first step towards creating new microbial life-forms with novel industrial or medical properties resulting from a potentially massive expansion of genetic information.

    The semi-synthetic microbe, a genetically modified E. coli bacterium, has been endowed with an extra artificial piece of DNA with an expanded genetic alphabet – instead of the usual four “letters” of the alphabet its DNA molecule has six.

    The natural genetic code of all living things is based on a sequence of four bases – G, C, T, A – which form two sets of bonded pairs, G to C and T to A, that link the two strands of the DNA double helix.

    The DNA of the new semi-synthetic microbe, however, has an extra pair of bases, denoted X and Y, which bond to each other like the natural base pairs and are fully integrated into the rest of the DNA’s genetic code.

    The scientists said that the semi-synthetic E. coli bacterium replicates normally and is able to pass on the new genetic information to subsequent generations. However, it was not able to use the new encoded information to produce any novel proteins – the synthetic DNA was added as an extra circular strand that did not take part in the bacterium’s normal metabolic functions.

    The study, published in the journal Nature, is the first time that scientists have managed to produce a genetically modified microbe that is able to function and replicate with a different genetic code to the one that is thought to have existed ever since life first started to evolve on Earth more than 3.5 billion years ago.

    “Life on earth in all its diversity is encoded by only two pairs of DNA bases, A-T and C-G, and what we’ve made is an organism that stably contains those two plus a third, unnatural pair of bases,” said Professor Floyd Romesberg of the Scripps Research Institute in La Jolla, California.

    “This shows that other solutions to storing information are possible and, of course, takes us closer to an expanded-DNA biology that will have many exciting applications, from new medicines to new kinds of nanotechnology,” Professor Romesberg said.

    Expanding the genetic code with an extra base pair raises the prospect of building new kinds of proteins from a much wider range of amino acids than the 20 or so that exist in nature. A new code based on six bases could in theory encode more than 200 amino acids, the scientists said.
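    The “more than 200” figure follows from simple combinatorics: genetic code words (codons) are three letters long, so the number of possible codons is the alphabet size cubed. A quick illustrative sketch (not from the paper itself):

```python
# Codons are three-letter "words"; the number of distinct codons is the
# alphabet size raised to the codon length.
def codon_count(alphabet_size: int, codon_length: int = 3) -> int:
    return alphabet_size ** codon_length

print(codon_count(4))  # natural alphabet G, C, T, A -> 64 codons, encoding ~20 amino acids
print(codon_count(6))  # expanded alphabet with X and Y -> 216 codons
```

    With 216 codons to assign, a six-letter alphabet could in principle specify well over 200 distinct amino acids, which matches the scientists’ estimate.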

    “In principle, we could encode new proteins made from new, unnatural amino acids, which would give us greater power than ever to tailor protein therapeutics and diagnostics and laboratory reagents to have desired functions,” Professor Romesberg said.

    “Other applications, such as nanomaterials, are also possible,” he added.

    The researchers emphasised that there is little danger of the new life-forms surviving outside the confines of the laboratory, as they are not able to replicate with their synthetic DNA strand unless they are continuously fed the X and Y bases – synthetic chemicals called d5SICS and dNaM, which do not exist in nature.

    The bacteria also need a special protein to transport the new bases around the cell of the microbe. The transporter protein comes from algae and if it, or the X and Y bases, are lacking, the microbial cells revert back to the natural genetic code, said Denis Malyshev of the Scripps Institute.

    “Our new bases can only get into the cell if we turn on the "base transporter" protein. Without this transporter or when the new bases are not provided, the cell will revert back to A, T, G, C and the d5SICS and the dNaM will disappear from the genome,” Dr Malyshev said.

    source independent


    Active Member
    How Distant Planets Affect Earth's Ice Ages And Gave Rise To Civilization

    Human-induced warming is sending Earth into frightening and uncharted climate territory — but humans are not the first force to cause colossal changes to our climate.

    Other celestial bodies, including planets, tug at Earth, causing it to move in ways that have shaped the ice caps and, with them, human history, as discussed by narrator and astrophysicist Neil deGrasse Tyson on Sunday's Cosmos: A Spacetime Odyssey.

    For example, the pull of the planets influences Earth's tilt, and it causes Earth's axis to wobble in a circular motion similar to the spin of a top.

    If you could stick a pen out of Earth's north pole, it would draw a circle about every 26,000 years. Around 12,000 BC, for example, our north star was not Polaris, as it is now, but Vega. In roughly 12,000 years, Vega will be our north star again.
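    As a back-of-the-envelope check on those numbers: a full 360-degree wobble every ~26,000 years means the celestial pole drifts a bit under 1.4 degrees per century (an illustrative calculation, not from the article):

```python
# Rate at which Earth's axis precesses, given the ~26,000-year wobble cycle.
PRECESSION_PERIOD_YEARS = 26_000

deg_per_century = 360 / PRECESSION_PERIOD_YEARS * 100
print(f"{deg_per_century:.2f} degrees per century")  # about 1.38
```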

    The pull of the planets also causes Earth's tilt to vary between 22.1 and 24.5 degrees over a 41,000-year cycle. When the tilt is more extreme, seasons can be more severe, with warmer summers and cooler winters. When the tilt is smaller, we get cooler summers and milder winters. Currently our tilt is about 23.4 degrees – near the middle.

    In addition to rotating about its own axis, Earth wobbles like a top (left) and changes its tilt (right) in response to the gravitational pull of other celestial bodies.

    Another effect of planetary pull: Earth's orbit is not round, but slightly egg-shaped, or eccentric.

    "The shape of the Earth's orbit changes from being elliptical (high eccentricity) to being nearly circular (low eccentricity) in a cycle that takes between 90,000 and 100,000 years," according to NASA's Earth Observatory.

    The Earth's eccentric and unsymmetrical orbit.

    Currently, Earth is at its closest to the sun at the beginning of January when it is about 91.5 million miles away and at its furthest in July when it is about 94 million miles away. This corresponds to about a 6% difference in how much solar radiation the Earth receives.
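    Because sunlight falls off with the square of distance, those two figures imply the quoted difference in solar radiation. A rough check using the article's distances (illustrative only):

```python
# Inverse-square estimate of how much more sunlight Earth receives at
# perihelion (~91.5 million miles) than at aphelion (~94 million miles).
def extra_intensity(d_near_mi: float, d_far_mi: float) -> float:
    """Fractional increase in solar intensity at d_near_mi vs d_far_mi."""
    return (d_far_mi / d_near_mi) ** 2 - 1

print(f"{extra_intensity(91.5e6, 94e6):.1%}")  # roughly 5-6 per cent
```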

    Earth, planetary pull, and ice ages

    When its orbit is at its most elliptical, Earth would receive 20 to 30 per cent more sunlight at its closest pass by the sun than at its furthest, "resulting in a substantially different climate from what we experience today," according to NASA.

    The pull of every celestial body combines to create Earth's eccentricity, wobble, and tilt. These movements determine how our planet is positioned to receive sunlight and how much sunlight it can receive. Which of these is the dominant determiner of ice ages, however, is still not completely understood.

    "There is still some discussion about how exactly this starts and ends ice ages, but many studies suggest that the amount of summer sunshine on northern continents is crucial: if it drops below a critical value, snow from the past winter does not melt away in summer and an ice sheet starts to grow as more and more snow accumulates," said the Intergovernmental Panel on Climate Change in its 2007 report.

    One glacial period was particularly significant to human history: between roughly fifteen and twenty-five thousand years ago, expanded ice sheets locked up so much seawater that sea levels fell, exposing a land bridge that our ancestors would use to cross from Eurasia into North America.

    How ice forms civilizations

    When Earth's northern region is ill-positioned to bathe in the sun's rays, the top third of the world becomes entombed in ice. During past glaciations, the northern polar ice cap expanded all the way down over California, consuming the better part of North America.

    When the ice grows, it siphons water from the sea, lowering sea levels 400 feet, said deGrasse Tyson. This exposes more land in coastal regions.

    When Earth is at a prime angle for sunbathing, this ice thaws and sea levels rise. During the last glaciation, roughly 15,000 to 25,000 years ago, low sea levels exposed a land bridge connecting Eurasia and North America. This bridge would later be humanity's route into North America.

    Roughly 10,000 years ago, "the manic swings of the climate and sea levels" subsided, said Tyson, and "a new and gentler climate age began."

    We entered an interglacial period that we still live in, "one of those balmy intermissions in an ice age," he said. Rivers transported fertile sediments into deltas, where some humans made their homes.

    Once solely wanderers, humans had time and resources in this interglacial period to settle and build the civilizations at the dawn of our history.

    "The way the planets tug at each other, the way the skin of the earth moves, the way those motions affect climate, and the evolution of life and intelligence, they all combined to give us the means to turn the mud of those river deltas into the first civilizations," said Tyson.

    Now go enjoy the beautiful interglacial period. You've only got about another 50,000 years. That is, if we don't throw it all out of whack with our greenhouse gas emissions.

    source businessinsider


    Active Member
    EU court says Google must delete 'irrelevant' links at the request of ordinary individuals

    The European Court of Justice struck a major blow against the right of internet companies to hold unlimited information on individuals when it ordered Google to remove links that are deemed “inadequate, irrelevant or no longer relevant”.

    The court’s decision will allow individuals the right to ask internet search engines to remove links to information about them that they do not want known – which could be seen either as an assertion of the right to privacy or an attack on free speech. Google and free speech activists reacted angrily to the court’s verdict, which could guarantee individuals a “right to be forgotten” on the internet that is not currently available.

    It is unclear exactly how the ruling will be implemented considering the sheer volume of online data and internet users. For individuals keen to erase embarrassing incidents from their past, it could prove a handy tool for re-shaping their digital footprint, while data protection advocates are calling it a victory against the all-powerful internet giants.

    But for champions of free speech, the potential for misuse is deeply worrying.

    “This is akin to marching into a library and forcing it to pulp books,” said Jodie Ginsberg, chief executive of Index on Censorship. “Although the ruling is intended for private individuals, it opens the door to anyone who wants to whitewash their personal history.”

    It was a repossessed home in Catalonia which sparked the battle between privacy campaigners, search engines such as Google and free speech advocates. Mario Costeja Gonzalez was dismayed to find searches on his name still threw up a 1998 newspaper article on past financial problems, even though many years had passed and his debts were paid off.

    A Spanish court referred his request for the link to be removed to the Court of Justice of the European Union in Luxembourg, which ruled in favour of Mr Costeja. The judges decided that search engines did have a duty to make sure that data deemed “inadequate, irrelevant or no longer relevant” did not appear. Ordinary citizens could also request that search engines remove links to sites which contained excessive personal data on them.

    Google had argued that it was not in control of the content – it was merely linking to it – and therefore the onus for removing any out-of-date information was on the websites themselves.

    “We are very surprised that [this decision] differs so dramatically from the Advocate General’s opinion and the warnings and consequences that he spelled out. We now need to take time to analyse the implications,” said a Google spokesman, Al Verney. He was referring to an opinion issued last year by an adviser to the European Court of Justice, expressing concern that freedom of speech could be threatened.

    This clash between the right to privacy and the right to information is an ongoing one in Europe. In 2012, the EU’s executive arm, the European Commission, proposed a law granting people the right to be forgotten on the internet. The European Parliament however watered it down, and internet companies have been lobbying member states not to approve the legislation.

    They have an ally in the free speech groups, which argue that giving an individual the right to decide what can be removed from search engines with no legal oversight has worrying implications.

    “The court’s decision is a retrograde move that misunderstands the role and responsibility of search engines and the wider internet,” said Ms Ginsberg. “It should send chills down the spine of everyone in the European Union who believes in the crucial importance of free expression and freedom of information.”

    Javier Ruiz, Policy Director at Open Rights Group, agreed. “We need to take into account individuals’ right to privacy, but if search engines are forced to remove links to legitimate content that is already in the public domain... it could lead to online censorship,” he said.

    But the battle lines are not entirely clear. People are increasingly concerned about the safety of their personal data since allegations emerged last year of mass government surveillance. The EU’s Justice Commissioner, Viviane Reding, called the ruling a “strong tailwind” in the commission’s efforts to tighten data protection in the bloc. “Companies can no longer hide behind their servers being based in California or anywhere else in the world,” she wrote on Facebook.

    It is now up to legal experts at search engines like Google, Yahoo and Bing to work out how they can implement the law, and what processes will be put in place to allow people to appeal against the links. The man who started it all is pleased that his case has created an opening for ordinary people to stand up to the often faceless internet giants. “It’s a great relief to be shown that you were right when you have fought for your ideas, it’s a joy,” Mr Costeja told the Associated Press.

    source independent


    Active Member
    Mice crippled with MS are made to walk again with breakthrough cure

    Treatment with human stem cells has allowed mice crippled by a version of multiple sclerosis (MS) to walk again after less than two weeks – suggesting a possible new direction for human therapies.

    Scientists admit to being astonished by the result and believe it opens up a new avenue of research in the quest for solutions to MS.

    Professor Tom Lane, from the University of Utah, who led the US team, recalled: “My postdoctoral fellow Dr Lu Chen came to me and said, ‘The mice are walking.’ I didn’t believe her.”

    The genetically engineered mice had a condition that mimics the symptoms of human MS.

    They were so disabled they could not stand long enough to eat and drink on their own and had to be hand-fed.

    The scientists transplanted human neural stem cells into the animals, expecting them to be rejected and provide no benefit. Instead the experiment yielded spectacular results.

    Within 10 to 14 days, the mice had regained motor skills and were able to walk again. Six months later, they showed no sign of relapsing.

    The findings, published in the journal Stem Cell Reports, suggest the mice experienced at least a partial reversal of their symptoms.

    A similar outcome in humans could help patients in potentially disabling stages of the disease for which there are no treatments.

    “This result opens up a whole new area of research for us to figure out why it worked,” said Dr Jeanne Loring, a co-author of the paper and director of the centre for regenerative medicine at The Scripps Research Institute in La Jolla, California. “We’ve long forgotten our original plan.”

    MS is an auto-immune condition caused by the body’s own defences attacking myelin, the fatty insulation surrounding nerve fibres.

    As myelin is stripped away, nerve impulses can no longer be transmitted properly, leading to symptoms ranging from mild tingling to full-blown paralysis.

    Drugs that dampen the immune system can slow early forms of the disease, but little can be done for patients in the later stages.

    Members of the research team believe their success may be linked to the way the stem cells were grown in an unusually uncrowded lab dish.

    This led to stem cells that were highly potent, with an enhanced ability to mature and develop.

    Chemical signals from the stem cells instructed each mouse’s own cells to repair the damage caused by MS, the scientists said.

    One signal was identified as a protein called TGF-beta, raising the prospect of delivering a similar therapy in the form of a drug.

    “Rather than having to engraft stem cells into a patient, which can be challenging from a medical standpoint, we might be able to develop a drug that can be used to deliver the therapy much more easily,” said Professor Lane.

    Dr Sorrel Bickley of the MS Society said: “This is an interesting, early-stage study that’s given scientists new ideas for future research into potential MS therapies. It’s not currently being planned for testing in people, but it’s a useful avenue for scientists to explore – we look forward to seeing how this area of research develops.”

    source independent


    Active Member
    High hopes for new malaria vaccine based on blood protein

    A new type of malaria vaccine based on proteins found in the blood of children who develop a natural resistance to the parasitic disease has been developed by American researchers.

    Tests on laboratory mice have shown that the vaccine can protect the animals against the most lethal strains of malaria and scientists are confident that it will be both safe and effective in humans.

    It is believed to be the first time that scientists have made a candidate malaria vaccine based on a blood protein that confers natural resistance to young children. The breakthrough could lead to the first clinical trials of the prototype vaccine within two years, the researchers said.

    Malaria kills about 600,000 people each year, most of them young children under the age of five living in sub-Saharan Africa and South East Asia. There are more than 200 million cases a year worldwide and many children who survive infection can still be left with health problems in later life.

    Scientists discovered the blood antibodies that confer natural resistance to malaria during a survey of 785 children in Tanzania. About six per cent of the children possessed antibodies to a malaria protein that is vital for the parasite to complete its lifecycle within the human body.

    Although the new candidate vaccine, known as PfSEA-1, has only been tested on mice, the scientists behind the research are excited about its potential in terms of protecting vulnerable children against the most dangerous forms of severe malaria.

    The new PfSEA-1 vaccine works by trapping malaria parasites inside infected red blood cells so that they cannot emerge to infect other red blood cells and so complete their complicated life cycle, said Jonathan Kurtis at Rhode Island Hospital in the US.

    “It turns out that antibodies to this protein prevent the schizont [a stage in the malaria lifecycle] from getting out of the red cell. We trap the parasite inside the red cell,” Dr Kurtis said.

    “Most vaccine candidates for malaria have worked by trying to prevent parasites from entering red blood cells; we’ve taken a different approach. We’re sort of trapping the parasite in the burning house,” he said.

    “We have found a way to block it from leaving the cell once it has entered. If it’s trapped in the red blood cells, it can’t go anywhere. It can’t do any further damage,” he added.

    About 100 candidate malaria vaccines have been developed over the past 30 years but only one has made it to the final, phase-three stage of clinical trials. However, this vaccine, known as "RTS,S" and made by GlaxoSmithKline, confers less than 50 per cent protection – which many experts believe is better than none.

    “Malaria is the single greatest killer of children on the planet. It kills one child every 15 seconds. It’s an unbelievable culling machine of sub-Saharan Africa and South-East Asia. We need desperately a vaccine against malaria,” Dr Kurtis said.

    “It’s unlikely that anything that is immediately on the horizon is going to be actually sufficiently efficacious. So our next destination is an active vaccination trial in monkeys, followed by phase-one trials in humans. We’d like to roll this out as quickly as we can,” he said.

    The study, published in the journal Science, showed that the PfSEA-1 vaccine can protect laboratory mice from the most severe strains of rodent malaria. The researchers also found that the children with PfSEA-1 antibodies naturally in their blood did not succumb to severe malaria, he added.

    “The shocking result was that children who had detectable antibodies to this antigen never got severe malaria – zero cases,” Dr Kurtis said.

    Some candidate vaccines have been shown to work well in animals such as mice and monkeys, but they fail to work in humans. This candidate vaccine comes from a natural protein that has been shown to confer high levels of resistance in children and so it should work in humans, he said.

    “What really distinguishes this work is that we began with human beings and we can overcome the problem that says we’ve got a great vaccine, I can protect mice but it does not translate at all to human beings,” Dr Kurtis said.

    Professor Mike Blackman, a malaria researcher at the National Institute for Medical Research in north London, said: “The study is quite an important step forward and potentially raises the prospect of this becoming a vaccine candidate and combining it with other vaccine candidates… but they still have a long way to go.”

    In 2011, scientists from the Wellcome Trust Sanger Institute in Cambridge identified the critical protein on the surface of red blood cells that allowed the malaria parasite to gain entry to the cell. The researchers said this protein, known as basigin, could also be used as a target for future prototype vaccines.

    Combining two or more vaccines that target different stages of the malaria parasite’s complicated lifecycle is likely to be more effective than a single vaccine, Dr Kurtis said. “We can approach the parasite from all angles,” he said.

    source independent
    Green Arrow

    New Member
    Can the Nervous System Be Hacked?

    By Michael Behar, May 23, 2014

    Mirela Mustacevic, who suffers from rheumatoid arthritis, had a nerve stimulator implanted as part of a medical trial. Her symptoms have lessened significantly.

    One morning in May 1998, Kevin Tracey converted a room in his lab at the Feinstein Institute for Medical Research in Manhasset, N.Y., into a makeshift operating theater and then prepped his patient — a rat — for surgery. A neurosurgeon, and also the Feinstein Institute’s president, Tracey had spent more than a decade searching for a link between nerves and the immune system. His work led him to hypothesize that stimulating the vagus nerve with electricity would alleviate harmful inflammation. “The vagus nerve is behind the artery where you feel your pulse,” he told me recently, pressing his right index finger to his neck.

    The vagus nerve and its branches conduct nerve impulses — called action potentials — to every major organ. But communication between nerves and the immune system was considered impossible, according to the scientific consensus in 1998. Textbooks from the era taught, he said, “that the immune system was just cells floating around. Nerves don’t float anywhere. Nerves are fixed in tissues.” It would have been “inconceivable,” he added, to propose that nerves were directly interacting with immune cells.

    Nonetheless, Tracey was certain that an interface existed, and that his rat would prove it. After anesthetizing the animal, Tracey cut an incision in its neck, using a surgical microscope to find his way around his patient’s anatomy. With a hand-held nerve stimulator, he delivered several one-second electrical pulses to the rat’s exposed vagus nerve. He stitched the cut closed and gave the rat a bacterial toxin known to promote the production of tumor necrosis factor, or T.N.F., a protein that triggers inflammation in animals, including humans.

    “We let it sleep for an hour, then took blood tests,” he said. The bacterial toxin should have triggered rampant inflammation, but instead the production of tumor necrosis factor was blocked by 75 percent. “For me, it was a life-changing moment,” Tracey said. What he had demonstrated was that the nervous system was like a computer terminal through which you could deliver commands to stop a problem, like acute inflammation, before it starts, or repair a body after it gets sick. “All the information is coming and going as electrical signals,” Tracey said. For months, he’d been arguing with his staff, whose members considered this rat project of his harebrained. “Half of them were in the hallway betting against me,” Tracey said.

    Inflammatory afflictions like rheumatoid arthritis and Crohn’s disease are currently treated with drugs — painkillers, steroids and what are known as biologics, or genetically engineered proteins. But such medicines, Tracey pointed out, are often expensive, hard to administer, variable in their efficacy and sometimes accompanied by lethal side effects. His work seemed to indicate that electricity delivered to the vagus nerve in just the right intensity and at precise intervals could reproduce a drug’s therapeutic — in this case, anti-inflammatory — reaction. His subsequent research would also show that it could do so more effectively and with minimal health risks.

    Tracey’s efforts have helped establish what is now the growing field of bioelectronics. He has grand hopes for it. “I think this is the industry that will replace the drug industry,” he told me. Today researchers are creating implants that can communicate directly with the nervous system in order to try to fight everything from cancer to the common cold. “Our idea would be manipulating neural input to delay the progression of cancer,” says Paul Frenette, a stem-cell researcher at the Albert Einstein College of Medicine in the Bronx who discovered a link between the nervous system and prostate tumors.

    “The list of T.N.F. diseases is long,” Tracey said. “So when we created SetPoint” — the start-up he founded in 2007 with a physician and researcher at Massachusetts General Hospital in Boston — “we had to figure out what we were going to treat.” They wanted to start with an illness that could be mitigated by blocking tumor necrosis factor and for which new therapies were desperately needed. Rheumatoid arthritis satisfied both criteria. It afflicts about 1 percent of the global population, causing chronic inflammation that erodes joints and eventually makes movement excruciating. And there is no cure for it.

    In September 2011, SetPoint Medical began the world’s first clinical trial to treat rheumatoid-arthritis patients with an implantable nerve stimulator based on Tracey’s discoveries. According to Ralph Zitnik, SetPoint’s chief medical officer, of the 18 patients currently enrolled in the ongoing trial, two-thirds have improved. And some of them were feeling little or no pain just weeks after receiving the implant; the swelling in their joints has disappeared. “We took Kevin’s concept that he worked on for 10 years and made it a reality for people in a real clinical trial,” he says.

    Conceptually, bioelectronics is straightforward: Get the nervous system to tell the body to heal itself. But of course it’s not that simple. “What we’re trying to do here is completely novel,” says Pedro Irazoqui, a professor of biomedical engineering at Purdue University, where he’s investigating bioelectronic therapies for epilepsy. Jay Pasricha, a professor of medicine and neurosciences at Johns Hopkins University who studies how nerve signals affect obesity, diabetes and gastrointestinal-motility disorders, among other digestive diseases, says, “What we’re doing today is like the precursor to the Model T.”

    The biggest challenge is interpreting the conversation between the body’s organs and its nervous system, according to Kris Famm, who runs the newly formed Bioelectronics R. & D. Unit at GlaxoSmithKline, the world’s seventh-largest pharmaceutical company. “No one has really tried to speak the electrical language of the body,” he says. Another obstacle is building small implants, some of them as tiny as a cubic millimeter, robust enough to run powerful microprocessors. Should scientists succeed and bioelectronics become widely adopted, millions of people could one day be walking around with networked computers hooked up to their nervous systems. And that prospect highlights yet another concern the nascent industry will have to confront: the possibility of malignant hacking. As Anand Raghunathan, a professor of electrical and computer engineering at Purdue, puts it, bioelectronics “gives me a remote control to someone’s body.”

    Despite the uncertainties, in August, GlaxoSmithKline invested $5 million in SetPoint, and its bioelectronics R. & D. unit now has partnerships with 26 independent research groups in six countries. Glaxo has also established a $50 million fund to support the science of bioelectronics and is offering a prize of $1 million to the first team that can develop an implantable device that can, by recording and responding to an organ’s electrical signals, exert influence over its function. Instead of drugs, “the treatment is a pattern of electrical impulses,” Famm says. “The information is the treatment.” In addition to rheumatoid arthritis, Famm believes, bioelectronic medicine might someday treat hypertension, asthma, diabetes, epilepsy, infertility, obesity and cancer. “This is not a one-trick pony.”

    Kevin Tracey, who is 56, came to bioelectronics because of two significant deaths. The first occurred when he was in preschool. He was 5 when his mother died as a result of an inoperable brain tumor. Shortly after the funeral, Tracey found his maternal grandfather, a professor of pediatrics at Yale, alone in his den. “I climbed onto his lap and asked what happened,” Tracey says. “He explained that surgeons tried to take it out but couldn’t separate the brain-tumor tissue from the normal neurons. I remember saying to him, ‘Somebody should do something about that.’ That was when I decided to be a neurosurgeon. I wanted to solve problems that were insolvable.”

    Tracey’s second formative experience took place in May 1985. Having trained for neurosurgery at Cornell, he was on rotation for his residency in the emergency room at New York Hospital when an 11-month-old baby girl named Janice arrived in an ambulance with burns covering 75 percent of her body. Her grandmother was cooking when she tripped and doused Janice with a pot of boiling noodles. After three weeks in the burn unit recovering from skin grafts, Janice appeared to stabilize. Tracey joined Janice’s family to celebrate her first birthday in her hospital room. Janice was upbeat, smiling and giggling. The next day, she was dead.

    “I was haunted by her case,” Tracey says. When the autopsy report was inconclusive, Tracey redirected his energy into medical research, specifically inflammation related to sepsis, which he believed contributed to Janice’s unexpected death. Sepsis occurs when the immune system goes into overdrive, producing a potentially lethal inflammatory response to fight a severe infection. At the time of her death, however, Janice did not have an infection. It took another year to figure out that it was an overproduction of tumor necrosis factor — the catalyst for inflammation — that caused Janice’s septic shock, though her death remains a mystery.

    “Her brakes had failed,” Tracey says. “She made too much T.N.F. The obvious question was, why?” He credits Linda Watkins, a neuroscientist at the University of Colorado, Boulder, for furnishing the pivotal clue. In the mid-1990s, Watkins was exploring possible neural connections between the brain and the immune system in rats by injecting them with cytokines — molecules that, like tumor necrosis factor, contribute to inflammation — to cause fevers. But when she cut their vagus nerves, the fever never materialized. Watkins concluded that the vagus nerve must be the conduit through which the body signals the brain to induce fever.

    Tracey followed her lead by giving mice a toxin known to cause inflammation and then dosing them with an anti-inflammatory drug he had been investigating. “We injected it into their brains in teeny amounts, too small to get into their bloodstream,” he says. The drug did what it was supposed to do: It halted the production of tumor necrosis factor in the brain. Surprisingly, it also halted the production of tumor necrosis factor in the rest of the body. When Tracey cut the vagus nerve, however, the drug had no effect in the body.

    “That was the eureka moment,” he says. The signal generated by the drug had to be traveling from the brain through the nerve because cutting it blocked the signal. “There could be no other explanation.”

    Tracey then wondered if he could eliminate the drug altogether and use the nerve as a means of speaking directly to the immune system. “But there was nothing in the scientific thinking that said electricity would do anything. It was anathema to logic. Nobody thought it would work.”

    After that first surgery on the rat in 1998, Tracey spent 11 years mapping the neural pathways of tumor-necrosis-factor inflammation, charting a route from the vagus nerve to the spleen to the bloodstream and eventually to mitochondria inside cells. “We now know more about this electrical circuit to treat [inflammation] than is known about some clinically approved drugs,” Tracey says.

    By 2009, SetPoint felt ready to test Tracey’s work on people with rheumatoid arthritis, and Ralph Zitnik was approached about joining the company. “It was nuts,” Zitnik told me. “Sticking something on the vagus nerve to take away R.A.? People would think it’s witchcraft.” Zitnik’s background was in pharmaceuticals; at Amgen, he contributed to the development of Enbrel, a rheumatoid-arthritis drug that had $4.7 billion in sales last year, which made it No. 7 on the industry’s best-seller list. But the more he talked with Tracey and pored over the research, the more he said to himself: “There is good science behind this. I thought, This could work.”

    Zitnik’s first task at SetPoint was to recruit a lead scientist to set up a clinical trial. Many scientists in the United States and Europe were hesitant to do it, he says, but eventually he hired Paul-Peter Tak, a well-regarded immunologist and rheumatologist based at the Academic Medical Center, the University of Amsterdam’s teaching hospital. “He was a forward-thinking person willing to try an unconventional approach like this,” Zitnik says. Tak in turn hired Frieda Koopman, who was working on her Ph.D. in rheumatology at A.M.C., to find potential patients in the Netherlands and elsewhere in Europe.

    The day after an article about the planned trial appeared in a Dutch newspaper, Koopman’s office got more than a thousand calls from rheumatoid-arthritis patients begging to participate. “We never saw that coming,” Koopman says. “We thought we might get one or two patients to join, and wouldn’t that be nice.” Invasive surgery was involved, after all. Koopman’s team returned almost every call and selected several subjects based on what medications they had tried and the severity of the pain and swelling in their joints. Over the next two years, her team continued to enroll new patients.

    The subjects in the trial each underwent a 45-minute operation. A neurosurgeon fixed an inch-long device shaped like a corkscrew to the vagus nerve on the left side of the neck, and then embedded just below the collarbone a silver-dollar-size “pulse generator” that contained a battery and microprocessor programmed to discharge mild shocks from two electrodes. A thin wire made of a platinum alloy connected the two components beneath the skin. Once the implant was turned on, its preprogrammed charge — about one milliamp; a small LED consumes 10 times more electricity — zapped the vagus nerve in 60-second bursts, up to four times a day. Typically, a patient’s throat felt constricted and tingly for a moment. After a week or two, arthritic pain began to subside. Swollen joints shrank, and blood tests that checked for inflammatory markers usually showed striking declines.

    Koopman told me about a 38-year-old trial patient named Mirela Mustacevic whose rheumatoid arthritis was diagnosed when she was 22, and who had since tried nine different medications, including two she had to self-inject. Some of them helped but had nasty side effects, like nausea and skin rashes. Before getting the SetPoint implant in April 2013, she could barely grasp a pencil; now she’s riding her bicycle to the Dutch coast, a near-20-mile round trip from her home. Mustacevic told me: “After the implant, I started to do things I hadn’t done in years — like taking long walks or just putting clothes on in the morning without help. I was ecstatic. When they told me about the surgery, I was a bit worried, because what if something went wrong? I had to think about whether it was worth it. But it was worth it. I got my life back.”

    In February, I met Moncef Slaoui, Glaxo’s chairman of Global Research and Development, at one of the company’s 16 facilities he oversees worldwide, this one in King of Prussia, Pa. Slaoui, who is 55 and has a Ph.D. in molecular biology and immunology, was instrumental in developing the first malaria vaccine and is considered one of the most influential executives in the pharmaceutical industry.

    “When Kris came to me in early 2012 with this idea of vagus nerve stimulation,” Slaoui told me, “I was like: C’mon? You’re gonna give a shock and it changes the immune system? I was very skeptical. But finally I agreed to visit Kevin’s lab. I wanted the data, the evidence. I don’t like hot air.” He went to Tak, the lead scientist for the trials. “I asked him, ‘Paul-Peter, is it really real?’ ”

    SetPoint Medical’s new neural implant (currently being tested on animals).

    After getting an endorsement from Tak, who is now Glaxo’s global head of immuno-inflammation research, Slaoui committed to financing SetPoint. The investment was modest, though, because he felt that Tracey’s device was “just a starting point. It was still very broad — you touch the vagus nerve, you touch most of your viscera. We had wanted something very specific.” What he didn’t want was “the bulldozer approach” that characterizes already existing stimulators for treating Parkinson’s, chronic pain and epilepsy. (Pacemakers differ because they stimulate muscle, not nerves.) These devices are indiscriminate, blasting electricity into billions of neurons and hoping for the best. As Slaoui saw it, SetPoint’s stimulator was a primitive forerunner to “a device that reads your electrical impulses and sees when something is wrong, then corrects what needs correcting.”

    In 2006, Slaoui continued, “when I became chairman of R. & D., R. & D. was a liability to this company. We were spending lots of money and not producing new molecules for new medicines. I had to acknowledge that the current way of doing R. & D. wasn’t likely to be successful.” Four years later, Slaoui put together a 14-member think tank and discussed, among other topics, the Human Brain Project. The multinational endeavor, directed by the neuroscientist and Fulbright scholar Henry Markram, at the Swiss Federal Institute of Technology in Lausanne, is trying to create a computer simulation of the human brain. That got Slaoui “thinking about electrical signaling, an opportunity to make medicine — a therapeutic intervention — that’s super highly specific in terms of its geographic position. I’m going to go to the nerve that goes to your kidney and nowhere else, and only to your left kidney, and to a particular area of the left kidney.”

    That degree of precision would address one of Slaoui’s major criticisms of conventional drugs: They flood the body, and then doctors have to hope that they will perform only where they’re supposed to. “It is really difficult to design a molecule that will only interact where you want it, because it goes everywhere.” The upshot, usually: side effects. Bioelectronics could potentially eliminate those, as well as the costly redundancy involved in the drug-discovery process, in which every promising molecule must be independently evaluated. “There is very little that is transposable from one molecule to the next,” Slaoui said. “You have to redo everything.” Bioelectronics attracted him, he says, because “95 percent of the hardware is the same,” no matter what disease it treats.

    So Slaoui found himself working for a drug company while devoting himself to the idea of treating illness without drugs. In July 2012, he and Famm toured Markram’s facilities in Lausanne. There Markram showed them a 3-D digital visualization on a giant screen of 100,000 synapses actively firing in a mouse brain.

    At that moment, Famm says, he and Slaoui realized they were “biting off too much.” Slaoui and Famm concluded that starting with the brain — which seemed logical, given that it’s the body’s C.P.U. — could take decades to yield viable treatments. The human brain’s circuitry, with 100 billion neurons, seemed far too complex. “Why don’t we just skip the brain and go straight to the organs?” Slaoui suggested.

    Right then, Slaoui said, “we decided to focus on the peripheral nervous system.” The peripheral nerves link the brain and spinal cord (the central nervous system) to the organs and limbs. Rather than try to fathom the brain — a black box, basically, with its 100 trillion neural connections — Slaoui proposed that they put “an interface between a nerve and the organ with an electrical device.” To eavesdrop on a telephone call, his thinking went, you don’t tap into the switching center and search for the conversation. You go to the line nearest the caller’s location. Compared with the brain, the cablelike bundles that are the peripheral nerves contain vastly fewer fibers — hundreds versus billions.


    When I joined Famm in Philadelphia in February, he referred to his role as Glaxo’s bioelectronics chief as “like being a missionary.” Famm, who lives in London, was in the U.S. to attend half a dozen meetings with bioelectronics researchers. His challenge is coaxing those from disparate disciplines to embrace a singular vision. Whereas drug discovery primarily involves like-minded thinkers — molecular biologists, chemists, geneticists — bioelectronics calls for alliances between experts in fields that in many cases have little to do with medicine — nanotech, optics, electrical engineering, materials science, computer programming, wireless networking and data mining. At the moment, Famm is focused on getting what he called a “transdisciplinary” group of scientists to agree on how to solve two key technical challenges.

    The first is shrinking the hardware. It must be small enough to attach to virtually any nerve yet still have enough battery power and circuitry to run algorithms that generate the patterns of electrical impulses needed to treat various diseases. At the Charles Stark Draper Laboratory in Cambridge, Mass., we met with a team working on miniaturization. Draper is best known for internal navigation systems that guide things like ballistic missiles and spaceships. Bryan McLaughlin, who directs bioelectronics development at Draper, showed me the latest prototype mock-up — a dime-size implant. It’s small, he said, but not nearly small enough. McLaughlin wants to get its electrodes, microprocessor, battery and a wireless transmitter into a device no larger than a jelly bean. “It’s also important to make it closed-loop, with the ability to read and write to the nervous system.” The goal, in other words, is to end up with something that can continuously monitor a patient and then dispense bioelectronic therapy as needed.

    The second challenge is devising a method to make sense of signals emanating simultaneously from hundreds of thousands of neurons. Accurate recording and analysis are essential to bioelectronics in order for researchers to identify the discrepancies between baseline neural signals in healthy individuals and those produced by someone with a particular disease. The conventional approach to recording neural signals is to use tiny probes with electrodes inside called patch clamps. A prostate-cancer researcher, for example, could attach patch clamps to a nerve linked to the prostate in a healthy mouse and record the activity. The same thing would be done with a mouse whose prostate had been genetically engineered to produce malignant tumors. Comparing the output from both might allow the researcher to determine how the neural signals differ in cancerous mice. From such data, a corrective signal could be programmed into a bioelectronic device to treat the cancer.
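    The comparison step described above can be illustrated with a minimal sketch. This is not any researcher's actual software; the function name, the data and the spikes-per-bin representation are all invented for illustration. The idea is simply: record the same nerve in a healthy and a diseased subject, then flag the time bins where the signals diverge — candidate targets for a corrective stimulation pattern.

```python
# Hypothetical sketch: compare neural recordings (spike counts per time bin)
# from a healthy subject and a diseased one, and report where they diverge.

def signal_discrepancy(healthy, diseased, tolerance=2):
    """Return (bin index, healthy count, diseased count) for every time bin
    where the two recordings differ by more than `tolerance` spikes."""
    return [(i, h, d)
            for i, (h, d) in enumerate(zip(healthy, diseased))
            if abs(h - d) > tolerance]

healthy_trace  = [3, 4, 3, 5, 4, 3, 4]   # spikes per bin, healthy mouse
diseased_trace = [3, 4, 9, 5, 11, 3, 4]  # same nerve, tumor-bearing mouse
print(signal_discrepancy(healthy_trace, diseased_trace))
# → [(2, 3, 9), (4, 4, 11)]
```

In a real system the divergent bins would feed into the design of the corrective signal programmed into the implant; here they are just printed.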

    But there are drawbacks to using patch clamps. They can sample only one cell’s activity at a time, and therefore fail to gather enough data to see the big picture. As Adam E. Cohen, who teaches chemistry and physics at Harvard, puts it, “It’s like trying to watch an opera through a straw.”

    Cohen, an expert in an emerging field called optogenetics, thinks he can overcome the limitations of the patch clamps. His research is trying to use optogenetics to decipher the neural language of disease. “Getting patch clamps into a single [neuron] is extremely slow and laborious — about an hour per cell,” Cohen told me when I visited his lab recently. “The bigger problem is that [neural] activity comes not from the voices of individual neurons but from a whole orchestra of them acting in relation to each other. Poking at one at a time doesn’t give you the global view.”

    Optogenetics arose out of a series of developments in the 1990s. Scientists knew that proteins, called opsins, in bacteria and algae generated electricity when exposed to light. Optogenetics exploits this mechanism. Opsin genes are inserted into the DNA of a harmless virus, which is then injected into the brain or a peripheral nerve of a test subject. By choosing a virus that prefers some cell types over others, or by altering the virus’s genetic sequence, researchers can target specific neurons — cold- or pain-sensing, for example — or regions of the brain known to be responsible for certain actions or behaviors. Next, an optical fiber — a spaghetti-thin glass cable that transmits light from its tip — is inserted through the skin or skull to the site of the virus. The fiber’s light activates the opsin, which in turn conducts an electrical charge that forces the neuron to fire. Researchers have already controlled mouse behavior with optogenetics — inducing sleep and aggression on command.

    Instead of drugs, says the man who runs GlaxoSmithKline's bioelectronics research and development, ‘the treatment is a pattern of electrical impulses. The information is the treatment.’

    Before opsins can be used to activate neurons involved in specific ailments, however, scientists must determine not only which neurons are responsible for a particular disease but also how that disease communicates with the nervous system. Like computers, neurons speak a binary language, with a vocabulary based on whether their signal is on or off. The specific sequence, interval and intensity of these on-off shifts determine how information is conveyed. But if each disease can be thought of as speaking its own language, then a translator is needed. What Cohen and others recognized was that optogenetics can do that job. So Cohen reverse-engineered the process: Instead of using light to activate neurons, he used light to record their activity.

    Cohen showed me his “Optopatch” machine. It consisted of red and blue lasers, mirrors, lenses, a high-speed digital camera, a video projector, a microscope and several quiet cooling fans. After he turned it on, a postdoctoral fellow who works in his lab, Shan Lou, inserted a petri dish under its microscope. The dish contained 11 live neural cells from mice, harvested from dorsal-root ganglia, which relay sensory input to the brain. Lou added a few drops of capsaicin extract, the irritant in pepper spray, and then turned the camera on for 14 seconds. In that brief period, it snapped 7,000 frames, totaling 12 gigabytes of data. To analyze it, Cohen had written software that searches for patterns by employing techniques developed for digital voice and face recognition. “We also use algorithms and optical tricks derived from astrophysics,” Cohen said. Seconds later, an analysis appeared on Lou’s computer screen. Three of the 11 cells had been identified as firing in response to the capsaicin, indicating that they were pain-sensing neurons. It would have taken Cohen more than a day to record and make sense of that cellular information with a patch clamp. This sort of effort was a step, he said, “toward imaging large numbers of neurons in parallel, hundreds, perhaps thousands.”
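    The pattern search at the heart of that analysis can be sketched in miniature. The sketch below is not Cohen's software — his pipeline processes thousands of video frames with voice- and face-recognition techniques — but it shows the basic move: each cell's brightness trace is compared before and after the stimulus, and cells whose activity jumps are flagged as responders. All names, thresholds and data here are invented.

```python
# Toy sketch: flag cells whose recorded brightness rises sharply after a
# stimulus, from per-frame brightness traces (one list per cell).
from statistics import mean

def responsive_cells(traces, stimulus_frame, threshold=2.0):
    """Return indices of cells whose mean post-stimulus brightness is at
    least `threshold` times their pre-stimulus baseline."""
    hits = []
    for i, trace in enumerate(traces):
        baseline = mean(trace[:stimulus_frame])
        after = mean(trace[stimulus_frame:])
        if baseline > 0 and after / baseline >= threshold:
            hits.append(i)
    return hits

# Three simulated cells; only cell 1 "fires" after the stimulus at frame 4.
traces = [
    [1.0, 1.1, 0.9, 1.0, 1.0, 1.1, 0.9, 1.0],   # quiet
    [1.0, 0.9, 1.1, 1.0, 3.5, 3.8, 3.6, 3.7],   # responds (pain-sensing)
    [0.8, 1.0, 0.9, 1.1, 1.0, 0.9, 1.1, 1.0],   # quiet
]
print(responsive_cells(traces, stimulus_frame=4))  # → [1]
```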

    Cohen is collaborating with Ed Boyden, a professor of neuroscience at M.I.T. and a pioneer in optogenetics, to develop the so-called closed-loop implant envisioned by Bryan McLaughlin at Draper Labs. Optogenetics, Boyden told me, enables him to “aim light at some subset of cells [without] activating all the stray cells nearby.”

    Opsins might point the way to future treatments for all kinds of diseases, but researchers will most likely have to develop bioelectronic devices that don’t use them. Using genetically engineered viruses is going to be tough to get past the F.D.A. The opsin technique hinges on gene therapy, which has had limited success in clinical trials, is very expensive and seems to come with grave health risks.

    Cohen mentions two alternatives. One involves molecules that behave like opsins; another uses RNA that converts into an opsinlike protein — because it doesn’t alter DNA, it doesn’t have the risks associated with gene therapy. Neither approach is very far along, however. And “you still face the problem of getting the light in,” he says. Boyden is developing a brain implant with a built-in laser, but Cohen believes an external light source is more likely for most bioelectronics applications.

    Surmounting these sorts of technical hurdles “might take 10 years,” Famm figures. That seems somewhat optimistic if you consider Glaxo’s investment so far in bioelectronics. Melinda Stubbee, the company’s director of communications, says it has spent roughly $60 million in the area, a pittance compared with its $6.5 billion in total R. & D. expenditures in 2013. Slaoui, defending the number, said, “Funding of R. & D. is like an investment” — money only flows toward bankable ideas. While he thinks the area shows promise, he seems to want independent researchers to do the legwork before Glaxo buys in further.

    ‘I think this is the industry that will replace the drug industry,’ says a pioneer in bioelectronics.

    At one point, Famm referred to detractors who say bioelectronics is “too risky, will take too long and is maybe even a bit bonkers.” In trying to find some of them, I contacted a number of financial analysts who track Glaxo and the pharmaceutical industry. One, Mark Clark, at Deutsche Bank, said to me in an email: “I know next to nothing about this early-stage technology! I am prepared to bet you will not find a single Glaxo analyst that knows anything about this! Research technologies were a vogue thing to be expert on in the ’90s and tech-bubble years, but we only care about drugs that are actually in the clinical pipeline these days, not how they get there — to be brutally blunt!”

    In short, the fledgling bioelectronics industry is nowhere near mature enough for analysts to make meaningful estimates about its revenue potential. But people like Clark will certainly begin paying closer attention if bioelectronics starts to capture even a sliver of the lucrative pharmaceutical market. Drug sales for rheumatoid arthritis alone were $12.3 billion in 2012. That looks like a big opportunity to an outfit like SetPoint.

    Yet if large numbers of patients someday choose bioelectronics over drugs, another issue awaits resolution: security. Bioelectronics devices will feature wireless connectivity so they can be fine-tuned and upgraded, “just like the software on your iPhone,” Famm says. And wireless means hackable, an unsettling fact that worries two experts on medical-device security: Niraj Jha, a professor of electrical engineering at Princeton University, and Anand Raghunathan, who runs the Integrated Systems Laboratory at Purdue.

    Fears of medical devices being hacked aren’t new. In 2007, Dick Cheney’s cardiologist disabled the wireless functionality in the former vice president’s defibrillator to prevent terrorists from trying to stop his heart. Jha and Raghunathan, along with the lead author, Chunxiao Li, detailed how this might be accomplished in a seven-page paper they wrote, “Hijacking an Insulin Pump,” published in June 2011. The paper described a hack they performed in their lab using inexpensive, off-the-shelf hardware.

    According to Jha and Raghunathan, there are no known cases of malicious attacks on medical devices. Nevertheless, Raghunathan says, “Society should be warned about these possibilities.” The Department of Homeland Security is no doubt worried, addressing the potential threat in an alert it issued last June. In August, the F.D.A. offered guidelines to medical-device manufacturers, recommending “wireless protection” to reduce “risks to patients from a security breach.” Whether bioelectronics developers do anything to thwart hacking (the F.D.A. guidelines are not mandatory) may ultimately depend on whether Jha and Raghunathan’s fears are realized.

    Draper’s McLaughlin doesn’t dismiss these concerns but notes that there is no “incentive for device companies to do anything about security.” He adds: “Nobody has been sued. No patient has died. But the first event that occurs with one of these devices — companies will jump on it and create secure platforms.”

    SetPoint’s chief technology officer is Mike Faltys, a medical engineer who was integral to designing the modern cochlear implant. Faltys worked for six years out of his garage, first re-engineering an existing electrical stimulator, used to stop seizures, that became the device implanted in patients in SetPoint’s trial, and more recently finishing a significantly more advanced implantable unit that he calls “the microregulator.”

    Housed in a pod shaped like a hot-dog bun and the size of a multivitamin, the microregulator is entirely self-contained — onboard battery, microprocessor and electrodes are integrated into a single unit. It can be wirelessly recharged, and adjusted and updated with an iPad app. The surgery to clamp it onto the vagus nerve will take about 20 minutes, and once in place, it will provide pain relief to a rheumatoid-arthritis patient for a decade or more before it needs servicing.

    On one occasion during my travels with Famm, I got to hold SetPoint’s newfangled microregulator. For now, it’s capable of transmitting only very crude signals to the nervous system — more like grunts and groans than the precise vocabulary that Slaoui envisions for bioelectronic therapies. Even so, the microregulator felt elegant and powerful and promising in my palm. “A patient gets a device like this implanted once for one disease, and they’re done,” Tracey says. “No prescriptions, no medicines, no injections. That’s the future. That’s what gets me out of bed in the morning.”

    Source : nytimes


    Active Member
    Ant groups 'more efficient than Google' in processing data, new study finds

    The dedication and stamina of the worker ant, toiling through the summer months and preparing for winter, were celebrated in Aesop’s Fables – in contrast to the lazy, singing grasshopper, unready for the hardships ahead.

    Now research shows that ants don’t just flourish because they work hard and will slavishly sacrifice themselves for the collective. Their success is also due to their group ability to process information “far more efficiently than Google” in the daily search for food, according to scientists.

    A major behavioural mathematics study, which could also have ramifications for how we understand human behaviour on the internet, used complex computer modelling to reveal how ants bring order to chaos by creating “highly complex networks” to govern their actions.

    It found that not only are ants “surprisingly efficient”, but they are able to deploy ingenious navigation strategies to divide themselves between “scout” and “gathering” ants during “complex feed-search movements”.

    The joint Chinese-German study, which is published in Proceedings of the National Academy of Sciences, found that while individual “scout” ants may seem “chaotic” in their movements, they are leaving a trail of pheromones to allow following “gathering” ants to refine and shorten their journeys to food sources in the vicinity of the colony.

    As this journey is repeated again and again by worker ants carrying their loads, a “self-reinforcing effect of efficiency” creates a shorter trail, saving the colony the time and energy of “continued chaotic foraging”. “While single ants can appear chaotic and random-like, they very quickly become an ordered line of ants crossing the woodland floor in the search for food,” co-author of the study Professor Jurgen Kurths told The Independent.

    He added: “That transition between chaos and order is an important mechanism and I’d go so far as to say that the learning strategy involved in that, is more accurate and complex than a Google search. These insects are, without doubt, more efficient than Google in processing information about their surroundings.”
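    The self-reinforcing trail mechanism the study describes can be sketched in the style of classic ant-colony-optimisation models. This is not the paper's actual mathematics — the routes, deposit rule and parameters below are invented — but it shows the transition Kurths describes: ants start by choosing routes almost at random, shorter trips lay down pheromone faster than it evaporates, and the colony converges on the short route.

```python
# Toy pheromone-trail model: two routes to food, probabilistic choice
# weighted by pheromone, deposits inversely proportional to route length,
# and evaporation each round. Positive feedback turns "chaos" into order.
import random

def forage(lengths, ants=50, rounds=100, evaporation=0.1, seed=1):
    random.seed(seed)
    pheromone = [1.0] * len(lengths)          # start unbiased ("chaos")
    for _ in range(rounds):
        total = sum(pheromone)
        for _ in range(ants):
            # Each ant picks a route with probability ∝ pheromone level.
            r, acc = random.random() * total, 0.0
            for i, p in enumerate(pheromone):
                acc += p
                if r <= acc:
                    pheromone[i] += 1.0 / lengths[i]   # shorter ⇒ stronger
                    break
        pheromone = [p * (1 - evaporation) for p in pheromone]
    total = sum(pheromone)
    return [p / total for p in pheromone]     # each route's trail share

shares = forage(lengths=[10.0, 25.0])
print(shares[0] > shares[1])   # the 10-step route dominates the trail
```

The evaporation term matters: without it, early random fluctuations are never forgotten and the colony can lock onto a bad route — the model's analogue of "continued chaotic foraging."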

    Previous studies had shown that worker ants assigned the most dangerous food-gathering tasks tended to be older, less valuable insects. This suggested that ant colonies were reluctant to risk their younger, more productive members.

    However, the new study reveals that older ants are valued for their increased knowledge of their nest’s surroundings.

    According to Professor Kurths, the mathematical model used in the study – which converted well-known ant behaviour patterns into equations and algorithms – is equally applicable to other animals that share homing instincts, such as albatrosses.

    It could even be used to provide a “new perspective” on behavioural patterns of humans in areas as diverse as transportation systems and how we browse the internet.

    The study comes a week after a team from the Georgia Institute of Technology revealed that ants’ skills at building stable tunnels in loose sand could aid in the design of a new generation of search-and-rescue robots.

    The team used high-speed cameras to observe how fire ants use their antennae as extra limbs to catch themselves when they fall — a behaviour that could be reproduced in fledgling rescue technologies.

    source independent
    Elvis left the building

    Legendary Member
    if they develop weapons we're doomed lol