Action! Directing your Dreams may soon be Possible


It’s always been one of our dreams… to be able to direct our dreams. It’s known as lucid dreaming: a dream state in which you are totally or partially aware that you are dreaming, and able to act and direct yourself and others within it. Imagine being the star, director and scriptwriter of your own ultra-realistic dream-film!

Scientists have discovered that it is possible to induce lucid dreaming in sleepers by applying mild electrical currents to their scalps, according to a recent study reported in the journal Nature Neuroscience.

Professor J. Allan Hobson of Harvard Medical School, who co-authored the paper, said:

“The key finding is that you can, surprisingly, by scalp stimulation, influence the brain. And you can influence the brain in such a way that a sleeper, a dreamer, becomes aware that he is dreaming.” 

It is a continuation of previous research in this field led by Dr Ursula Voss of Johann Wolfgang Goethe-University in Germany, who said:

“Lucid dreaming is a very good tool to observe what happens in the brain and what is causally necessary for secondary consciousness.”

Prof. Hobson also thought it could have medical benefits:

Understanding lucid dreaming, he argues, is absolutely crucial as a model for mental illness. “I would be cautious about interpreting the results as of direct relevance to the treatment of medical illnesses, but [it’s] certainly a step in the direction of understanding how the brain manages to hallucinate and be deluded.”

By examining sleepers’ rapid eye movements (REM, the telltale sign of dreaming sleep) and their brainwaves across a range of frequencies, scientists have found that lucid dreamers show a shift towards a more “awake-like” state in the frontal and temporal parts of the brain, with the peak increase in activity occurring at around 40Hz.


The study involved 27 volunteers, none of whom had experienced lucid dreaming before. The researchers waited until the volunteers were in uninterrupted REM sleep before applying electrical stimulation to the frontal and temporal positions of their scalps. The stimulation was applied at a variety of frequencies between 2 and 100Hz, but neither the experimenter nor the volunteer knew which frequency was used, or even whether a current was applied at all. Five to ten seconds later the volunteers were roused from their sleep and asked to report on their dreams. Brain activity was monitored continuously throughout the experiment.

The results showed that stimulation at 40Hz (and, to a lesser extent, at 25Hz) produced an increase in brain activity at around the same frequency in the frontal and temporal areas. The researchers found that such stimulation, more often than not, induced an increased level of lucidity in the sleepers’ dreams.
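
To make that 40Hz finding concrete, here is a minimal sketch, not the study’s actual analysis pipeline, of how band power around 40Hz can be measured in an EEG-like signal. The sampling rate, the band edges and the toy signal itself are all illustrative assumptions:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Average spectral power of `signal` between lo and hi Hz (via FFT)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[(freqs >= lo) & (freqs <= hi)].mean()

fs = 250                        # sampling rate in Hz (an assumption)
t = np.arange(0, 10, 1.0 / fs)  # ten seconds of signal
# Toy EEG trace: background noise plus a weak 40 Hz component, standing in
# for the gamma-band activity the study reports.
eeg = np.random.randn(t.size) + 0.5 * np.sin(2 * np.pi * 40 * t)

gamma = band_power(eeg, fs, 37, 43)  # narrow band around 40 Hz
alpha = band_power(eeg, fs, 8, 12)   # alpha band, for comparison
print(f"~40 Hz power: {gamma:.1f}   8-12 Hz power: {alpha:.1f}")
```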

The authors suggest that triggering lucid dreaming in sleepers might enable them to take control of their nightmares: returning soldiers suffering from PTSD (post-traumatic stress disorder), for example.

But for others, the chance to be “awake within a dream” may be possible… perhaps a dream come true?



The AI that Knows What You’re Thinking…


If you are a scientist specialising in the development of AI (artificial intelligence), then you are in demand, big time! Facebook, Google and other leading Silicon Valley tech companies are jockeying to hire top scientists in the field, while spending heavily on a quest to make computers think more like people. Forget humanoid robots doing chores… at least for the moment. The race is currently on for computers that understand exactly what you want, perhaps even before you’ve asked them. The aim is to make the human-computer interface more even, more compatible and more intuitive. But it could also mean that the AI will know what you’re thinking… and that’s a bit freaky!

Of course, we already have AI programs that can recognise images and translate human speech. But tech researchers and scientists want to build systems that can match the human brain’s ability to handle more complex challenges: intuitively predicting traffic conditions while steering automated cars or drones, for example, or grasping the intent behind written texts and spoken messages rather than interpreting them literally and slavishly.

Google paid a reported US$400 million in January to buy DeepMind, a British start-up said to be working on artificial intelligence for image recognition, e-commerce recommendations and video games. DeepMind had also drawn interest from Facebook.


The ultimate goal is something closer to “Samantha,” the personable operating system voiced by actress Scarlett Johansson in the film “Her”.


Already, Google has used artificial intelligence to improve its voice-enabled search and Google Now, as well as its mapping and self-driving car projects. Speaking at a TED technology conference last month, Google CEO Larry Page said:

“I think we’re seeing a lot of exciting work going on, that crosses computer science and neuroscience, in terms of really understanding what it takes to make something smart.”

He then showed videos from Google and DeepMind projects in which computer systems learned to distinguish cats from other animals and to play games, all without detailed programming instructions.

Google and Facebook both hope to do more with “deep learning”, in which computer networks teach themselves to recognise patterns by analysing vast amounts of data, rather than relying on programmers to tell them what each pattern represents. The networks tackle problems by breaking them into a series of steps, the way layers of neurons work in a human brain.
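
To see what “layers teaching themselves” means in the smallest possible form, here is a toy sketch, assuming nothing more than numpy: a two-layer network that learns the XOR pattern from examples rather than from hand-written rules. It is a classroom illustration, nothing like Google’s or Facebook’s actual systems:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the XOR pattern, which a single layer cannot learn
# but a network with one hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: input -> hidden, hidden -> output.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: each layer transforms the previous layer's output.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: nudge every weight to reduce the error (gradient descent).
    delta_out = (out - y) * out * (1 - out)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ delta_out
    W1 -= 0.5 * X.T @ delta_hid

print(out.round(2))  # approaches [[0], [1], [1], [0]]: learned from data, not programmed
```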


For some, a powerful artificial brain that knows your preferences and habits and anticipates your wants and needs is a bit frightening, and companies will need to consider ethics and privacy as they develop new services. The idea is to help us humans, not to cause us anxiety. If it all gets too much, you can always reach for the power switch and turn off the juice… but will the AI have anticipated that already? Click!

 


Stephen Hawking and his fear of Artificial Intelligence


In the blue corner we have Stephen Hawking, representing mankind: world-renowned physicist, presenter, philosopher and cosmologist, author of the blockbusting book “A Brief History of Time”, and owner of a brain the size of a small planet.


In the red corner, a computer. Or a computer programme, perhaps, representing AI: artificial intelligence.

Seconds away, Round One. Well, not just yet: this is to be a future bout, in the not-too-distant future. Humankind versus AI. Some would say it should never be a contest at all. We humans invented AI, and we can control it. It is our baby, our spawn of the future, and it can never bite the hand that feeds… or can it? Stephen Hawking thinks AI is a threat to all our futures…

Stephen Hawking, in an interview with a UK Sunday newspaper, is quoted thus:

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last.”

Hawking thinks we are moving too quickly, and too far, without considering the possible repercussions. From digital personal assistants to self-driving cars, he believes we’re on the cusp of the kinds of artificial intelligence that were previously exclusive to science fiction films. Shades of 2001: A Space Odyssey’s HAL 9000, or I, Robot? The possibility of smart robots roaming the streets is not so far-fetched. He basically asks: who will control AI when AI becomes programmed to control itself?


It’s not just that there may be massive unemployment due to robotisation. If a robot is sufficiently intelligently programmed to consider itself “aware”, or even “alive”, then why would it allow anyone to control it, or worse still, switch it off? If it wouldn’t, then we could be on the way to a global conflict between humans and robots. Pure fantasy? Stephen Hawking doesn’t think so. Perhaps we should start dumbing down drones already…


 


Immortality. Where to Put all those Accumulating Memories?


Let’s imagine (or scroll forward if you think it definitely will happen) that we manage to cure ageing, in both the body and the mind. We have immortality. Nice. But how would we cope with the infinitely accumulating number of memories and things to remember?

Memory is a funny old thing. We get more forgetful as we get older, yet certain events, numbers, faces, facts and triggers stay lodged forever in our memories. Even though brain cells die at a quicker rate as we age, we must assume that immortality will include the brain’s ability to regenerate itself, so our capacity to think remains unimpaired because our brains keep functioning.

But what of memory? Research indicates that there is a certain part of the brain where memories reside, and it is of finite size. As more and more experiences and memories accumulate throughout the centuries and aeons of our immortality, that part of the brain will simply become clogged, full, and unable to absorb any more information. It would also be difficult, if not impossible, to recall information, because there would be so many full rooms, corridors and halls, all full of filing cabinets, full of folders, full of papers.


Your brain can keep all that stuff organised for a while (say, the span of most of a normal human lifetime), but it’s not as if you can go into your brain and simply delete files, like cleaning up a computer’s hard drive.

Your immortal life and experiences may be infinite, but your brain’s ability to store and recall them is not. A relatively short way into your immortality, perhaps as early as 300 years old, your brain will be chock-a-block, piled up with information and junk like the home of a habitual hoarder who can never clean up or throw anything out.
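
How might one arrive at a figure like 300 years? Here is a deliberately crude back-of-envelope sketch. Every number in it is an illustrative assumption (estimates of the brain’s storage capacity are hotly disputed), and changing them swings the answer from centuries to many millennia; only very generous retention rates approach the few centuries imagined above:

```python
# Toy back-of-envelope: how long until an immortal's memory "fills up"?
# Every number below is an illustrative assumption, not a measured fact.
PETABIT = 1e15
brain_capacity_bits = 20 * PETABIT         # a popular (disputed) ~2.5-petabyte estimate
waking_seconds_per_year = 16 * 3600 * 365  # sixteen waking hours a day

# Try a few guesses at how much experience the brain retains per second.
for bits_per_second in (1e4, 1e5, 1e6):
    years = brain_capacity_bits / (bits_per_second * waking_seconds_per_year)
    print(f"retaining {bits_per_second:>9,.0f} bits/s -> full after {years:>9,.0f} years")
```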


The only possible solution would appear to be to connect to technology that could store, sort and recall all that information, and perhaps delete it too. “Total Recall”, anyone? So perhaps at 300 years old and beyond you’ll be permanently wired in to a data/memory dump-and-recall system. But your biological brain’s ability to process and retain the information would not be expanded once it had received the input from the memory card or whatever. And of course you might forget that you have stored the information remotely, and so the whole system falls down again!

Immortality? Not as easy as it sounds!


Paraplegic’s Robotic Exoskeleton to take First Kick of World Cup


There are some World Cup football teams that, you may think, would need a miracle to win the coveted trophy. But a miracle, of sorts, will take place on the pitch of the Arena Corinthians in São Paulo at the opening ceremony of the World Cup at 5pm on 12 June (local time): a young Brazilian will take a kick of a football on the centre spot.

But this is no ordinary young person, nor is the kick trivial. The kicker, yet to be chosen from a shortlist of nine people aged between 20 and 40, will be a paraplegic. They will rise from their wheelchair, walk to the centre spot and kick the ball. How?

It’s down to Miguel Nicolelis and his team of neuro-engineers and scientists at Duke University in North Carolina. And if the event works as intended, it should spell the end for wheelchairs and the arrival of mind-controlled exoskeleton robots.


The mind-controlled robotic exoskeleton is a complex robotic suit built from lightweight alloys and powered by hydraulics. When a paraplegic person straps themselves in, the machine does the job that their leg muscles no longer can. But there are no buttons, levers or controls to tell the robot what to do. Only the human brain.

The exoskeleton is the culmination of years of work by an international team of scientists and engineers on the Walk Again project. The robotics work was coordinated by Gordon Cheng at the Technical University of Munich, and French researchers built the exoskeleton itself. Nicolelis’ team focused on what many say is the most difficult part: ways to read people’s brainwaves and use those signals to control robotic limbs.

To operate the exoskeleton, the person is helped into the suit and given a cap to wear that is fitted with electrodes to pick up their brainwaves. These signals are passed to a computer worn in a backpack, where they are decoded and used to move the hydraulic drivers on the suit. A battery powers everything, with a two-hour life before it needs recharging.

Nicolelis has released a video showing the robotic exoskeleton in action.

The operator’s feet rest on plates fitted with sensors that detect when contact is made with the ground. With each footstep, a signal is transmitted to a vibrating device sewn into the forearm of the wearer’s shirt. The device fools the brain into thinking that the sensation came from the foot. In virtual-reality simulations, patients felt that their legs were moving and touching something.
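
As a way of fixing the ideas, here is a hypothetical sketch of that control loop in code. None of these function names or numbers come from the Walk Again project; they are placeholders for the stages described above (electrode cap, backpack decoder, hydraulic drivers, foot-plate sensors, forearm vibrator):

```python
import random

def read_eeg_cap():
    """Stand-in for sampling the electrode cap's brainwave signals."""
    return [random.gauss(0, 1) for _ in range(64)]   # 64 channels of fake data

def decode_intent(eeg_sample):
    """Stand-in for the backpack computer decoding 'step' vs 'stand'."""
    return "step" if sum(eeg_sample) > 0 else "stand"

def drive_hydraulics(command):
    """Stand-in for actuating the suit's hydraulic leg drivers."""
    print(f"hydraulics: {command}")

def foot_plate_contact():
    """Stand-in for the pressure sensors under the operator's feet."""
    return random.random() > 0.5

def vibrate_forearm():
    """Stand-in for the haptic patch sewn into the shirt sleeve."""
    print("buzz: foot has touched the ground")

# A few cycles of the brain -> suit -> brain loop (the rate is an assumption).
for _ in range(10):
    intent = decode_intent(read_eeg_cap())
    drive_hydraulics(intent)
    if intent == "step" and foot_plate_contact():
        vibrate_forearm()   # close the sensory loop back to the wearer
```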

Last month, Nicolelis and his colleagues went to football matches in São Paulo to check whether mobile-phone radiation from the crowds might interfere with the suit. Electromagnetic waves could make the exoskeleton misbehave, but the tests were reassuring.


This is ground-breaking robotics, artificial intelligence and mind-control technology all rolled into one. Let’s keep our fingers crossed that we will all witness the miracle first kick of the World Cup on 12 June.


33rd International Artificial Intelligence Conference 10-12 December 2013


AI-2013 is the thirty-third annual international conference of the influential and respected British Computer Society’s Specialist Group on Artificial Intelligence (SGAI), which will (again) be held at Peterhouse College, Cambridge, England.

As in previous years, the Conference is aimed at those who wish to keep up with news and views of recent developments, understand how other groups are applying the technology, and exchange ideas with leading international experts in the field. It will continue to be a meeting place for the international artificial intelligence community. The scope of the conference includes the whole range of AI technologies and application areas. Its principal aims are to review recent technical advances in AI and to show how this leading-edge technology has been applied to solve business and other problems.

A feature of previous conferences has been what is known as “the technical stream”, where the best of the most recent developments in AI are presented and discussed. This stream covers knowledge-based systems, machine learning, verification and validation of AI systems, neural networks, genetic algorithms, natural language understanding and data mining. One of the most keenly anticipated lectures will be on neurodynamics and creativity, given by Prof. Murray Shanahan (Imperial College London).

The conference is a meeting of minds (and sometimes of healthy disagreements) about all aspects of AI. We anticipate some headline AI material being published before the end of the year as a result of this conference. Watch this space!


Computers Soon to Understand Human Language?


Do you remember the computer HAL 9000 in Stanley Kubrick’s 1960s science fiction film “2001: A Space Odyssey”? Forget the fact that it went a bit OTT and mission-crazy towards the end; one of the interesting things about HAL (which stood for Heuristically programmed ALgorithmic computer) was that it could understand and converse in English. No need for input via a keyboard, or for translation into machine code. Think “lexical semantics”.

Ever since the release of the film, linguists and computer scientists have tried to get computers to understand human language by programming the semantics of language as software. We already have programmes that can understand and distinguish numbers and certain words on our mobiles, when we pay bills over the phone, and even in computer games, but a computer that can understand and be fluent in a human language has eluded us.

That may be changing. Katrin Erk, a linguistics professor at the University of Texas at Austin, is using supercomputers to develop a new method for helping computers learn natural language.


Instead of hard-coding human logic or deciphering dictionaries to try to teach computers language, Erk decided to try a different tactic: feed computers a vast body of texts (as an input of human knowledge) and use the implicit connections between the words to create a map of relationships.

Erk says:

“An intuition for me was that you could visualize the different meanings of a word as points in space. You could think of them as sometimes far apart, like a battery charge and criminal charges, and sometimes close together, like criminal charges and accusations (“the newspaper published charges…”). The meaning of a word in a particular context is a point in this space. Then we don’t have to say how many senses a word has. Instead we say: ‘This use of the word is close to this usage in another sentence, but far away from the third use.'”

I have to say that, as a human, I had some trouble getting my head round that quote! Perhaps we should be looking at how babies learn language and trying to replicate that learning in a computer. But back to Erk’s work: creating a model that can accurately recreate our intuitive ability to distinguish word meanings will require a lot of text and a lot of analytical crunching power.
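
One way to unpack the “points in space” idea is with a toy distributional model. The sketch below uses a few made-up sentences and plain bag-of-words counts, far cruder than Erk’s actual models, to show the legal and electrical senses of “charge” landing at different distances from each other:

```python
import numpy as np

# Three toy contexts for "charge": the first two use the legal sense,
# the third the electrical sense.
contexts = [
    "the newspaper published criminal charges and accusations",
    "the court dropped the criminal charges against him",
    "the battery charge ran low overnight",
]

# A tiny vocabulary and one bag-of-words count vector per context.
vocab = sorted({w for c in contexts for w in c.split()})

def to_vector(sentence):
    words = sentence.split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

v = [to_vector(c) for c in contexts]
print(f"legal vs legal:      {cosine(v[0], v[1]):.2f}")  # closer together
print(f"legal vs electrical: {cosine(v[0], v[2]):.2f}")  # farther apart
```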

Erk again:

“The lower end for this kind of a research is a text collection of 100 million words. If you can give me a few billion words, I’d be much happier. But how can we process all of that information? That’s where supercomputers come in.”

So we need a mega-computer to help us devise a computer that will not only understand us but communicate intelligently with us. If this could be achieved, how close would such a computer be to a sentient entity? What if its first words to us, once we switch on this fully loaded language-conversant computer, are “I’m hungry”! Well, as long as it doesn’t start singing “Daisy, Daisy” and switching off life-support systems… But perhaps HAL 9001 will be better behaved.


Brains Grown in Laboratory

There was a line in the Bonzo Dog Doo-Dah Band’s cover of the song “Monster Mash” in which the question is asked of Igor: “Have you watered the brains?” While a mock schlock-rock song about werewolves, vampires and mummies having a great party rave-up together may not seem predictive, that reference to nurturing brains now looks prescient. That’s because “mini-brains” have been grown in a laboratory, and they may hold the key to diseases such as schizophrenia and autism.

The 4mm-wide biological structures have been made using human stem cells. Can they be used instead of computer CPUs? Can they be used as add-on memory for our own brains? Could they be grown so large that they could serve as substitute, superior brains? Are they alive, and can they think?

Sorry, that’s still science fiction. The mini-brains are incapable of thought and of no use for transplants. So what on earth use are they? Because they share the design of functioning brains, they may be useful for research into the brain and its workings, and in the testing of drugs.

Rather bizarrely, we have been asked to think…

“about the brain as a car, [and] what we have created is a car which has its engine on the roof and the gear box in the trunk. You can study the car parts but you can’t drive it.”


These are the words of Professor Juergen Knoblich of the Institute of Molecular Biotechnology in Vienna, head of the British and Austrian team behind the work. The research was published recently in the journal Nature.

In one experiment, the researchers grew a mini-brain using cells taken from a patient with microcephaly, a disease (the name means “small head”) in which the size of the skull limits the size of the brain, resulting in impaired brain function and limited longevity. They found that the brain grown from microcephaly-affected stem cells was stunted, mimicking the effects of the disease.

Neuroscientist Professor Paul Matthews, from Imperial College London, said the study offered the promise of a ‘major new tool’ for understanding major developmental disorders.

But not everyone is bubbling with enthusiasm for this breakthrough. Dr Dean Burnett, lecturer in psychiatry at the University of Cardiff, said:

“Saying you can replicate the workings of the brain with tissue in a dish is like inventing the abacus and saying you can use it to run the latest version of Microsoft Windows.”

So the bottom line is that, while this is progress, we won’t be assembling Frankenstein monsters, or finding a cure for brain diseases, just yet.


Mind Reading a Step Closer


Harnessed child from “Falling Skies” TV series

It’s been a science fiction standard since the 1930s: the ability to read people’s minds… and then manipulate them to make them do what you want. It’s most often an ability used by evil aliens bent on dominating Earth: think the Midwich Cuckoos, the Daleks’ Robomen and, more recently, the “harnessed” children of Falling Skies. But the ability to read a mind is now one step closer, thanks to a Dutch computer program that can identify what someone is looking at using brain scans.

A team from Radboud University Nijmegen in the Netherlands used image- and shape-recognition software and a specially designed algorithm to assess changes in a person’s brain activity, recorded using functional magnetic resonance imaging (fMRI) technology.

Participants in the tests were shown a series of letters, and using the software the scientists were able to tell from the brain scans exactly which letters they were looking at, and when. During the tests the scientists showed participants letters and then ran the changes that occurred in the brain after each letter through a specially created computer algorithm to identify them. They used fMRI scans to zoom in on changes in small volumes of brain tissue called voxels, each around 2 millimetres across, in the occipital lobe: one of the four main lobes of the cerebral cortex, and the part of the brain that reacts to visual stimuli and processes what the eyes see through the retina.

The algorithm was able to convert the voxels, and the changes in them, into image pixels, making it possible to reconstruct a picture of what the person was looking at at the time of the scan. The model was designed to compare letters, and so could be seen as rather limited, but it could be expanded to other imagery, including something as complex as a person’s face.
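
For a feel of how “voxels in, pixels out” can work, here is a minimal sketch with synthetic data. It learns a linear map from fake voxel responses back to the pixels that produced them, using ridge regression; the Radboud team’s real algorithm and data are of course far more sophisticated, and every size and name below is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_pixels, n_trials = 200, 64, 500   # toy sizes: 8x8 "images"

# Synthetic stand-in data: random images, plus voxel responses generated
# as an unknown linear function of the pixels, with added noise.
images = rng.random((n_trials, n_pixels))
true_map = rng.normal(size=(n_pixels, n_voxels))
voxels = images @ true_map + 0.1 * rng.normal(size=(n_trials, n_voxels))

# Learn the inverse mapping (voxels -> pixels) with ridge regression.
lam = 1.0
W = np.linalg.solve(voxels.T @ voxels + lam * np.eye(n_voxels),
                    voxels.T @ images)

# Reconstruct an unseen "image" from a fresh scan.
test_image = rng.random(n_pixels)
test_scan = test_image @ true_map + 0.1 * rng.normal(size=n_voxels)
reconstruction = test_scan @ W

print(f"pixel-wise correlation with the true image: "
      f"{np.corrcoef(test_image, reconstruction)[0, 1]:.2f}")
```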

OK, this may not be exactly reading people’s thoughts. But if it can be discovered how the brain reacts to non-physical, non-visual stimuli as well, then “telepathic” computer algorithms could be developed, and that would be definitely in the realms of mind-reading.

 

 


The World’s First Stainless Steel Robot… Check Out How it Moves!

This robot has been created by Kawasaki Heavy Industries. The bot has seven degrees of freedom and looks as if it has come straight out of a Terminator movie. But its stainless-steel look has a reason: the robot can be used in the drug-discovery and pharmaceutical fields to automate experiments that use dangerous chemicals. It can be sterilised using hydrogen peroxide gas, which allows it to work in sterile environments.
