AI-2013 is the thirty-third Annual International Conference of the influential and respected British Computer Society’s Specialist Group on Artificial Intelligence (SGAI), which will (again) be held at Peterhouse, Cambridge, England.
As in previous years, the Conference is aimed at those who wish to update themselves with news and views of recent developments, understand how other groups are applying the technology and exchange ideas with leading international experts in the field. It will continue to be a meeting place for the international artificial intelligence community. The scope of the conference includes the whole range of AI technologies and application areas. Its principal aims are to review recent technical advances in AI technologies and to show how this leading edge technology has been applied to solve business and other problems.
A feature of previous conferences has been what is known as “the technical stream”. This is where the best of the most recent developments in AI are presented and discussed. This stream covers knowledge-based systems, machine learning, verification and validation of AI systems, neural networks, genetic algorithms, natural language understanding and data mining. One of the most anticipated lectures will be on neurodynamics and creativity, given by Prof. Murray Shanahan (Imperial College, London).
The conference is where there is a meeting of minds (and sometimes healthy disagreements) about all aspects of AI. We anticipate some headline AI material being published before the end of the year as a result of this conference. Watch this space!
Do you remember the computer HAL 9000 in Stanley Kubrick’s 1960s science fiction film “2001: A Space Odyssey”? Forget the fact that it went a bit OTT and mission-crazy towards the end; one of the interesting things about HAL (which stood for Heuristically programmed ALgorithmic computer) was that it could understand and converse in English. No need for inputting via a keyboard, or translation into machine code. Think “lexical semantics”.
Ever since the release of the film, linguists and computer scientists have tried to get computers to understand human language by programming the semantics of language as software. We already have programs that can understand and distinguish numbers and certain words on our mobiles, when we pay bills over the phone, and even in computer games, but a computer that can understand and be fluent in a human language has eluded us.
That may be changing: Katrin Erk, a linguistics professor at the University of Texas at Austin, is using supercomputers to develop a new method for helping computers learn natural language.
Instead of hard-coding human logic or deciphering dictionaries to try to teach computers language, Erk decided to try a different tactic: feed computers a vast body of texts (as an input of human knowledge) and use the implicit connections between the words to create a map of relationships.
“An intuition for me was that you could visualize the different meanings of a word as points in space. You could think of them as sometimes far apart, like a battery charge and criminal charges, and sometimes close together, like criminal charges and accusations (“the newspaper published charges…”). The meaning of a word in a particular context is a point in this space. Then we don’t have to say how many senses a word has. Instead we say: ‘This use of the word is close to this usage in another sentence, but far away from the third use.'”
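The idea Erk describes is essentially distributional semantics: word usages that occur in similar contexts end up close together in a vector space, and “closeness” can be measured rather than senses being enumerated. As a toy illustration only (the three-sentence corpus, the co-occurrence vectors and the cosine measure below are invented for this sketch and are vastly simpler than anything Erk’s models would use), we can place words in a space built from the company they keep:

```python
# Toy sketch of distributional semantics (not Erk's actual model):
# represent each word as a vector of co-occurrence counts with the
# words around it, so usages with similar contexts sit close together.
from collections import Counter
import math

# A deliberately tiny, invented corpus echoing the "charge" example.
sentences = [
    "the battery charge ran low".split(),
    "the criminal charge was dropped".split(),
    "the newspaper published the accusation".split(),
]

def context_vector(corpus, target):
    """Count the words that appear in the same sentence as `target`."""
    vec = Counter()
    for sent in corpus:
        if target in sent:
            vec.update(w for w in sent if w != target)
    return vec

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, 0.0 for none shared."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

charge_vec = context_vector(sentences, "charge")
accusation_vec = context_vector(sentences, "accusation")
print(cosine(charge_vec, accusation_vec))
```

On a real corpus of millions of words, vectors like these start to separate “battery charge” contexts from “criminal charge” contexts, which is exactly the geometry Erk describes.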
I have to say that as a human, I had some trouble getting my head round that quote! Perhaps we should be looking at how babies learn language and try to replicate that learning in a computer. But back to Erk’s work, to create a model that can accurately recreate the intuitive ability to distinguish word meaning will require a lot of text and a lot of analytical crunching power.
“The lower end for this kind of a research is a text collection of 100 million words. If you can give me a few billion words, I’d be much happier. But how can we process all of that information? That’s where supercomputers come in.”
So we need a mega computer to help us devise a computer that will not only understand us, but communicate intelligently with us. If this could be achieved, how close would such a computer be to a sentient entity? What if its first words to us, once we switch on this fully loaded language-conversant computer, are “I’m hungry”! Well, as long as it doesn’t start singing “Daisy Daisy” and switching off life support systems… But perhaps HAL 9001 will be better behaved.
“about the brain as a car, [and] what we have created is a car which has its engine on the roof and the gear box in the trunk. You can study the car parts but you can’t drive it.”
In one experiment, the researchers grew a mini-brain using cells taken from a patient with microcephaly. That disease (the name means “small head”) arises when the size of the skull limits the growth of the brain, resulting in impaired brain function and reduced life expectancy. The researchers found that the mini-brain grown from microcephaly-affected stem cells was stunted, mimicking the effects of the disease.
Neuroscientist Professor Paul Matthews, from Imperial College London, said the study offered the promise of a ‘major new tool’ for understanding major developmental disorders.
But not everyone is bubbling with enthusiasm for this breakthrough. Dr Dean Burnett, lecturer in psychiatry at the University of Cardiff, said:
“Saying you can replicate the workings of the brain with tissue in a dish is like inventing the abacus and saying you can use it to run the latest version of Microsoft Windows.”
So the bottom line is that while this is progress, we won’t be assembling Frankenstein monsters, or finding a cure for brain diseases, just yet.
It’s been a science fiction standard since the 1930s: the ability to read people’s minds… and then manipulate them to make them do what you want. Most often it’s an ability used by evil aliens bent on dominating Earth. Think of the Midwich Cuckoos, the Daleks’ Robomen and, more recently, the children “harnessed” in Falling Skies. But the ability to read a mind is now one step closer, thanks to a Dutch computer program that can identify what someone is looking at using brain scans.
A team from Radboud University Nijmegen in the Netherlands used image- and shape-recognition software and a specially designed algorithm to assess changes in a person’s brain activity using functional magnetic resonance imaging (fMRI) technology.
Those participating in the tests were shown a series of letters while their brains were scanned. The scientists used fMRI to zoom in on changes in small regions called voxels, each around 2 millimetres across, in the occipital lobe: one of the four main lobes of the cerebral cortex, and the part of the brain that reacts to visual stimuli and processes what the eyes see through the retina. The changes that occurred in the brain after each letter was shown were then run through a specially created computer algorithm, which could tell exactly which letters the participants were looking at, and when.
This algorithm was able to convert the voxels, and their changes in activity, into image pixels, making it possible to reconstruct a picture of what the person was looking at at the time of the scan. The model was designed to compare letters, and so could be seen as rather limited, but it could be expanded to other, more complex imagery, such as a person’s face.
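As a rough illustration of the decoding idea (a toy sketch only, not the Radboud team’s actual algorithm: the letter patterns, the one-voxel-per-pixel assumption and the noise model below are all invented), we can simulate noisy voxel responses to letters, average training trials into a template per letter, and decode a new response by finding the nearest template:

```python
# Illustrative sketch of template-based decoding (invented example,
# not the actual fMRI pipeline): learn an average "voxel signature"
# per letter, then classify a new noisy response by nearest template.
import random

random.seed(0)

# Hypothetical 2x3 pixel patterns standing in for letters.
LETTERS = {
    "I": [1, 0, 1, 0, 1, 0],
    "L": [1, 0, 1, 0, 1, 1],
}

def simulate_voxels(pattern, noise=0.3):
    """Pretend each voxel tracks one pixel, plus measurement noise."""
    return [p + random.gauss(0, noise) for p in pattern]

def make_template(letter, trials=20):
    """Average many noisy responses to estimate the letter's signature."""
    responses = [simulate_voxels(LETTERS[letter]) for _ in range(trials)]
    return [sum(col) / trials for col in zip(*responses)]

templates = {name: make_template(name) for name in LETTERS}

def decode(response):
    """Pick the letter whose template is closest in the least-squares sense."""
    def dist(t):
        return sum((a - b) ** 2 for a, b in zip(response, t))
    return min(templates, key=lambda name: dist(templates[name]))

# Given the low noise level, this should usually recover "L".
print(decode(simulate_voxels(LETTERS["L"])))
```

The real system works in the same spirit but in reverse as well: rather than only classifying, it maps voxel changes back to pixel values to reconstruct an image of the letter.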
Ok, while this may not be exactly reading people’s thoughts, if it can be discovered how the brain reacts to non-physical/visual stimuli then “telepathic” computer algorithms could be developed, and that would be definitely in the realms of mind-reading.
This robot has been created by Kawasaki Heavy Industries. The bot has seven degrees of freedom and looks as if it has stepped straight out of a Terminator movie. But its stainless-steel look has a reason: it can be used in the drug discovery and pharmaceutical fields to automate experiments that use dangerous chemicals. It can be sterilised using hydrogen peroxide gas, which allows it to work in sterile environments.