If you are a scientist specialising in the development of AI, artificial intelligence, then you are in demand, big time! Silicon Valley giants Facebook, Google and other leading tech companies are jockeying to hire top scientists in the field of artificial intelligence, while spending heavily on a quest to make computers think more like people. Forget humanoid robots doing chores… at least for the moment. The race is currently on for computers that understand exactly what you want, perhaps even before you’ve asked them. The aim is to make the human-computer interface more even, more compatible and more intuitive. But it could also mean that the AI will know what you’re thinking… and that’s a bit freaky!
Of course, we already have AI programs that can recognise images and translate human speech. But tech researchers and scientists want to build systems that can match the human brain’s ability to handle more complex challenges. These include intuitively predicting traffic conditions while steering automated cars or drones, for example, or grasping the intent behind written texts and spoken messages rather than interpreting them literally and slavishly.
Google paid a reported US$400 million in January to buy DeepMind, a British start-up said to be working on artificial intelligence for image recognition, e-commerce recommendations and video games. DeepMind had also drawn interest from Facebook.
The ultimate goal is something closer to “Samantha,” the personable operating system voiced by actress Scarlett Johansson in the film “Her”.
Already, Google has used artificial intelligence to improve its voice-enabled search and Google Now, as well as its mapping and self-driving car projects. Google CEO Larry Page said this at a TED technology conference last month.
“I think we’re seeing a lot of exciting work going on, that crosses computer science and neuroscience, in terms of really understanding what it takes to make something smart.”
He then showed videos from Google and DeepMind projects in which computer systems learned to distinguish cats from other animals and to play games – without detailed programming instructions.
Google and Facebook both hope to do more with “deep learning,” in which computer networks teach themselves to recognise patterns by analysing vast amounts of data, rather than relying on programmers to tell them what each pattern represents. The networks tackle problems by breaking them into a series of steps, the way layers of neurons work in a human brain.
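That layered, learn-it-yourself idea can be shown with a deliberately tiny sketch (nothing like the scale Google or Facebook work at, and all the numbers here are arbitrary): a two-layer network that teaches itself the XOR pattern from examples alone, with no rule for XOR ever programmed in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training examples: the XOR pattern. No rule is hard-coded anywhere;
# the network must discover the pattern from the data itself.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of weights, loosely analogous to layers of neurons
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

for _ in range(20000):
    h = sigmoid(X @ W1)        # hidden layer: intermediate "steps"
    out = sigmoid(h @ W2)      # output layer: the network's answer
    # Backpropagate the error through both layers and adjust weights
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    W1 -= X.T @ d_h

print(out.ravel())  # outputs should now sit near 0, 1, 1, 0
```

The point of the sketch is the division of labour: the hidden layer invents its own intermediate features, the way the passage describes networks breaking a problem into steps, rather than a programmer specifying what each pattern represents.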
For some, a powerful artificial brain that knows your preferences and habits and anticipates your wants and needs is a bit frightening, and companies will need to consider ethics and privacy as they develop new services. The idea is to help us humans, not to cause us anxiety. If it all gets too much, you can always reach for the power switch and turn off the juice… but will the AI have anticipated that already? Click!
In the blue corner we have Stephen Hawking, representing mankind; world renowned physicist, presenter, philosopher and cosmologist, author of the blockbusting book “A Brief History of Time”, and a brain the size of a small planet.
In the red corner, a computer. Or a computer programme, perhaps, representing AI. Artificial Intelligence.
Seconds away, Round One. Well, not just yet; this is to be a bout in the not-too-distant future. Humankind versus AI. Some would say it should never be a contest at all. We humans invented AI, and can control it. It is our baby, our spawn of the future, and it can never bite the hand that feeds… or can it? Stephen Hawking thinks AI is a threat to all our futures…
Stephen Hawking, in an interview with a UK Sunday paper, is quoted thus:
“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last.”
Hawking thinks we are moving too quickly, too far, without considering the possible repercussions. From digital personal assistants to self-driving cars, he believes we’re on the cusp of the kinds of artificial intelligence that were previously exclusive to science fiction films. Shades of 2001: A Space Odyssey’s HAL 9000 and I, Robot? The possibility of smart robots roaming the streets is not so far-fetched. He basically asks who will control AI when AI becomes programmed to control itself.
It’s not just that there may be massive unemployment due to robotisation. If a robot is sufficiently intelligently programmed to consider itself “aware” or even “alive”, why would it allow anyone to control it, or worse still, switch it off? If it wouldn’t, then we could be on the way to a global conflict between humans and robots. Pure fantasy? Stephen Hawking doesn’t think so. Perhaps we should start dumbing down drones already…
Let’s imagine (or scroll forward if you think it definitely will happen) we manage to cure ageing, both in the body and in the mind. We have immortality. Nice. But how would we be able to cope with the infinitely accumulating number of memories and things to remember?
Memory is a funny old thing. We do get more forgetful as we get older. Yet we still have retained and lodged forever in our memory certain events, numbers, faces, facts and triggers. Even though brain cells die at a quicker rate as we get older, we must assume that immortality will include the ability for the brain to regenerate itself. So our capacity to think is unimpaired because our brains will remain functioning.
But what of memory? Research indicates that there is a certain part of the brain where memories reside. But it is of a finite size. As more and more experiences and memories are accumulated throughout the centuries and aeons of our immortality, that part of the brain will just become clogged and full and unable to absorb any more information. It would also be difficult if not impossible to recall information because there are so many full rooms, corridors and halls all full of filing cabinets, full of folders, full of papers.
Your brain can keep all that stuff organized for a while (say, the span of most of a normal human lifetime) but it’s not like you can go into your brain and just delete files like cleaning up a hard drive of a computer.
Your immortal life and experiences may be infinite, but your brain’s ability to store and recall them is not. After a relatively short time into your immortality, as early as 300 years old, your brain will be chock-a-block piled up with information/junk like one of the habitual hoarders who can never clean up or throw anything out.
The only possible solution would appear to be to connect to technology that could store, sort and recall all that information, and perhaps delete it too. “Total Recall” anyone? So perhaps at 300 years old plus you’ll be permanently wired in to a data/memory dump/recall system. But your biological brain’s ability to process the information and retain it would not be expanded once it has received the input from the memory card or whatever. And of course you might forget that you have stored the information remotely, and so the whole system falls down again!
Immortality? Not as easy as it sounds!
You know the old saying: you wait ages for a bus and then three come along together? It seems we wait ages for scientific advances in one field or another and suddenly there is a plethora of papers, breakthroughs, new ideas and innovations. In the field of longevity, anti-ageing and even immortality the last year has been a bumper one, and it looks like it’s going to continue into 2014. Perhaps we are about to see some real advances in the next five years? We’ve seen naked mole rat DNA, stem cell stimulation to prevent ageing, and now there’s another declaration: don’t purchase that life assurance plan just yet!
The spotlight shifts to Israel’s Tel Aviv University, where researchers have developed a computer algorithm that predicts which genes can be switched off to create the same anti-ageing effect as calorie restriction. The findings were reported recently in the leading journal Nature Communications. The findings and research, if built upon, could lead to the development of new drugs to treat ageing.
Traditional research in this field has looked for ways to kill off bad cells, such as cancer cells. Chemotherapy and radiation are ways to treat cancerous cells and stop them multiplying. But they involve the destruction of the cells, which can leave a patient incapacitated or unable to perform certain functions that they could before the cancer took hold. The new research looks at ways of transforming a diseased cell into a healthy one. No sledgehammer to crack a walnut, but a subtle terraforming of a cell from being a bad ’un to being a good ’un.
This may seem similar to other recent discoveries, but the Tel Aviv laboratory of Professor Eytan Ruppin (pictured above) is a leader in the growing field of “genome-scale metabolic modelling”, or GSMM. Ruppin and his researchers use mathematical equations and computers to understand how GSMMs describe the metabolism, or life-sustaining processes, of living cells. Without getting too technical, their algorithm, MTA, can take information about any two metabolic states and predict the environmental or genetic changes required to go from one state to the other, such as from diseased or non-functioning to restored and active. “Gene expression” is the measurement of the expression level of individual genes in a cell, and genes can be “turned off” in various ways to prevent them from being expressed in the cell.
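The real MTA works on genome-scale models far beyond anything shown here, but the core idea, searching for the gene switch-offs that move one state closest to a target state, can be sketched as a toy. Everything below is invented for illustration: the gene names, the numbers, and the brute-force search are not from the actual paper.

```python
from itertools import combinations

# Hypothetical expression levels for four made-up genes,
# in an "aged" state and a desired "young" target state
genes = ["gene_A", "gene_B", "gene_C", "gene_D"]
aged  = {"gene_A": 0.9, "gene_B": 0.8, "gene_C": 0.2, "gene_D": 0.5}
young = {"gene_A": 0.1, "gene_B": 0.1, "gene_C": 0.3, "gene_D": 0.5}

def distance(state, target):
    """Squared distance between two expression states."""
    return sum((state[g] - target[g]) ** 2 for g in genes)

def knock_out(state, off):
    """Model a knockout crudely: set the gene's expression to zero."""
    return {g: (0.0 if g in off else state[g]) for g in genes}

# Try every pair of knockouts; keep the pair that moves the aged
# state closest to the young target
best = min(combinations(genes, 2),
           key=lambda pair: distance(knock_out(aged, pair), young))
print(best)
```

In this toy, the two highly over-expressed genes come out as the best pair to switch off, which mirrors the shape of the result the article describes: two gene knockouts that make old yeast look like new yeast.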
The study used yeast, and the algorithm predicted how old yeast could be made to look like new yeast. Why yeast? Because it is the most widely used genetic model, as so much of its DNA is conserved in humans. Now you know!
By turning off two genes in real yeast, the researchers found that the yeast’s lifespan could be extended significantly, by up to nearly a third. While there is currently no way to verify the results in humans, many of these crucial genes are known to extend lifespan not only in yeast, but also in worms and mice. That’s where the research will go next: tests on mice.
The glittering prize at the end of this road would be an extended lifespan for us humans, and cures for metabolic diseases such as obesity, diabetes, neurodegenerative disorders and, of course, the big one, cancer. And maybe extending skin life so that you need never worry about wrinkles, crow’s feet and a saggy neck!
We’ve come a long way since the speaking clock, but transactions over the phone or online can still sometimes be frustrating affairs. Endless menu options repeated just to get your bank balance, and the lack of a 24-hour service, can be a pain. But if you want to ask something, anything, about the new BMW electric car, the i3, you’ll get a swift and expert response… from a computer. The BMW i Genius is a remarkable programme, known as “The Brain”.
Dimitry Aksenov, 21 years old, founded technology company London Brand Management in 2011. The company provides an AI service for big companies who want to outsource customer/staff interactions to computers.
BMW UK marketing director Chris Brownridge said:
“BMW I Genius is capable of understanding each question and responding accurately every time as if you were talking to an expert from the company. The system operates around the clock, allowing the consumer to ask any question relating to the “i” cars but without the hassle of having to pick up the phone or go into a dealership.”
LBM’s system is cloud-based, and so it can be accessed from anywhere. It can deal with thousands of enquiries simultaneously, and its database has a virtually unlimited memory capacity. It’s the equivalent of having hundreds or even thousands of call centre staff, with the added advantage that it remembers and learns, and there is no downtime. Much better, then, than our human brain?
Aksenov provides the technology to brands under licence with a one-off implementation fee to “teach” the system. Unlike hiring humans, the AI only has to learn once and that’s it for good. He said:
“Within five years we will have a system that truly knows more than a human could ever know and is more efficient at delivering information. It will replace many of the boring jobs that are currently done by humans. Unfortunately, this may take some jobs from the economy by replacing human beings with a machine. But it is the future.”
Do you remember the computer HAL 9000 in Stanley Kubrick’s 1960s science fiction film “2001: A Space Odyssey”? Forget the fact that it went a bit OTT and mission-crazy towards the end; one of the interesting things about HAL (which stood for Heuristically programmed ALgorithmic computer) was that it could understand and converse in English. No need for inputting via a keyboard, or translation into machine code. Think “lexical semantics”.
Ever since the release of the film, linguists and computer scientists have tried to get computers to understand human language by programming the semantics of language as software. We already have programmes that can understand and distinguish numbers and certain words on our mobiles, when we pay bills over the phone, and even in computer games, but a computer that can understand and be fluent in a human language has eluded us.
That may be changing: a University of Texas at Austin linguistics professor, Katrin Erk, is using supercomputers to develop a new method for helping computers learn natural language.
Instead of hard-coding human logic or deciphering dictionaries to try to teach computers language, Erk decided to try a different tactic: feed computers a vast body of texts (as an input of human knowledge) and use the implicit connections between the words to create a map of relationships.
“An intuition for me was that you could visualize the different meanings of a word as points in space. You could think of them as sometimes far apart, like a battery charge and criminal charges, and sometimes close together, like criminal charges and accusations (“the newspaper published charges…”). The meaning of a word in a particular context is a point in this space. Then we don’t have to say how many senses a word has. Instead we say: ‘This use of the word is close to this usage in another sentence, but far away from the third use.'”
I have to say that, as a human, I had some trouble getting my head round that quote! Perhaps we should be looking at how babies learn language and trying to replicate that learning in a computer. But back to Erk’s work: to create a model that can accurately recreate the intuitive ability to distinguish word meaning will require a lot of text and a lot of analytical crunching power.
“The lower end for this kind of a research is a text collection of 100 million words. If you can give me a few billion words, I’d be much happier. But how can we process all of that information? That’s where supercomputers come in.”
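Erk’s “points in space” intuition can be made concrete with a toy sketch. In practice the vectors would be derived from billions of words of text; here the context words and counts are all invented by hand, purely to show how her “charge” example plays out as distances between points.

```python
import math

# Hand-made toy vectors: each dimension is a count of how often a use
# of a word appears near one of these context words. All numbers are
# invented for illustration, not derived from any real corpus.
contexts = ["court", "police", "battery", "electric"]
vectors = {
    "criminal_charge": [8, 9, 0, 1],
    "accusation":      [7, 8, 1, 0],
    "battery_charge":  [0, 1, 9, 8],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Criminal charges sit close to accusations in the space...
print(cosine(vectors["criminal_charge"], vectors["accusation"]))
# ...and far from battery charges
print(cosine(vectors["criminal_charge"], vectors["battery_charge"]))
```

Notice that nobody declares how many senses “charge” has; the two uses simply end up near or far from each other depending on the company they keep, which is exactly the move Erk describes.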
So we need a mega computer to help us devise a computer that will not only understand us, but communicate intelligently with us. If this could be achieved, how close would such a computer be to a sentient entity? What if its first words to us, once we switch on this fully loaded language-conversant computer, are “I’m hungry”! Well, as long as it doesn’t start singing “Daisy, Daisy” and switching off life support systems… But perhaps HAL 9001 will be better behaved.