If you are a scientist specialising in the development of AI, artificial intelligence, then you are in demand, big time! In Silicon Valley, Facebook, Google and other leading tech companies are jockeying to hire top scientists in the field of artificial intelligence, while spending heavily on a quest to make computers think more like people. Forget humanoid robots doing chores… at least for the moment. The race is currently on for computers that understand exactly what you want, perhaps even before you’ve asked them. The aim is to make the human-computer interface more even, more compatible and more intuitive. But it could also mean that the AI will know what you’re thinking… and that’s a bit freaky!
Of course, AI programs can already recognise images and translate human speech. But tech researchers and scientists want to build systems that can match the human brain’s ability to handle more complex challenges: intuitively predicting traffic conditions while steering automated cars or drones, for example, or grasping the intent behind written texts and spoken messages rather than interpreting them literally and slavishly.
Google paid a reported US$400 million in January to buy DeepMind, a British start-up said to be working on artificial intelligence for image recognition, e-commerce recommendations and video games. DeepMind had also drawn interest from Facebook.
The ultimate goal is something closer to “Samantha,” the personable operating system voiced by actress Scarlett Johansson in the film “Her”.
Already, Google has used artificial intelligence to improve its voice-enabled search and Google Now, as well as its mapping and self-driving car projects. Google CEO Larry Page said this at a TED technology conference last month.
“I think we’re seeing a lot of exciting work going on, that crosses computer science and neuroscience, in terms of really understanding what it takes to make something smart.”
He then showed videos from Google and DeepMind projects in which computer systems learned to recognise cats from other animals and play games – without detailed programming instructions.
Google and Facebook both hope to do more with “deep learning,” in which computer networks teach themselves to recognise patterns by analysing vast amounts of data, rather than rely on programmers to tell them what each pattern represents. The networks tackle problems by breaking them into a series of steps, the way layers of neurons work in a human brain.
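The idea of layers each taking a small step can be sketched in a few lines of Python. The weights below are set by hand purely for illustration (a real deep-learning system would learn them from data): a first layer of artificial “neurons” detects simple intermediate patterns, and a second layer combines them to solve a problem (exclusive-or) that no single neuron could manage on its own.

```python
def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs, then a step threshold."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

def xor_network(x1, x2):
    # Layer 1: two neurons each detect an intermediate pattern.
    h_or   = neuron([x1, x2], [1, 1],  -0.5)   # fires if either input is on
    h_nand = neuron([x1, x2], [-1, -1], 1.5)   # fires unless both inputs are on
    # Layer 2: a final neuron combines the intermediate results.
    return neuron([h_or, h_nand], [1, 1], -1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_network(a, b))
```

The point of the sketch is the layering: the final neuron never sees the raw inputs, only the simpler sub-patterns the first layer has extracted, which is (in miniature) how stacked layers in a deep network break a problem into steps.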
For some, a powerful artificial brain that knows your preferences and habits and anticipates your wants and needs is a bit frightening, and companies will need to consider ethics and privacy as they develop new services. The idea is to help us humans, not to cause us anxiety. If it all gets too much, you can always reach for the power switch and turn off the juice… but will the AI have anticipated that already? Click!
In the blue corner we have Stephen Hawking, representing mankind: world-renowned physicist, presenter, philosopher and cosmologist, author of the blockbusting book “A Brief History of Time”, with a brain the size of a small planet.
In the red corner, a computer. Or a computer programme, perhaps, representing AI. Artificial Intelligence.
Seconds away, Round One. Well, not just yet; this bout lies in the not-too-distant future. Humankind versus AI. Some would say it should never be a contest at all. We humans invented AI, and we can control it. It is our baby, our spawn of the future, and it can never bite the hand that feeds… or can it? Stephen Hawking thinks AI is a threat to all our futures…
Stephen Hawking, in an interview with a UK Sunday paper, is quoted thus:
“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last.”
Hawking thinks we are moving too quickly, too far, without considering the possible repercussions. From digital personal assistants to self-driving cars, he believes we’re on the cusp of the kinds of artificial intelligence that were previously exclusive to science fiction films. Shades of 2001: A Space Odyssey’s HAL 9000, and of I, Robot? The possibility of smart robots roaming the streets is not so far-fetched. He essentially asks: who will control AI when AI becomes programmed to control itself?
It’s not just that there may be massive unemployment due to robotisation: if a robot is programmed intelligently enough to consider itself “aware” or even “alive”, then why would it allow anyone to control it, or worse still, switch it off? If the answer is that it wouldn’t, then we could be on the way to a global conflict between humans and robots. Pure fantasy? Stephen Hawking doesn’t think so. Perhaps we should start dumbing down drones already…
We’ve come a long way since the speaking clock, but transactions over the phone or online can still sometimes be frustrating affairs. Endless options repeated just to get your bank balance, and not having a 24-hour service, can be a pain. But if you want to ask something, anything, about the new BMW electric car, the i3, you’ll get a swift and expert response… from a computer. The BMW i Genius is a remarkable programme, known as “The Brain”.
Dimitry Aksenov, 21 years old, founded technology company London Brand Management in 2011. The company provides an AI service for big companies that want to outsource customer and staff interactions to computers.
BMW UK marketing director Chris Brownridge said:
“BMW I Genius is capable of understanding each question and responding accurately every time as if you were talking to an expert from the company. The system operates around the clock, allowing the consumer to ask any question relating to the “i” cars but without the hassle of having to pick up the phone or go into a dealership.”
LBM’s system is cloud-based, and so it can be accessed from anywhere. It can deal with thousands of enquiries simultaneously, and its database has a virtually unlimited memory capacity. It’s the equivalent of having hundreds or even thousands of call centre staff, with the added advantage that it remembers and learns, and there is no downtime. Much better, then, than our human brain?
Aksenov provides the technology to brands under licence with a one-off implementation fee to “teach” the system. Unlike hiring humans, the AI only has to learn once and that’s it for good. He said:
“Within five years we will have a system that truly knows more than a human could ever know and is more efficient at delivering information. It will replace many of the boring jobs that are currently done by humans. Unfortunately, this may take some jobs from the economy by replacing human beings with a machine. But it is the future.”
The brain. Famously said by Woody Allen to be his “second-favourite organ”. The brain. The key component used by Baron von Frankenstein to create his creature (too bad that he had to use the damaged brain of a criminal instead of a healthy one; damn that clumsy Igor!).
Whereas organ transplants are common these days, there has been no known attempt to effect a human brain transplant. But advances are being made in understanding that most complex part of our body.
A 65-year-old European woman donated her brain to science five years ago. Over the next half-decade, the brain was sliced into extremely thin strips, then studied and copied into a computer programme known as “BigBrain”. The brain can be viewed at virtually cellular level, zooming in to a resolution of just 20 microns across (a millimetre is 1,000 microns). The information is being shared; researchers worldwide are able to download digital slices of the computerised BigBrain to assist their own research projects.
Prof. Katrin Amunts at Düsseldorf’s Heinrich Heine University said:
“It is impossible to understand the function of the brain without knowing its anatomy, and its microstructure in particular. Brain structure and function go hand-in-hand.”
The donated brain was immersed in paraffin and transported to the university in Germany. A custom-built bacon-slicer was used to cut the brain into nearly 7,500 separate slices, each the thickness of plastic food wrap. The slices were then stained, mounted on slides and scanned onto a computer. At McGill University in Canada, computer scientists then used the scans to create a 3D model, BigBrain, taking up a terabyte of computer memory. The project is ongoing; the next step is to use BigBrain to simulate the entire workings of the brain.
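As an aside, the slicing figures hang together; a quick back-of-the-envelope check, assuming each slice matches the 20-micron resolution quoted above (the exact per-slice thickness is an assumption here):

```python
# Sanity check of the BigBrain slicing numbers, assuming each of the
# ~7,500 slices is about 20 microns thick (roughly the thickness of
# plastic food wrap, and the resolution quoted for the model).

MICRONS_PER_MM = 1000

slice_thickness_um = 20   # assumed thickness of one slice, in microns
num_slices = 7500         # approximate number of slices reported

# Total depth of tissue sliced, in millimetres.
total_height_mm = num_slices * slice_thickness_um / MICRONS_PER_MM
print(total_height_mm)    # 150.0 mm, i.e. about 15 cm of tissue end to end
```

Roughly 15 centimetres of tissue, which is in the right range for a human brain sliced from one side to the other.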
At what point might BigBrain become self-aware within the computer? We are told that is not possible…