No. 2946
To OP: No and no, but Watson represents a small, albeit significant, advance. You don't get to be the dominant species on a planet, capable of wiping out all life and landing men on the moon, without first learning to make fire and develop agriculture.
My personal prediction is that we won't have true AI until we've surpassed the limits of transistors and conventional microchips. Quantum computing, protein- and DNA-based computing, artificial neural networks, and combined read-write processor/data-storage devices will be the next big things, and each will be as big a leap as going from vacuum tubes and punch cards to Watson overnight. And, of course, computers with the data storage, processing power, and compact size of the human brain will also be a prerequisite for any future brain-uploading.
Until then, robots and specialized computers like Watson will continue to advance, mastering ever more complex tasks (first chess and composing music, today Jeopardy, tomorrow...?). We may soon reach the point of having computers that, while not true AI, can outperform humans in both speed and accuracy, not in just one extremely specialized task but in a dozen or so related tasks, with minimal human input. Data processing, secretarial work, and administrative work could all be performed by such devices.
But OP is right about Watson missing context. Humans think in words, symbols, images, lofty concepts, and basic animal desires. The feedback loops in our neural networks are ordered enough, and yet seemingly random enough, to allow for inspiration, fantasy, strokes of genius, induction, deduction, and guessing, both educated and shot-in-the-dark. Until we have a computer that actually thinks--talking to itself and thinking in full sentences and recalled images--it'll be nothing more than a machine.
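(For what it's worth, the "feedback loop" idea above has a real counterpart in recurrent neural networks, where a unit's output is fed back in as part of its next input. Here's a minimal toy sketch, not any actual brain model; the weights and input sequence are made up purely for illustration.)

```python
import math

def step(x, h, w_in, w_rec):
    # One step of a single-unit recurrent network: the new hidden
    # state depends on the current input *and* the previous state,
    # which is the feedback loop in miniature.
    return math.tanh(w_in * x + w_rec * h)

# Feed a short input sequence through the loop. The first input keeps
# influencing later states via the recurrent weight w_rec, even after
# the input itself has gone back to zero.
h = 0.0
for x in [1.0, 0.0, 0.0, 0.0]:
    h = step(x, h, w_in=0.8, w_rec=0.9)
print(h)
```

A memoryless (feed-forward) unit would read 0.0 on those last three inputs; the recurrent one still "remembers" the initial 1.0, decaying slowly. Scale that up by billions and add learning, and you start to see why the brain is hard to copy.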