I.T. Testing Industry Leader James Bach had a post on his blog called The Future Will Need Us to Reboot It concerning the Technological Singularity. Or, for those of us who don't normally discuss such things, the point at which A.I. technology will exist that is capable of surpassing human intelligence and increasing its own intelligence. In other words, the point where computers no longer need humans to progress their own intelligence.
I’ve been reading a bit about the Technological Singularity. It’s an interesting and chilling idea conceived by people who aren’t testers. It goes like this: the progress of technology is increasing exponentially. Eventually the A.I. technology will exist that will be capable of surpassing human intelligence and increasing its own intelligence. At that point, called the Singularity, the future will not need us… Transhumanity will be born… A new era of evolution will begin.

He is right; a tester probably would not be involved in an advanced A.I. project, because by its own definition an A.I. project is designed to result in the unexpected. For the project to succeed, it would have to produce something greater than the sum of its parts. To be able to expect the unexpected would make the tester clairvoyant, which is an even greater trick than achieving true A.I.
I think a tester was not involved in this particular project plan. For one thing, we aren’t even able to define intelligence, except as the ability to perform rather narrow and banal tasks super-fast, so how do we get from there to something human-like? It seems to me that the efforts to create machines that will fool humans into believing that they are smart are equivalent to carving a Ferrari out of wax. Sure you could fool someone, but it’s still not a Ferrari. Wishing and believing doesn’t make it a Ferrari.
Because we know how a Ferrari works, it’s easy to understand that a wax Ferrari is very different from a real one. Since we don’t know what intelligence really is, even smart people will easily mistake wax intelligence for real intelligence. In testing terms, however, I have to ask “What are the features of artificial intelligence? How would you test them? How would you know they are reliable? And most importantly, how would you know that human intelligence doesn’t possess secret and subtle features that have not yet been identified?” Being beaten in chess by a chess computer is no evidence that such a computer can help you with your taxes, or advise you on your troubles with girls. Impressive feats of “intelligence” simply do not encompass intelligence in all the forms that we routinely experience it.
Take a look at James's post; it gave me something to ponder.