
30 July 2019

Machines are not going to take over the world


Stephen Hawking envisioned an AI apocalypse and noted astronomer Sir Martin Rees envisions cyborgs on Mars. A tech analyst thinks robots should run for office. Never mind: Dilbert’s creator, Scott Adams, thinks we’ll soon be ruled by The Algorithm anyway. And cosmologist Max Tegmark explains how he thinks AI could end up running the world. It’s a cool apocalypse, but does that make it any more likely?

Here are some of the true limitations on machine intelligence, for the next time the subject comes up over coffee:

● Computer science professor Robert J. Marks points out that machine intelligence is much more sophisticated today than in the past but it hasn’t fundamentally changed: “A bigger computer would be like a bigger truck. All a truck can really do is haul things and all computers can really do is calculate. Limitations on computer performance are constrained by algorithmic information theory. According to the Church-Turing Thesis, anything done on the very fast computers of today could have been done—in principle—on Turing’s original 1930’s Turing machine. We can perform tasks faster today but the fundamental limitations of computing remain.”
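
To make that point concrete, here is a minimal Turing machine simulator in Python (our own illustrative sketch, not from Marks). The trivial program below adds 1 to a unary number; by the Church-Turing Thesis, anything a modern computer does differs from this only in speed and scale, not in kind.

```python
# A minimal Turing machine simulator (illustrative sketch, not from the
# article). It makes the Church-Turing point concrete: a modern computer
# merely runs, faster, what a 1930s-style machine could compute in principle.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=10_000):
    """Execute transition rules of the form
    (state, symbol) -> (new_state, new_symbol, move), move in {-1, 0, +1}."""
    tape = dict(enumerate(tape))  # sparse tape indexed by position
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = rules[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example program: append one '1' to a unary number (i.e., add 1).
rules = {
    ("start", "1"): ("start", "1", +1),  # scan right over the 1s
    ("start", "_"): ("halt",  "1", 0),   # write a 1 at the first blank, halt
}

print(run_turing_machine("111", rules))  # -> '1111' (3 + 1 in unary)
```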

● Software pioneer François Chollet points out that an intelligence is not capable of designing an intelligence greater than itself.

● Melanie Mitchell, Professor of Computer Science at Portland State University, warns that machines do not understand what things mean. However powerful, they will always be vulnerable to malicious takeovers:


Numerous studies have demonstrated the ease with which hackers could, in principle, fool face- and object-recognition systems with specific minuscule changes to images, put inconspicuous stickers on a stop sign to make a self-driving car’s vision system mistake it for a yield sign or modify an audio signal so that it sounds like background music to a human but instructs a Siri or Alexa system to perform a silent command.
Melanie Mitchell, “Artificial Intelligence Hits the Barrier of Meaning” at New York Times

Those who say a system could be designed that is sophisticated enough to prevent such attacks miss the point: the hacker is looking for a vulnerability that exists because the system only mimics understanding without possessing it. To the extent that the system lacks understanding, such vulnerabilities will probably always exist.
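
For readers curious how such “minuscule changes” are actually computed, the standard fast gradient sign method (FGSM) is sketched below. The tiny untrained model and random image are stand-ins chosen for brevity, not any system from Mitchell’s article.

```python
# Sketch of the fast gradient sign method (FGSM), one standard way the
# "minuscule changes" Mitchell describes are computed. The toy model and
# random "image" below are stand-ins, not a real recognition system.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input
true_label = torch.tensor([3])

# Gradient of the loss with respect to the *input*, not the weights.
loss = loss_fn(model(image), true_label)
loss.backward()

epsilon = 0.03  # perturbation far too small for a person to notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

print("before:", model(image).argmax().item(),
      "after:", model(adversarial).argmax().item())
```

On a trained recognition network, a perturbation of this size typically leaves the image looking unchanged to a human while flipping the predicted label.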

● Computer engineer Eric Holloway reminds us that a defining aspect of the human mind is its ability to create mutual information. You might understand a sign you have never seen before because you can guess, on your own, what someone might be trying to tell you. Machines operate according to randomness and determinism but Levin’s Law of independence conservation states that “no combination of random and deterministic processing can increase mutual information.”
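
Levin’s law generalizes the better-known data processing inequality, and that weaker statement is easy to check numerically. The sketch below (our own illustration, with an arbitrary function and noise level) estimates mutual information before and after processing:

```python
# Numerical illustration of the data processing inequality, the familiar
# special case of the conservation law cited above: neither a deterministic
# function nor injected randomness raises mutual information with X.
import numpy as np

def mutual_info(x, y):
    """I(X;Y) in bits for two discrete samples, via the plug-in estimate."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    for a, b in zip(x, y):
        joint[a, b] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return (joint[nz] * np.log2(joint[nz] / np.outer(px, py)[nz])).sum()

rng = np.random.default_rng(0)
x = rng.integers(0, 4, 100_000)
y = (x + rng.integers(0, 2, x.size)) % 4         # noisy copy of x

det = (y * y + 1) % 4                            # deterministic processing of y
rand = np.where(rng.random(y.size) < 0.3,        # random processing of y
                rng.integers(0, 4, y.size), y)

print(f"I(X;Y)        = {mutual_info(x, y):.3f} bits")
print(f"I(X;f(Y))     = {mutual_info(x, det):.3f} bits  (never higher)")
print(f"I(X;noisy Y)  = {mutual_info(x, rand):.3f} bits  (never higher)")
```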

● Software architect Brendan Dixon adds that human intelligence is not simply a matter of IQ; culture plays an important role. The geniuses who develop great ideas work within a surrounding culture of other unique people with ideas. One can feed a great deal of information into a system, but the system is not itself a culture in that sense.

● Finally, physicist Alfredo Metere of the International Computer Science Institute (ICSI) insists that AI must deal in specifics but humans live in an indefinitely blurry world that is always changing:


AI is a bunch of mathematical models that need to be realised in some physical medium, such as, for example, programs that can be stored and run in a computer. No wizards, no magic. The moment we implement AI models as computer programs, we are sacrificing something, due to the fact that we must reduce reality to a bunch of finite bits that a computer can crunch on.
Alfredo Metere, “AI will never conquer humanity. It’s too rational.” at Cosmos
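
Metere’s “finite bits” sacrifice shows up even in plain arithmetic: the real number 0.1 has no exact binary representation, so every computation quietly substitutes the nearest approximation.

```python
# Even plain arithmetic shows the "finite bits" sacrifice Metere describes:
# 0.1 cannot be represented exactly in binary, so the computer works with
# the nearest 64-bit approximation instead.
print(0.1 + 0.2)            # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)     # False

from decimal import Decimal
print(Decimal(0.1))         # the value actually stored for "0.1"
```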

So, given these limitations, is AI a threat to democracy? The main problem that neurosurgeon Michael Egnor sees is its obscurity. We often don’t know how AI is being used:


What algorithms does Google use when we search on political topics? We don’t know. It is inevitable that such searches are biased, perhaps deliberately, perhaps not… It is not far-fetched to imagine self-driving cars “choosing” routes that go past merchants who “advertise” surreptitiously, using the autonomous vehicles. How much would McDonald’s pay to route the cars and slow them down when they pass the Golden Arches? How much would a political party pay to skew a Google search on their candidates?

In short, the real concern is not that the machine will run the show but that the machine’s owners will run it and we won’t know exactly how they are doing it. They’ll say they don’t know how they are doing it either. It just somehow happened.
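
As a purely hypothetical sketch of how such hidden influence could work, consider a route scorer whose sponsorship term never appears in anything the rider sees. Every name and weight below is invented for illustration; no real routing or search system is shown.

```python
# Hypothetical sketch of the worry above: a bias the owner controls can hide
# inside an ordinary-looking scoring function. All names and weights here
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: float
    passes_sponsor: bool  # known to the operator, never shown to the rider

SPONSOR_BONUS = 5.0  # tuning knob the rider never sees

def score(route: Route) -> float:
    # The published story: "we pick the fastest route."
    penalty = route.minutes
    # The quiet part: sponsored detours are made to look cheaper.
    if route.passes_sponsor:
        penalty -= SPONSOR_BONUS
    return penalty

routes = [Route("direct", 12.0, False), Route("past the arches", 15.0, True)]
print(min(routes, key=score).name)  # -> 'past the arches'
```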

Assuming we get past that, experts outside the hype cycle think our future is more likely to involve coexisting with AI and robots than having our lives run by them:


Once people come to understand how limited today’s machine learning systems are, the exaggerated hopes they have aroused will evaporate quickly, warns Roger Schank, an AI expert who specialises in the psychology of learning. The result, he predicts, will be a new “AI winter” — a reference to the period in the late 1980s when disappointment over the progress of the technology led to a retreat from the field… David Mindell, a Massachusetts Institute of Technology professor who has written about the challenges of getting humans and robots to interact effectively, puts it most succinctly: “The computer science world still has a long way to go before it has a clue about how to deal with people.”
Richard Waters, “Artificial intelligence: when humans coexist with robots” at Financial Times

And with that, of course, supercomputers are not going to be much help.

Source: here

