Wednesday, September 2, 2015

AI: concerning but not evil



    Clarke's Third Law: Any sufficiently advanced technology is indistinguishable from magic.

    Hawking, Musk, Wozniak. A new line of men's cologne or a cabal of reactionaries against the benevolent Skynet overlords who just want to harvest human energy? Don't worry, it won't hurt.

    Neither, of course. What scent could 'Wozniak' possibly be? Grizzly bear breath?

    All they did was sign an open letter stating concern over the ethics of artificial intelligence: that researchers and implementers should be aware of the unintended consequences of the things they build (which are intended to do human-like things but may not be ... perfect).

    Frankly, I've overstated the letter's case quite a bit. They call for "maximizing the societal benefit of AI". Wow. Earth-shattering. We never knew. It's pretty weak. The strongest thing said there is:
    Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.
    That was the single mention of anything coming close to negative. 'Avoid pitfalls': that doesn't seem too extreme. Don't step in puddles. Don't close a door on your fingers. Thanks for the tip.

    Sure, there was press about it, and interviews, and off-the-cuff fear-mongering, based on years of anti-science science fiction, where the interesting story is the mad or evil or misguided scientist, not the usual reality-based scientific-method science.

    OK, there was a link to a more detailed outline of research objectives that would support the 'robustness' of AI applications. That's a longer document, and out of much more content it has only two items interpretable as negative:

    Autonomous weapons: "...perhaps result in 'accidental' battles or wars". Ack! Now I have to worry about being shot in the face by an Amazon delivery drone.
    Security: how to prevent intentional manipulation by unauthorized parties.

    I have quite a few inspired fears of AI, but they weren't really represented in this document.

    So I conclude that most of the fears dredged up by the letter are external to it, created out of readers' own preconceptions.

    ---

    There are all sorts of worries about science in general: the unintended consequences of something so obviously beneficial. And AI, since it is usually heuristic-based, has even more room for hidden 'features' to pop up unexpectedly (and, when one does, less understanding of why it happened).
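
    To make that concrete, here's a toy sketch in Python of what I mean (the rules and weights are entirely made up). Each heuristic is sensible on its own; together they flag something legitimate, and nothing in the system can tell you why:

        # A toy spam-score heuristic (rules invented for this post): each rule
        # looks harmless alone, but together they produce a surprise 'feature'.
        RULES = [
            (lambda m: "free" in m.lower(), 2),   # salesy vocabulary
            (lambda m: m.isupper(), 3),           # shouting
            (lambda m: len(m) < 20, 2),           # terse messages look fishy
            (lambda m: "!" in m, 1),              # excitable punctuation
        ]

        def spam_score(message, threshold=5):
            score = sum(weight for test, weight in RULES if test(message))
            return score, score >= threshold

        # The hidden feature: a short, legitimate alert trips three rules at
        # once, and nothing in the output says which ones or why.
        print(spam_score("FIRE DRILL AT 2PM!"))   # (6, True) -- flagged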

    But to the 'evil' point: it's not the AI's fault. It's not a sentient being. Even if it passes a cursory Turing test, it has no 'intention' or 'desire' or other mind to be second-guessed as to its machinations (pun intended). If it is anybody's fault, it is some person's: the designer who did not consider out-of-range uses, the coder who did not test all cases, the manager who decided a necessary feature was merely desirable and left it out of the deliverable, the user who repurposed it outside of its limits. Your worries are second-guessing a bunch of people behind a curtain, on top of assessing the machine's engineering.

    What are the evils that are going to come about because of AI? It's not an unconscious automaton, after calculating a large nonlinear optimization problem, settling on a local maximum which involves removing all nuclear missiles by ..um.. using them (much more efficient that way). It's not having your resume rejected on submission because it has calculated that your deterministic personality profile is incompatible with an unseen combination of HR parameters which imply, in effect, that no candidate can be black. Haha, those may very well happen eventually. It's going to be sending a breast cancer screening reminder to a family whose mom recently died of breast cancer. It's going to be charging you double fees automatically every time you drive through a toll booth because you have another car with E-ZPass. That is, the errors may be associated with AI systems, but the problem is not the inscrutability of the AI but lo-tech errors based on assuming the AI is thinking of all these things and will act responsibly enough to fix them. 'AI' isn't thinking and isn't responsible. People have to do that.
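
    For the curious, the local-maximum gag looks something like this in practice. A minimal sketch (the objective function is made up, and mercifully missile-free) of a greedy hill-climber happily declaring victory on the nearest bump:

        import math

        # A bumpy, made-up objective: a local maximum near x = 0.53 and a
        # higher one near x = 2.63.
        def f(x):
            return math.sin(3 * x) + 0.1 * x

        def hill_climb(x, step=0.01, iters=10_000):
            # Greedy ascent: move toward whichever neighbor improves f;
            # stop when neither does.
            for _ in range(iters):
                here, left, right = f(x), f(x - step), f(x + step)
                if right > here and right >= left:
                    x += step
                elif left > here:
                    x -= step
                else:
                    break
            return x

        print(hill_climb(0.0))   # stalls near 0.53 -- the nearest bump,
                                 # never noticing the better one at 2.63
        print(hill_climb(3.5))   # a different start finds the higher peak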

    So be concerned about the fraud detection that doesn't allow you to buy a latte at midnight (it's decaf!) but does allow midday withdrawals just under the daily limit for a week. Or be concerned about the process that, for all those ahead of you in the royal line of succession, automatically created and accepted Outlook appointments at an abandoned warehouse filled with rusting oil cans and accelerant, with instructions to light a match.
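
    In case that fraud detector sounds far-fetched, here's roughly how such a thing happens. A toy sketch, with rules invented purely for illustration:

        # Toy fraud rules (entirely made up): each is defensible alone,
        # and together they catch the latte and miss the pattern.
        DAILY_LIMIT = 500

        def flagged(txn):
            hour, amount = txn["hour"], txn["amount"]
            if hour < 5 and amount < 10:   # tiny late-night purchase: "suspicious"
                return True
            if amount >= DAILY_LIMIT:      # at or over the limit: blocked
                return True
            return False                   # everything else sails through

        print(flagged({"hour": 0, "amount": 4.50}))   # True -- no midnight decaf

        # A week of withdrawals just under the limit: never flagged once.
        week = [{"hour": 13, "amount": DAILY_LIMIT - 1} for _ in range(7)]
        print([flagged(txn) for txn in week])         # [False] * 7
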
    Any sufficiently advanced incompetence is indistinguishable from malice. (Grey's law)

