Computers will never lose quietly like humans sometimes do - GoGameGuru
I won't pretend to know anything about the ancient and complex game of Go, which exceeds chess - so I've been told - in the sheer variety of potential outcomes. But I do love thinking about thinking. So I was interested to read an account of a win by a human master of the game, Lee Sedol, against his competitor, AlphaGo, in the Google DeepMind Challenge Match, which matched silicon and biological wits for Go survival.
It was alas the only win for Sedol in the competition. This thought on thinking, though, grabbed my attention. GoGameGuru:
As we’ve discussed before, the algorithms which guide computer Go players seek to maximize the probability of winning. The margin of victory or defeat is irrelevant.
This leads to a behavior where computers usually 'win small, or lose big'. When computers are behind, they take risks in an attempt to catch up, sometimes crazy risks which make it easier to shut them out of the game.
For the most part though, this is the behavior you want to see. Computers will never lose quietly like humans sometimes do.
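GoGameGuru's point can be made concrete with a toy sketch. This is not AlphaGo's actual search - the move names, probabilities, and margins below are invented - but it shows how a policy that maximizes only the probability of winning will prefer a safe, small win when ahead and a wild gamble when behind.

```python
# A toy illustration (not AlphaGo's real method) of "win small, or lose big".
# Each hypothetical candidate move is (description, p_win, expected_margin).

def pick_move(candidates):
    """Choose the move with the highest probability of winning.
    The expected margin of victory is deliberately ignored."""
    return max(candidates, key=lambda move: move[1])

# When ahead: the safe move that wins by a little beats the
# aggressive move that wins by a lot but risks losing.
ahead = [
    ("safe move, wins by 1.5 points", 0.92, 1.5),
    ("aggressive move, wins by 20 points", 0.70, 20.0),
]

# When behind: every solid move loses, so a long-shot gamble has
# the best win probability, however badly it can backfire.
behind = [
    ("solid move, loses by 3 points", 0.05, -3.0),
    ("crazy invasion", 0.15, -40.0),
]

print(pick_move(ahead)[0])   # -> the small, near-certain win
print(pick_move(behind)[0])  # -> the big gamble
```

Because only the win probability enters the comparison, a 1.5-point win and a 20-point win are worth exactly the same to the machine - which is why it never "loses quietly."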
To "lose quietly" could of course mean to lose meekly, or to leave defeated.
It seems to me that a distinguishing feature of the human mind is not an ability to surmount any problem, to succeed figuratively or literally in not dying, but in an ability to relinquish an advantageous position in favor of something else. Altruistic behavior or cooperation among individuals, or between groups, confers more than mere survival. It makes it possible to flourish. Can those actions be accounted for mathematically? Perhaps they can.
Can a conditional directive - if this, then that - for an unconditional exchange be mapped upon hardware? I don't know.
Is there a mercy algorithm?