November 21st, 2005

Top Go Software at Japanese shodan level

According to this report on the 2005 Gifu Challenge on Sensei’s Library (by Bob Myers, based on an article in the Asahi Shimbun, a Japanese newspaper), three Go-playing programs have been officially rated by the Nihon Ki-in at the Japanese amateur shodan level. Apparently, at least one other is also of dan-level strength, but is not yet rated by the Nihon Ki-in. Probably the best-known Go-playing program in the West with shodan status is the latest version of David Fotland’s Many Faces of Go. David is an American 3-dan.

A Japanese professor of AI further predicted that Go software would finally match top professionals sometime between 2030 and 2050.

The Gifu Challenge is an annual Go software tournament; this year it was won by KCC Igo, a program from North Korea.

Posted by Steve in News

10 Comments »

This entry was posted on Monday, November 21st, 2005 at 11:31 am and is filed under News. You can follow any responses to this entry through the comments RSS 2.0 feed. You can skip to the end and leave a response. Pinging is currently not allowed.

10 Responses to “Top Go Software at Japanese shodan level”

  1. Chris says:

    Many Faces of Go is shodan? They must be joking! It’s not even 10k. It’s hopeless at evaluating ko fights, and can’t cope with fights that are relatively complex (and that’s complex to a 13k!)

  2. Steve says:

    I did say “latest version” of Many Faces of Go. It’s been around a long time, and the earliest versions were, of course, much weaker than the latest versions.

    The author does note that the program struggles with complex fights, though.

  3. Chris says:

    The latest version commercially available is version 11.0, which is the version to which I refer. Perhaps David has implemented a new version for the purposes of that article, but if it’s improved by 10+ stones, that would be an astounding achievement.

    I accept that playing a computer program “cold” (i.e. never having played a computer program before) might give the program a considerable advantage, especially if the player is told it’s “1d” level. Given that superficially it makes “correct” moves, a genuine 1d might be inclined to treat it with more respect than it deserves, and not discover its actual weaknesses. Some games do go by without a ko fight, for instance. But it’s still hard to believe that a 1k player lost to it. That is what the article said, though, and why would they lie?

    I wonder on what basis they make these crazy predictions of professional strength? A 20-year window – that’s playing it pretty safe.

    I think to achieve it they will need (a) computers to be a lot faster, which is presumably a given, and (b) some concerted research and effort from a large group of people, not just one person programming it when he’s got time. Maybe hundreds of man-years are required, perhaps thousands. This could be achieved in a lot less than 25 years if desired.

    Personally I suspect that there is a lot of subtle resistance to the idea. I’d imagine that most professionals and many dan amateurs would be opposed to the idea of a program that is as good as or better than them. There may also be a cultural resistance to the idea – go as a form of poetry rather than merely a game of skill or knowledge.

  4. Tom says:

    “(a) computers to be a lot faster, which is presumably a given, and (b) some concerted research and effort from a large group of people, not just one person programming it when he’s got time.”

    Chris,

    About (a) – computer speed: it’s actually of much less help than one might hope. There’s (demonstrably) no way around the need for great strides in algorithmics…
    About (b): professional researchers have been working on Go AI for the last 30 years – not many of them, admittedly, but still a little more than “one […] when he’s got time” :). I know four French players who’ve done their PhDs on the subject, for instance. The GnuGo project currently draws on the collaboration of a few dozen professional and amateur programmers.
    I’m not sure either about your view on the “subtle resistance to the idea” – I for one would welcome an automated sparring partner (hell, anything challenging at 9 stones would be great, and maybe worth a few hundred euros/dollars), and I don’t think it’s an isolated feeling.
    Hey, it’s an awful lot to be disagreeing upon, sorry :).

    All, greetings, and for those of you who are Bokkes fans, here’s hoping for a great game come Saturday night!

  5. Tom says:

    … and boy did I screw the HTML tag thing.

  6. Chris says:

    Professional researchers, sure, but professional players? Name them! And I don’t mean the occasional consultation; I mean a (presumably former) professional devoting himself full-time to this endeavour.

    There has been a fair amount of academic research, agreed, but show me the bottom line. Where’s a program I can play?

    I’d like a strong computer opponent, don’t get me wrong! It’s bizarre (and disappointing) that four months after taking up the game I’m beating these programs (GnuGo, MFoG).

    I’m speculating as to the reasons why there isn’t a strong Go-playing program yet. If IBM could finance Deep Blue (presumably at a cost of millions), why has no one bankrolled a similar exercise for Go? My guess is that it wouldn’t have the same PR value, because there is a resistance to computerized Go in the East (which is presumably where the sponsor would have to be, given that that is where the game is popular).

  7. Steve says:

    I might hazard a guess that people in the Chess-programming world thought it was quite possible that upping the processing power of the computerized chess player would allow it to surpass top players. Achieving this would be worth a lot of PR.

    Very few, if any, people in the Go-programming world currently think that upping the processing power available to a Go-playing program would allow it to achieve anything comparable to even top amateurs, so there would never be as much PR. I reckon that if someone can show that with three orders of magnitude more computing power, a specific algorithm could take on pros, you’ll see at least one of the big Oriental tech corporations jumping at the idea.

  8. Tom says:

    Chris : “Professional researchers, sure, but professional players? Name them! And I don’t mean the occasional consultation, I mean a (presumably former) professional devoting himself fulltime to this endeavour.”

    We’re still light-years away from the point where a pro player in the loop would make any difference, actually.
    Fotland (MFoG) and the others are clearly strong enough (*). Even implementing _your_ knowledge of the game would be a tremendous step forward (beating the ’bots, you must be around 10k, no? In four months, not bad at all). As for the prizes: Ing Chang-ki offered US$1,000,000 every year for 20 years for a strong program (the target was about 5-6d amateur – that of a Japanese insei, actually). He died in 2001 or so, and no one ever came near.
    My guess is that you’re drawing too strong a parallel with chess computerization, where brute force is so efficient. It just doesn’t work with Go; as I said, an algorithmic breakthrough is essential. The main difference is that we don’t know of any implementable heuristic yet (a kind of “intermediate goal to the post”). In chess, gain in material is a very (very) good heuristic.

    (*) Reminds me of something: the only really impressive achievement in the field is that of Tom Wolf’s GoTools, a program that solves tsumego – it’s at least of dan-level strength *in sufficiently closed problems*, whereas TW himself is about 7k (or was, when he wrote it, 3-4 years ago).

  9. konrad says:

    Well I’m prone to bore people on this topic, but can’t resist…

    Tom is right on the key issues – consider this: if it were just a matter of complexity, we could already do exciting things by setting our sights on 9×9 go, which is of comparable complexity to chess in terms of branching factor (the number of options per move). Even a program that could rival strong amateurs at 9×9 go would cause a stir.
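    As a back-of-the-envelope illustration of that comparison (the branching factors and game lengths below are rough, commonly cited approximations – the 9×9 figures in particular are my own estimate), one can gauge the order of magnitude of each naive game tree:

```python
import math

# Approximate average branching factor b and game length d (in moves).
# These are ballpark figures, not measured constants.
games = {
    "chess":      (35, 80),
    "go (9x9)":   (45, 90),
    "go (19x19)": (250, 150),
}

for name, (b, d) in games.items():
    # log10(b^d) = d * log10(b) gives the naive tree size's order of magnitude
    print(f"{name}: game tree on the order of 10^{d * math.log10(b):.0f}")
```

    The point is that 9×9 go and chess land in the same regime, while 19×19 go is hundreds of orders of magnitude larger – so if tree size alone were the obstacle, 9×9 programs would already be strong.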

    The reason we cannot do this is not complexity but, as Tom points out, the lack of good heuristics for evaluating unfinished positions. Having considered a sequence, we simply have no idea how to decide (or even guess) whether the result favours black or white. Without this, the entire philosophy behind chess programs is useless.
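    To make the missing-heuristic point concrete, here is a minimal sketch of the kind of cheap evaluation chess programs lean on (the function name and board encoding are my own illustration); the contrast is that Go has no comparably cheap, comparably reliable stand-in for “who is ahead here?”:

```python
# Standard chess piece values (pawn = 1): a crude but remarkably
# effective heuristic for unfinished positions.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(board):
    """Score a position by material balance alone.

    board: iterable of piece codes, uppercase = White, lowercase = Black.
    Positive scores favour White.
    """
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.upper(), 0)  # kings score 0
        score += value if piece.isupper() else -value
    return score

# White has rook + pawn against knight + pawn ("up the exchange"):
print(material_score(["R", "P", "n", "p"]))  # prints 2
```

    Nothing this simple exists for a half-finished Go position: counting stones or even rough territory tells you almost nothing until groups’ life-and-death status is settled.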

  10. Chris says:

    But a competent player can do that in an instant. Somehow this intuitive understanding must be analysed and broken down. While this may be (very) difficult, it must be possible. These are the kinds of things that need expert input from many people in many fields (computer science, neuroscience, very strong players).

    The Ing prize of $1,000,000 was, frankly, nowhere near enough. Make it ten times that amount, with the money paid up front, and it could happen.

Leave a Reply

You must be logged in to post a comment.