- Release Year: 1999
- Platforms: Windows
- Publisher: CGS
- Developer: Korea Computer Center
- Genre: Board game, Go, Weiqi
- Perspective: Top-down
- Game Mode: Single-player
- Gameplay: AI, Board game

Description
KCC Igo, released in South Korea as Segyechoegang Eunbyeol, is a computer adaptation of the classic board game Go, developed by the Korea Computer Center and published by CGS for Windows in 1999. The series is renowned for its strong AI, which incorporated advanced techniques such as Monte Carlo tree search and achieved notable success in international competitions, including the Gifu Challenge (2003–2006) and the 2009 UEC Cup. Despite early plagiarism allegations, the program evolved into a competitive force in computer Go, offering players a digital version of the strategic board game.
Summary of Past and Present Scenarios in IGO (Self-Learning GO Program)
Background
- GO: A board game with simple rules but extreme strategic depth. Players alternate placing stones, aiming to surround territory; key concepts include "liberty," "eye," and "unconditional life."
- Existing Programs: Most (e.g., GNU GO, NeuroGo) rely on pattern databases and rule-based engines but cannot beat average human players.
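The Go terms above can be made concrete; "liberty," in particular, is just the count of empty points adjacent to a connected group of stones. A toy liberty counter, assuming a simple list-of-strings board representation (illustrative only, not from any program discussed):

```python
# Count liberties (empty adjacent points) of the group containing (r, c).
# Board: list of strings with 'B' (black), 'W' (white), '.' (empty).

def liberties(board, r, c):
    color = board[r][c]
    assert color in "BW"
    n = len(board)
    seen, libs, stack = {(r, c)}, set(), [(r, c)]
    while stack:                        # flood-fill the connected group
        y, x = stack.pop()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < n and 0 <= nx < n:
                if board[ny][nx] == ".":
                    libs.add((ny, nx))  # empty neighbor = liberty
                elif board[ny][nx] == color and (ny, nx) not in seen:
                    seen.add((ny, nx))
                    stack.append((ny, nx))
    return len(libs)

board = ["BB...",
         "BW...",
         ".....",
         ".....",
         "....."]
print(liberties(board, 1, 1))  # → 2 (white stone hemmed in by black)
```

A group with zero liberties is captured; an "eye" is an empty point fully surrounded by one group, and a group with two eyes is unconditionally alive.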
Past Scenarios
- Play Against Itself:
- A single neural network plays against itself.
- Learning: If black wins, all black moves are labeled “good” for training.
- Results: Initial improvement followed by degradation; the network learns a bad deterministic pattern. No guaranteed improvement.
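The labeling scheme described above can be sketched in a few lines. This is a hypothetical illustration of the idea (names and the record format are assumptions, not the original program's code): every move played by the winning side is labeled "good," regardless of whether it actually contributed to the win.

```python
# Sketch of the past self-play labeling scheme: every move by the winning
# side gets label 1 ("good"). `game_record` entries are (position, move,
# side) tuples; the format is illustrative only.

def label_game(game_record, winner):
    examples = []
    for position, move, side in game_record:
        if side == winner:
            examples.append((position, move, 1))  # blanket "good" label
    return examples

# Toy record: positions abbreviated as strings.
record = [("p0", (2, 2), "black"),
          ("p1", (2, 3), "white"),
          ("p2", (3, 3), "black")]
print(label_game(record, "black"))
# → [('p0', (2, 2), 1), ('p2', (3, 3), 1)]
```

Because even the winner's bad moves are reinforced, a single winning line can be amplified into a deterministic pattern, which matches the degradation the authors observed.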
- Group Playing:
- Multiple neural networks (e.g., 18) play in pairs; the loser learns from the winner.
- Results: Early improvement, but if one player dominates, the system degrades. No convergence observed after 1 month on a 9×9 board. No guaranteed improvement.
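The pool dynamics can be illustrated with a stubbed version of the scheme; `play` and `learn_from` below are toy stand-ins (a scalar "skill" replaces a full network and game), not the original system:

```python
import random

# Sketch of the "group playing" scenario: a pool of players meets in
# random pairs; the loser is trained toward the winner. A scalar skill
# with noisy outcomes stands in for real networks and games.

random.seed(0)  # reproducible toy run

def play(a, b):
    # Stub: higher "skill" tends to win; noise keeps outcomes stochastic.
    return a if a["skill"] + random.gauss(0, 1) > b["skill"] else b

def learn_from(loser, winner):
    # Stub training step: pull the loser toward the winner.
    loser["skill"] += 0.1 * (winner["skill"] - loser["skill"])

pool = [{"id": i, "skill": random.random()} for i in range(18)]
for _ in range(1000):
    a, b = random.sample(pool, 2)
    w = play(a, b)
    learn_from(b if w is a else a, w)
```

The pool contracts toward whoever is currently winning; if one player dominates early, everyone converges to it and diversity is lost, mirroring the degradation and non-convergence reported above.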
- ABC Scenario:
- Three players (A, B, C) compete. If B loses to A, a teacher (C) suggests moves. If C’s move (different from B’s) leads to a better outcome against A, it is labeled “understandable good.”
- Results: A “best” player emerges in 1 week but is beaten by new random players. Improvement slows over time. Guaranteed improvement, but training is slow.
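The control flow of the ABC scheme can be sketched with a scored stub in place of a real game (the scoring function, threshold, and teacher are all toy assumptions; only the intervention-and-relabel logic reflects the scenario described above):

```python
# Toy skeleton of the ABC scenario. A numeric "game quality" stub replaces
# actual play so the teacher-intervention logic is runnable.

def play_score(moves):
    # Stub outcome: higher total = better result for B against A.
    return sum(moves)

def abc_round(b_moves, teacher_suggest, threshold=10):
    """If B 'loses' (score below threshold), try the teacher C's
    alternative at each position; keep suggestions that improve the
    replayed outcome ("understandable good" labels)."""
    if play_score(b_moves) >= threshold:
        return []                         # B won; no teaching needed
    good = []
    for i, b_move in enumerate(b_moves):
        c_move = teacher_suggest(i, b_move)
        if c_move != b_move:              # C must differ from B
            trial = b_moves[:i] + [c_move] + b_moves[i + 1:]
            if play_score(trial) > play_score(b_moves):
                good.append((i, c_move))  # "understandable good"
    return good

suggest = lambda i, m: m + 1              # toy teacher: nudge move "up"
print(abc_round([1, 2, 3], suggest))      # B lost (6 < 10); all nudges help
```

Each labeled suggestion requires a full replay against A, which is one reason training under this scenario is slow even though improvement is guaranteed.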
Present Scenario
- Architecture: Uses reinforcement learning with neural networks.
- Output Representation: Each intersection’s output is a real number in [0,1], representing the likelihood of securing it as black territory.
- Learning: Good moves are identified automatically via reinforcement learning.
- Results:
- 5×5 Board:
- Beats random players after 3–4 hours (100% win rate).
- Comparable to GNU GO after 1–2 weeks (168–336 hours of training).
- 7×7 Board: GNU GO still wins easily after 1 month of training.
- Why Better?
- Consistent Target: Output aligns with actual game outcomes, unlike past methods.
- Local Correlation: The target correlates with local board features, reducing complexity (e.g., on 5×5 the full state space is 3²⁵/8 ≈ 10¹¹ positions after symmetry reduction, which past spatial approaches had to confront directly).
- Adaptive Training: Data quality improves as the network learns.
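The output representation above — one value in [0,1] per intersection, read as the likelihood of that point becoming black territory — can be sketched with a single sigmoid layer trained toward the game's final ownership. Everything below (the linear layer, learning rate, toy target) is an illustrative assumption, not the program's actual architecture; it only shows why the training signal is consistent with game outcomes.

```python
import numpy as np

# Sketch of the present scenario's output representation: one sigmoid
# output per intersection = probability the point ends as black
# territory. A single linear layer stands in for the real network.

rng = np.random.default_rng(0)
n = 5                                    # 5x5 board
W = rng.normal(0, 0.1, (n * n, n * n))   # toy weight matrix

def predict(board_vec):
    # board_vec in {-1, 0, +1}^25: white, empty, black
    return 1.0 / (1.0 + np.exp(-(W @ board_vec)))

def train_step(board_vec, final_ownership, lr=0.1):
    # final_ownership in {0, 1}^25: 1 where black secured the point.
    # Gradient of sigmoid + cross-entropy loss: (p - target) x input.
    global W
    p = predict(board_vec)
    W -= lr * np.outer(p - final_ownership, board_vec)

board = rng.choice([-1, 0, 1], size=n * n).astype(float)
target = (board > 0).astype(float)       # toy ownership target
for _ in range(200):
    train_step(board, target)
print(np.abs(predict(board) - target).max())  # small after training
```

Because the target is the actual final ownership of each point, every training example pushes the outputs toward something that really happened in the game — the "consistent target" property that the past scenarios lacked.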
Key Problems
- Intrinsic:
- No known bounds on the number of iterations required to improve a player.
- Representation space is extremely large.
- Technical:
- Position-Level Evaluation: No method exists to empirically measure iteration bounds or to trade playing strength for speed.
- Unusual Moves: Hard to identify and respond to rare board states automatically.
- Time Complexity:
- Per game: O(n⁶W) (n = board size, W = weights).
- Learning: O(n⁴W) for TD0, O(n⁶W) for Q-Learning.
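The TD0 cost above refers to temporal-difference learning, whose core update is the standard textbook rule V(s) ← V(s) + α·(r + γ·V(s′) − V(s)). A minimal tabular sketch (generic TD(0), not the program's code — the real system pays an extra network-evaluation cost per position, which is where the W factor enters):

```python
# Generic tabular TD(0) value update, applied along a fixed trajectory.
# Each position needs one value evaluation per update; with a neural
# network evaluator that evaluation dominates the cost.

def td0_update(V, s, s_next, reward, alpha=0.1, gamma=1.0):
    v, v_next = V.get(s, 0.0), V.get(s_next, 0.0)
    V[s] = v + alpha * (reward + gamma * v_next - v)
    return V[s]

V = {}
# Toy episode: three states, reward 1.0 only at the end.
trajectory = [("s0", "s1", 0.0), ("s1", "s2", 0.0), ("s2", "end", 1.0)]
for _ in range(100):                     # replay the episode repeatedly
    for s, s_next, r in trajectory:
        td0_update(V, s, s_next, r)
print(V)  # values propagate back toward 1.0
```

TD(0) bootstraps from the next state's value only, while Q-learning must evaluate every legal successor move to take a max — consistent with the O(n⁴W) versus O(n⁶W) gap quoted above.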
Lessons
- Improvement is Possible but Slow:
- Smaller boards (5×5) show rapid gains against random players but plateau quickly. Larger boards (7×7) struggle even after extended training.
- Determinism ≠ Understanding:
- Neural networks learn input-output correlations via hill-climbing, not the intrinsic logic of GO.
Conclusion
- A self-learning GO program is feasible using neural networks and reinforcement learning.
- Challenges Remain:
- Automatic Feature Discovery: Reduce representation space.
- Learning from Failure: Improve handling of unusual board states.
- Position-Level Evaluation: Develop metrics to study iteration bounds and optimize performance.
- Outlook: While progress is evident (e.g., 5×5 results), scaling to competitive play requires solving technical and theoretical hurdles.
Key Takeaway
IGO demonstrates that self-improvement is possible in GO, but the path to human-level play demands addressing the game’s exponential complexity and developing robust evaluation methods. The present scenario’s output representation offers a significant leap, but practical scalability remains elusive.