Arimaa
The objective of the game is to move a rabbit of one's own color onto the home rank of the opponent. Thus Gold wins by moving a gold rabbit to the eighth rank, and Silver wins by moving a silver rabbit to the first rank. However, because it is difficult to usher a rabbit to the goal line while the board is full of pieces, an intermediate objective is to capture opposing pieces by pushing or pulling them into the trap squares.

The game begins with an empty board. Gold places the sixteen gold pieces in any configuration on the first and second ranks. Silver then places the sixteen silver pieces in any configuration on the seventh and eighth ranks. The diagram at right shows one possible initial placement.
After the pieces are placed on the board, the players alternate turns, starting with Gold. A turn consists of making one to four steps. With each step a friendly piece may move into an unoccupied square one space left, right, forward, or backward, except that rabbits may not step backward. The steps of a turn may be made by a single piece or distributed between several pieces in any order.
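
As a sketch of how these step rules reduce to code (the board representation and names here are illustrative, not part of any official Arimaa implementation), single-step generation for one piece might look roughly like this; freezing and traps, described below, are ignored:

    # Illustrative sketch only: board maps (file, rank) -> (color, kind),
    # e.g. board[(0, 1)] = ('g', 'R') for a gold rabbit on a2.
    # Files and ranks run 0..7; Gold's rabbits advance toward higher ranks.

    def single_steps(board, square):
        """Yield the empty adjacent squares this piece may step to."""
        color, kind = board[square]
        f, r = square
        for df, dr in ((-1, 0), (1, 0), (0, 1), (0, -1)):
            # Rabbits may not step backward (toward their own home rank).
            if kind == 'R' and ((color == 'g' and dr == -1) or
                                (color == 's' and dr == 1)):
                continue
            dest = (f + df, r + dr)
            if 0 <= dest[0] < 8 and 0 <= dest[1] < 8 and dest not in board:
                yield dest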

A turn must make a net change to the position. Thus one may not, for example, take one step forward and one step back with the same piece, effectively passing the turn. Furthermore, one's turn may not create the same position with the same player to move as has been created twice before. This rule is similar to the situational superko rule in the game of Go, which prevents endless loops, and is in contrast to chess, where endless loops are considered draws. The prohibitions on passing and repetition make Arimaa a drawless game.
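
In engine terms the repetition rule is a situational-superko test. A minimal sketch, assuming positions are represented as hashable values (e.g. a frozenset of (square, piece) pairs):

    from collections import Counter

    history = Counter()  # times each (position, side to move) has occurred

    def turn_is_legal(position_before, position_after, side_to_move_after):
        # A turn must make a net change to the position (no effective pass).
        if position_after == position_before:
            return False
        # It may not recreate a position, with the same player to move,
        # that has already been created twice before.
        return history[(position_after, side_to_move_after)] < 2

    def record_turn(position_after, side_to_move_after):
        history[(position_after, side_to_move_after)] += 1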

The second diagram, from the same game as the initial position above, helps illustrate the remaining rules of movement.

A player may use two consecutive steps of a turn to dislodge an opposing piece with a stronger friendly piece which is adjacent (in one of the four cardinal directions). For example, a friendly dog may dislodge an opposing rabbit or cat, but not a dog, horse, camel, or elephant. The stronger piece may pull or push the adjacent weaker piece. When pulling, the stronger piece steps into an empty square, and the square it came from is occupied by the weaker piece. The silver elephant on d5 could step to d4 (or c5 or e5) and pull the gold horse from d6 to d5. When pushing, the weaker piece is moved to an adjacent empty square, and the square it came from is occupied by the stronger piece. The gold elephant on d3 could push the silver rabbit on d2 to e2 and then occupy d2. Note that the rabbit on d2 can't be pushed to d1, c2, or d3, because those squares are not empty.

Friendly pieces may not be dislodged. Also, a piece may not push and pull simultaneously. For example the gold elephant on d3 could not simultaneously push the silver rabbit on d2 to e2 and pull the silver rabbit from c3 to d3. An elephant can never be dislodged, since there is nothing stronger.
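
A sketch of push generation in the same illustrative representation (pulls are the mirror image: the stronger piece steps away and the victim is drawn into the vacated square). The strict strength inequality below also encodes that friendly pieces and equal or stronger enemies cannot be dislodged:

    STRENGTH = {'R': 0, 'C': 1, 'D': 2, 'H': 3, 'M': 4, 'E': 5}

    def adjacent(square):
        f, r = square
        return [(f + df, r + dr)
                for df, dr in ((-1, 0), (1, 0), (0, 1), (0, -1))
                if 0 <= f + df < 8 and 0 <= r + dr < 8]

    def pushes(board, square):
        """Yield (victim_square, victim_destination) pairs: two-step moves in
        which this piece pushes an adjacent weaker enemy and takes its place."""
        color, kind = board[square]
        for victim_sq in adjacent(square):
            victim = board.get(victim_sq)
            if victim is None or victim[0] == color:
                continue                     # empty, or friendly: cannot dislodge
            if STRENGTH[kind] <= STRENGTH[victim[1]]:
                continue                     # only strictly stronger pieces push
            for dest in adjacent(victim_sq):
                if dest not in board:        # victim needs an empty square
                    yield victim_sq, dest    # pusher then occupies victim_sq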

A piece which is adjacent (in any cardinal direction) to a stronger opposing piece is frozen, unless it is also adjacent to a friendly piece. Frozen pieces may not be moved by the owner, but may be dislodged by the opponent. A frozen piece can freeze another still weaker piece. The silver rabbit on a7 is frozen, but the one on d2 is able to move because it is adjacent to a silver piece. Similarly the gold rabbit on b7 is frozen, but the gold cat on c1 is not. The dogs on a6 and b6 do not freeze each other because they are of equal strength. An elephant cannot be frozen, since there is nothing stronger, but an elephant can be blockaded.
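
The freezing rule is a purely local test; here is a sketch in the same illustrative representation (STRENGTH and adjacent as in the previous sketch, repeated so the fragment stands alone):

    STRENGTH = {'R': 0, 'C': 1, 'D': 2, 'H': 3, 'M': 4, 'E': 5}

    def adjacent(square):
        f, r = square
        return [(f + df, r + dr)
                for df, dr in ((-1, 0), (1, 0), (0, 1), (0, -1))
                if 0 <= f + df < 8 and 0 <= r + dr < 8]

    def is_frozen(board, square):
        """Frozen: next to a strictly stronger enemy and to no friendly piece."""
        color, kind = board[square]
        stronger_enemy = friendly_neighbor = False
        for nbr in adjacent(square):
            piece = board.get(nbr)
            if piece is None:
                continue
            if piece[0] == color:
                friendly_neighbor = True
            elif STRENGTH[piece[1]] > STRENGTH[kind]:
                stronger_enemy = True
        return stronger_enemy and not friendly_neighbor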

A piece which enters a trap square is captured and removed from the game unless there is a friendly piece adjacent. Silver could move to capture the gold horse on d6 by pushing it to c6 with the elephant on d5. Also a piece on a trap square is captured if all adjacent friendly pieces move away. Thus if the silver rabbit on c4 and the silver horse on c2 move away, voluntarily or by being dislodged, the silver rabbit on c3 will be captured.

Note that a piece may voluntarily step into a trap square, even if it is captured thereby. Also, the second step of a pulling maneuver may be completed, even if the piece doing the pulling is captured on the first step. For example, Silver to move could step the silver rabbit from f4 to g4, step the silver horse from f2 to f3, which captures the horse, and still pull the gold rabbit from f1 to f2 as part of the horse's move.
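
Capture can be expressed as a resolution pass run after every individual step, which is exactly why a pulling piece lost in a trap on its first step can still complete the pull on the second: each step is resolved independently. A sketch, again in the illustrative representation:

    TRAPS = {(2, 2), (5, 2), (2, 5), (5, 5)}  # squares c3, f3, c6, f6

    def adjacent(square):
        f, r = square
        return [(f + df, r + dr)
                for df, dr in ((-1, 0), (1, 0), (0, 1), (0, -1))
                if 0 <= f + df < 8 and 0 <= r + dr < 8]

    def resolve_traps(board):
        """Run after every step: a piece on a trap square with no adjacent
        friendly piece is captured and removed from the board."""
        for trap in TRAPS:
            piece = board.get(trap)
            if piece is None:
                continue
            color = piece[0]
            if not any(board.get(n, ('', ''))[0] == color for n in adjacent(trap)):
                del board[trap]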

In the diagrammed position, if it were Gold's turn to move, Gold could win in three steps: The dog on a6 can push the rabbit on a7 to a8, and when the dog is on a7, it unfreezes the rabbit on b7, which can step to b8 for the victory.

Although almost all games end with a rabbit reaching goal, there are two other ways for the game to end.
  • If a player has no legal move, either because all friendly pieces are frozen or blockaded, or because the only moves by mobile pieces are illegal due to repetition of position, the player whose turn it is loses.
  • A player wins by capturing all eight opposing rabbits, even if he sacrifices his last rabbit in the same turn in which he captures the last opposing rabbit. (Originally Arimaa was drawn if all sixteen rabbits were captured, but on July 1, 2008, Syed changed the rules of Arimaa to eliminate the possibility of draws. This change was essentially cosmetic, as there had never been a draw in thousands of human games anyway.)


Finally, if an opposing rabbit is dislodged onto its goal line and dislodged off within the same turn, the game continues.

Strategy and tactics

For beginning insights into good play, see the Arimaa Wikibook articles on tactics and strategy.

Computer performance

Several aspects of Arimaa make it difficult for computer programs to beat good human players. Because so much effort has gone into the development of strong chess-playing software, it is particularly relevant to understand why techniques applicable to chess are less effective for Arimaa.

Top chess programs use brute-force searching coupled with static position evaluation dominated by material considerations. Chess programs examine enormous numbers of possible moves, but they are not good (compared to humans) at judging who is winning at the end of a series of moves unless one side has more pieces than the other. The same is true for Arimaa programs, but their results are not as good in practice.
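
A bare-bones sketch of that style of search, with hypothetical move-generation helpers (legal_turns, apply_turn) and deliberately crude, untuned piece values:

    # Hypothetical helpers assumed: legal_turns(board, color) yields complete
    # turns; apply_turn(board, turn) returns the resulting board.
    PIECE_VALUE = {'R': 1, 'C': 2, 'D': 3, 'H': 5, 'M': 8, 'E': 11}  # illustrative

    def opponent(color):
        return 's' if color == 'g' else 'g'

    def material(board, color):
        """Static evaluation dominated by material: my pieces minus yours."""
        score = 0
        for piece_color, kind in board.values():
            score += PIECE_VALUE[kind] if piece_color == color else -PIECE_VALUE[kind]
        return score

    def search(board, color, depth):
        """Negamax: best material score for `color`, looking `depth` turns ahead."""
        if depth == 0:
            return material(board, color)
        best = float('-inf')
        for turn in legal_turns(board, color):
            best = max(best, -search(apply_turn(board, turn), opponent(color), depth - 1))
        return best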

When brute-force searching is applied to Arimaa, the depth of the search is limited by the huge number of options each player has on each turn. The number of options available on each turn determines how many different paths play can follow, and is known as the branching factor. The average branching factor in a game of chess is about 35, whereas in Arimaa it is about 17,281.

These differing branching factors imply that a computer which can search to a depth of eight turns for each player in chess can only search about three turns deep for each player in Arimaa:
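
A rough arithmetic check, using the average branching factors above and ignoring pruning: sixteen plies of chess (eight turns per player) and six turns of Arimaa (three per player) yield search trees of comparable size,

    $35^{16} \approx 10^{24.7} \qquad \text{versus} \qquad 17281^{6} \approx 10^{25.4}.$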

Brute-force search depth, for chess software, is nearly doubled by alpha-beta pruning, which allows the software to conclude that one move is better than another without examining every possible continuation of the weaker move. If the opponent can crush a certain move with one reply, it isn't necessary to examine other replies, which dramatically increases search speed. In Arimaa, however, the side to move switches only every four steps, which reduces the number of available cutoffs in a step-based search.
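
The cutoff itself is a one-line test in a negamax framework; here is a sketch using the same hypothetical helpers as before (plus evaluate and ordered_turns). The point of the paragraph above is that with four steps per turn this test fires far less often relative to the work done:

    def alphabeta(board, color, depth, alpha, beta):
        """Negamax with alpha-beta pruning over whole turns (hypothetical
        helpers: evaluate, ordered_turns, apply_turn, opponent)."""
        if depth == 0:
            return evaluate(board, color)
        best = float('-inf')
        for turn in ordered_turns(board, color):  # good moves first aids pruning
            score = -alphabeta(apply_turn(board, turn), opponent(color),
                               depth - 1, -beta, -alpha)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break   # cutoff: the opponent already has a refutation elsewhere
        return best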

Furthermore, the usefulness of alpha-beta pruning is heavily dependent on the order in which moves are considered. Good moves must be considered before bad ones in order for the bad ones to be neglected. In particular, checking and capturing moves are key for pruning, because they are often much better than other moves. In Arimaa software the speedup provided by alpha-beta pruning is less, because captures are rarer. In rated games played on arimaa.com, only 3% of steps result in capture, compared to about 19% of chess moves that result in capture.

In most Arimaa positions, particularly toward the beginning of the game when the board is still crowded, a competent player can avoid losing any pieces within the next two turns. Compared to chess, Arimaa allows either player to delay captures for longer. Indeed, the median move number of the first capture in chess is turn 6, whereas in Arimaa it is turn 12. The struggle is initially more positional in Arimaa, and revolves around making captures unavoidable at some point in the future. This magnifies the importance of correctly judging who is gaining ground in non-material ways. Thus the strength of computer programs (examining millions of positions) is not as significant as their weakness (judging the position apart from who has more pieces).

The weakness of Arimaa programs in the opening phases is further magnified by the setup phase. In chess every game starts from the same position. By compiling before the game a list of stock replies to all standard opening moves, chess programs may often make a dozen or more excellent moves before starting to "think". Humans do the same, but have a smaller and less reliable memory of openings, which puts humans at a relative disadvantage in chess. Arimaa, in contrast, has millions of possible ways to set up the pieces even before the first piece moves. This prevents programs from having any meaningful opening book.
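
The setup count is straightforward to verify: with eight interchangeable rabbits and two each of interchangeable cats, dogs, and horses to place on sixteen fixed home squares, each side has

    $\dfrac{16!}{8!\,2!\,2!\,2!} = 64{,}864{,}800$

possible setups, so the two sides' independent choices give roughly $4.2 \times 10^{15}$ distinct starting positions.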

As the game progresses, exchanges and the advancement of rabbits tend to make the position more open and tactical. Arimaa programs typically play better in this sort of position, because they see tactical shots which humans overlook. However, it is usually possible for humans to avoid wide-open positions by conservative play, and to angle for strategic positions in which computers fare worse. Against a conservative opponent it is almost impossible to bust open the position in Arimaa, whereas in chess it is merely difficult. One must beat defensive play by the accumulation of small, long-term advantages, which programs do not do very well.

One additional technique from computer chess which does not apply to Arimaa is endgame tablebases. Master-level chess games sometimes trade down into unclear endgames with only a few pieces, for example king and knight vs. king and rook. It is possible to build, by retrograde analysis, an exhaustive table of the correct move in all such positions. Programs have only to consult a pre-generated table in such positions, rather than "thinking" afresh, which gives them a relative advantage over humans. Arimaa, in contrast, seldom comes to an endgame. Equal exchanges of pieces are less common than in chess, so it is rare for a game of Arimaa to "trade down" and still be unclear. An average game of Arimaa has only eight captures (compared to seventeen for chess), and top humans can often defeat top programs in Arimaa without losing a single piece, for example the second game of the 2011 challenge match. In the 2007 Postal Championship, the game between the top two finishers featured only one capture, a goal-forcing sacrifice.
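
For contrast with Arimaa's situation, the core of tablebase construction is a backward sweep from decided positions. The sketch below is heavily simplified (hypothetical helpers all_positions, is_immediate_win, and predecessors; a real solver also tracks depth-to-mate and marks the losing side's positions as lost only when every move loses):

    def build_tablebase():
        """Simplified retrograde analysis: propagate wins backward."""
        winning = {p for p in all_positions() if is_immediate_win(p)}
        frontier = set(winning)
        while frontier:
            newly_won = set()
            for position in frontier:
                # Any predecessor that can move into a known win is itself
                # a win for the side to move there (existential case only).
                for pred in predecessors(position):
                    if pred not in winning:
                        winning.add(pred)
                        newly_won.add(pred)
            frontier = newly_won
        return winning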

Omar Syed hopes that, because traditional computer game-playing techniques are only moderately effective for Arimaa, programmers will be forced to use artificial intelligence techniques to create a strong Arimaa-playing program. The successful quest to build a world-championship-caliber chess program has produced many techniques for playing games successfully, but has contributed essentially nothing to more general reasoning; in fact, the techniques of chess-playing programs have been excluded from some definitions of artificial intelligence. A goal for Arimaa is that the techniques involved in playing it will help the larger goals of artificial intelligence.

The structure of Syed's man-against-machine challenge is focused on rewarding advances in AI software rather than advances in hardware. In the annual challenge, programs are run on machines chosen and provided by Syed himself, under the criterion that each be a typical, inexpensive, off-the-shelf home computer. The challenge would not be open to anyone requiring expensive multi-processor machines such as those used to challenge top-level chess players, much less something like the custom-built supercomputer Deep Blue, even though it was the success of this hardware-intensive approach which inspired Arimaa's invention. Syed believes that even the computer used in the 2004 challenge match (a Pentium 4 2.4 GHz system with 512 MB of RAM) had sufficient hardware to win the challenge prize if only it were running the proper software. Supercomputers might already have the power to conquer Arimaa by brute force using conventional AI software, and eventually personal computers will too, if hardware continues to advance at the current rate. This is why the Arimaa challenge prize is offered only until the year 2020.

Challenge history

Results below are given as program–human game scores; the number in parentheses after each defender is his rank among the human players that year.

  • 2004, $10,000 prize. Bomb (developer David Fotland) vs. Omar Syed (1): 0–8. Syed gave a rabbit handicap in the last game and won.
  • 2005, $10,000. Bomb (David Fotland) vs. Frank Heinemann (5): 1–7. No handicap games.
  • 2006, $17,500. Bomb (David Fotland) vs. Karl Juhnke (1): 0–3, Greg Magne (2): 0–3, and Paul Mertens (5): 1–2. Mertens gave a camel handicap in his last game and lost.
  • 2007, $17,100. Bomb (David Fotland) vs. Karl Juhnke (1): 0–3, Omar Syed (9): 0–3, Brendan M (12): 0–2, and N Siddiqui (23): 1–0. Juhnke gave handicaps of a dog, a horse, and a camel respectively, and won all three. Syed gave a cat handicap in his last game and won. Siddiqui substituted for Brendan's second game.
  • 2008, $17,000. Bomb (David Fotland) vs. Jean Daligault (2): 0–3, Greg Magne (3): 0–3, Mark Mistretta (20): 0–1, and Omar Syed (24): 0–2. No handicap games. Syed substituted for Mistretta's final two games.
  • 2009, $16,500. Clueless (Jeff Bacher) vs. Jean Daligault (1): 0–2, Karl Juhnke (2): 1–2, Jan Macura (14): 1–2, and Omar Syed (18): 0–1. Juhnke gave a dog handicap in his second game and lost. Daligault gave a horse handicap in his last game and won. Syed substituted for Daligault's first game.
  • 2010, $16,250. Marwin (Mattias Hultgren) vs. Greg Magne (3): 0–3, Louis-Daniel Scott (10): 1–2, and Patrick Dudek (23): 2–1. Scott gave a dog handicap in his second game and lost.
  • 2011, $11,000. Marwin (Mattias Hultgren) vs. Karl Juhnke (3): 1–2, Gregory Clark (7): 0–3, and Toby Hudson (14): 0–3. Juhnke gave a cat handicap in his last game and lost.


The Arimaa Challenge has been held eight times so far. Prior to the third match, Syed changed the format to require the software to win two out of three games against each of three players, to reduce the psychological pressure on individual volunteer defenders. Also Syed called for outside sponsorship of the Arimaa Challenge to build a bigger prize fund.

In the first five challenge cycles, David Fotland, renowned for his program Many Faces of Go, won the Arimaa Computer Championship and the right to play for the prize money, only to see his program beaten decisively each year. In 2009 Fotland's program was surpassed by several new programs in the same year, the strongest of which was Clueless by Jeff Bacher. Humanity's margin of dominance over computers appeared to widen each year from 2004 to 2008 as the best human players improved, but the 2009 Arimaa Challenge was more competitive. Clueless became the first bot to win two games of a Challenge match.

In 2010, Mattias Hultgren's bot Marwin edged out Clueless in the computer championship. In the Challenge match Marwin became the first bot to win two out of three games against a single human defender, and also the first bot to win three of the nine games overall. In 2011, however, Marwin won only one of the nine games, and that having received a material handicap.

The material handicaps given in the Challenge games can be roughly equated to chess handicaps as a proportion of the total material on the board in each game. Arimaa handicaps of rabbit, dog, horse, and camel are roughly equivalent to chess handicaps of pawn, two pawns, knight, and rook respectively.

Comparing the Arimaa challenge to chess challenges

It has been argued that computers have beaten the world chess champion, yet not the human defenders in the Arimaa challenge, for six reasons:
  1. Arimaa is a new game. The number of programmers and the amount of time devoted to computer Arimaa is therefore much smaller than for computer chess, which has had thousands more programmers and a forty-year head start. The later and smaller effort has produced correspondingly slower progress.
  2. The rules of the Arimaa challenge require the computer to demonstrate a higher level of play than the rules of the chess matches did. In the Arimaa challenge, the computer must beat three human players in three matches; in the chess matches, the computer had to win a single match against a single player.
  3. In the Arimaa challenge, the computer needs to score 2/3 of the total points to win. In chess matches, the computer needs to score more than 1/2 of the total points to win.
  4. In the Arimaa challenge, the computer must first win a qualification match, after which the human defenders can study its games to find its weaknesses. In chess, there was no qualification match.
  5. In the Arimaa challenge, the computer cannot be modified between games. In chess, the computer was improved between games.
  6. The rules of the Arimaa challenge exclude powerful or custom-made computers priced over $1,000, whereas the world chess champion was beaten by a custom-made supercomputer.


However, the Arimaa community disputes this argument point by point. To the first point: because Arimaa is a new game, the playing community is still small, and even the best players are amateurs who have been playing for only a few years. The human defenders in the Arimaa challenge are thus much weaker than the human players in the chess matches, and this weakness should make the Arimaa challenge easier to conquer, compensating developers for having had less time to study the problem.

The remaining five points compare the Arimaa Challenge only to Kasparov vs. Deep Blue, ignoring all other man vs. machine chess matches in which computers have prevailed. The chess match most closely comparable to the Arimaa challenge is the Man vs Machine World Team Championship. In 2004 and 2005 a team of humans played against a team of computer opponents, and in both years the computers won by a wide margin. In 2005 all three humans lost, the computers won 2/3 of the total points, the chess engines were commercially available for the humans to study, and the machine hardware used was not a supercomputer but comparable to the hardware used in the Arimaa Challenge.

Man-vs.-machine chess matches since 2005 have shown increasing computer dominance. For example, the 2006 Deep Fritz vs. Vladimir Kramnik and 2007 Rybka vs. Jaan Ehlvest matches gave additional advantages to the human player, but the computers (running on commodity hardware) prevailed anyway.

World Championship

Each year since 2004 the Arimaa community has held a World Championship tournament. The tournament is played over the Internet and is open to everyone. Past world champion title holders are:
  • 2011 – Jean Daligault of France
  • 2010 – Jean Daligault of France
  • 2009 – Jean Daligault of France
  • 2008 – Karl Juhnke of USA
  • 2007 – Jean Daligault of France
  • 2006 – Till Wiechers of Germany
  • 2005 – Karl Juhnke of USA
  • 2004 – Frank Heinemann of Germany

Computer World Championship

Each year since 2004 the Arimaa community has held a Computer World Championship tournament. The tournament is played over the Internet and is open to everyone. Past computer world champion title holders are:
  • 2011 – bot_sharp developed by David Wu of USA
  • 2010 – bot_marwin developed by Mattias Hultgren of Sweden
  • 2009 – bot_clueless developed by Jeff Bacher of Canada
  • 2008 – bot_Bomb developed by David Fotland of USA
  • 2007 – bot_Bomb developed by David Fotland of USA
  • 2006 – bot_Bomb developed by David Fotland of USA
  • 2005 – bot_Bomb developed by David Fotland of USA
  • 2004 – bot_Bomb developed by David Fotland of USA

Patent and trademark

A patent covering the Arimaa rules was filed on 3 October 2003, and granted on 3 January 2006. Omar Syed also holds a trademark on the name "Arimaa".

Syed has stated that he does not intend to restrict noncommercial use and has released a license called "The Arimaa Public License" with the declared intent to "make Arimaa as much of a public domain game as possible while still protecting its commercial usage". Items covered by the license are the patent and the trademark.

See also

  • Game complexity
  • Go (board game)
  • Chess
  • List of world championships in mind sports
