Just some posts I ripped from rec.games.chess.computer, with my
comments between [] brackets.
-----------------------------------------------------------
[The alpha-beta search algorithm:]

int search(int alpha,int beta,int depth)
{
        int i,x;

        if(!depth)
                return(evaluate());
        if(!generate_moves())
                return(0);  /* no moves, draw game */
        for(i=first_move;i<last_move;i++) {
                make_move(i);
                x=-search(-beta,-alpha,depth-1);
                unmake_move(i);
                if(x>alpha) {
                        if(x>=beta)
                                return(beta);
                        alpha=x;
                        if(!ply)
                                best_move=i;
                }
        }
        return(alpha);
}

This should be close. Have fun. :)

Tom Kerrigan
-----------------------------------------------------------

>I still have no idea what these are:

>1) null move.

A null move is doing nothing -- passing -- even if the rules don't allow it.

Why?  In some games you can assume that the player to move can
always improve their situation.  So, by testing a null move, you can see if
there's bound to be a cutoff.  I.e. 'this position is already too good; once
I've actually made a move here, it's going to get even better, so I don't
need to spend time generating and testing real moves...'

Chess is an example of a game where this assumption nearly always holds.  In
fact there's a word for those times when it doesn't: zugzwang, the German for
oh sh*t, I have to make a move here :)

[don't use this in draughts/checkers: zugzwang is extremely common]
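
[A sketch of where this sits, in the style of the search routine above.
The helper names -- make_null_move(), unmake_null_move(), in_check() -- and
the depth reduction R (commonly 2) are assumptions, not from the posts:]

```
        /* sketch only: helper names and the reduction R are assumptions */
        if(depth>R && !in_check()) {
                make_null_move();                    /* pass the turn */
                x=-search(-beta,-beta+1,depth-1-R);  /* cheap zero-window test */
                unmake_null_move();
                if(x>=beta)
                        return(beta);  /* doing nothing already fails high,
                                          so a real move should too */
        }
```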

>2) quiecient(sp?) search.

It's too late for me to spell this properly too :)  So I looked it up:
quiescent.

The idea is you only do an evaluation on a 'quiet' position -- e.g. when all
sensible captures have been made -- to reduce the chance of the result being
Horribly Wrong.  You don't want to think a position is quite good for you
just because you haven't looked to see that your queen can be taken for no
loss next move.

This goes right back to the original paper by Claude Shannon in 1950.  The
drawback is that it takes more time and 99% of it is wasted (the extra
positions generated are in bits of the search tree that get cut off).

3) hash tables. [=~ transposition table]

Once you've evaluated a position, you'd like not to have to evaluate it
again, should you come across it again in the search tree (eg from a
transposition of moves).  So you store the result in a table.  Before
evaluating any position, you check the table to see if you've stored the
result.

Given enough size, hashing is about the fastest way to search a table (see any
computer science book on algorithms) given that the number of legal positions
is almost invariably vastly greater than the memory available.

So you generate a (hopefully unique) 'hashing key' from the position, and use
that to look up the position.  Getting the right hashing function can be
tricky: get it wrong and the look-up takes way too long (because too many
positions you're interested in have the same key), or returns the wrong
result, or both.

If you don't want to hash positions, find something that can take a long time
to calculate but doesn't change in every single position generated... like
the evaluation of the pawn structure, and hash that.

Ian Watters
-----------------------------------------------------------
[more on hashtables]
-->Hi,
-->
-->I'd greatly appreciate if someone who is familiar
-->with chess programming techniques or who is writing 
-->a chess program would help me.
-->The first question I posted remained unanswered, either
-->because I posted it on the weekend or because I failed
-->to ask it clearly. Here is my second try.
-->
-->1) What exactly is the hash table for alpha-beta search in chess?

If you search a position reached after the following moves:

e4 e5 Nf3 Nc6

to (say) 4 additional plies, then you search the position reached
after the following moves:

Nf3 e5 e4 Nc6

you reach the exact same position again.  If you save the result of
the first search, you can look it up here and avoid the 4-ply search
entirely.  Saves a lot of time.  Also, that's why it is often called the
transposition table, because it recognizes that the same position has
been reached via a *transposition* of moves.  There are lots of other
slick benefits from this table, but this is the main thing.  It simply
lets you avoid searching the same position twice which can be expensive
when you start searching 10-11 plies in the middlegame as I often see
in Crafty on the P133.  If you only get to 3 plies, it's difficult to
transpose.  That's also why in some endings a modern chess program can
go crazy.  Try position #70 from Fine's Basic Chess Endings.  It is
one of the best examples of transposition table utility.  Crafty solves
this particular ending in < 1 second, after searching to 18-20 plies
depending on the size of the transposition table you use.  Without it,
it will take *forever*.

-->2) Why is its size often equal 2^n?

Because you typically have a hash key that is a 64-bit random number,
constructed by exclusive-oring together a random number that is unique
for each piece/square combination (more later).  You take this key, and
AND it with 2^n-1 (for n=16, this is 65535), which extracts the low-order
16 bits of the random hash key.  This index will be used as a direct
subscript (or index) into the table.  This is the cheapest way to
compute the index.  Should you choose not to use 2^n but instead
an odd value (say) "m", then you have to compute hash % m to get
your index.  This "modulo m" calculation is slower.  On some machines
like the Cray it's *terribly slow* since there is no divide instruction
of any sort (there's a floating point reciprocal approximation, but you
have to calculate that, then do some multiplies/conversion to integer,
subtraction, etc to do the computation...  *slow*)

-->3) What are the practical difficulties in implementing effective 
-->hashing?

None, really.  For each of the 12 pieces, generate an array of 64
random numbers that are 64 bits long.  Check 'em for duplicates or
being "too close" (hamming distance) to other numbers.  To compute
the hash key:

  hash_key=0;
  for (i=first_piece;i!=last_piece;i++)
    hash_key=xor(hash_key,random[piece_type[i]][piece_location[i]]);

The interesting part is that the loop can be tossed out.  Since XOR
is an "order independent" calculation (xor(a,xor(b,c)) == xor(c,xor(a,b)))
you can incrementally compute this by xoring the from piece/square which
takes that out of the hash key, then xoring the to piece/square, which folds
that in.  Much faster than the loop for each move.

Robert Hyatt  
-----------------------------------------------------------
>I have to write a Checkers program in Lisp.

>Any information about implementation of Checkers in Lisp would be very
>welcome. Any advice, even the best way to represent the board, and
>create new moves. Which is best, a list or an array? I am thinking about

>Any source code for interactive games(computer against humans) would be
>very welcome.

Don't know about Lisp, but the best you can do is try reading some
books about computer *chess* and adapting the techniques to draughts.
Writing a game-playing program is not something you do overnight!
(if you want to do it right, that is), but if it is for school or
something, I would go for the following:

use the alpha-beta algorithm as the search engine (look it up in a book!).

As an evaluation function you can take material plus a *random* value.
Random evaluation is really silly of course, but it is a lot better
than nothing, and very easy to program. (random evaluation tends to
optimise your mobility at the cost of the opponent's)

This should give enough playing strength to beat beginners at most
games.

Michel