In general, the caching mechanism works as follows: each class has an associated wrapper, which carries some number of uniformly-distributed random hash values; each cache has an associated index into this pseudovector of random hash values. To look up a value for a single class in a cache, the hash corresponding to the cache's index is fetched from the class's wrapper and reduced to the size of the cache (by bitmasking, since cache sizes are powers of two); then the entry at that index is looked up and compared for identity with the wrapper in question. If it matches, this is a hit; otherwise the cache is walked sequentially from this index, skipping the 0th entry. If the original index is reached again, the cache does not contain the value sought[1].
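
As a concrete illustration, here is a minimal sketch of that lookup in Common Lisp. The names (WRAPPER, CACHE, CACHE-LOOKUP) and the representation of entries as (wrapper . value) conses are invented for this sketch and do not correspond to SBCL's actual PCL data structures; the code only mirrors the probe, mask, and walk logic described above.

    ;;; Illustrative sketch only; not SBCL's real PCL representation.
    (defstruct wrapper
      ;; pseudovector of uniformly-distributed random nonzero hash values;
      ;; setting them all to zero is how a wrapper is invalidated for caches
      (hashes (coerce (loop repeat 8
                            collect (1+ (random (1- most-positive-fixnum))))
                      'simple-vector)
              :type simple-vector))

    (defstruct cache
      ;; which of the wrapper's hash values this cache uses
      (hash-index 0 :type fixnum)
      ;; (wrapper . value) conses or NIL; the length is a power of two,
      ;; and the 0th entry is skipped while walking
      (entries (make-array 16 :initial-element nil) :type simple-vector))

    (defun cache-lookup (cache wrapper)
      "Return the value cached for WRAPPER, or :MISS."
      (let* ((entries (cache-entries cache))
             (mask (1- (length entries)))   ; power-of-two size => bitmask
             (hash (svref (wrapper-hashes wrapper) (cache-hash-index cache)))
             (start (max 1 (logand hash mask))))
        (if (zerop hash)
            :miss                           ; invalidated wrapper: always a miss
            (loop with index = start
                  do (let ((entry (svref entries index)))
                       (when (and entry (eq (car entry) wrapper)) ; identity test
                         (return (cdr entry))))
                     (setf index (max 1 (logand (1+ index) mask))) ; skip entry 0
                     (when (= index start)  ; walked all the way around
                       (return :miss))))))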
To add an entry to a cache, much the same computation is performed. However, if there is a collision in hash values, then before the cache is grown, an attempt is made to refill the cache using a different index into the wrappers' hash values.
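
A sketch of insertion along those lines, reusing the hypothetical WRAPPER and CACHE structures above; TRY-INSERT, REFILL-CACHE, GROW-CACHE and CACHE-INSERT are invented names, and real PCL caches handle filling and growing rather more carefully than this.

    (defun try-insert (entries hash-index wrapper value)
      "Place (WRAPPER . VALUE) in ENTRIES, probing as in CACHE-LOOKUP; NIL if full."
      (let* ((mask (1- (length entries)))
             (hash (svref (wrapper-hashes wrapper) hash-index))
             (start (max 1 (logand hash mask))))
        (loop with index = start
              do (let ((entry (svref entries index)))
                   (when (or (null entry) (eq (car entry) wrapper))
                     (setf (svref entries index) (cons wrapper value))
                     (return t)))
                 (setf index (max 1 (logand (1+ index) mask)))
                 (when (= index start) (return nil)))))

    (defun refill-cache (cache new-hash-index wrapper value)
      "Try to rebuild CACHE under NEW-HASH-INDEX with the new entry added."
      (let ((new (make-array (length (cache-entries cache)) :initial-element nil)))
        (when (and (every (lambda (entry)
                            (or (null entry)
                                (try-insert new new-hash-index
                                            (car entry) (cdr entry))))
                          (cache-entries cache))
                   (try-insert new new-hash-index wrapper value))
          (setf (cache-entries cache) new
                (cache-hash-index cache) new-hash-index)
          t)))

    (defun grow-cache (cache)
      "Double the entry vector; re-inserting may grow it again if needed."
      (let ((old (cache-entries cache)))
        (setf (cache-entries cache)
              (make-array (* 2 (length old)) :initial-element nil))
        (loop for entry across old
              when entry do (cache-insert cache (car entry) (cdr entry)))))

    (defun cache-insert (cache wrapper value)
      "Add an entry, trying other hash indices before growing the cache."
      (or (try-insert (cache-entries cache) (cache-hash-index cache) wrapper value)
          ;; collision with no room: try a different index into the
          ;; wrappers' hash values before growing
          (loop for index below (length (wrapper-hashes wrapper))
                thereis (and (/= index (cache-hash-index cache))
                             (refill-cache cache index wrapper value)))
          (progn (grow-cache cache)
                 (cache-insert cache wrapper value))))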
Wrappers are invalidated for caches by setting all of their hash values to zero. (Additionally, they are invalidated by setting their depthoid to -1, to communicate to structure type testers, and their invalid slot to non-nil, communicating to …)
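
In terms of the sketch above, invalidation for caches amounts to zeroing the hash pseudovector (the depthoid and invalid slots are not modelled there):

    (defun invalidate-wrapper-for-caches (wrapper)
      "Zero all of WRAPPER's hash values so that every cache probe misses."
      (fill (wrapper-hashes wrapper) 0)
      wrapper)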
The hash value for multiple dispatch is computed by summing the individual hash values from each wrapper (excluding arguments for which all methods have t specializers, since no dispatch computation needs to be done for them), jumping to the cache miss case if any wrapper has a zero hash value.
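
A sketch of that probe computation, again in terms of the hypothetical structures above; RELEVANT-WRAPPERS is assumed to be the wrappers of the arguments that actually participate in dispatch, i.e. with the all-t-specializer arguments already excluded.

    (defun multi-dispatch-probe (cache relevant-wrappers)
      "Return a starting cache index for RELEVANT-WRAPPERS, or :MISS."
      (let ((mask (1- (length (cache-entries cache))))
            (hash-index (cache-hash-index cache))
            (sum 0))
        (dolist (wrapper relevant-wrappers)
          (let ((hash (svref (wrapper-hashes wrapper) hash-index)))
            (when (zerop hash)              ; invalidated wrapper => cache miss
              (return-from multi-dispatch-probe :miss))
            (incf sum hash)))
        ;; reduce the summed hash to the cache size, skipping the 0th entry
        (max 1 (logand sum mask))))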
(FIXME: As of sbcl-0.9.x.y, the generality of multiple hash values per wrapper was removed, as it appeared to do nothing in particular for performance in real-world situations.)
References (FIXME: working BibTeX entries needed):
The CLOS standards proposal
Kiczales and Rodriguez
[1] Actually, there's some kind of scope for overflow.