ACF Bulletin #209, March 24, 2003

** Italo-Australian Club 41st Doeberl Cup
Australia's premier weekend tournament!
Canberra, 18-21 April
Total Prizes: $10,000

** University Open 2003
$4000 total prizes
Category 3 Grand Prix
12-13 July
Adelaide University, SA

** Australian Chess Magazine
Issue two now available.
Includes Pratt Foundation Australian Open, Auckland International, World
and Australian News, Pawn Stars in Slovenia, Winning the Aussie Open,
Testing Tactics, BDG Ziegler, Software Reviews, Problem Billabong, Games
Column. Telephone (02) 9838-1529
and ask for Brian Jones.

** Chess Today
Daily Chess News
Annotated Games
Chess Lessons and Hints
Interviews, reviews and more!
Free trial

** Job Opportunity - Chess Kids, Melbourne
A position as full-time Chess Coach is available with 
Chess Kids from the start of Term 2 (May 1st).
Salary: $27,700 + super (9% of gross)
Contact David on 0411-877-833 for more details.


* Gold Coast news
* World Junior - under tournaments
* Letters - More on the Glicko rating system
* Chess World Grand Prix 2003
* Upcoming tournaments


ACF homepage:
Bulletins online:
International news and games:


Not a lot of news this week locally, but plenty of action in the Amber 
Blindfold/Rapid overseas (games and results online) - and the dispute over 
ratings continues.

Some people have had trouble reading the html (webpage-style)
bulletins lately, so we're switching back to the tried and true but 
boring and limited text version. A pity, in my opinion, since html 
allows for many things plain text does not. A mystery, too - I tested
the bulletins in two browsers and no fewer than three email clients,
and made sure that I only used plain code - nothing fancy -
and they all work perfectly for me. Nevertheless, the experts assure
me that "compatibility" is a big problem for html in emails. There *may*
also be a problem with the new distribution system used - but then again,
it works fine for me on multiple "platforms". 

It all reminds me of the notorious 
claim of Derrida and the postmodernist philosophers, relayed endlessly 
in my uni days: that *everything* (rocks, stars, potatoes, US presidents etc)
is a "text". I used to think that postmodernism was a dangerous psychological
condition, and my lecturer - who regularly threatened to lock errant students in 
a room with just a chair, a table and a copy of Derrida's "Of Grammatology" for 
company - obviously agreed. But now, having wrestled with the difficulties of putting
html into emails, I think Derrida may have been right. Perhaps he designed
all those incompatible email clients out there...

Please note that previous bulletins can be seen at:


Gold Coast Active Championships
81 competitors took part in the Gold Coast Active Championships at the
Gardiner Chess Centre last Sunday. Arianne Caoili defeated WIM Anastasiya
Sorokina in round five and IM Stephen Solomon in round six. She then took a
draw against Natalie Mills to win the event on 6.5/7 with Stephen Solomon
and Anastasiya Sorokina second and third respectively on 6/7. Natalie Mills
finished fourth. How good to see three females in the top four. Go to the
results section for the full standings.

Gold Coast Primary Schools Teams Championships
During the week, 773 students participated in round one of the Gold Coast
Primary Schools Teams Championships held at the Robina Town Centre Community
Centre.

Australian Clubs Teams Championships 29 September to 3 October
It is looking almost certain that this event will proceed as planned. Teams
from Canberra and the Gold Coast have already entered and teams from
Bullwinkle (Brisbane), St George (Sydney) and Koala Club (Sydney) are almost
certain. There are several other clubs on the likely/possible list.

From the organisers/sponsors point of view, we need you to do two things
urgently. Firstly get your entry fee of $400 per team payable to Gold Coast
Chess Club sent to Graeme Gardiner 11 Hardys Road Mudgeeraba 4213 by the end
of the month. Secondly, we need you to make a provisional booking with the
venue, Rydges Oasis Resort, by the end of the month, with a copy of your
accommodation booking to Kerry Corker please.

In return the organisers, Kerry Corker and myself, intend to give all teams
a great week of chess, both from a playing and social point of view. 

- Graeme Gardiner


See also: an article discussing a JavaScript chess program.


Of course it's not my business, all that stuff about your rating system, 
but you Aussies seem to be very underrated compared to "Northern Europe"... 
My national ELO has been "eroded" by playing twice in Australia. 
(It cost more than the travel...) Look at e.g. Catherine Lip: approximately 
1800 local ELO vs. 2000+. If I remember right there is also a gap between 
the Australian and the New Zealand ratings.
- cheers
Henrik Mortensen


Those who complain about the ratings system and "rust" should consider the 
position of bridge players. You can gain points for scoring anywhere in the 
top half of the field, but you cannot lose points if you spend the rest of 
your life coming last!
- Robin Stokes


Greetings. May I add a few words? Not so long ago I recall a discussion, 
I think on the Bulletin Board at Chesscafe (Dan Heisman?), that many people, 
rationally or otherwise, are reluctant to play in tournaments fearing they'll 
lose ratings points. Apparently juniors are sensitive to this (they dislike the 
snakes more than they like the ladders). Certainly, my carcass has been 
feeding keen and ambitious players since my 'come back' to OTB after 
about 20 years absence. 

The suggestion was that once you reached a certain threshold, there was 
a limit to how low your ratings would fall. Say, once you reach 1700, you 
can't fall below 1600, or whatever. There's an ego-protecting safety net.

Someone on the net also wisely (?) observed that in the long run your 
rating is roughly correct. If you lose some games, then you learn, and 
this can only raise your rating in the long run. John Maynard Keynes 
once remarked, however, that "in the long run we are all dead".
- Bruce Littleboy



(i) Ian's claim about incorrect results is unfortunate since in his own
example he is in error himself. His result in the Interclub match, which
should have been 3/3, had been coded as 2/3, not 2/4 as he claims. This error
occurred in the December 2000 rating period which was the first under the
Glicko. Ian's suspicions should have been aroused since his August 2000
rating was 2587 and his published December 2000 rating was only 2590 even
though his performance rating was well in excess of 2600. His correct
December rating should have been 2602. It is interesting to note that Ian's
December 2000 rating of 2602 would be the same under either the Glicko or
ACF ELO systems.

As for transparency and ease of calculation, as Shaun Press correctly pointed
out on the ACF bulletin board, no one could calculate their ELO rating
without reference to the probability tables. Both Ian Rout and Barry Cox
have created easy-to-use Excel spreadsheets based on the Glicko system that
give excellent estimates of a player's rating.

The idea of the Glicko/Glicko2 system is fairly simple: if you have a
reliable rating, your rating will fluctuate less than if you have an
unreliable rating; and secondly, unreliably rated opponents have less of an
effect on your rating than reliably rated opponents. This is logical and
simple to comprehend. It's only the mathematics that's complicated, and that
shouldn't be an issue given the majority of players couldn't care less about
calculating their own rating. The minority who are interested can use either
of the two spreadsheets mentioned above.
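For the curious, the one-period update at the heart of Glickman's published Glicko-1 description can be sketched in a few lines of Python. This is a sketch of the textbook formulas only - not the ACF's production code, which also ages RDs between periods and (for Glicko2) tracks volatility. The two effects described above fall straight out of it: the same win moves a high-RD (unreliable) rating far more than a low-RD one, and a win over an unreliable opponent counts for less than a win over a reliable one.

```python
import math

Q = math.log(10) / 400  # Glicko scaling constant

def g(rd):
    # Attenuation factor: the higher the opponent's RD,
    # the less their result tells us
    return 1.0 / math.sqrt(1.0 + 3.0 * (Q * rd) ** 2 / math.pi ** 2)

def glicko_update(r, rd, games):
    # games: list of (opponent rating, opponent RD, score) for one period
    d2_inv = 0.0   # 1/d^2, the information gathered this period
    delta = 0.0    # weighted sum of (score - expected score)
    for rj, rdj, s in games:
        e = 1.0 / (1.0 + 10 ** (-g(rdj) * (r - rj) / 400.0))
        d2_inv += (Q ** 2) * (g(rdj) ** 2) * e * (1.0 - e)
        delta += Q * g(rdj) * (s - e)
    denom = 1.0 / rd ** 2 + d2_inv
    return r + delta / denom, math.sqrt(1.0 / denom)

# One win against an equal, reliably rated (RD 30) opponent:
print(round(glicko_update(1500, 200, [(1500, 30, 1)])[0]))   # ~1586 (RD 200)
print(round(glicko_update(1500, 30, [(1500, 30, 1)])[0]))    # ~1503 (RD 30)
# The same win against an unreliable (RD 300) opponent moves you less:
print(round(glicko_update(1500, 200, [(1500, 300, 1)])[0]))  # ~1571
```

Note also that the new RD always shrinks after play, which is why an active player's rating settles down over time.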

Rather than try to calculate one's own rating, it might be better to place
the crosstables of all rated tournaments on the ACF web page.
This would allow players to verify that their results were correctly
recorded.
Ian insists that "Transparency and ease of calculation of a rating system
is essential" and claims that it's possible to have a system that is both
simple and accurate. Here we part company. The fundamental question in our
view is - what is a rating system intended to achieve? Our answer would be
- to provide each player with a rating that accurately reflects his current
playing strength. If Ian disagrees with this then further discussion would
appear futile - we simply want different things.

If accuracy is important then transparency and simplicity will suffer. This
just follows from the nature of the beast. A rating system that applied a
fixed adjustment for every game would be very easy to calculate but
wouldn't give accurate results.
The more accurate you try to be the more factors you need to consider and
the more complex your system will be. The ELO system introduces the concept
of probability tables to take into account expected scores. Glicko goes a
step further with the concept of reliability and the level of confidence
that can be placed in a rating. Glicko2 goes further again by trying to
detect players whose ratings are undergoing a relatively rapid change. These
are fairly standard statistical techniques with a good theoretical
foundation, but they result in complex calculations and require a computer
for any
substantial number of games. Claiming the system should be both accurate
and simple is akin to claiming that brain surgery or nuclear physics should
be simple. The universe is deaf to your wishes.

One might ask - how do we know the Glicko is accurate (or, at least, more
accurate than ELO)? The answer is that we do what scientists do when
testing a theory - we make predictions and see how they pan out. If a
rating system is providing accurate ratings then those ratings should be a
good predictor of performance. If rating system A says that X should
score 40% against Y and rating system B says he should score 10% then we
can look at what X actually scores and determine which system is better.

This is exactly what we do when ratings are calculated. We calculate
ratings under both ELO and Glicko and monitor how well each does as a
predictor of performance. The results show quite clearly that Glicko is
superior. The pool of established players is very stable with little change
in the mean. This is because expected scores are close to actual scores -
exactly what one would expect if the ratings were accurate.

(ii) Ian claims that "This system is not only unfair to players who return
after an absence .. but is also open to abuse." and cites Markus Wettstein
as an example of the former. Let's look at the facts. Although we are
reluctant to show individual player histories in an open forum Ian leaves
us no choice. Markus Wettstein had not played a rated game in Australia
since prior to 1986 and had a rating of 2339. This rating was over 14 years
old when he returned to playing in Australia in 2001. Wettstein's playing
history is as follows:

Period      Published   Performance Score Games Played
            Rating      Rating

Dec 2001    1899        1878        5.5         9
Apr 2002    1862        1580        0           1
Aug 2002    1901        1945        6           8
Dec 2002    1967        2077        8           11
Mar 2003    1945        1755        1           2

As can be seen from this he performed in December 2001 at the 1878 level
and his rating was published as 1899. Since this time he has performed
nowhere near the 2300 level. His March 2003 rating of 1945 is a true
indication of his current strength. If giving accurate ratings is unfair
then we suppose this is unfair. But we thought this was just what a rating
system was supposed to do.

As for Alex Wohl, Alex hasn't played in Australia since July 2000 and his
last active ACF rating in December 2000 was 2493. His FIDE rating in July
2000 was 2461. As Ian points out Alex has been active internationally. In
fact he has been active in every FIDE list since he left Australia. His
FIDE rating declined steadily from 2461 in July 2000 to a low of 2371 in
July 2001. It did not exceed 2400 again until July 2002 when it was 2419.
It is currently 2415 on the January 2003 FIDE list. If his performance
rating is close to his current ACF rating of 2493 when he next competes in
Australia then he will maintain his rating. However, based on his FIDE
rating movements, this seems unlikely. Alex's rating was considered
extremely reliable back in December 2000, and his RD (and hence his rating)
still hasn't aged into the unreliable range.

As for Ian's claim that Alex can gain a few hundred points by playing in weak
tournaments and scoring 100%, this is just plain absurd. No matter what Alex
does, his rating cannot rise above his performance level if he is improving
(or fall below it if he is below par). Alex can play in a million low rated
tournaments but won't get to be rated at, say, 2600, unless he performs in
all those tournaments as a 2600 rated player.

(iii) Ian's assertion that compaction is just another euphemism for
deflation is false. For him to say so is to twist the facts. As mentioned
by us in previous bulletins, as well as by Bill on the ACF bulletin board,
the system is not deflating. We will repeat it once more. The majority of
the players in any rating period fall into the very reliable category.
These players are identified in the March 2003 ACF ratings on the ACF web
page with a !! after their name. The average rating for this group of
players changes by less than 1 rating point per rating period. In most
periods this change is up, not down. Therefore they are stable, and there
is no problem in the population of stable players.

As for Nick Speck, it's very disappointing to see Ian bring unfounded
rumours into what is supposed to be a serious debate. At no stage did we
calculate, or suggest, that Speck's rating would be around 2490.
Bill had heard the rumour that Nick Speck's rating would be around 2490
well before the Australian Championship in which Nick competed was even
submitted by the Victorian Ratings Officer for calculation in the April
2002 rating list. This 2490 rubbish was speculation on the part of a number
of players in Victoria, and had no basis in fact. Speck's rating of 2411
for the Australian Championship was calculated in the same manner as all
other players' ratings. As Nick's performance rating was well over 2500,
his rating was not subject to the criterion that no player's rating will
exceed their performance rating.

Let us make it absolutely clear: at no stage has either of us taken any
action against an individual's or group's calculated rating. We definitely do
not, as Ian implies, make "executive decisions" that a person's rating is
too high (or low) and diddle the numbers to something else. We feel that
Ian should publicly retract his assertion that we manipulated Nick Speck's
rating in the next bulletin.

(iv) Ian says, "This problem is not confined to the ACT (see later) and to
blame the old ELO system for it is to deflect responsibility for correcting
the problem."
How can he claim that we are deflecting responsibility for the problem when
it was we who determined that the underrated ACT juniors were not getting
corrected by the Glicko system fast enough, due to the problems of the
previous ELO system? It was Bill who actually brought up the issue with the
ACF Council and asked that they authorise us to correct the problem. There
is not, in our opinion, an endemic problem with underrated juniors in other
states (see

(v) Firstly, Pecori v Wallace at Ballarat has nothing to do with, as you put
it, "ridiculously low ratings of Australian juniors". Wallace, rated 2484,
lost to Pecori, rated 2001, and under Glicko2 Jean-Paul would lose 30 rating
points. Jean-Paul has a 94% winning expectancy, so he would need to win
16 games against Pecori to regain his lost points. Under the ELO system
Jean-Paul would only lose 14 rating points, but unfortunately for Jean-Paul
the old ACF ELO system would require him to win 21 games against Pecori to
regain them.
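The quoted expectancy and Elo loss can be checked from the standard Elo expectancy formula. (The K factor of 15 used for the Elo figure is our assumption; it matches the quoted 14-point loss.)

```python
def elo_expected(r_a, r_b):
    # Standard Elo expectancy: expected score for r_a in one game vs r_b
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

e = elo_expected(2484, 2001)   # Wallace's expectancy against Pecori
print(round(e * 100))          # 94 (per cent), as quoted above
print(round(15 * (0 - e)))     # -14 points for the loss, assuming K = 15
```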

Looking at Ian's comments with regard to Bill's previous response, we can say
the following.

(a) The reason I (Bill) didn't comment on transparency and simplicity was
because I was aware that Graham would. As for Ian's assertion that
everyone is happy with the FIDE ELO system, we contend this is not the case;
otherwise FIDE would not be looking at reforming it.

(b) Both of us are always interested in other people's comments regarding
the rating system. Some suggestions deserve to be seriously considered,
some do not. It is our contention however that Ian does not speak for the
majority of players.

(c) The following statement is completely false: "Clearly, the idea of
bringing juniors in at their first performance rating is asking for trouble
- as they improve they will inevitably deflate the rest of the system."
This simply highlights Ian's lack of understanding of the Glicko system.
Improving juniors DO NOT deflate the rest of the pool. This is clearly
demonstrable from the figures. As for Box Hill and Gold Coast chess clubs,
if they are so concerned about under-rated juniors, how come they have not
raised the issue either with us directly or with the ACF via Chess Victoria
or the CAQ?

(d) Ian says he is not joking regarding the 336 or 480 rule. For him to say "
The 336 or 480 system may not be mathematically sound but it works
perfectly well elsewhere, and, for some reason that Bill did not explain,
worked perfectly well in Australia (i.e. was not inflationary) for
everybody except me" is to completely dismiss what I (Bill) said. I (Bill)
made it quite clear that the major losers were the sub 1000 rated players.
Although Ian was the major beneficiary he was not the only beneficiary
amongst the elite players.

Finally, when it comes to Ian's comment regarding the 150-point bonus and
his suggestion of a 100-point bonus, his recollection differs from mine.

My (Bill's) recollection is as follows:

"Firstly I (Bill) wouldn't say that Ian complained about his rating heading
towards 2700 so much as saying it was embarrassing that it did so. In fact
it was Graham's and my observation that the 336 rule was unfairly advantaging
some high rated players and disadvantaging most sub 1000 players in
general. As such Graham and I abolished the use of the 336 rule prior to
the April 1999 ratings. Shaun Press had written an article in the August
1999 Australian Chess Forum discussing the comparison of ACF to FIDE
ratings. This generated some discussion within the chess community and I
(Bill) remember saying we were going to investigate this and mentioned it
to Ian when I (Bill) saw him at a NSWJCL event. It was at this time that
Ian first suggested to me (Bill) that we should just add 100 points to all
players except him. Graham's and my investigation showed that the
relationship between ACF and FIDE ratings was significantly out of whack.
The best correction factor was determined to be 150 points. Of all the
active players on the ACF and FIDE rating lists, Ian was the only player
with an ACF rating in excess of his FIDE rating. It was decided not to allocate
any adjustment points to Ian's ACF rating. Ian's FIDE rating was 2562 on
the January 2000 list and his ACF rating was 2642 on the December 1999
list. In comparison to Darryl Johansen, Ian outrated Darryl on the ACF
list by a massive 270 points yet only outrated him on the FIDE list by 68
points. After the 150-point adjustment Ian outrated Darryl by 122
points. Perhaps we should have deducted points from Ian but this was felt
to be unnecessary. Ian's ACF rating had peaked at 2703 on the December 1998
list. Ian's rating dropped steadily over the following 3 lists as the use
of the 336 rule ended after the December 1998 list. It was felt that
without the effect of the 336 rule unfairly advantaging his rating that his
rating would fall to a more accurate level. This indeed proved to be true."

Looking at Ian's comments with regard to Graham's previous response, we can
say the following.

(a) Ian says "Unfortunately Glicko's high K factor added to the incredibly
low ratings at which most juniors enter the system tends to negate this
advantage of Glicko over ELO. However if juniors were entered at an
arbitrary 1000 rating, this improvement over ELO might become more useful."
All that can be said about this is that Ian's statement is wrong. Ian's
belief that all new junior players are under-rated is clearly false. The
reason a new junior has a low rating is because he is just that, "new",
and has little skill. Also, his suggestion that an arbitrary 1000 rating
would be of any benefit is shown below to be false.

(b) We are pleased to see Ian admit "Of course you are right that the key
factor for a rating system is how well it does the job." The Glicko does
its job better than any other rating method. As for his comment "If Glicko
was doing the job, we wouldn't be having this discussion.", this has nothing
to do with the system doing its job; it has to do with Ian's lack of
understanding of the Glicko system.

(c) Ian argues for the following "If Glicko is really such a great rating
system, it should work fine with an arbitrary starting rating for new
players (say 1,000), a reduced (and preferably fixed) K factor for
established players (even those who have been out of the game for a while),
a rust factor and a minimum point gain for wins." Asking for a Glicko
system with a fixed K factor is an oxymoron - like asking for a truthful
politician. The whole point of the Glicko system is that K is not fixed but
varies from game to game. This seems a self-evident plus to us. The result
of a game against someone who has played 5 games a week for the past year
tells us more about your rating than a game against someone who hasn't
played for ten years. We have dealt with all of these issues above, as well
as in our response to Guy West's letter last week.

Finally Ian says "A major fix is desperately needed, as soon as possible
and if this offends some people's mathematical sensibilities, so be it."

What Ian fails to appreciate is that all rating systems are based on
mathematics. A system that is not mathematically sound cannot be relied on
to produce accurate ratings.

Ian seems to want a system that appears "fair". His use of the term
"unfair" in relation to Markus Wettstein is revealing and, we think,
highlights a wrong impression many players have - that players who lose
rating points are, in some way, being punished. A rating system does not
punish (or reward) anybody. It simply says "Here's a number which, based on
your results, is an indication of your current playing strength". Of course
losing rating points can be a significant blow to the ego, so it's easy for a
player to feel he's being punished. But if we can all bear in mind that all
that is being done is executing an algorithm and crunching numbers, then we
can focus on the key issue - is the system accurate?

To test the accuracy of the system, let's set up a hypothetical situation and
see how the different rating systems cope. As a common complaint is that
under-rated juniors are robbing seniors of rating points, let's try the
following simple situation:

We assume

1) A pool of 20 players, each with a stable (RD = 60) rating of 1500.

2) A new player enters the pool and every rating period he plays one game
against every other player in the pool (i.e. 20 games).

3) During the first period the new player performs like an 864 rated
player. Thereafter, his strength improves by between 150 and 200 points
each period.

An ideal rating system would give this player a rating of 864 for the first
period, 989 for the second, etc. (For clarity we have picked strengths that
yield expected scores close to a full point or half point.) The ratings of
the other players should not change, as they are still performing at the
1500 level.
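Those "round" expected scores can be verified with the standard Elo expectancy formula (the logistic curve behind the probability tables): each chosen strength yields an expected total, over 20 games against a 1500 pool, very close to a half point or whole points.

```python
def elo_expected(r_a, r_b):
    # Standard Elo expectancy for one game of r_a against r_b
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

for strength in [864, 989, 1199, 1393]:
    # Expected total score from 20 games against the 1500-rated pool
    print(strength, round(20 * elo_expected(strength, 1500), 1))
# 864 -> 0.5, 989 -> 1.0, 1199 -> 3.0, 1393 -> 7.0
```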

Now, let's see how the different systems cope with this scenario. First we
look at the Glicko system, assigning the player an initial rating based on
his performance.

In the following, 'strength' refers to the aimed-for playing strength of the
player within the rating period, 'score' is his actual score out of 20, 'perf'
is his performance rating as determined by the rating system, 'new rating' is
the player's new rating at the end of the rating period, 'diff' is the
difference between his new rating and the strength, and 'pool' is the
rating of the 20 1500-rated players.

            strength    score       perf  new rating  diff  pool
period 1          864   0.5         864         848   16    1501
period 2          989   1.0         989         923   66    1500
period 3          1199  3.0         1199        1140  59    1499
period 4          1393  7.0         1392        1360  33    1496

Ian claims it would be better to give all new players an initial rating of
1000 rather than an initial rating based on performance. Does this make
things better? Let's try it.

            strength    score       perf  new rating  diff  pool
period 1          864   0.5         864         915   -51   1501
period 2          989   1.0         990         942    47   1501
period 3          1199  3.0         1200        1093  106   1500
period 4          1393  7.0         1392        1324   69   1497

Now although the Glicko can handle an initial rating of 1000, the
resultant figures are not as good.

Let's look at what Glicko2 does. First, when the initial rating is based on
performance:

            strength    score       perf  new rating  diff  pool
period 1          864   0.5         864         852   16    1501
period 2          989   1.0         989         927   62    1500
period 3          1199  3.0         1199        1143  56    1499
period 4          1393  7.0         1392        1362  31    1497

So Glicko2 is slightly better than Glicko when initial ratings are based on
performance.

Let's look at Glicko2 when the initial rating is 1000:

            strength    score       perf  new rating  diff  pool
period 1          864   0.5         864         916   -52   1501
period 2          989   1.0         990         944    45   1501
period 3          1199  3.0         1200        1097  102   1500
period 4          1393  7.0         1392        1328   65   1497

As can be seen, it's slightly better than Glicko but still worse than using
a performance rating as the initial basis.

Of course, Ian has also been claiming ELO is superior. So how does it cope?
Let's try ELO with a K factor of 15 and assign an initial rating based on
performance:

            strength    score       perf  new rating  diff  pool
period 1          864   0.5         864         864   0     1500
period 2          989   1.0         989         871   118   1500
period 3          1199  3.0         1198        908   291   1498
period 4          1393  7.0         1390        1003  390   1493
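As a cross-check, the K = 15 'new rating' column above can be reproduced in a few lines of Python, feeding in the actual scores from the table. (Rounding the rating to an integer each period is our assumption about the convention used, but it reproduces the column exactly.)

```python
def elo_expected(r_a, r_b):
    # Standard Elo expectancy for one game of r_a against r_b
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

K = 15
rating = 864                        # entry rating = first-period performance
history = []
for score in [0.5, 1.0, 3.0, 7.0]:  # actual scores out of 20, from the table
    expected = 20 * elo_expected(rating, 1500)
    rating = round(rating + K * (score - expected))
    history.append(rating)
print(history)                      # [864, 871, 908, 1003]
```

The lag is plain to see: by period 4 the player is 390 points behind his true strength, just as the table shows.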

Oh dear, not good at all. What about trying a K factor of 30 for the new
player?

            strength    score       perf  new rating  diff  pool
period 1          864   0.5         864         864   0     1500
period 2          989   1.0         989         878   111   1500
period 3          1199  3.0         1198        952   247   1498
period 4          1393  7.0         1390        1137  256   1493

This helps a little but not much.

Perhaps we need to combine Ian's suggestions: ELO with a start rating of
1000 and a K factor of 30.

ELO - INITIAL=1000 K = 30
            strength    score       perf  new rating  diff  pool
period 1          864   0.5         864         983   -119  1500
period 2          989   1.0         989         984      5  1500
period 3          1199  3.0         1199        1045   154  1499
period 4          1393  7.0         1391        1214   179  1495

OK, this is better, but it's still worse than the Glicko and Glicko2 systems.

The last two columns are the key. We want the difference to be as small as
possible and the pool rating to be as close to 1500 as possible. The current
system does this better than any of the others.

Note that our imaginary scenario above only considers games played by the
newcomer against established players. We don't consider games played by the
established players against each other. Assuming that the established
players score an average of 50% against each other, these results would not
affect the ELO calculations at all. Under Glicko such results would add to
the confidence level for the established players and so further reduce any
drop in their ratings.
For those who think that ELO is superior to Glicko/Glicko2, or that some
"feel good" or random change to the current system would improve things,
the above clearly shows otherwise.

- Graham Saint & Bill Gletsos
ACF Rating Officers


At risk of fanning the flames of the great ratings debate even further 
I’d like to make the following observations.  Note these are the opinions 
of someone who is neither a titled player (far from it) nor an office holder 
in the ACF or any state association.  Also, I tried to pick issues from 
various authors but not to name anyone in particular.  This is not a deliberate 
attempt to be obtuse or construct straw-men arguments but rather an 
attempt to address issues in a less confronting way.

Glicko is complicated -> lacks transparency
I believe this argument is fallacious.  The rating system lacks transparency 
because not all the numerical information kept on a player is published by 
the ACF rating officers.  They publish a rating and a reliability “indicator” but 
not the actual RD and nowhere are the Glicko-2 volatilities even hinted at.  
Without knowledge of this information it is patently impossible for anyone 
to accurately test the correctness of their rating movement in a period based 
on their results - even when all their games have been against other rated players.

If all the information was known then I’m sure anyone with a computer could 
enter the numbers into an application and crank the handle to see the results 
of their last tournament performance, or check an unexpected rating fluctuation 
from the recent ratings list.  I’d be willing to develop such an application for 
free distribution with the assistance of the ACF rating officers.

In fact, in the perfect world, a web application on the ACF rating page would 
accept input and provide feedback on the rating adjustment on a 
player-by-player basis – similar to the functioning of the ‘history’ command on FICS.

Glicko places too much emphasis on recent results -> return to Elo
Again I don’t believe one leads to the other.  Prof. Glickman’s main idea 
was not to add emphasis to recent results at all.  That was Ken Thompson’s 
system.  Glickman’s idea was to add the concept of a ratings reliability 
factor (the RD).  The effect of this is that players with larger RDs experience 
greater fluctuations than those with lower RDs (i.e. more reliable ratings).

This would seem a very sensible approach and not really related to recent 
results at all.  In fact, the average player would begin with a high RD when 
they first begin playing in rated tournaments which would establish their 
rating “ball-park” and assuming they continue to be an active player their 
RD would decrease and therefore so would their rating movements.  In this 
scenario it is the early results which are emphasized.

Furthermore, the Glicko system has a parameter called c which is used to 
determine how quickly a reliable RD “ages” to become an unreliable one.  
If the ratings are shown to fluctuate too much (an argument I don’t wish to 
get into) perhaps slowing the aging of RDs is one way to reduce the extent 
of these fluctuations.  Certainly tweaking of the current system to address 
demonstrated flaws is preferable to throwing out the whole Glicko system 
with the proverbial bathwater.
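In Glickman's published Glicko-1 description, the aging Barry describes is a one-line step: between rating periods the RD grows with c (and with time spent inactive), capped at the unrated-player ceiling of 350. A minimal sketch, with an illustrative c value:

```python
import math

def age_rd(rd, c, periods=1, rd_max=350.0):
    # Glicko step 1: rating uncertainty grows while a player is inactive,
    # capped at the unrated-player ceiling (350 in Glickman's description)
    return min(math.sqrt(rd ** 2 + (c ** 2) * periods), rd_max)

# With c = 63.2 (an illustrative value only), a reliable RD of 50 ages to
# about 81 after one idle period, and hits the 350 ceiling after a long
# absence:
print(round(age_rd(50, 63.2, 1)))     # 81
print(round(age_rd(50, 63.2, 1000)))  # 350
```

Lowering c slows this aging, which is exactly the kind of tweak suggested above for damping fluctuations.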

Glicko has a high K-factor
This assertion is simply wrong.  In fact, Glicko doesn’t have K-factors at all.  
The K-factor is an artifact of the Elo system.  Under Glicko the amount of 
rating movement is based on the RD of the players involved: sometimes it is 
high and sometimes it is low, depending on the degree of reliability held in 
those ratings.  Furthermore, under Glicko, rating changes are not balanced.  
If one player's rating increases by a certain number, the opponent's rating 
will not usually decrease by the same amount.  This is as one would expect 
in a system which tracks reliability as well as rating: less reliable 
ratings should move more than more reliable ones.
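A rough sketch of the single-game Glicko update (formulas from Prof. Glickman's published Glicko-1 description; the ratings and RDs below are invented for illustration) shows both effects, RD-driven movement and unbalanced changes:

```python
import math

Q = math.log(10) / 400  # Glicko scaling constant

def g(rd):
    # A high-RD opponent's result carries less information, so it is attenuated.
    return 1 / math.sqrt(1 + 3 * (Q ** 2) * (rd ** 2) / (math.pi ** 2))

def glicko_update(r, rd, r_opp, rd_opp, score):
    """One-game Glicko-1 update for the player (r, rd); score is 1, 0.5 or 0."""
    e = 1 / (1 + 10 ** (-g(rd_opp) * (r - r_opp) / 400))
    d2 = 1 / ((Q ** 2) * (g(rd_opp) ** 2) * e * (1 - e))
    new_r = r + (Q / (1 / rd ** 2 + 1 / d2)) * g(rd_opp) * (score - e)
    new_rd = math.sqrt(1 / (1 / rd ** 2 + 1 / d2))
    return new_r, new_rd

# A provisional player (RD 200) beats an established one (RD 30):
win_r, win_rd = glicko_update(1500, 200, 1500, 30, 1)    # gains a lot
loss_r, loss_rd = glicko_update(1500, 30, 1500, 200, 0)  # loses only a little
```

The winner here gains far more points than the loser drops, and both RDs shrink: exactly the unbalanced, reliability-driven behaviour described above.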

A lot of people dislike the current rating system -> change the rating system
Again I’d argue against this line of reasoning.  In any rating system which is 
half doing its job you are going to have detractors - human nature being what 
it is, and all.  I believe this point has already been made well by others.


Also other factors could be causing the dissatisfaction with the ratings 
system.  Some of these factors could be:

- Lack of understanding of the mechanics of the system.  This can be 
addressed with more helpful information on the ACF site explaining 
exactly how the system works. 
- Lack of transparency in the process.  Publish ratings, RDs, volatilities 
and other parameters.  Provide tools which allow players to review their 
recent history and the impact these results had on their rating. 
- Appeals to the person, i.e. a GM and an IM don't like it so it can't be 
any good.  Difficult to correct, but perhaps a less adversarial forum to 
raise issues like this would help.  Letters and computer bulletin board 
threads can quickly escalate into flame wars.  Open minds are required 
on both sides of the divide. 

There does seem to be a great deal of interest in the ratings system 
at the moment.  What should be done about it is a matter (I believe) 
for the ACF council.  Hopefully a solution can be reached which is 
agreeable and fair to the majority, if not all, Australian chess players, 
lofty or lowly, junior or adult, as well as being statistically sound.

- Regards, 
Barry Cox


Dear Sir,
I am writing in response to the letters in ACF Bulletin 208 on the Glicko rating system.
Most importantly, I was extremely disappointed with some of the ad 
hominem attacks and derogatory comments contained in these letters, 
particularly by those seeking to defend the Glicko system.  In my view, 
the onus is on those who support a complex mathematical system to 
understand and address the concerns of the users of the system 
(which seem to be deflation, volatility and lack of transparency) 
rather than disparage users for a lack of knowledge.
On a more technical (but less important) level, although I do not 
pretend to fully understand the Glicko system (and serious 
statistical study was too long ago!), I would like to make the 
following comments about each of the concerns in turn:
- if I understand the comments by the various Glicko experts, 
what is being experienced by "elite" players as deflation is in 
fact mathematical compaction around some mean (as an aside, 
I would be interested to know what this mean is), i.e. merely 
reducing rating differentials.
- As stated, rating differentials are merely representations 
of a probability distribution and so there is no "magic" in 
any nominal rating or nominal rating differentials - it is a relative 
scale (that said, it would of course be nice to align to some 
degree with the FIDE system).
- Given the level of concern, cannot the Glicko parameters 
be tweaked to maintain the current nominal rating distribution?  
This would seem to maintain a "pure" system while 
addressing the concerns of "elite" players.
- As an aside, I suggest that the concerns of "elite" players 
should not be lightly discarded.  I think that most chess players 
derive satisfaction from improving their chess and testing 
themselves against stronger players.  A disaffected and inactive 
"elite" level will diminish the enjoyment of chess for all (and I suspect 
their inactivity has a "trickle down" effect on the activity of the next tier of players).
- I agree that in a pure sense, a rating is just a number which should 
represent an expectation of current performance.
- The point is also made that ratings are really only important for elite 
players.  However, I suspect many players gain satisfaction through 
improvement.  A rating is the obvious way of tracking improvement 
and progress.
- As such, although a rating in a pure sense can be seen as a measure 
of current playing strength, it is only natural for a player to see their 
rating as the result of cumulative efforts and improvement.
- What is perceived as an overly volatile rating system can therefore 
take away much of the satisfaction of playing (and will not only affect 
"elite" players).
- Again, it seems to me that the Glicko parameters can readily be tweaked 
to reduce the volatility of ratings for players who have played many games 
over their career, although they may have been absent for some time from the 
Australian chess scene.
- Transparency is an important concern - it is not sufficient merely 
to state that the "black box" is doing its job.
- In seeking transparency, I do not think it is necessary to revert to 
an Elo-type system.  However, I suggest some simple measures 
to improve transparency which should be relatively simple given 
the benefits of the internet.
- For example, for each active player, it should be possible to publish 
on the website supporting information such as:  number of games 
played, results, expected performance, actual performance, rating movement.
- Maybe it would even be possible to put a "pocket Glicko calculator" on the site?
In short, and most importantly, I hope that future discussion on the 
rating system (and other points of contention) can be conducted in 
a less personal and more objective manner.
In addition, I hope that serious consideration can be given to the proposals above:
- changing the Glicko parameters to maintain existing nominal rating 
differentials (perhaps by flattening out and/or cutting off the probability 
distribution curve);
- changing the Glicko parameters to reduce the volatility of rating 
of returning players;
- publishing more detailed supporting information with the ratings 
lists to improve the transparency of the rating system.
Jeremy Hirschhorn 


Malbork Castle Cup; 20-21 September 2003; Malbork, Poland. 
The tournament is played in a beautiful castle. Last year 6 GMs and 6 IMs played.
The first prize in 2003 is 2000 PLN (about $500). 
MSC Jerzy Skonieczny

2nd Bangkok Chess Club Open, 1-5 May, 
9-round Swiss, Novotel Bangkok on Siam Square, 
see or email

$10,000 Tampa Open
April 11-13, 2003

$8,000 Paul Morphy Open
May 9-11, 2003. 2- or 3-day schedule.


The Italo-Australian Club 41st Doeberl Cup
A Class 3 ACF Grand Prix Event 18-21 April 2003
Location: The Italo-Australian Club, 
78 Franklin Street, Forrest, Canberra, ACT.
Total Prizes: $10,000
Time Limits: Digital clocks will be used. 
All divisions: 90 minutes plus 30 seconds per move from the beginning.
Entry Fees:
Premier Division: Adult $100; Under 18s $60 
(GMs & IMs free, if entry received by 11-04-2003).
Major & Minor Divisions: Adult $90; Under 18s $50 
Please note that a $20 (Adult) /$10 (Under 18s) 
discount applies, if entry is received by 11-04-2003.
Entries to: 
Paul Dunn (Treasurer, Doeberl Cup)
20 Richmond St, Macquarie, ACT 2614
Please make cheques payable to ACTCA.
Roger McCart (Convener, Doeberl Cup) Ph: 02-62516190

Sydney Easter Cup
The fourth Sydney Easter Cup will be held at Cabra-Vale Diggers Club, 1
Bartley Street Cabramatta on Easter Saturday and Monday commencing at
9.30am.  7 rounds of 1 hour each per player, loss on flagfall.  Entry fees:
full $25, Concession $15, Junior $10.  Guaranteed first prize of minimum
$250. Register and pay on first day of play.  Games will be rated.  Contact:
Ernest Dorm 9727-2931

Chess World ANZAC Day weekender
Category 2 Grand Prix event
April 25-27
ChessWorld Tournament Centre 
Contact David Cordover (03) 957 6177 or 0411-877-833

Anzac Allegro
8 rounds, 15 minutes each
Friday 25th April 2003
Carina Leagues Club
Creek Road, Carina (opposite Meadowlands Rd)
Register by 10.00am
Entries: Close by 5pm Thursday 24th April
Rounds: Start at 10.15am - 4 before lunch and 4 after
Fee: $40-00 each player 
Contact: Clive or Wendy Terry (07) 3890-0064 or 041-3355479 
Only 20 places available so get your entries in early!
Morning tea provided - Club Bistro open from 1pm. 
Make all cheques to ROOKIES CHESS CLUB 
and post to 11 Jericho Circuit, Murarrie. 4172

Primary School Chess Tournament
Queensland Junior Rated! 
8 rounds, 15 minutes each. Friday 25th April 2003
Carina Leagues Club
Creek Road, Carina (opposite Meadowlands Rd)
Time: Register by 10.00am
Entries: Close by 5pm Thursday 24th April
Rounds: Start at 10.15am - 4 before lunch and 4 after
Fee: $15-00 each player 
Presentation of Trophies: No later than 4.30pm
Organisers and Arbiters: Clive & Wendy Terry (07) 3890-0064 or 041-3355479
Limited places available - Morning tea provided - Club Bistro open from 1pm.
Make all cheques to ROOKIES CHESS CLUB 
and post to 11 Jericho Circuit, Murarrie. 4172

University Open 2003
$4000 Total Prizes
Category Three Grand Prix
12th & 13th July
$35 Adult   $25 Junior/Concession
Adelaide University, SA
Official site

World Junior & Girls Chess Championships
Nakhchivan, Azerbaijan
21 June - 4 July 2003
21 June 2003 (arrival) to 4 July 2003 (departure) 
at the Olympic Center of Nakhchivan. 
Only those born on or after 1st January 1983 are eligible.
The Registration Forms shall be submitted to 
Azerbaijan Chess Federation to be received before 30 May 2003.
Swiss System, 13 rounds, with a free day after the 7th round.


Doeberl Cup
Category 3
Apr 18-21
Contact Roger McCart
phone (02) 6251 6190

Chess World ANZAC Day weekender
Category 2
April 25-27
ChessWorld Tournament Centre
Contact David Cordover (03) 957 6177 or 0411-877-833

37th Peninsula Open
Category 1
May  3-5
Contact Mark Stokes (07) 3205 6042

Laurieton May Open
Category 1
May 3-4
Contact Endel Lane  (02) 6559  9060

NSWCA May Weekender
Category  2
May 17-18
Contact P.Cassettari
0403 775476

Tasmanian Chess Championship
Category  1
Jun 7-9
Contact  K.Bonham  (03) 6224 8487

NSW Open Championship
Category  3
Jun 7-9
Contact: P.Cassettari
0403 775476

Taree RSL June Open
Category 1
Jun 14-15
Contact Endel Lane  (02) 6559  9060

Gold Coast Open (Gold Coast CC)
Category 3
Jun 21-22
Contact Graeme Gardiner
(07) 5530 5794

Caloundra Open
Category 3?
Jun 28/29
Contact Derrick Jeffries

University Open
Category  3
Jul 12-13 ph (08) 8303 3029 or (08) 8332 3752

NSWCA August Weekender
Category  2
Aug 2-3
Contact P.Cassettari
0403 775476

Father's Day Tournament
Category 2/3?
Sep 6-7
Contact:  David Cordover (03) 9576177 or 0411-877-833

Gold Coast Classic (Gold Coast CC)
Category 3
Sep 20-21
Contact Graeme Gardiner
(07) 5530 5794

12th Redcliffe Challenge
Category 2
Sep 27-28
Contact Mark Stokes (07) 3205 6042

Tweed Open
Category  3
Oct 4-5
Contact Audie Pennefather

Koala Open
Category 3
Oct 5-6
Contact Brian Jones

Laurieton Open
Category 1
Nov 1-2
Contact Endel Lane  (02) 6559  9060

November weekender
Category  1
Nov 1-2 or 1-3
Contact  K.Bonham  (03) 6224 8487

Gosford Open
Category 2
Nov 8-9
Contact Lachlan Yee

Taree RSL Spring Open
Category 1
Nov 15-16
Contact Endel Lane  (02) 6559  9060

NSWCA November Weekender
Category 2
Nov 22-23
0403 775476

X-Mas Swiss Tournament
Category 2-3?
December 20-21
Contact David Cordover (03) 9576177 or 0411-877-833

Total 29: NSW 14, QLD 6, VIC 4, ACT 1, TAS 3, SA 1

Best wishes till next time
- Paul Broekhuyse