When Did US Fencing Begin Using Ratings?

Discussion in 'Fencing Discussion' started by sdubinsky, Mar 2, 2015.

  1. teacup

    teacup Podium

    Joined:
    Mar 14, 2006
    Messages:
    4,872
    Likes Received:
    170
    I think it would be really helpful to take out those who earned As at youth/cadet/junior events. Not sure how it could be calculated but the data would be more meaningful.
     
  2. Bonehead

    Bonehead Podium

    Joined:
    Aug 27, 2002
    Messages:
    2,563
    Likes Received:
    137
    Why? Do you think a junior A is easier to earn than a mixed open A?


     
  3. teacup

    teacup Podium

    Joined:
    Mar 14, 2006
    Messages:
    4,872
    Likes Received:
    170
    Others may disagree, but yes, I think that earning an A by making top 8 at a Div I NAC with 300 fencers isn't necessarily the same as earning an A at a local Junior event with 15 fencers.
     
  4. K O'N

    K O'N Podium

    Joined:
    Aug 14, 2006
    Messages:
    3,593
    Likes Received:
    458
    But in fact you're much more likely to get an A at a junior NAC than a local event. Is there even such a thing as a local junior A1 event? I live near a good club for juniors, and I've never seen one. The only junior local events I see are qualifiers. OTOH, there are lots of local smallish A1 or A2 open events.

    I'm sure it varies by weapon and area, but the juniors I know with A's are not soft A's. A cadet A can sometimes be a soft A, but I can't think of a junior I know with an A whom I'm glad to see in a pool.
     
  5. jdude97

    jdude97 Podium

    Joined:
    Apr 10, 2013
    Messages:
    1,673
    Likes Received:
    155
    Just an FYI, if you're saying that a cadet A is weaker than a junior A is weaker than an open A, you are inherently providing support for the claim that the ratings system is flawed.

    The point of the ratings system is that everyone with the same rating is at the same skill level. Now I doubt anyone would risk the ridicule of claiming that all A's are the same. What about female A's or vet A's or even youth A's (there are a lot in Y14ME and even a couple in Y12ME and Y14WE, and that's without even knowing about other weapons)?

    I'd say a perfect ratings system assigns people of equal skill equal ratings. That goal has two aspects: sufficient delineation, so that a strong __ and a weak __ are as close as possible, and then a way to properly assign those ratings. The latter seems to be the much harder goal, and I don't have a good suggestion for how to achieve it, but it is pretty clear that our current system fails miserably at the first aspect.

    An A ranges from multi-time Olympians to current and former World Team members to div1 NAC medalists to winners of an A4 to winners of a local 15 person A1 to top Y14s to top vet60s to all these same categories but female, etc., etc.

    So either the system uses two or more separate scales (e.g. men and women; mixed and women; youth, cadet/junior, open, vet; etc.), or the system says that a man of such-and-such rating, a woman of such-and-such rating, a 12yo, a 70yo, and everyone else of such-and-such rating are sufficiently similar.
     
  6. Bonehead

    Bonehead Podium

    Joined:
    Aug 27, 2002
    Messages:
    2,563
    Likes Received:
    137
    Because there are fewer fencers, or because the fencers are weaker on average?

    Do you think, for example, if you chose 15 fencers from that 300 person NAC (by fair dice roll or something, so that your sample was representative of the strength of the larger event), that it would be just as easy as a local Junior event?

    Or do you think the fact that they are Juniors factors into it?
     
  7. teacup

    teacup Podium

    Joined:
    Mar 14, 2006
    Messages:
    4,872
    Likes Received:
    170
    Let me rephrase:

    I think that earning an A by making top 8 at a Div I NAC or Championship shouldn't be treated the same as earning an A at a local or regional junior/cadet/Y14/Y12/Y10* event with 15 fencers. (Maybe even senior event...)

    So either give more classifications out at certain national/regional/local events, depending on size or strength (the same way WV are added**), award national points to the top 40% at national and regional events, or add a distinction to letter classifications, such as SA15 (Senior A), JA15, CA15, YA15, VA15, etc. Points from youth events aren't used in seeding for senior events, so why are "classifications" earned at youth events used to seed senior events?

    *I know that there probably aren't any Y10 fencers with As, but the point is that it would be allowed if a tournament reached A1 status.
    **I actually don't agree that WV should be added PRIOR to events, but they are.
     
    Last edited: Mar 27, 2015
  8. fdad

    fdad Podium

    Joined:
    Jul 9, 2008
    Messages:
    4,129
    Likes Received:
    214
    I think local Cadet and Junior events are relatively rare compared to opens, at least around here. Once you decide to go to the 13-year-old minimum, excluding older fencers doesn't make a lot of sense (most of the fencers will still be cadets anyway).

    Depending on the weapon/gender, the relative strength of Junior vs. Div I NACs can vary. In WF they are almost equivalent; in ME they are very different.
     
  9. teacup

    teacup Podium

    Joined:
    Mar 14, 2006
    Messages:
    4,872
    Likes Received:
    170
    Some of the cadet events at the Remenyik RJCC, the Duel at Dallas RJCC, the FAD - Parker Kick Off, and the North Texas Round Up were A1/A2 events.

    It isn't just As; Bs and Cs earned at all age-level local and regional events are treated the same as those earned at senior national events. Bs or Cs earned at local youth events are used to seed fencers for local mixed opens and to determine the strength of those tournaments for awarding more A/B/Cs.

    And I agree with fdad: the strength and availability of tournaments vary greatly across weapons, age levels and genders.
     
    Last edited: Mar 27, 2015
  10. Peach

    Peach Podium

    Joined:
    Feb 10, 2001
    Messages:
    163
    Likes Received:
    766
    The A I earned in a Div IA wasn't nearly as difficult as the B's I've earned at various points, except for having to be smart enough to unhook, shake hands, and walk away while my opponent's coach was arguing with the referee after the last touch in the IA. Y'all are trying to define something too closely. teacup and fdad are half right; strength of tournaments varies greatly. Period. And though other methods based on cumulative encounters will certainly be more granular, accidents will continue to happen and things will take a while to sort out. Meanwhile, all you need is a fairly efficient method of sorting people out so that in the seeding rounds things don't get too crazy.
     
  11. K O'N

    K O'N Podium

    Joined:
    Aug 14, 2006
    Messages:
    3,593
    Likes Received:
    458
    Exactly what Peach said. I've never seen anything that leads me to think that ratings from national events are harder or more significant than ratings from local events. If you think there's such a difference, I encourage you to dive into askfred's data and show it. Anecdotally, I think there's not.

    At any rate, a good result at a Div I NAC does carry more weight than winning a local A2; you get points. If someone is seeding a huge event with lots of A's coming, they can use points in the seeding. So if it matters and you have a lot of A's coming who got their A's in various ways, the current system does exactly what you want: it differentiates between someone who made the final eight at a Div I NAC and someone who won a local A2.
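
    For instance, here's a minimal sketch in Python of what that kind of tiebreak could look like (the names, point totals, and the exact ordering rule are all hypothetical; real seeding rules also consider things like the classification year):

    # One hypothetical way to use national points to differentiate within a letter
    # when seeding a big event (made-up fencers and numbers).
    fencers = [
        {"name": "local A2 winner", "letter": "A", "points": 0.0},
        {"name": "Div I NAC top-8", "letter": "A", "points": 42.0},
        {"name": "strong B",        "letter": "B", "points": 0.0},
    ]

    # Letter first (A before B), then more national points first.
    seeded = sorted(fencers, key=lambda f: (f["letter"], -f["points"]))
    print([f["name"] for f in seeded])
    # -> ['Div I NAC top-8', 'local A2 winner', 'strong B']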

    If you really feel the need for a social distinction, when you win your A at a Div I NAC you can tell people "I'm an A, and I have this many points, too!"
     
  12. Bonehead

    Bonehead Podium

    Joined:
    Aug 27, 2002
    Messages:
    2,563
    Likes Received:
    137
    Do you think that the body of Cadets should have a lower percentage of ratings than the body of Seniors (excluding cadets)?
     
  13. K O'N

    K O'N Podium

    Joined:
    Aug 14, 2006
    Messages:
    3,593
    Likes Received:
    458
    Uh. I'm not sure I have a target number for the percentage of cadets who should have a rating. It's a kind of complicated question; the cadets I know are in general working harder and taking more lessons than the median non-cadet senior. They have overwhelmingly earned their ratings in open events or at NACs, not in local cadet-restricted events, since there are not many of those. If I search for A1 or better cadet events, I only find eight results on askfred, ever. However, four of them were last year. Maybe they're becoming more common, I don't know. There would have to be enormous growth for them to have much of an effect.

    As I said, anecdotally, I don't think a cadet A is a weak A. If you got an A at a cadet NAC, I think you probably earned it. Most of the cadets and juniors around here earn ratings in open events, I think. It's my impression that they think it's easier to beat an old guy like me for a rating than to beat someone at a NAC. I don't really think there's much difference in how hard it is to get a rating; when I was competing, I was a B. I got a B locally, I think I got a B once at SN, and I was just short of an A in big events and small events and all over. That's where I seemed to end up. So the idea that one path is a lot easier than another doesn't seem reasonable to me. And if there were such a path, I think it would be well known, and coaches would exploit it. But if you try to build an "easy" A1 to give a kid an A, you find other people signing up, and suddenly it's not so easy anymore; it's pretty self-correcting, IME.

    But I'm open to the idea that I'm wrong about that. A clever person could write a script to look on askfred and find everyone who had earned an A at a cadet-restricted event, and then map their results against people who earned an A at a senior event, and see if there was a large difference, either head to head or in tournament placement. Maybe across all weapons and regions it exists, I don't know.
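
    Roughly, such a script might look like this (a minimal sketch in Python; it assumes you've already pulled askfred results into a results.csv with hypothetical columns fencer_id, event_restriction, rating_earned, place, and field_size, since I'm not going to guess at askfred's actual export format):

    import csv
    from collections import defaultdict
    from statistics import mean

    earned_a_at = {}                      # fencer_id -> "cadet" or "senior"
    results_by_fencer = defaultdict(list)

    with open("results.csv", newline="") as f:
        for row in csv.DictReader(f):
            results_by_fencer[row["fencer_id"]].append(row)
            if row["rating_earned"] == "A":
                # first A on record wins; refine with dates if you have them
                earned_a_at.setdefault(row["fencer_id"], row["event_restriction"])

    def placement_fractions(group):
        """Finish place as a fraction of field size, for fencers whose A came from `group` events."""
        fracs = []
        for fid, source in earned_a_at.items():
            if source != group:
                continue
            for r in results_by_fencer[fid]:
                fracs.append(int(r["place"]) / int(r["field_size"]))
        return fracs

    for group in ("cadet", "senior"):
        fracs = placement_fractions(group)
        if fracs:
            print(f"A's earned at {group} events: mean finish at "
                  f"{mean(fracs):.0%} of the field ({len(fracs)} results)")

    That would only compare tournament placement; a head-to-head comparison would need the bout-level data as well.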
     
    Last edited: Mar 27, 2015
  14. Bonehead

    Bonehead Podium

    Joined:
    Aug 27, 2002
    Messages:
    2,563
    Likes Received:
    137
    I don't necessarily think that a cadet A is a weaker A. But if ratings have a strong correlation to "skill" (ability to win bouts), then density of ratings should have a strong correlation to density of skill.

    If you think that senior fencers are, on average, more likely to beat cadet fencers, then it reasonably should follow that a population of senior fencers should have a higher density of ratings than a population of cadets.

    If there are confounding factors, like "more seniors start as beginners after they're cadets" or something, then anything that affects the density of ratings should presumably also affect the average ability.

    So if the density of ratings is the same (within a degree of statistical error), you must think that either a) the average skill/ability to win is the same between the groups (within a degree of statistical error), or b) there is no correlation between the average skill/ability to win and ratings.

    For me, I think it's probably the case that the average skill/ability to win of seniors is higher than the average skill of cadets, simply due to physical size and the potential for more experience. So I'm very curious to know the average density of ratings.
     
  15. Peach

    Peach Podium

    Joined:
    Feb 10, 2001
    Messages:
    163
    Likes Received:
    766
    One would hope that many of the senior fencers were, prior to their seniority, juniors. And before that, cadets.
     
  16. Bonehead

    Bonehead Podium

    Joined:
    Aug 27, 2002
    Messages:
    2,563
    Likes Received:
    137
    That's one of the reasons why I think the average ability-to-win of an indicative sample of senior fencers would be higher than that of an indicative sample of cadets. The seniors would have much more potential for experience and, I would think, be stronger on average too.
     
  17. mfp

    mfp Podium

    Joined:
    May 10, 2002
    Messages:
    1,828
    Likes Received:
    221
    Yep. People in these threads often promote alternative rating or classification systems because they could somehow be more "granular" or "accurate" than the classification and ranking systems currently in use -- accompanied by claims their new systems make for "more accurate seeding" of competitions.

    Yet they seem unwilling or unable to specify objective ways to measure if or how these "more granular or accurate" ratings/classifications make for "more accurate" seeding of competitions.*

    So these proposals ... involve more complexity than what we have currently. The complexity takes more resources to implement. The proposals ignore how long (if ever) they would take to reach some claimed notion of accuracy. And even if and when they did hit that "accuracy", would it matter in any measurable way to how the various competitions perform, in ways the USFA would care about, and to an extent that would justify the extra complexity and resources?


    *Footnote/hint: While ratings, rankings and classifications are possible inputs to a seeding system, they aren't themselves a seeding system. Measuring how granular or accurate your ratings system is doesn't measure the performance of a seeding system.
     
    Last edited: Mar 27, 2015
  18. Blackwood

    Blackwood Made the Cut

    Joined:
    Mar 23, 2011
    Messages:
    356
    Likes Received:
    24
    I don't have a dog in this hunt, but I suggest that the way to measure the accuracy of a seeding method is to compare the initial seeds to either the final places or the seeding for the second round. I know that people have already pointed out that seeding is not intended to predict final results, but seeds are clearly intended to be a measure of the strength of each fencer, so I think results are the most objective way of measuring that.

    I think that an appropriate way to compare seeding to results is to calculate the difference between seed and result for each fencer as a percentage of the size of the field, and take the average of those percentages for all fencers in the event. So when comparing seeding methods, I would prefer the one that produced the lowest average percentage difference between seed and result.
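
    In rough Python, that metric would be something like this (a hypothetical four-person event, just to pin down the arithmetic):

    # Proposed measure: mean |seed - final place| as a percentage of field size;
    # when comparing seeding methods, lower is better by this measure.
    def seeding_error(seeds, places):
        """seeds/places map each fencer to their initial seed and final place."""
        n = len(seeds)
        return 100 * sum(abs(seeds[f] - places[f]) / n for f in seeds) / n

    # Hypothetical 4-person event: seeds 1-4 finish 2, 1, 3, 4.
    seeds  = {"A": 1, "B": 2, "C": 3, "D": 4}
    places = {"A": 2, "B": 1, "C": 3, "D": 4}
    print(f"{seeding_error(seeds, places):.1f}% average seed-to-result difference")
    # -> 12.5% average seed-to-result difference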
     
  19. teacup

    teacup Podium

    Joined:
    Mar 14, 2006
    Messages:
    4,872
    Likes Received:
    170
    Actually, I am proposing a simpler system.

    Since youth events must be C1 or higher, most youths do not have classifications. However, many do have national points earned at both national and regional events. These national points are used not only for seeding in their respective age categories, but also for qualification purposes.

    So instead of having three points systems (rolling, team and regional), why not have one national points system that awards points to the top 40% of the field at national events and, on a lower scale, to the top 40% at regional events? Points roll off after one calendar year, with no replacement schedules.

    This one points list, in conjunction with letter classifications, would be used not only for seeding but also for qualification purposes.
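
    Something like this, as a minimal sketch in Python (the point values, the regional scaling factor, and the linear scale are placeholders I made up, not part of the proposal):

    from datetime import date, timedelta

    REGIONAL_SCALE = 0.25            # hypothetical: regional points worth a quarter of national
    POINT_LIFETIME = timedelta(days=365)

    def points_for_place(place, field_size, national=True):
        """Award points to the top 40% of the field, on a made-up linear scale."""
        cutoff = int(field_size * 0.40)
        if cutoff == 0 or place > cutoff:
            return 0.0
        base = 100.0 * (cutoff - place + 1) / cutoff
        return base if national else base * REGIONAL_SCALE

    def current_points(history, today):
        """history is a list of (event_date, points); points roll off after one year."""
        return sum(p for d, p in history if today - d <= POINT_LIFETIME)

    # Hypothetical fencer: 5th of 60 at a national event last month, and
    # 2nd of 20 at a regional event 14 months ago (already rolled off).
    history = [
        (date(2015, 2, 1), points_for_place(5, 60, national=True)),
        (date(2014, 1, 15), points_for_place(2, 20, national=False)),
    ]
    print(round(current_points(history, date(2015, 3, 1)), 1))   # -> 83.3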
     
  20. mfp

    mfp Podium

    Joined:
    May 10, 2002
    Messages:
    1,828
    Likes Received:
    221
    There are lots of problems with approaches that involve comparing the entire initial ranked list of competitors to the final results of a competition and claiming that this measures either the "accuracy" of the rating system used to produce the list or the "accuracy" of the seeding system.

    First off, the initial ranked list (however produced) and the seeding system are two related, but still different, things. The ranked list is *input* to the seeding system, not the whole thing. (The input only seeds the seeding system, if you will.) The seeding system uses the input list along with an algorithm to assign competitors to positions in the competition. There are lots of methods of taking the input list and doing the assignments. Given that there are lots of ways to do the assignments even for the same list, what are you measuring when you compare the input list to final results? Are you really measuring the system that produced the initial ranked list? Or the seeding algorithm? A mixture of both?

    The next, bigger issue is that talk of "seeding" or "seeding accuracy" is pretty nonsensical unless you include the discussion of the *competition* a seeding system is used for. There are lots of different competition formats. They have various features and drawbacks. They perform differently. They can use different algorithms in their seeding systems. They can take input produced by different rating, ranking or classification systems.

    So when you compare an initial list of competitors to the final results of a competition in the way you suggest, are you really measuring characteristics of the (rating) system used to produce the initial input? Or properties of the seeding system? Or the competition format? How can you tell?

    Some background: when researchers investigate competitions and seeding systems, they oftentimes start with a list of perfectly accurately rated/ranked competitors. They explore and measure how the competition / seeding systems perform. One common metric is the "predictive power" of a competition. When they run their perfectly rated competitors through the competition format and seeding systems of interest, what's the probability that the highest rated/ranked player wins? What's the probability the second best player wins? What's the probability the first and second best players end up together in the final match up? How do the top 4 ranked going in do? Where did the top 4 finishers rank initially? What happens if you only give the system accurate ratings/rankings for the top 4 and all players below them are randomized?
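
    To make that concrete, here's a toy version of that kind of study in Python (a plain single-elimination bracket and a made-up win-probability model, not any real fencing format or ratings data):

    import random

    def win_prob(a, b):
        # Hypothetical Bradley-Terry-style model: a competitor's "strength" is 1/seed,
        # so seed 1 beats seed 2 about 2/3 of the time.
        sa, sb = 1.0 / a, 1.0 / b
        return sa / (sa + sb)

    def standard_bracket(n):
        """Classic 1-vs-n pairing order for a power-of-two field of perfectly ranked seeds."""
        order = [1]
        while len(order) < n:
            m = 2 * len(order) + 1
            order = [s for x in order for s in (x, m - x)]
        return order

    def run_event(n):
        field = standard_bracket(n)
        while len(field) > 1:                       # play each round
            field = [a if random.random() < win_prob(a, b) else b
                     for a, b in zip(field[::2], field[1::2])]
        return field[0]                             # winner's original seed

    trials = 20_000
    wins = sum(run_event(16) == 1 for _ in range(trials))
    print(f"Top seed wins about {wins / trials:.0%} of these 16-person single-elim events")

    Swap in a different bracket-assignment rule, a different format, or a partially randomized input list and you can measure how each change affects that predictive power.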
     
