Try not to derail this thread please. Sooooooooo this Harrison-Hunte guy, he any good? He coming?
Ok, that works. It's a perfectly fine criterion to say that you'd assign star ratings based on the schools recruiting a given kid.
But then what do you do with the kids who are highly recruited by Alabama, UGA, OSU, Clemson, etc., and then turn out not to be elite college players and aren't drafted? Use a kid like Ermon Lane as an example. He was recruited like a 5-star. Alabama, UGA, Clemson, etc., all wanted him. Yet, after the fact, do you think he was overrated? Using one of your metrics, i.e., his ultimate NFL draft selection, he was overrated. Yet using your other metric, i.e., the quality of the teams recruiting a given kid, Lane was properly ranked. So which is it, D$?
Do you see the inherent flaw in the argument you've been making, whereby you judge rankings to have been wrong in hindsight based on draft position? Even your own criterion, the programs that recruit a given kid, isn't a perfect predictor of future college or draft success. So what do you then do with that?
Here’s the challenge. If the rating services are systematically wrong in a predictive way on *some* kids, then there should be an algorithm that could tell you which kids they are likely underrating or overrating. An arbitrage algorithm, in effect. Maybe it's kids with good measurables from small schools. Or two-sport stars with insufficient football experience. Or maybe kids with multiple teammates being recruited by major programs are overrated. Whatever it is, you can potentially find a basis to identify rating errors. And if you did, the rating services would incorporate that into their process over time and the ratings would improve. There would be fewer predictable errors, but still outcome variance, because of the uncertainty involved.
But if there is no clear way to identify flaws in the service rankings, then there is nothing interesting to discuss here.
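For what it's worth, here is a minimal sketch of what that kind of arbitrage check could look like. Everything in it is hypothetical — the data file, the column names (small_school, two_sport, recruited_teammates, combine_score), and the crude definition of "underrated":

```python
# Hypothetical sketch: do features like "small school" or "two-sport athlete"
# predict whether a kid out-performs his star rating?
# The CSV and all column names are assumptions, not a real dataset.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

recruits = pd.read_csv("recruits.csv")  # hypothetical: one row per recruit

# Crude "rating error" label: drafted despite a 3-star (or lower) rating.
recruits["underrated"] = (recruits["stars"] <= 3) & (recruits["drafted"] == 1)

features = recruits[["small_school", "two_sport", "recruited_teammates", "combine_score"]]
model = LogisticRegression(max_iter=1000)

# If this scores well above 0.5 AUC, the misses are predictable -- i.e., the
# services are leaving information on the table that an arbitrage model could use.
auc = cross_val_score(model, features, recruits["underrated"], cv=5, scoring="roc_auc")
print(auc.mean())
```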
DMoney said: You mean the guy who killed Notre Dame, was good enough to leave early, got sick, and still was deemed a Top 150 prospect in the country?
If we sign a pair of three-stars like McIntosh every year, we will never have a problem at DT.
Pitt had 345 total yards. We lost against Pitt because Malik Rosier is awful.

Do we still get steamrolled against Pitt?
I am not used to seeing you make such loose and controversial statements. CFB success impacts millions of fans. Some specific branch of cancer research may impact a small number of people you have never met, years from now. How to trade off those matters is not obvious, at least unless central planners figure out the answer. Otherwise we’d shut sports down and deploy everyone into cancer research.

There's a lot of truth to that. Personally, I see the distribution of elite college players/NFL draft picks generally occurring along the lines of the star ratings; i.e., more 5-stars achieve elite status and NFL draft success than 4-stars, more 4-stars than 3-stars, etc. So I don't know that the star ratings are grossly wrong. They are what they are. No, we wouldn't want to analyze the epidemiology of cancer treatments using such an imperfect process, but there's a whole lot less at stake in evaluating HS football recruits.
And in any event, I don't know that anyone's ever developed the sort of arbitrage algorithm that attempts to quantify all this and then measure deviations from the expected outcomes implied by the different probabilities of the various star rankings. Arguably it could be improved greatly, but I don't think anyone cares enough to expend the time or money to do it. So we have what we have, which isn't perfect but seems to be directionally correct. But it's a whole lot more imperfect than perfect.
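As a rough sketch of that "deviations from expected outcome" idea — the expected draft rates per star tier below are placeholders, not real numbers, and the data file is hypothetical:

```python
# Compare observed draft rates per star tier against assumed expectations.
# The expected rates and the CSV are made-up placeholders for illustration.
import pandas as pd

recruits = pd.read_csv("recruits.csv")  # hypothetical: stars, drafted (0/1)

expected_rate = {5: 0.55, 4: 0.20, 3: 0.05, 2: 0.01}  # placeholder probabilities

for stars, grp in recruits.groupby("stars"):
    n = len(grp)
    p = expected_rate.get(stars)
    if p is None or n == 0:
        continue
    observed = grp["drafted"].mean()
    # Binomial standard deviation of the observed rate under the expected probability
    sd = (p * (1 - p) / n) ** 0.5
    z = (observed - p) / sd
    print(f"{stars}-stars: observed {observed:.3f}, expected {p:.3f}, z = {z:+.1f}")
```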
What was I thinking! Spending too much time in Cali these days I suppose. I stand corrected!!
That said, two comments. Noting that the star rankings are directionally accurate doesn't tell us that much. You really need a baseline expectation to judge them against. How much better should we expect the top 1% to perform than the top 10% or the top 50%? How much better or worse are the services relative to each other at predicting? Any differences? Any obvious biases? Regional biases? Program biases? Position biases?
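One simple way to get at the service-versus-service question, purely as a sketch — the file, the column names, and the choice of "drafted" as the outcome are all assumptions:

```python
# Head-to-head check between services: which numeric rating better predicts a
# later outcome such as being drafted? AUC is one simple, baseline-free comparison.
import pandas as pd
from sklearn.metrics import roc_auc_score

recruits = pd.read_csv("recruits.csv")  # hypothetical: rivals_rating, other_rating, drafted

for service in ["rivals_rating", "other_rating"]:
    auc = roc_auc_score(recruits["drafted"], recruits[service])
    print(service, round(auc, 3))

# The same comparison can be re-run within a region, conference, or position
# group to look for the biases mentioned above.
```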
Also, how do you measure a kid relative to his position? QB is more important than LB, but an average QB isn't a difference-maker. By contrast, an average LB may help special teams a lot; an average QB only helps when your roster is full of Rosier and Perry. Rating kids as one big group may be fun, but it's likely also wrong. Kickers rarely get highly rated, but the NFL shows you what a great kicker is worth. I'd suggest viewing the ratings only by position.

And then what do you do with position switches? McIntosh was the 40th highest rated SDE on Rivals his senior year. Maybe they were RIGHT? He ended up being better as a DT. How do we criticize them for rating him differently at DE? And when you compare offers, how do you handle staff differences on a kid's position? UM wanted a kid as a DB. He wanted to be an RB. No UM offer. Kid goes to SDSU. Rating services downgrade. That kid ended up in the NFL Hall of Fame as an RB (Marshall Faulk).
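And a minimal version of the "view the ratings only by position" idea is just a per-position breakdown, something like this (again, the dataset and columns are assumed):

```python
# Compare draft rates within position groups rather than across the whole class.
# Column names (position, stars, drafted) are assumptions.
import pandas as pd

recruits = pd.read_csv("recruits.csv")

by_pos = (recruits
          .groupby(["position", "stars"])["drafted"]
          .agg(["mean", "count"])
          .rename(columns={"mean": "draft_rate", "count": "n"}))
print(by_pos)

# A 3-star DT and a 3-star QB now get judged against their own position's
# baseline instead of a single pooled curve.
```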
No disagreement on any of that. As I stated, the analysis and methodology can be improved.
There is at least some money available for doing this, actually, and it probably isn't hard. Major programs have sizeable recruiting budgets. Alabama may well do a version of this in-house.
Ultimately this will occur. Just a matter of when. ****, Alabama may well already be developing something.
That example doesn't fly. Nobody ranks lottery tickets.
Lu, I was headed toward the topic you are mentioning. A huge gap in this discussion is a lack of understanding about what the rating services actually do. The _reality_ of what they do is a lot closer to compiling info on who is recruiting a kid and then rating kids based on who is recruiting them than it is a true evaluation process. If they were only including info on who is recruiting a kid, the circularity D$ is talking about would be obvious. It’s there, just not entirely circular.
@PalyCane, this is for you also.
Let’s say a bored quant tech dork geek bothered to build an algorithm to rank HS kids. His inputs were solely which schools are recruiting the kid (and which aren’t), where he’s from, what position he plays, his measurables, and the roster needs of the schools recruiting him (and of the ones that aren’t but that geography suggests should be).
A little machine learning would likely be able to come up with a better ranking than Rivals with that info. Except nowhere in that info is there an actual evaluation. And the offer data is self-reported and not confirmed. Schools are not even allowed to talk about recruits. They could be recruiting a kid as a courtesy to his coach, to help him get attention for other offers, because they want to get his teammate to commit, or just to confuse their rivals about who they really want. We just don’t know.
The optimal algorithm would be the best predictive measure of future outcomes, and yet it would be missing critical information that, if considered, might well lead to a different assessment of some subset of kids: how they actually perform on the field, whether they like contact or not, whether they are still developing physically or already maxed out. So you could well create an optimal general algorithm and still leave room for D$ to validly point out some kids the algorithm is wrong about. Not because of uncertain future outcomes, but because at the time of the estimate the algorithm missed important inputs.
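A sketch of that offers-only model, to make the point concrete — every feature below is something a service could compile without watching a single snap, and the file, column names, and outcome label are all hypothetical:

```python
# Minimal "offers-only" ranking model: no film, no evaluation, just compiled info.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict

recruits = pd.read_csv("recruits.csv")
features = recruits[[
    "num_power5_offers",     # self-reported offer count
    "best_offer_rank",       # rank of the strongest program recruiting him
    "height", "weight", "forty_time",
    "home_state_talent",     # crude geography proxy
]]

model = GradientBoostingClassifier()
# Predicted probability of a future outcome (e.g., drafted) becomes the "ranking."
recruits["model_score"] = cross_val_predict(
    model, features, recruits["drafted"], cv=5, method="predict_proba")[:, 1]
ranking = recruits.sort_values("model_score", ascending=False)
print(ranking[["name", "model_score"]].head(20))
```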
Here's where you go astray. Ranking a player is an individual decision that needs to be judged individually. You can't judge that kind of individual decision based on group data.
The roulette example highlights the flaw in your approach. The odds in roulette are fixed. One out of 38. A better example would be making a bet on a football game. You can make bad bets and good bets. Individual decisions with different probabilities of success. If I win 60% of my bets, that doesn't mean that every individual bet I make is sound or based on proper analysis.
That's the situation here. Rivals made a bad bet on McIntosh.
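A toy version of the 60% point, with made-up numbers — an overall win rate can look fine even though one of the individual bets was a bad bet:

```python
# Hypothetical even-money bets: the aggregate win rate is ~60%, yet bet C has
# negative expected value on its own. All numbers are invented.
bets = [
    {"name": "good bet A", "p_win": 0.70, "payout": 1.0},
    {"name": "good bet B", "p_win": 0.65, "payout": 1.0},
    {"name": "bad bet C",  "p_win": 0.45, "payout": 1.0},  # negative EV at even money
]
for b in bets:
    ev = b["p_win"] * b["payout"] - (1 - b["p_win"]) * 1.0
    print(b["name"], "expected value per $1:", round(ev, 2))

overall_win_rate = sum(b["p_win"] for b in bets) / len(bets)
print("overall win rate:", round(overall_win_rate, 2))  # 0.60 despite bet C being bad
```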
Let's expand on this for a minute. If we were to rank various types of lottery tickets, the most practical way I can think of to rank them would be based on expected rate of return. Let's assume a hypothetical Florida lotto ticket has an expected rate of return of 10% (e.g., $1.10 per $1 ticket purchased), a California ticket has an expected rate of return of 0% (e.g., $1.00 per $1 ticket purchased), and a Nevada lottery ticket has an expected rate of return of -20% (e.g., $0.80 per $1.00 ticket purchased). By expected rate of return, the rankings would be: 1. Florida, 2. California, 3. Nevada.
Let's say I purchase one of each ticket. I win $100 on the Nevada ticket, while getting $0 on the other two. That result, while absolutely within the realm of possibility, does not mean the initial ranking was off -- after all, the probability was better on the Florida ticket. It just means that in this very limited sample size of 1 ticket, the lower ranked lottery netted me the best return.
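To put rough numbers on it — the 10% win probability and the payouts below are invented purely so the expected values match the post above:

```python
# Expected return per $1 ticket for each hypothetical lottery, plus a quick
# simulation showing how small samples hide the ranking.
import random

tickets = {"Florida": 1.10, "California": 1.00, "Nevada": 0.80}
print("ranking by expected return:", sorted(tickets, key=tickets.get, reverse=True))

# Assume each ticket wins 10% of the time; the payout is set so the expected
# value matches the numbers above. Both assumptions are made up for illustration.
def average_return(ev, n_tickets, p_win=0.10):
    payout = ev / p_win
    return sum(payout for _ in range(n_tickets) if random.random() < p_win) / n_tickets

random.seed(1)
for name, ev in tickets.items():
    print(name,
          "avg over 5 tickets:", round(average_return(ev, 5), 2),
          "| avg over 100,000 tickets:", round(average_return(ev, 100_000), 2))
# Over a handful of tickets the "worst" lottery can easily come out on top;
# only over a large sample do the averages settle near 1.10 / 1.00 / 0.80.
```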
Yes. Thank you. This is correct. Just because there can be outlier results doesn't mean the probabilities were wrong initially.
For you to know that as certainly as you think you know it, you must believe in Fate and the Book being written already. What you are missing here is variance. Here’s an illustration:

Here is what @PalyCane keeps missing: the "expected rate of return" here isn't based on anything tangible. Rivals sets that expectation themselves, subjectively, by assigning a star rating. It's not objective. They control that.
If Rivals decides to make me a four-star, does my probability of success go up? Of course not. I'm the same bad football player.
When I criticize Rivals for making RJ McIntosh a three-star, I am questioning their assignment of expected value to this individual player. I did the same thing in 2015 without the benefit of hindsight. They got it wrong.
Again, this is an individual evaluation. Outliers don’t come into play. Rivals set the wrong "expected value" on McIntosh.
Nobody is saying that four-stars vs. three-stars don’t matter just because McIntosh outplayed a bunch of four-stars. I am saying that they got McIntosh wrong.