Upon Further Review - Dan Enos

Not that this really adds to this interesting discussion on Enos, but I find it interesting that Enos'
predecessor at CMU (Butch Jones) was also on Saban's staff, whilst Jones' OC at CMU and Cincinnati
(Mike Bajakian) was on the same Tampa Bay Bucs' staff as Enos' (new) OL coach Butch Barry.
Small world.

Enos also replaced Jim Chaney as OC at Arkansas. Chaney is the OC who just left Georgia, which allowed James Coley to be promoted (and prompted Georgia to chase Enos).
 
I've presented the data against his peers, which I believe to be the important data. If you want to take all of the data and evaluate him that way, that is absolutely your right. I believe the best, and most accurate, way to evaluate programs is to look at their results against peer teams. Some disagree and I understand that.
I'll say this on each one of your posts (another great one, BTW): who disagrees with this, and why? Can we get a rationale from those people? Context is everything with statistics. And it doesn't beat watching every single game and understanding "why" things unfolded the way they did, but if we can move toward contextual numbers, we can make better decisions.
 
I've presented the data against his peers, which I believe to be the important data. If you want to take all of the data and evaluate him that way, that is absolutely your right. I believe the best, and most accurate, way to evaluate programs is to look at their results against peer teams. Some disagree and I understand that.
View attachment 76216
Take into consideration the fact that he developed a QB after losing the only NFL QB in program history with these results, or not. Add the data from the games vs. P5 teams, and that's fine. Even in Butch Jones' excellent season with an NFL QB, they played three P5 teams and did this on offense:
View attachment 76218

Is that representative of what the offense is capable of, or is that more a reflection of a disparity in talent/resources?

The debate is one that could go on endlessly, so I will let this be the entirety of my case, and the individual can make their decision for themselves with that question.

But the S&P+ adjusts every team for their level of competition, and then ranks them. It's not putting Central Michigan in an unfair comparison. That's what is so nice about its rankings. It's adjusting.

In 2012, which is 3 years into the Enos offensive era at CMU, they were 91st in S&P+ offensive rankings.

Northern Illinois 43
Kent State 69
Western Michigan 76
Toledo 57
Ball State 60
etc.

These are his "peers." They are all playing relatively similar schedules when averaged out across his time there.
Now, the specific MAC teams that outperform him do change from year to year. BUT, what doesn't change is that a bunch of his MAC peers outperform him every year. If you took the S&P+ and ranked it purely by the MAC, he's still at the bottom end of the conference every year.

Nobody has really explained that. Some people have tried to say "talent," but it's not like Ball State has soooooo much more talent. Everybody in the MAC recruits at a close enough level. This isn't Rutgers having to play Ohio State, Michigan, Penn State, etc. every year in conference.
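To make the "rank it purely by the MAC" idea concrete, here's a rough sketch in Python. It is not the official S&P+ code; it just re-sorts the 2012 national offensive ranks quoted above (only the MAC teams listed in this post) within the conference:

```python
# Rough sketch, not official S&P+ code: re-rank national S&P+ offensive ranks
# within one conference subgroup, using only the 2012 MAC teams quoted above.
sp_offense_rank_2012 = {
    "Northern Illinois": 43,
    "Toledo": 57,
    "Ball State": 60,
    "Kent State": 69,
    "Western Michigan": 76,
    "Central Michigan": 91,
}

# Lower national rank is better, so sorting ascending gives the MAC-only order.
mac_order = sorted(sp_offense_rank_2012.items(), key=lambda kv: kv[1])
for place, (team, national_rank) in enumerate(mac_order, start=1):
    print(f"{place}. {team} (national No. {national_rank})")
```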
 
All I know is he's going to be a **** of a lot better than Richt. We might be going from Rosier to Hurts. Let that sink in.
 
I'll say this on each one of your posts (another great one, BTW): who disagrees with this, and why? Can we get a rationale from those people?

I still don't really understand what he means. That it's not fair to look at the S&P+ rankings overall because it's comparing CMU to every team?

1. That's not true. The S&P+ compares you to the average team, and then asks how much better you were than the average team would have been against your schedule. It then ranks the teams based on that metric number.

2. Even if you broke the rankings down by MAC only, his rankings relative to his MAC peers are bad. And that's the nicest way to put it.
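To illustrate point 1, here's a minimal sketch of the "versus an average team" idea, with entirely made-up per-game numbers; the real S&P+ works at the play level and is far more involved, so treat this as illustration only:

```python
# Minimal illustration of opponent adjustment, with made-up numbers.
# Idea: compare what a team actually produced in each game to what an average
# team would have been expected to produce against that same opponent.

# Hypothetical expected output for an average offense vs. each defense faced.
expected_for_average_team = {"Opponent A": 24.0, "Opponent B": 31.0, "Opponent C": 17.0}

# Hypothetical actual output in those same games.
actual_output = {"Opponent A": 27.0, "Opponent B": 28.0, "Opponent C": 30.0}

# Rating: average margin over the "average team" baseline, per game.
margins = [actual_output[opp] - expected_for_average_team[opp]
           for opp in expected_for_average_team]
rating = sum(margins) / len(margins)
print(f"Adjusted offensive rating: {rating:+.1f} points per game above average")
```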
 
I still don't really understand what he means. That it's not fair to look at the S&P+ rankings overall because it's comparing CMU to every team?

1. That's not true. The S&P+ compares you to the average team, and then asks how much better you were than the average team would have been against your schedule. It then ranks the teams based on that metric number.

2. Even if you broke the rankings down by MAC only, his rankings relative to his MAC peers are bad. And that's the nicest way to put it.
I'm not talking about his conclusion as it relates to Enos' time at CMU. I'm talking about the methodology of using peer to peer vs totality stats. The problem for Enos is that, even peer to peer, his time at CMU is a question mark in my eyes. But, I'm encouraging people to start viewing statistics in context of per play, per pace, vs peer teams, etc.
 
I still don't really understand what he means. That it's not fair to look at the S&P+ rankings overall because it's comparing CMU to every team?

1. That's not true. The S&P+ compares you to the average team, and then asks how much better you were than the average team would have been against your schedule. It then ranks the teams based on that metric number.

2. Even if you broke the rankings down by MAC only, his rankings relative to his MAC peers are bad. And that's the nicest way to put it.
I see your point, but then I ask you to explain what it was that Saban saw in him. To promote a seemingly lackluster OC to his staff for exactly that? You have to believe in Saban's judgement at this point.
 
I'm not talking about his conclusion as it relates to Enos' time at CMU. I'm talking about the methodology of using peer to peer vs totality stats. The problem for Enos is that, even peer to peer, his time at CMU is a question mark in my eyes. But, I'm encouraging people to start viewing statistics in context of per play, per pace, vs peer teams, etc.

Basically, you're encouraging people to start using the S&P+. Which is a play-to-play, down-to-down measurement that adjusts for your level of competition.

Which is exactly what I've been bringing up.
 
I would be lying if I said I knew how S&P+ comes up with its peer data.

Everybody is ranked on a per play basis based on success and explosiveness. Basically, did you get the minimum number of yards to be "successful"? How much above the minimum did you get?

This is then adjusted for your schedule and put against what an average team would have done against that schedule. So you're being compared to an average college football team.

It then ranks. You can then take all the rankings. Or you can break it down by conference teams or whatever subgroup you want.
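Here's a rough sketch of that success/explosiveness idea in Python, using the commonly published success-rate cutoffs (50% of the needed yards on 1st down, 70% on 2nd, 100% on 3rd/4th). The actual S&P+ recipe layers more on top of this, so treat it as illustration only:

```python
# Illustration only: per-play success and "yards beyond the minimum."
# Thresholds are the commonly published ones; the real S&P+ adds more on top.

SUCCESS_FRACTION = {1: 0.5, 2: 0.7, 3: 1.0, 4: 1.0}

def yards_needed(down: int, distance: int) -> float:
    return SUCCESS_FRACTION[down] * distance

# Hypothetical drive: (down, distance, yards gained) for each play.
plays = [(1, 10, 6), (2, 4, 1), (3, 3, 28), (1, 10, 2), (2, 8, 0), (3, 8, 4)]

successes = [(d, dist, gain) for (d, dist, gain) in plays
             if gain >= yards_needed(d, dist)]
success_rate = len(successes) / len(plays)

# "Explosiveness" stand-in: on successful plays, how far past the minimum?
extra = [gain - yards_needed(d, dist) for (d, dist, gain) in successes]
avg_extra = sum(extra) / len(extra)

print(f"Success rate: {success_rate:.0%}")
print(f"Avg yards beyond the minimum on successful plays: {avg_extra:.1f}")
```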
 
Everybody is ranked on a per play basis based on success and explosiveness. Basically, did you get the minimum number of yards to be "successful"? How much above the minimum did you get?

This is then adjusted for your schedule and put against what an average team would have done against that schedule. So you're being compared to an average college football team.

It then ranks. You can then take all the rankings. Or you can break it down by conference teams or whatever subgroup you want.
How is the "schedule" portion decided? What are the components that make up the number? How close is it to "how teams X and Y both did against team Z"?
 
What are you using for your analytics, Moss? SBNation? I'm assuming it isn't FootballOutsiders because they have CMU's offensive rankings as follows:

80
74
71
103
61
 
Success rate isn't the same thing as Yards Per Play. Additionally, I believe S&P+ is great for the NFL but is poor in the college game because it doesn't adjust enough, in my mind, to get to a true average.

I don't really love success rate as much as I do Yards Per Play, because it's a percentage of the down market, etc., but I do understand it has its proponents.

Basically, I don't like the metrics used elsewhere and am calculating my own and drawing conclusions from there.
 
Yards/play is a good one to use. You can pretty much tell who is elite using that stat on both sides of the ball. A lot of the advanced metrics will still show Bama had no drop-off in production on the defensive side of the ball, despite the drop-off being obvious to anyone who watches them with any regularity. FootballOutsiders still has them as a top-5 defense. In yards/play for the year they were 24th, which fits more with how they actually looked this year.

It can obviously be skewed as well depending on who a team plays whenever you're looking the numbers up, but by the end of the year the rankings for yards/play on both offense and defense tend to be spot on compared to how the units performed over the course of the year.
 
Good stuff Lance. Appreciate the effort. Whether or not Enos becomes a great OC for us remains to be seen, but the potential is there, and with our roster on that side of the ball I do expect us to at least be a good offense. Couple that with our outstanding D and we should have something. The favorable schedule helps. Just beat the two other Florida teams, and another 10-win season with a top-15 ranking is very possible, maybe even likely.
 
What are you using for your analytics, Moss? SBNation? I'm assuming it isn't FootballOutsiders because they have CMU's offensive rankings as follows:

80
74
71
103
61

I'm using the Football Outsiders Offensive Rankings for 2012. CMU is 91 on their list.
 
Was Enos actually calling the plays at CMU? All the information online says he had an OC, but I wasn't watching CMU back then (nor am I now) so I don't know for sure. If Enos wasn't calling the plays, then his results shouldn't have as much bearing on predicting his success here, since we did not hire him to be HC like he was at CMU.
 
Success rate isn't the same thing as Yards Per Play. Additionally, I believe S&P+ is great for the NFL but is poor in the college game because it doesn't adjust enough, in my mind, to get to a true average.

I don't really love success rate as much as I do Yards Per Play, because it's a percentage of the down market, etc., but I do understand it has its proponents.

Basically, I don't like the metrics used elsewhere and am calculating my own and drawing conclusions from there.

Right. But the S&P+ Offensive Rankings don't measure purely success rate. That is simply a component of the ranking, and you can look specifically at that ranking if you want.
But the S&P+ measures success rate v. explosive rate. How often do you get the minimum number of yards needed to be "successful"? When you are successful, how far above the minimum are you?

I think it was LuCane that talked about this last night. Having an explosive offense is great. Your "Yards Per Play" measure. But at the same time, you've got to stay on the field. Being able to run plays is what allows you to take advantage of your explosiveness. If one out of every 10 plays you run is explosive, that's probably not going to help you much if the other 9 go "3 and Out." That was the problem with our 2017 offense.

Connelly's Offensive Ranking is a function of Explosiveness v. Efficiency.
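A toy example of that efficiency-versus-explosiveness point, with made-up play logs: one long gain can prop up yards per play even when most snaps don't move the chains.

```python
# Made-up play logs illustrating the point above: similar yards per play,
# very different ability to stay on the field.
boom_bust = [45, 1, 0, 2, 1, 0, 3, 1, 0, 2]   # one explosive play, nine duds
steady    = [6, 5, 7, 4, 6, 5, 8, 4, 6, 5]    # no explosions, consistent gains

for name, gains in (("boom/bust", boom_bust), ("steady", steady)):
    ypp = sum(gains) / len(gains)
    moved_chains = sum(1 for y in gains if y >= 5)   # crude stand-in for "successful"
    print(f"{name}: {ypp:.1f} yards/play, {moved_chains}/{len(gains)} plays of 5+ yards")
```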
 
But the S&P+ adjusts every team for their level of competition, and then ranks them. It's not putting Central Michigan in an unfair comparison. That's what is so nice about its rankings. It's adjusting.

In 2012, which is 3 years into the Enos offensive era at CMU, they were 91st in S&P+ offensive rankings.

Northern Illinois 43
Kent State 69
Western Michigan 76
Toledo 57
Ball State 60
etc.

These are his "peers." They are all playing relatively similar schedules when averaged out across his time there.
Now, the specific MAC teams that outperform him do change from year to year. BUT, what doesn't change is that a bunch of his MAC peers outperform him every year. If you took the S&P+ and ranked it purely by the MAC, he's still at the bottom end of the conference every year.

Nobody has really explained that. Some people have tried to say "talent," but it's not like Ball State has soooooo much more talent. Everybody in the MAC recruits at a close enough level. This isn't Rutgers having to play Ohio State, Michigan, Penn State, etc. every year in conference.

You got me curious, so I calculated all of the 2012 MAC data on a peer basis (Group of 5 games only), and here are the results. There is absolutely no debate that Northern Illinois had the best offense, but I would defy someone to tell me that Central Michigan wasn't the second-best offense in the MAC that season. I would listen if you wanted to say Kent State, because they were a great rushing attack that was also explosive, but I'm a passing-game guy myself, and even traditional numbers such as TD/INT suggest Central Michigan was a very good passing offense.
[Attachment: Capture.webp (2012 MAC offensive data, peer games only)]
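For anyone curious what the "peer basis" filter looks like mechanically, here's a rough sketch with a hypothetical game log; the actual table above was built from the full 2012 MAC data.

```python
# Rough sketch of the "peer basis" filter: keep only games against Group of 5
# opponents before computing per-play offense. The game log below is hypothetical.
games = [
    {"opponent": "MAC Team A",   "group_of_5": True,  "yards": 455, "plays": 72},
    {"opponent": "Power 5 Team", "group_of_5": False, "yards": 210, "plays": 61},
    {"opponent": "MAC Team B",   "group_of_5": True,  "yards": 398, "plays": 68},
]

peer_games = [g for g in games if g["group_of_5"]]
peer_ypp = sum(g["yards"] for g in peer_games) / sum(g["plays"] for g in peer_games)
print(f"Peer-only yards per play: {peer_ypp:.2f} across {len(peer_games)} games")
```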
 
Right. But the S&P+ Offensive Rankings don't measure purely success rate. That is simply a component of the ranking, and you can look specifically at that ranking if you want.
But the S&P+ measures success rate v. explosive rate. How often do you get the minimum number of yards needed to be "successful"? When you are successful, how far above the minimum are you?

I think it was LuCane that talked about this last night. Having an explosive offense is great. Your "Yards Per Play" measure. But at the same time, you've got to stay on the field. Being able to run plays is what allows you to take advantage of your explosiveness. If one out of every 10 plays you run is explosive, that's probably not going to help you much if the other 9 go "3 and Out." That was the problem with our 2017 offense.

Connelly's Offensive Ranking is a function of Explosiveness v. Efficiency.

Yards Per Play measures efficiency. Points Per Play measures explosiveness.
 