Today, we're going to look at a few different ways of measuring a college player and whether or not certain analytics tools are good for projecting NFL production. For this example, we're using a player who may interest the Giants in this upcoming draft: Missouri edge-rusher Shane Ray. It's important to announce at the top that this article is in no way an analysis of Ray as a player. The aim here is to look at the different forms of analysis and the potential strengths and weaknesses of the tools available.
In his latest mock draft, ESPN's Mel Kiper Jr. has Ray going ninth overall to the Giants. His colleague, Todd McShay, has Ray already off the board when the Giants pick. SB Nation's own Dan Kadar has the edge-rusher falling way down to the Arizona Cardinals at No. 24 and over at NFL.com, Ray averages out as the No. 6 pick in the draft. It's very likely that the Giants won't have a shot at the Missouri product, but it's also possible that they pass even if given the opportunity.
The first area to look at is Football Outsiders' SackSEER projections. The formula for estimating a player's future in this system includes:
- The edge rusher's projected draft position. These projections use the rankings from ESPN's Scouts, Inc.
- An "explosion index" that measures the prospect's scores in the 40-yard dash, the vertical jump, and the broad jump in pre-draft workouts.
- The prospect's score on the 3-cone drill.
- A metric called "SRAM," which stands for "sack rate as modified." SRAM measures the prospect's per-game sack productivity, but with adjustments for factors such as early entry into the NFL draft and position switches during college.
- The prospect's college passes defensed divided by college games played.
- The number of medical redshirts the player either received or was eligible for.
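Football Outsiders' actual regression weights are proprietary, but the shape of the calculation described above (reward draft stock, explosiveness and production; penalize injury history) can be sketched. Everything below — the weights, the `explosion_index` helper and the sample prospect's numbers — is invented for illustration and is not the real SackSEER formula:

```python
# Toy sketch of a SackSEER-style composite. All weights here are made up
# to show the shape of the calculation, not Football Outsiders' formula.

def explosion_index(forty, vertical, broad):
    """Illustrative 'explosion index': reward a fast 40-yard dash (seconds)
    and big vertical/broad jumps (inches)."""
    return (5.0 - forty) * 10 + vertical * 0.5 + broad * 0.25

def sackseer_sketch(draft_slot, forty, vertical, broad, three_cone,
                    sram, pd_per_game, medical_redshirts):
    """Combine the six SackSEER-style inputs into a single projection score."""
    score = 0.0
    score += max(0, 40 - draft_slot)              # earlier projected picks score higher
    score += explosion_index(forty, vertical, broad)
    score += (8.0 - three_cone) * 5               # faster 3-cone drill is better
    score += sram * 20                            # per-game sack rate, as modified
    score += pd_per_game * 10                     # passes defensed per college game
    score -= medical_redshirts * 5                # injury history drags the score down
    return score

# Hypothetical prospect: projected 9th overall, 4.65s forty, 36" vertical,
# 120" broad jump, 7.10s 3-cone, 0.55 modified sacks/game, 0.15 PD/game
print(round(sackseer_sketch(9, 4.65, 36, 120, 7.10, 0.55, 0.15, 0), 1))
```

The point of the sketch is only that SackSEER blends workout numbers with production numbers into one figure; the real system's coefficients come from a regression against historical NFL sack totals.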
SackSEER was developed by Nathan Forster and he mentions outright in his post for ESPN Insider that the system is far from perfect. On the one hand, it correctly predicted success from guys like Jamie Collins and Justin Houston (both Day 2 picks), but on the other hand, it often misses on top guys. The system was not very kind to Jason Pierre-Paul and, at least in some of his seasons, he has proven himself. Forster and Football Outsiders revised the formula in 2012 to sharpen the data, but occasionally, it still crunches some awkward results. Barkevious Mingo was projected with a score of 94.6 percent in 2013. Mingo has notched just 52 pressures (seven sacks) in his first two years in the league.
Projecting a prospect using combine or pro-day data, and using criteria as inexact as their potential draft slot, is always going to cause issues. So it's still not amazing by any means, but at least it has a track record that can alert us to both its successes and failures. The humility to refine the formula after several years shows that this tool could further develop and become more reliable for future seasons.
SackSEER projects Ray as an abject failure. In Forster's article, he compares Ray to legendary draft-bust Vernon Gholston. Again, the author is quick to announce that this is merely a guideline for analysis and mentions that successful players such as Ray Edwards and Calvin Pace received similar scores to Ray, yet still went on to NFL success.
The next set of analytics to look at is pSPARQ. This is a metric that began as a marketing tool developed by Nike for high school athletes. Much like SackSEER, it uses combine and workout values to generate a single number intended to allow broad athletic comparison between players. Obviously, this is problematic, but teams like the Seattle Seahawks have taken to incorporating this metric into player evaluation. I'm skeptical of the system, but who am I to question football minds much greater than mine?
SPARQ stands for Speed, Power, Agility, Reaction and Quickness.
The small "p" I used in the previous paragraph is not a typo. It refers to a position-adjusted form of the system that Zach Whitman reverse-engineered for use with NFL prospects (you can read a full breakdown of the process over at his site, 3sigmaathlete).
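The core idea behind a position-adjusted score is to ask how a prospect's raw athletic composite compares to the distribution of NFL players at his position, usually expressed as a z-score. Whitman's actual weights and position baselines are his own; the sketch below uses placeholder numbers purely to show the mechanic:

```python
# Illustrative sketch of the position-adjustment idea behind pSPARQ.
# The position mean/stdev and the sample composite are placeholders,
# not Zach Whitman's actual values.
from statistics import NormalDist

def position_adjusted_score(raw_composite, pos_mean, pos_stdev):
    """Standardize a raw athletic composite against the distribution of
    NFL players at the prospect's position. Returns the z-score and the
    implied percentile (assuming a normal distribution)."""
    z = (raw_composite - pos_mean) / pos_stdev
    percentile = NormalDist().cdf(z) * 100
    return z, percentile

# Hypothetical edge rusher: composite of 118 at a position where the
# NFL mean is 110 with a standard deviation of 8
z, pct = position_adjusted_score(118, 110, 8)
print(round(z, 2), round(pct, 1))   # z = 1.0, roughly the 84th percentile
```

This is why pSPARQ numbers are quoted per position: a 118 composite might be elite for a guard and merely average for a cornerback, and the z-score captures that.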
With this, we're two-for-two in data sets suggesting Ray may not be worth a top-10 pick. The flaw in these methodologies so far is that they take workout data and use it to project game aptitude. Both systems introduce a huge number of variables that can fluctuate depending on when they were measured.
The last system is three-pronged and it's absolutely brand new. Pro Football Focus began work on a college football project at the beginning of last season thanks to funds brought on through the sale of the company to former NFL wide-receiver Cris Collinsworth. The end result, College Football Focus, launched last week in a limited format. Unlike the all-encompassing database that PFF provides, CFF is being released through an extended series of blog posts detailing signature stats.
For edge rushers, Ben Stockwell wrote about their three applicable signature stat categories:
- Pass Rush Productivity
- Third Down PRP
- Run Stop Percentage
We do not have access to the controversial grades that PFF is so famous for, but the company has always preached taking its grades in the context of overall performance rather than as a single number by which to judge a player.
Looking at how our subject compares in the CFF stat categories, we get a more well-rounded picture of the Mizzou hopeful. We can see that basic pass-rushing was not Ray's strength last year. He actually places outside of the 20 players listed for this category, but a note at the bottom of the article tells us he finished 22nd overall.
While he doesn't excel at getting to the quarterback all the time, Ray is a much more productive player on third down. PRP takes the number of sacks, hits and hurries that a defender generates and turns them into a single weighted score. Ray finished last season with a third-down PRP of 13.5, good for 16th among edge defenders in this draft class. Still not high enough to justify where draftniks have him rated, but at least he's on the chart. Third-down PRP should be easier for a player to post because of the likelihood that the offense will call a passing play. A big gap between the two PRP categories could suggest that a player needs to work on his ability to recognize plays; we have seen before that talented players can struggle even against inferior competition due to a lack of game literacy.
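Stockwell's article doesn't spell out the formula, but the commonly cited version of PFF's Pass Rush Productivity weights hits and hurries at three-quarters of a sack and normalizes per pass-rush snap. A sketch under that assumption, with hypothetical sample numbers:

```python
def prp_sketch(sacks, hits, hurries, pass_rush_snaps):
    """Weighted pass-rush productivity per pass-rush snap.
    The 0.75 weight on hits and hurries is the commonly cited PFF value;
    treat it as an assumption, not the confirmed formula."""
    pressure_score = sacks + 0.75 * (hits + hurries)
    return 100 * pressure_score / pass_rush_snaps

# Hypothetical third-down line: 4 sacks, 6 hits and 14 hurries
# across 140 third-down pass-rush snaps
print(round(prp_sketch(4, 6, 14, 140), 1))
```

The weighting is the interesting design choice: a hurry that forces a bad throw matters, just not as much as a sack, so the metric rewards consistent pressure instead of counting sacks alone.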
Finally, the last category offered by CFF shows us that Ray's strength is probably against the run. The run stop percentage stat tells us how successful a player is against the run in terms of tackles, assists, missed tackles and overall stops. This is filtered through the number of run snaps a defender was on the field for, and we end up with a contextual percentage representing how effective the player was at stopping the offense from gaining yards on the ground. Ray gets a thumbs up in this category with a 9.4 run stop percentage, which places him ninth overall.
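At its core, run stop percentage is the simplest of the three stats: stops divided by run-defense snaps, expressed as a percentage. The sample numbers below are hypothetical, chosen only to land near Ray's published 9.4 figure:

```python
def run_stop_pct(stops, run_snaps):
    """Run stop percentage: a defender's 'stops' against the run
    (tackles that constitute a win for the defense), per run-defense
    snap, expressed as a percentage."""
    return 100 * stops / run_snaps

# Hypothetical season: 28 run stops over 298 run-defense snaps
print(round(run_stop_pct(28, 298), 1))
```

Dividing by snaps is what makes the number contextual: a rotational player with 28 stops on 298 snaps grades out better than a three-down player with 30 stops on 500.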
It may generate more data than the other tools, but CFF is not without its faults. This is its first year, and for the moment at least, it appears as though this was an NFL analysis tool applied to college football. While this may generate an easy comparison method on the surface, below that lie some problem areas.
No analytics tool is perfect. I struggle to find a single one that doesn't warn against imperfections and the dangers of improper context, but the application of NFL criteria on a college level needs to come with giant warning stickers. Buyer beware, there are caveats in play here.
The PFF system is often criticized for not compensating for competition standards. I'm simplifying it a bit right now, but for example, is a great game in 2014 against the Buccaneers equal to a great game against the Seahawks? I'm on the side that says yes and that there should be no opponent adjustment. This makes sense on an NFL level where everyone involved is expected to be of a certain caliber. Contextually, some of these teams may appear bad but every NFL player is a professional in a finely-tuned scheme.
In college, that's not true at all. The reality is that only a tiny percentage of college players will ever play football on a professional level. Roster turnover is 100 percent every 4-5 years, schemes are not as complex, but most importantly, colleges get to schedule their own games. That's the nail in the coffin right there.
The Missouri Tigers opened last season against the South Dakota State Jackrabbits (a real team, not joking), and Ray is credited with four tackles, two assists and a sack. In the Tigers' regular season closer against fellow SEC competitor, Arkansas, Ray notched just one tackle and two assists. This is all put into the same category marked "Against college competition" and we're done with it.
The CFF system measures production, not players, and I think because of how new it is, people may forget that. In the past, we could supplement these statistics with PFF grades, but with the limited access to CFF, we're forced to look at them in a vacuum. That's a no-no in analysis and makes it troublesome to rely on CFF (in its limited format) as an option for projecting college players to the NFL.
Analysts can operate properly if they use the various tools available to inform their opinions rather than to create them. It's the same approach we're taught in school: a correct answer doesn't count if you don't show your work. When we let tools such as pSPARQ and SackSEER formulate our opinions for us, we're sneaking a look at the answer key. Somewhere between these analytics and the countless hours of film study, a huge number of well-respected draft experts have concluded that Shane Ray is worthy of a top 10 pick. We'll have to wait and see if any NFL teams agree with them.