Also see: Introducing BUBBA, Part Two
This type of question has always bothered me. It's hard enough to compare two hitting prospects on the same team when we're talking about what will happen years in the future. Make it two players of different ages and positions playing at different levels and it seems almost impossible to have any certainty.
Scouting The Scouts
Scouts use the 20-80 scale for grading tools and also for a player's overall value. An average major league tool (fastball, hit, power, etc.) is a 50, and an average big league player is also a 50. A couple of teams grade by ones (49, 50, 51), others by fives (45, 50, 55), while many use the basic single-digit form (4, 5, 6) to grade players' overall ability. This scale is the backbone of how teams acquire players and it's very helpful for the above problem, but it still has many obvious limitations.
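To make the scale concrete, here's a minimal sketch of the grading conventions described above. The descriptor labels are common scouting shorthand, not any particular team's official rubric:

```python
# Sketch of the 20-80 scouting scale; labels are illustrative shorthand.

def to_20_80(grade: float) -> int:
    """Normalize a single-digit (2-8) grade to the 20-80 form."""
    return int(round(grade * 10)) if grade <= 8 else int(round(grade))

GRADE_LABELS = {
    20: "poor", 30: "well below average", 40: "below average",
    50: "major league average", 60: "plus", 70: "plus-plus", 80: "elite",
}

def describe(grade: float) -> str:
    """Map any grade (either form, any increment) to the nearest label."""
    g = to_20_80(grade)
    nearest = min(GRADE_LABELS, key=lambda k: abs(k - g))
    return GRADE_LABELS[nearest]
```

So a team grading by ones can still report a 49 or a 55, and `describe(6)` and `describe(60)` land on the same "plus" label.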
The biggest issue the 20-80 scale doesn't address is the odds that the player reaches his projected ceiling. This is an issue that neither scouts nor statisticians have a good answer for yet. Anything more detailed than "a good chance" or a long list of prospect comparables has to be couched in so much uncertainty that it often doesn't get discussed in a nuanced way. Factors like the time until a player reaches the big leagues, or his ceiling itself, are easier to describe objectively, but they still need to be captured in the final answer.
The end product for teams is endless conversations about this player or that player, and when a dozen experts in various parts of the game think deeply about it, they normally end up making the right decision. However, no process is perfect, and the brain is vulnerable to biases and groupthink. It would be best to get one number that encompasses as many pieces of information as possible, then tweak that number based on issues that can't be baked into it, rather than trying to mold a dozen pieces of unlike information into yet another form. Astros GM Jeff Luhnow describes running into this problem:
"So you end up in a discussion and you're ranking two players, and player A has a better performance track record but player B has better tools, and you're trying to compare them both. And then when someone starts arguing to put player B above player A because he has better tools, that's when you need to have discipline in the process and say, ‘We've already incorporated that in our decision, so that's not new information.' Now, if you're bringing new information, you're saying, ‘Player A is sick or has an injury that we didn't know about,' then that's new information and that needs to be baked in there. But to just repeat and sort of disagree with the process and say, ‘Well, I think we should be highlighting tools more,' or ‘I think we should be highlighting performance more,' or ‘I think this tool is more important than that tool or whatever,' we've already gone through the discipline of figuring all that stuff out."
This kind of thinking is something I ran into a lot in front offices, and it's often the rationale that helps a club pull the trigger on a deal. Countless studies of how the brain works have shown that the more pieces of information considered in a decision, the less accurate the decision-making. It's best to get all the available information into one number, reducing the number of pieces your brain has to juggle to make a decision.
Old School Wisdom
The good news is the oldest of old school baseball methods can help us. The best book to read about scouting is the long-out-of-print "Dollar Sign On The Muscle." It describes what scouting was like before the amateur draft started in 1965 and the title comes from the most essential task scouts had at the time, a process pioneered by Branch Rickey, scouting OG. A scout would watch a player and write a report like he would today, but then at the top of the report, instead of assigning a 20-80 grade to categorize the player for later discussions, the scout put a specific dollar amount on the player as a suggested bonus.
There's nothing new under the sun and the best ideas have their roots in old solutions. Today, I'm introducing my system to put a dollar value on every player in professional baseball, affectionately dubbed BUBBA.
The idea isn't a new one and my basic approach isn't novel either, but I've added some features to update both methodologies. The first attempt I read at objectively quantifying a dollar value for a player was by sabermetric superstar Nate Silver. After the 2005 trade deadline, Silver laid out the economics behind a rumored Manny Ramirez trade. Silver took the various players' projected performance, multiplied by the going rate for wins, subtracted salary and adjusted for the time value of money and inflation. Dave Cameron of FanGraphs and Tom Tango of The Book have both done a number of Silver-style breakdowns of free agent contracts (here's Cameron on Jose Reyes' Marlins deal), and their version is sound, similar to the WAR framework that both of them have helped popularize.
A weak point in all these gentlemen's efforts is the guesstimating of prospect value. This is understandable given the subjective processes I've watched teams use when faced with similar problems. Clubs can overpower conundrums of this sort with time, money and the manpower of stats and scouting experts and get to the right answer, but the Internet baseball community has had to work in the margins.
Victor Wang did the best-known work on this topic (updated some here) and Cameron recently broke down another rumored-but-not-consummated deal using newer versions of Wang's initial study (here's one more commonly-referenced study). The information available for work like this is limited: you take old Baseball America prospect lists, find out what the players did, and then say what players in different ranges of the current BA list will yield. With BA's lists being the best and only time capsule of prospect rankings that goes far enough back to have definitive results, these studies have hit a ceiling.

There are two main problems with the BA or John Sickels lists being the only input: data granularity and list frequency. The 20th-ranked prospect could be a sure-thing, low-ceiling 28-year-old Japanese import or a sky-high ceiling/risk type in rookie ball; in the absence of more granular data, the studies are forced to treat them as completely equal. Also, in the middle of the season, players have changed or graduated to the majors and there isn't a fresh, well-researched list for reference. Essentially, these studies can only be accurately implemented in January/February, when the lists are out and players aren't playing. Once word gets out that Prospect X has gained three ticks on his fastball in spring training, it'll be another 10 months before we have an accurate feel for his true value.
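The rank-bucket approach behind these studies can be sketched in a few lines. The bucket boundaries and dollar figures below are made up for illustration (see Wang's and Cameron's work for real estimates), but the structure makes the granularity problem obvious:

```python
# Illustrative rank buckets; the cutoffs and dollar values are
# placeholders, not figures from the Wang/Cameron studies.
RANK_BUCKETS = [
    (10, 35e6),    # ranks 1-10
    (25, 25e6),    # ranks 11-25
    (50, 15e6),    # ranks 26-50
    (100, 8e6),    # ranks 51-100
]

def value_by_rank(ba_rank: int) -> float:
    """Look up expected surplus value by prospect-list rank."""
    for cutoff, value in RANK_BUCKETS:
        if ba_rank <= cutoff:
            return value
    return 2e6  # unranked / deeper prospects

# The granularity problem in one line: a safe 28-year-old import and a
# volatile rookie-ball lottery ticket, both ranked 20th, get the same number.
```

Every prospect in a bucket gets one number, and that number goes stale the moment the annual list does.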
What I'm Doing
In my time working in the scouting departments of three MLB clubs, I've been fortunate to be included in a lot of things the average baseball fan would love to be a fly on the wall for. I've been in trade deadline war rooms, draft rooms and organization-wide meetings. I've worked in administration, scouting and statistical research and have had more than my fair share of hypothetical discussions about what both these departments know, don't know, could know and want to know. I've been fortunate for the past two years to have jobs in the media that have me at games on most days, talking to scouts and scouting games myself.
Long story short, I've been able to take my experience in these areas and combine it with the freely-available information previous studies have relied on to make an algorithm that answers a lot of the questions I've raised over the years about valuing players.
The basic framework follows Nate Silver's initial offering to the community: project performance, multiply by market rate, subtract salary and adjust for the time value of money and inflation. For big leaguers, that's enough to be quite accurate, and what I can offer is an automated system that has generated a value for every player, rather than doing it manually when the need arises. This makes things like objective trade value rankings and trade machines a reality.
There are inputs beyond those four basic ones, including debut date, date to reach ceiling, super two adjustments, scarcity and the opportunity for an extension. Those last two require some explanation, and I'll get into them more in part two of this introduction.
The real value of BUBBA comes with the minor leaguers. The same information as for the big leaguers is included, but the dollar value is driven by three factors: the player's upside, his chance of reaching it and when he'll reach it. If you know those three things, the calculation isn't much more complicated than for the big leaguers. The upside and the arrival date are information you can glean from a good scouting report, and those reports are becoming easier to find on the Internet.
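The three-factor calculation can be sketched as an expected value discounted for time. This is a toy version under stated assumptions (a bust is worth roughly nothing, and an illustrative discount rate), not BUBBA's actual model:

```python
# Toy sketch of the three-factor prospect valuation: ceiling value,
# probability of reaching it, and years until arrival.
# bust_value and discount_rate are illustrative assumptions.

def prospect_value(ceiling_surplus, prob_reach, years_away,
                   bust_value=0.0, discount_rate=0.08):
    """Expected present value of a minor leaguer, in dollars."""
    # Weight the ceiling outcome against the bust outcome...
    expected = prob_reach * ceiling_surplus + (1 - prob_reach) * bust_value
    # ...then discount for how far away the payoff is.
    return expected / (1 + discount_rate) ** years_away
```

Note how the three inputs trade off: halving the odds or adding years to the timeline cuts the value just as surely as lowering the ceiling does.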
The most valuable piece of information for minor leaguers in BUBBA is the odds that players of various skill levels, ages, league levels and positions reach their ceiling. This part is the key to BUBBA and will remain a bit of a black box for a number of reasons. The publicly available data has been included and converted to the 20-80 scale. This means I can scout a player or talk to a scout, give the player a grade and get an accurate dollar value immediately. Once the information is gathered, the number is generated, and possible applications like ranking farm systems by their dollar worth are all in play.
Extending beyond these two groups, the framework can be applied to players anywhere. While the information isn't quite as sound as it is for professional players, I can apply the algorithm to college, high school and even July 2nd prospects to get dollar values.
In tomorrow's part two, I'll give a bunch of examples of the dollar values for various pro players, the challenges to getting a reasonable value and how to use the values to build a trade.