Best Puck Manifesto - Volume 1
A deep dive into what works - and what doesn't - in NHL Best Puck contests
Best Puck Summer is upon us, with the Best Puck Classic opening up on Wednesday, July 10th.
This is the first of several newsletter editions digging into strategy for this contest, ranging from roster structure to player takes to what optimizing for the playoffs actually looks like. Some of these concepts have been covered in past years on this newsletter and via the Morning Skate Podcast; others are newer and will be applied to the NHL for the first time.
I absolutely love the strategizing around games like this that are new and have several competing strategies floating around the lobby all at once. I would like to shout out Michael Leone of Establish The Run (@2Hats1Mike) and a number of fabulous entries from last year’s BBM Best Ball Data Bowl (I will credit these individuals specifically when we reach those stages of analysis!) for blazing a path forward, providing not only ideas about functional tests and data structures to help analyze results, but also the aesthetic framework for displaying those results in a clear, coherent, and oftentimes beautiful way. Although the game they are nearly all looking to beat is the NFL, with larger entrant bases and fundamentally different scoring structures and player distributions, their work was superbly done and made it easy to translate to an NHL landscape.
We’ve intuited over the past iterations of Best Puck what the best NHL strategies are and executed them about as well as one could possibly ask for. My protégé, DJ Mitchell, the unofficial NHL GOAT and the Best Puck Classic 22-23 champion for $10,000, put out two of the most dominant performances across the entirety of Underdog, maxing the contest each of the past two years and managing a 50% advancement (vs. 25% field) and a 27% advancement (vs. 16.7% field), along with the aforementioned 5-figure takedown. Fortunately for all of us, last year’s contest resulted in him striking out on his six finals entries (the most bullets of all entrants… for the second straight year), or else I might have spent my time petitioning for the site to shut down the Best Puck game.
Picking up a third-place finish last year, on 2 finals tickets and 35 playoff squads, earned me about 4/5ths of a DJ. While a successful endeavor, such a result is not good enough, and simply will not happen again.
What you need to know, however, is that if you clicked on this article, you’re almost certainly looking at 300 entries between DJ and me, and countless hours spent streaming and discussing the contest in a way that you simply will not get anywhere else in the space. If you’re looking for expertise... you’re in the right place.
I have two suggestions if you’re looking to improve your game:
Hop into the Morning Skate Podcast Discord, where there’s lots of discussion and where hundreds, if not thousands, of the 15k Best Puck Classic entries hang out in one spot. It’s the place to be to discuss the latest news, especially once we get into training camp, and the best way to get a hold of us outside of streaming hours.
Which brings us to the streams - these will primarily be hosted on DJ’s YouTube channel, with a weekly audio version going up on the MSP podcast feed through the beginning of the season. Sub to both so you don’t miss a second! There’s lots of relevant discussion about structure and strategy in last year’s content, helpfully labeled with UD or Underdog in the title.
Best Puck Manifesto, pt. 1: Draft Capital Allocation
I don’t feel the need to start at the literal bottom, as Billy Jones did a phenomenal job laying out the table stakes of Best Puck in this (free) article over at SpikeWeek: The Formula: Underdog Best Puck Classic Data Driven Strategy Guide - Spike Week.
I will cover some of this as we go, but make sure you’re familiar with the full set of data from Billy (and our prior year Best Puck UD shows, on DJ’s channel linked above) based on positional builds and overall roster structure to get the most out of the below.
Intro to ADP Capital
The first task is an application of ADP Draft Capital. The logic behind it is simple: at every spot in the draft, you should attempt to take the best possible player. There are minor positional discrepancies to sort out (namely, C scoring more than W and D, and G existing on a rather separate plane), but these aspects generally exist in some form in the NFL, and we’ve gained far more than we’ve lost from such analyses. It’s helpful to idealize our draft capital such that every subsequent pick has a lower value than the pick before it.
Much like the actual NFL or NHL draft, the delta, or slope of the change, between picks is not linear - the elite talent separates at a much greater rate, so fitting a curve takes a bit more precision:
After much testing, I found that logarithmic equations produced the best R-squared values, a fact that is now true across multiple years of data. Another consideration is what we want to build toward - advancement to the playoffs (via regular season scoring, in green), playoff-round performance (aggregated, in red), or full-season production (shown in blue). The scale of these graphs changes based on that decision, but the curves each produces are remarkably consistent with one another. While it doesn’t really matter which we use, I have implemented the blue curve, or total points for the full season, to assign each individual draft pick an ADP Capital value:
draft_capital = 945.3161 - 148.3622 * log(overall_pick_number)
For simplicity, all ADP Capital values are rounded to the nearest whole number.
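As a quick sanity check, here is a minimal sketch of that formula in Python. The log base isn’t stated above, so treating it as a natural log is an assumption on my part:

```python
import math

def adp_capital(overall_pick_number: int) -> int:
    """ADP Capital from the full-season (blue) curve, rounded to a whole number."""
    # Assumes a natural log; if the original fit used log base 10, swap in math.log10.
    return round(945.3161 + math.log(overall_pick_number) * -148.3622)

# Value every slot of a 12-team, 16-round draft (picks 1-192).
capital_by_pick = {pick: adp_capital(pick) for pick in range(1, 193)}
print(capital_by_pick[1], capital_by_pick[96], capital_by_pick[192])
```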
I am eager to determine whether controlling for positional scarcity in this formula, tweaking it for each position, leads to better results, but that is an analysis for another time.
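For the curious, fitting a curve of this shape is a one-liner once you have a points-per-pick series. The sketch below uses synthetic data purely so it runs; the real inputs would be the averaged scores by overall pick described above:

```python
import numpy as np

# Synthetic stand-in for the real inputs: one value per overall pick (1-192),
# e.g. average full-season points scored by the player taken at that slot.
rng = np.random.default_rng(7)
picks = np.arange(1, 193)
avg_points = 945 - 148 * np.log(picks) + rng.normal(0, 20, picks.size)

# Fit points ~ intercept + slope * log(pick): a linear fit on log(pick).
slope, intercept = np.polyfit(np.log(picks), avg_points, deg=1)

# R-squared, the metric used above to compare candidate curve shapes.
pred = intercept + slope * np.log(picks)
r_squared = 1 - np.sum((avg_points - pred) ** 2) / np.sum((avg_points - avg_points.mean()) ** 2)
print(f"intercept={intercept:.2f}, slope={slope:.2f}, R^2={r_squared:.3f}")
```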
Regular Season Results by ADP Capital Bucket
Now that we have assigned all possible draft slots 1-192 (16 rounds with 12 teams) a value, it’s time to apply these to each individual draft. Armed with positional allocation data for every draft (14,100 drafts in total), we can then bucket teams by position to determine where each one ranks in draft capital allocation relative to the field as a whole.
Keep in mind that this process is intentionally agnostic of:
- who the drafter is and their skill level
- the player that was drafted - taking a backup goalie in round 2 is counted the same toward your goalie capital as taking Igor Shesterkin in round 2
- whether a drafted player outperformed or underperformed their draft slot in terms of actual results
Bucket 1 is the highest level of investment, while Bucket 5 is the least. The dotted line reflects the Overall advancement rate of 2/12, or 16.7%.
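Here is a rough sketch of that bucketing step. The file name and column names (draft_id, position, overall_pick_number) are hypothetical, and I’m treating the five buckets as field-wide quintiles at each position, which is how I read the “bottom 20%” framing later on:

```python
import numpy as np
import pandas as pd

# One row per drafted player across all 14,100 drafts. The file name and
# column names are hypothetical; adapt to your own export.
picks = pd.read_csv("best_puck_2023_picks.csv")
picks["adp_capital"] = (945.3161 - 148.3622 * np.log(picks["overall_pick_number"])).round()

# Total ADP Capital each team spent at each position.
capital = (
    picks.groupby(["draft_id", "position"])["adp_capital"]
         .sum()
         .unstack(fill_value=0)
)

# Bucket 1 = top 20% of the field in capital at that position, Bucket 5 = bottom 20%.
for pos in ["C", "W", "D", "G"]:
    capital[f"{pos}_bucket"] = pd.qcut(
        capital[pos].rank(method="first", ascending=False),
        q=5,
        labels=[1, 2, 3, 4, 5],
    )
```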
We start with W, because this is the point that most directly confirms my priors and illustrates what we have been pounding the table about for multiple seasons now: the field is still dramatically underestimating how much W production, and thus investment in the position, matters.
It is inarguably true, as Billy Jones shows in the piece linked above, that Centers score more points, a fact that remains true no matter where they are drafted.
But when the best pick on the board, at all points in the draft, is a C, inexperienced users tend to draft too many of them precisely because there are no better options available. As you may suspect, given the W graph, their results suffer greatly:
It’s not that the position as a whole failed badly, or even that the Cs found in Bucket 1 most often failed. To wit, Auston Matthews was drafted to Bucket 1 teams 32% of the time and to Bucket 3 teams 34% of the time. He was a major driver of results, yet those two buckets finished with advancement rates of 13.7% and 19.2%, respectively. We’ll revisit this idea in future works, as there’s a lot more to learn about the players that make up these buckets.
Incredibly, the Wing discrepancy may not be the biggest leak we’ve uncovered in this edition of the Best Puck Manifesto. While late D seems to have an advantage (which might just be a skew away from Makar, and a skew toward pairing Josi with late D such as MacKenzie Weegar and Morgan Rielly), we see a vast advantage gained simply by not underinvesting in the goaltender position. That’s right, team Goalies Don’t Matter has gone fully off the rails. Goalies are all that matter in Best Puck. If you avoid landing in the fifth (lowest) bucket of goalie investment, your advancement rate improves by ~50% relative to the teams that finish there.
Candidly, I have long been in defense of the G position, noting in years past that while in DFS every goalie on a given night has the same opportunity (one start), over a Best Puck season the best goalies receive ~25% more starts than the mid-tier in net - roughly 55 starts vs. 42-45. Furthermore, we can be rather confident that the 55-start goalies carry a 50-60 start projection, while the 42-start guys may very well wind up on the low end of a 1A-1B draw, which equates to approximately 35 starts! As the NHL shifts further and further into 1A-1B setups, this gulf between the elites and the also-rans will only grow, in my opinion.
Most importantly, the weekly upside is what really matters - finding goalies who in some weeks take all three starts (think a Tues - Thurs - Sat week of team games) as opposed to splitting two and one could be the push you need in the Best Puck finals. Finding the hypothetical goalie (who doesn’t exist - yet) that starts four games in one week, zero the next, and three the following week, and so on in the name of rest, is the ideal.
No matter how you frame it, the goalies most likely to start 3-4 games in a week are those who go early... so don’t miss out.
Playoff Performance by ADP Capital Bucket
OK - so teams that don’t draft enough W get dusted in the regular season, and those that draft in the bottom 20% of goalies also get erased at a much higher clip than all other teams. What we really care about is winning that $25K top prize - so how do we win it all?
In our world, we have the regular season, a two-week sprint, another two-week sprint, and the finals. Once you make the playoffs by finishing top two of twelve, you must finish top two of ten in consecutive rounds, and only then can you compete with 93 other entrants for first prize. Stumble once, and you’re out. This isn’t a particularly fair way of determining which team is the best.
To combat this, I’ve devised a simple setup to gauge the viability of playoff teams. First, I filtered down to playoff teams only. This ensures that only “good” teams, or at least “good enough to be competitive” teams, are being considered. Second, I’ve divided the final 16 weeks (of 25 total) into unique two-week blocks. This gives us 8 sets of potential “finals” weeks and 4 blocks of potential “quarters + semis” weeks, while also allowing drafters who aren’t entirely focused on maximizing early-season performance (due to known injuries, among other factors) not to be penalized by starting this tabulation immediately.
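If you want to replicate the windowing, here is a tiny sketch; I’m assuming the final 16 weeks correspond to weeks 10 through 25 of the season, which the text doesn’t spell out:

```python
# Slice the final 16 weeks into the playoff-simulation windows described above.
# Assumes those are weeks 10 through 25 of the 25-week season.
weeks = list(range(10, 26))
finals_blocks = [weeks[i:i + 2] for i in range(0, len(weeks), 2)]    # 8 two-week blocks
qtr_semi_blocks = [weeks[i:i + 4] for i in range(0, len(weeks), 4)]  # 4 four-week blocks
```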
All scores here are real, based on the actual stats accrued by the players in the various windows.
For quarters, I calculated the EV of advancing from the quarters (finishing 1st or 2nd out of 10) as if an advancing team had an equal probability, relative to all other remaining teams, of finishing in any place in the forthcoming rounds - that is, the yet-to-be-paid prize pool divided by the number of remaining teams, or roughly $208. If you make it out of the quarters, your playoff team is worth $208. To avoid pod variance dynamics (which are influential, but ultimately not controllable), the entire pool of 2,350 teams is used, and the top 470 teams (235 1st-place teams, 235 2nd-place teams) are counted as advancing in each quarters block. It’s that simple.
In the semis, the process is identical, except in this case any team that has been eliminated in the quarters in that particular block is not eligible to be awarded the semis EV, or about $890. I chose to do this rather than re-construct the environment for only advancing teams for simplicity, as I don’t feel it has a strong impact on results.
In both of these cases, there is a small additional payout for finishing in the top half of your pod without advancing - a $10 vs. $15 split for quarters losers and $30 vs. $50 for semis losers. This is also attributed accurately in the EV calculations below.
Finals EV is calculated for all playoff teams, regardless of prior-week advancement. Additionally, while we only have four blocks of qtrs + semis (since they span 4 weeks in total, each block), we have 8 blocks of finals results to rely upon to smooth out further variance in results and determine what helps us win first place.
One decision I made that could impact these results is a pooling factor that values all “Top Ten Finishes” equally. That is, in a 94-team final, I am crediting 10th place the same as 1st place and every place in between: the average value of places 1-10, or $6,547. With those payouts ranging from $25K down to $1K, I felt we achieve the same goal (crediting high-end finishes positively) without overly influencing the end results within the top ten.
For what it’s worth, in Leone’s Best Ball Manifesto, he treated the Finals as a binary outcome, valuing first as the top prize of $2M and spots 2-500 (? however many spots there are in the BBM final) as equal to one another at ~$10K. I don’t particularly like the aesthetics of that, and I am not sure that leads to any useful outcomes without an intense amount of post-hoc massaging, so let’s shoot for a top ten finish and I’ll revisit and update the Finals EV calcs if I find something that better represents how I would like to think about a successful Best Puck team. After all, if you finish Top Ten, you probably did something right, even if you didn’t achieve the ultimate prize.
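Putting the dollar figures above into code, here is a minimal sketch of how a single quarters block could be scored. The structure and names are mine; the constants come from the text, and the small top-half consolation payouts are omitted for brevity:

```python
QUARTERS_ADVANCE_EV = 208    # yet-to-be-paid prize pool / teams remaining after the quarters
SEMIS_ADVANCE_EV = 890       # same logic, one round later (a finals seat)
FINALS_TOP_TEN_EV = 6547     # average of the 1st-10th place payouts ($25K down to $1K)

def quarters_ev(team_score: float, all_playoff_scores: list[float]) -> float:
    """EV credited to one playoff team for one quarters block.

    Pods are ignored on purpose: of the 2,350 playoff teams, the top 470
    scores (235 pods x 2 advancers) are treated as advancing. The small
    $10/$15 top-half consolation payouts are omitted here for brevity.
    """
    cutoff = sorted(all_playoff_scores, reverse=True)[469]  # the 470th-best score
    return QUARTERS_ADVANCE_EV if team_score >= cutoff else 0.0
```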
Early W drafters who felt good about the regular season advancement data I shared earlier, avert your eyes:
- Centers show a clear skew toward high investment = positive results
- Wings, conversely, tend to fare poorly in the higher-investment buckets
- Defenders were largely safe if you avoided over-investing into bucket #1
- Once again, low-investment goalie builds fared terribly.
Once you make it to the playoffs, we find that the script flips dramatically - teams with heavy investments in G or C fared incredibly well, while each of the top two buckets of W investment is estimated to have a negative impact on your playoff EV. While the average Finals seat is worth approximately $890, a team that fits into the first bucket of W investment is estimated to be worth only $808, a 9.3% drop in equity. The impact on the earlier playoff rounds is even more negative for this W bucket.
It seems that the high-end performers at C drove these results. We’ll cover player-level insights in future discussions, but the impact of MacKinnon/Matthews is not as clearly positive as you might suspect, so I believe there is more work to be done here. I’m particularly interested in the granular data: right now I can tell you what proportion of MacKinnon teams were #1-bucket C-investment builds, but I can’t tell you precisely how they fared vs. MacKinnon teams in the other buckets.
Another clear next step in this analysis is to calculate regular season EV in a comparable manner - after all, you can’t win if you don’t make the playoffs, and the C-heavy builds sucked at that. Perhaps there is information we can glean from regular season performance that is predictive of future playoff success, such as team scores, high-end finishes, or even something as simple as getting ADP value within your draft or which positions hit the flex on a weekly basis. All are ideas I’ve had, and all will be touched on in some way through my various means of communication.
I am also open to the idea of this methodology being susceptible to adverse impact from the regular season advance rates themselves - that is, if so many W-heavy teams advanced, perhaps the “power players” at W were fundamentally less powerful than the same needle-movers at other positions because they were more owned across the full population of playoff teams, and this is just game theory and survivorship bias in action.
Similarly, if so many high-investment C teams were killed off in the regular season, the ones that advanced may be great in a way that is not yet uncovered here, to the point that they managed to survive an incontrovertible flaw.
ADP Capital Bucket Construction - What Do Drafts Look Like?
It’s one thing to look at drafts after the fact and segment them accordingly - it’s another to plan out what a specific draft board looks like. Let’s end things with a brief tutorial as to what these buckets actually looked like in 2023-24. Does taking McDavid 1st Overall lock you into the 1st bucket of C? Who are the most common Gs in the bottom tier bucket that performed so terribly? These bits of info can help enlighten us on why these buckets may have fared as they did, and help set our minds straight on how to apply this to current and future drafts.
If you managed a top-four pick, you almost certainly used it on one of these elite Cs last year. In doing so, you immediately lock yourself out of buckets 4 and 5, but drafters sort rather evenly into 1-3, with only McDavid’s ADP of 1 pushing his teams out of that third capital bucket. It should surprise no one, then, that among players drafted more than 10 times, McDavid shows up the most on Bucket 1 teams. Second, surprisingly, is Elias Pettersson, at 48% of Pettersson drafts, then the other three Cs you see above. Pettersson was a classic ADP mistake: he was the best player on the board at the 2-3 turn, where McDavid drafters (or the drafters immediately around them) locked in Pettersson despite already having their alpha C. We’ll gauge how those teams fared specifically in a future newsletter.
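These exposure splits are easy to reproduce, reusing the hypothetical picks and capital tables from the bucketing sketch earlier (player_name is another assumed column):

```python
# Share of each C's drafts that landed in each C-capital bucket, reusing the
# hypothetical `picks` and `capital` tables from the bucketing sketch earlier.
c_picks = picks.loc[picks["position"] == "C", ["draft_id", "player_name"]]
c_picks = c_picks.join(capital["C_bucket"], on="draft_id")

counts = (
    c_picks.groupby(["player_name", "C_bucket"], observed=True)
           .size()
           .unstack(fill_value=0)
)
bucket_share = counts.div(counts.sum(axis=1), axis=0)  # each row sums to 1.0
```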
I think I figured out why the Bucket #5 teams at W fared poorly (12.3% reg season advancement rate), guys. Yeesh at these names:
These are the most common fully-drafted players that skew toward Bucket 5. It’s interesting that Panarin was positioned in the draft so that his teams were wildly balanced across the board - I will file that one away to bring out in the player-specific analysis, seeing as he had a fantastic season and was surely a league-winner, and he offers lots of sample in each bucket to assess what worked and what did not.
I didn’t find much interesting in D or G, with G pictured below:
Igor went decently before the rest of the next tier, so his teams veered way toward the top two buckets. Skinner is an interesting case in that he jumps a number of top-tier goalies in bucket_1 percentage, which perhaps is a testament to his drafters, his ADP, or something else.
Sus Aho broke everything, so you have to deal with his presence here because he nearly cost me my sanity. Quinn Hughes and Morrissey, on the other hand, are an extremely interesting test case. Why do two players with near-identical ADPs have such an obvious tilt toward high- and low-end bucket outcomes? Those who took Quinn Hughes seemed to like him as the second D in, and Morrissey as the first, but I don’t know why that’s the case or if it’s meaningful.
Example Draft Buckets (following a 3-7-3-3 standard build, all rows are unique teams with no crossover)
C1: Matthews (2), Pettersson (26), Bedard (47)
C2: Draisaitl (2), Kempe (95), Eriksson Ek (102)
C3: MacKinnon (4), Kyrou (100), Mercer (189)
C4: Stamkos (54), Hintz (67), Tavares (78)
C5: Tavares (77), Malkin (116), Cozens (149)
W1: Kaprizov (8), Marner (17), Svechnikov (32), Bratt (65), Marchessault (89), Lehkonen (161), Duclair (176)
W2: Guentzel (24), Hyman (25), Kreider (49), Ehlers (72), Terry (96), Stone (120), Vrana (169)
W3: Guentzel (23), Skinner (50), Ehlers (71), Konecny (74), Kuzmenko (98), Hall (122), Slafkovsky (191)
W4: Nylander (49), Marchessault (72), Forsberg (73), DeBrusk (97), Hall (120), Batherson (121), Wheeler (144)
W5 (I have a manual search process - not turning up any 7 W teams in this bucket, to no surprise): Marner (21), Tuch (45), Tippett (93), DeBrusk (117), Wheeler (189)
D1: Josi (16), Karlsson (40), Dunn (160)
D2: Makar (17), S. Jones (137), R. Andersson (176)
D3: Hamilton (47), Pietrangelo (98), Weegar (170)
D4: Bouchard (48), Chabot (145), Larsson (192)
D5: Nurse (120), Chabot (121), Pionk (192)
G1: Oettinger (30), Skinner (67), Bobrovsky (91)
G2: Sorokin (25), Kuemper (96), Fleury (169)
G3: Oettinger (31), Binnington (138), Vejmelka (151)
G4: Vanecek (80), Markstrom (89), Merzlikins (152)
G5: Bobrovsky (99), Campbell (147), Schmid (190)
Thanks for reading! Subscribe to the newsletter and stay tuned to Discord and my Xwitter to make sure you don’t miss a thing!