How we chose this year's FLAME Instructors

For two years now, FLAME Festival has offered a unique experience in the flow/fire festival world: we let our attendees help determine our lineup. Last year was our first attempt at this experiment, and we learned a lot from it, both about what worked and what didn't. This year, in an effort to improve on the system we created last year, we set out to apply those lessons to voting and teacher selection and do our best to assemble the best festival the Southeastern United States has ever seen. Last year we set a rare standard for transparency in the flow festival world by writing in depth about how the selection process worked and how each teacher fared in it. We'd like to continue that trend in the hopes that it helps both attendees and instructors navigate this system in the future.

First off, last year's system was exceedingly simple: we weighted classes by where they fell inside of brackets. If a class made it into the top 10 most-voted-on classes, its teacher was given a free ride to the festival, no questions asked. Beyond that, it took a certain number of classes within the top 100 to offset any travel costs. If you'd like to see a full accounting of the math from last year, you can read the blog entry on it here. It was an incredibly simple system, and like any simple system it had huge flaws that became readily apparent at the festival itself:

First, giving a free ride to any teacher in the top 10 proved to be a bit of a nightmare for us, because a couple of them were only offering us 2 classes. There's a more or less tacit rule in the festival world that 3 classes are what it takes to earn a comp ticket to an event, so we wound up compensating certain instructors beyond their contributions to the festival in ways we weren't 100% comfortable with.

Second, the voting system was heavily weighted toward poi and against tools like hoop, which became a huge problem at the event itself: at some points we wound up with three poi classes running simultaneously, and many hoopers asked us why there were so few classes geared toward them.

Third, we'd originally conceived of this system as a way to measure each instructor's value-add to our lineup, the theory being that if they could accumulate a certain number of votes, their classes were indispensable to our schedule and therefore worth the extra consideration. In practice, we frequently found that classes in the top ten had low attendance, suggesting that vote counts had no direct correlation with popularity at the event itself.

With these three shortcomings in mind, we approached this year with a new series of directives to ensure we wouldn’t repeat the problems of the previous year. They were:

  1. No teacher gets a comp or stipend without getting at least 3 classes on the schedule
  2. Select classes to create a schedule that is more evenly distributed between tools
  3. Determine a better metric for a teacher’s value add

One quick note: it's been suggested by many people (including some of the FLAME organizers) that we switch to a system where only people with tickets to the event can vote on classes. A well-known flaw in our system is that anybody can vote, so instructors can stack the vote by asking friends and family members who may never attend the event to vote for them and skew the system in their favor. There are two major problems with restricting voting to ticket holders, though. The first is that most of our ticket sales come in the last two weeks before the event; given that we need to arrange travel for our instructors at least a month ahead of time, a comparatively small percentage of attendees would determine the makeup of the event for everyone. Second, the festival switched to a third-party ticket vendor this year, so ticket purchases are no longer directly looped into our website, and creating accounts for ticket holders on the site would likely be a manual and time-consuming process for us. Unfortunately, the costs in time and effort outweighed the possible benefits.

With this in mind, we created a much more complicated system, but one that we hope serves the needs of our attendees more fully. We began by recruiting "headliners" for the event: 10 teachers with established reputations for being at the forefront of their art and formidable competence as instructors, both to build an initial level of excitement around the event and to seed the schedule, ensuring we adequately served the tools we neglected last year. These teachers were Baxter, Bags & Valentina, Chris Kelly, Marvin Ong, Doodle, Corey White, Lux Luminous, Sticky, and Spades, and they would be responsible for 26 of the classes on our final schedule. We originally planned on a schedule of 90 classes (5 slots per day with 9 tool tracks).

Our next step was to determine how many classes we would need for each tool. We distributed the remaining slots on our schedule based upon the total vote counts for each tool on the voting site, the assumption being that we could estimate the demand for any given tool from the total votes cast for the classes associated with that tool. Essentially, we divided the percentage of classes a tool got on the schedule by the percentage of the votes it received, and allowed a variance of about +/- 0.5 around that ratio to account for the low total number of classes. We then adjusted the quantity of classes for each tool until each was as close to this variance as possible while still reaching our 90 total classes and getting as close as possible to complete tracks for every major tool. Happily, we can report that the sole outlier was flow wand, and only because we had already selected a headliner for that tool, which threw its weighting off in the final vote count given that only one other class was offered for it.
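The ratio check described above can be sketched in a few lines of Python. The tool names and vote/class counts below are invented for illustration (the real numbers came from our voting site); only the ratio calculation itself reflects the method:

```python
def allotment_ratios(votes, classes):
    """For each tool, return (its share of schedule slots) / (its share of votes).

    A ratio near 1.0 means the schedule matches demand; we allowed
    roughly +/- 0.5 of wiggle room around that.
    """
    total_votes = sum(votes.values())
    total_classes = sum(classes.values())
    return {
        tool: (classes[tool] / total_classes) / (votes[tool] / total_votes)
        for tool in votes
    }

# Made-up example numbers: 3000 total votes, 90 total class slots.
votes = {"poi": 1200, "hoop": 800, "staff": 500, "fans": 300, "flow wand": 200}
classes = {"poi": 30, "hoop": 24, "staff": 16, "fans": 11, "flow wand": 9}

for tool, r in allotment_ratios(votes, classes).items():
    print(f"{tool}: ratio {r:.2f}, within +/-0.5: {abs(r - 1.0) <= 0.5}")
```

In this toy example, hoop lands at exactly 1.0 (its 24 of 90 slots matches its 800 of 3000 votes), while a tool with few offered classes, like flow wand here, drifts toward the edge of the variance, much as it did in our real numbers.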

With the quantity of classes for each tool determined, it was time to decide which classes from each instructor would fill them. We were working with a budget of around $2100, so we started by figuring out upper and lower thresholds for how much we were willing to pay per instructor. As it turned out, the average travel stipend requested by our instructors came to $90, so we started with the assumption that our average instructor would need this amount, fully anticipating needs at either end of the spectrum. At this point we were paying $30/class to get an instructor there, so at the standard 3 classes the base cost of any instructor came to $90. We would then measure each instructor against this base cost in the final assessment.

Next came factoring in the vote count. This was an incredible headache, because it was so difficult to assign a dollar value to each vote to offset the cost of each teacher. Further, it was important to us to measure a teacher's value not by single classes but by the overall demand for them: if a teacher had a single breakthrough class while the rest of their classes drew few votes, that was not a strong incentive to bring them to the festival. So we began by throwing out the vote counts for all but the top 3 classes each teacher offered and summing the votes for those. The logic was the same as that of the electoral college: we didn't want people who could win on one or two classes, we wanted instructors with consistent performance.

No teacher accumulated more than 153 votes for a given tool, so a direct correlation between base cost and votes received was impossible. To weight the votes, we started by determining the average number of votes accumulated by any given instructor (52) and comparing it to the greatest number of votes received by any instructor (153). On the assumption that, with 131 possible teachers, we wanted all instructors who performed better than average, 52 votes became our baseline for performance and 153 our standard of exceptional. No teacher asked for more than $400 as a travel stipend, so with this established as our spending ceiling for any given teacher, we arrived at the following equation for determining the value of votes:

Where T=total votes a teacher received for their top 3 classes and V=value add:

V = (T - 52) * 4

In other words, a teacher with 153 votes would have a value add of $404, right at our roughly $400 spending ceiling. With this value-add formula in place, each teacher's value to the festival was determined with the following equation:

Where B = base cost of an instructor, V = value add as outlined above, S = requested stipend, and W = the teacher's overall value (we'll use W here since T already stands for votes above):

W = B + V - S

In other words, a teacher's value to our festival was the sum of their base cost and their value add, minus the stipend they requested from us. The theory was that any instructor for whom the result came out positive was an instructor the festival would profit from having on the schedule.
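Putting the two formulas together, the whole calculation fits in a short Python sketch. The constants ($90 base cost, 52-vote baseline, $4 per vote) come straight from the numbers above; the example teacher's votes and stipend are invented:

```python
BASE_COST = 90        # $30/class * 3 classes
VOTE_BASELINE = 52    # average votes accumulated per instructor
DOLLARS_PER_VOTE = 4  # scales the 153-vote max to roughly the $400 ceiling

def value_add(top3_votes):
    """Dollar value of a teacher's votes: (T - 52) * 4,
    where T is the summed votes of their top 3 classes."""
    return (top3_votes - VOTE_BASELINE) * DOLLARS_PER_VOTE

def teacher_value(top3_votes, stipend):
    """Overall value to the festival: base cost + value add - requested stipend.
    A positive result means the teacher 'profits' the schedule."""
    return BASE_COST + value_add(top3_votes) - stipend

# A hypothetical teacher: 120 top-3 votes, asking for a $150 stipend.
print(teacher_value(120, 150))  # 90 + (120 - 52) * 4 - 150 = 212
```

Note that a perfectly average teacher (52 votes) asking exactly the $90 average stipend nets out to zero, which is the break-even point the model intends.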

With this in mind, we went tool by tool to determine each instructor's individual value, selecting only the top performers until we had enough classes to fill each track as determined by our class-allotment algorithm. The results were a little mixed, because we frequently wound up with instructors who made the cut for one tool but not another. In those cases, we gave the instructor a pass on all their tools and tried to put their highest-voted classes on the schedule. We also frequently found we'd get close to the totals we needed for particular tools but never hit the exact number if we took 3 classes from every teacher. In the interest of giving as many teachers as possible a spot on our schedule, we established a ceiling of no more than 3 classes per instructor. Because of that ceiling, it very quickly became clear that we would not be able to both honor it and limit ourselves to only 90 slots.
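The per-tool selection step amounts to a greedy fill: sort candidates by overall value, take up to 3 classes from each until the track is full. This sketch simplifies away the cross-tool edge cases described above, and the teacher names and numbers are invented:

```python
def fill_track(candidates, slots_needed, max_classes=3):
    """Greedily fill one tool's track.

    candidates: list of (name, overall_value, classes_offered) tuples.
    Highest-value teachers are taken first, capped at max_classes each,
    until slots_needed classes are scheduled.
    """
    schedule = []
    remaining = slots_needed
    for name, value, offered in sorted(candidates, key=lambda c: c[1], reverse=True):
        if remaining <= 0:
            break
        take = min(offered, max_classes, remaining)
        schedule.append((name, take))
        remaining -= take
    return schedule

# Hypothetical track with 5 slots and three candidate teachers:
track = fill_track(
    [("Teacher A", 212, 4), ("Teacher B", 96, 3), ("Teacher C", -40, 3)],
    slots_needed=5,
)
print(track)  # [('Teacher A', 3), ('Teacher B', 2)]
```

The 3-class cap is exactly why the greedy fill rarely lands on the target number: taking fewer classes from the last teacher leaves a partial contribution, and spreading slots across more teachers inflates the total slot count, which is what pushed us past 90.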

In the end, we opted to expand our schedule to 12 tracks for a total of 120 class slots. We still wound up with some very odd gaps in the schedule, so we filled these with classes from event organizers, who were already receiving comp tickets but no stipends and thus would not upset the model at all. A few outliers remained who could not offer us 3 classes, so we offered them half-price tickets instead. A little last-minute tweaking was necessary to ensure we wouldn't wind up with overlapping classes on the schedule, but in the end we were happy with the results. The full list of instructors added through this process is:

Alex Branham, Knight, Willow, Christina Berkshire, Gonzo, Casey Houle, Jonah DiGirolamo, Cosmic Greg Lee, Jandro Nerdo, Flo Fox, Sennyo, Kimberly Bucki, Melissa Coffey, Perkulator, Becca Becker, Brian Thompson, Cassie, Corey Glover, Drex, Echo, Eli Harrod, Kassandra Morrison, Kyle Owen/Gib, Nick Garcia, Ninja Pyrate, Spidey, Spinnabel Lee, Tesla, Timbo Slice, Tali, Penelope Tate, Patricia Farmer, Jacob Wetzel, and Carl Sparks.

Thanks so much to all our instructors and everyone who participated in voting! We hope that this year is the best FLAME ever. As always, you're free to check our work by visiting http://dev.drexfactor.com/top-rated-points and doing the math yourself.
