
Cognitive Models for the Intelligent Aggregation of Lists

Ranker is constantly working to improve our crowdsourced list algorithms in order to surface the best possible answers to the questions on our site.  As part of this effort, we work with leading academics who research the “wisdom of crowds”, and below is a poster we recently presented at the annual meeting of the Association for Psychological Science (led by Ravi Selker at the University of Amsterdam, in collaboration with Michael Lee from the University of California, Irvine).

While the math behind the aggregation model may be complex (a paper describing it in detail will hopefully be published shortly), the principle being demonstrated is relatively simple.  Specifically, aggregating lists with models that take into account the inferred expertise of each list maker outperforms simple averaging when the results are compared to real-world ground truths (e.g. box office revenue).  Ranker’s algorithms for determining our crowdsourced rankings are similarly complex, and they are likewise designed to produce the best answers possible.
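To make the principle concrete, here is a minimal sketch of the difference between a simple average of ranks and an aggregation that weights list makers by inferred expertise.  This is not the model from the poster (which is a full cognitive model); the names, rankings, and weighting rule below are made up purely for illustration.

```python
import numpy as np

# Toy data: four people rank the same five items (rank 1 = best).
# Names and rankings are hypothetical, purely for illustration.
rankings = {
    "alice": [1, 2, 3, 4, 5],
    "bob":   [2, 1, 3, 5, 4],
    "carol": [1, 3, 2, 4, 5],
    "dave":  [5, 4, 1, 2, 3],   # a ranker who disagrees with everyone else
}
R = np.array(list(rankings.values()), dtype=float)   # shape: (rankers, items)

# Simple average: every ranker counts equally.
simple_order = np.argsort(R.mean(axis=0))

# Crude "inferred expertise": weight each ranker by agreement with the
# current consensus, then re-estimate the consensus for a few iterations.
weights = np.ones(len(R)) / len(R)
for _ in range(5):
    consensus = weights @ R                            # weighted mean rank per item
    agreement = -((R - consensus) ** 2).mean(axis=1)   # closer to consensus = higher
    weights = np.exp(agreement - agreement.max())      # softmax-style weighting
    weights /= weights.sum()

weighted_order = np.argsort(weights @ R)
print("simple average order:    ", simple_order)
print("expertise-weighted order:", weighted_order)
print("inferred ranker weights: ", np.round(weights, 2))
```

In this toy example the outlier ranker drags the simple average around, while the expertise-weighted aggregate largely ignores them; the published model does something analogous in a principled, fully Bayesian way.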

 

[Poster: Cognitive models for the intelligent aggregation of lists]

 

– Ravi Iyer


Ranker Predicts Spurs to beat Cavaliers for 2015 NBA Championship

The NBA season starts tonight, and building on the proven success of our World Cup and movie box office predictions, as well as the preliminary success of our NFL predictions, Ranker is happy to announce our 2015 NBA Championship Predictions, based upon the aggregated data from basketball fans who have weighed in on our NBA and basketball lists.

Ranker’s 2015 NBA Championship Predictions as Compared to ESPN and FiveThirtyEight

For comparison’s sake, I included the current ESPN power rankings as well as the teams to which FiveThirtyEight gives the highest chance of winning the championship.  As with any sporting event, chance will play a large role in the outcome, but the premise of producing our predictions regularly is to validate our belief that the aggregated opinions of many will generally outperform expert opinions (ESPN) or models based on non-opinion data (e.g. player performance data plays a large role in FiveThirtyEight’s predictions).  Our ultimate goal is to prove the utility of crowdsourced data: NBA predictions are a crowded space where many people attempt to answer the same question, but Ranker produces the world’s only significant data model for equally important questions, such as determining the world’s best DJs, everyone’s biggest turn-ons, or the best cheeses for a grilled cheese sandwich.

– Ravi Iyer


Ranker Predicts Jacksonville Jaguars to have NFL’s worst record in 2014

Today is the start of the NFL season, and building on our success in using crowdsourcing to predict the World Cup, we’d like to release our predictions for the upcoming NFL season.  Using data from our “Which NFL Team Will Have the Worst Record in 2014?” list, which was largely voted on by the community at WalterFootball.com (using a Ranker widget), we predict the following order of finish, from worst to first.  Unfortunately for fans in Florida, the wisdom of crowds predicts that the Jacksonville Jaguars will finish last this year.

As a point of comparison, I’ll also include predictions from WalterFootball’s Walter Cherepinsky, ESPN (based on power rankings), and Betfair (based on betting odds for winning the Super Bowl).  Since we are attempting to predict the teams with the worst records in 2014, the worst teams are listed first and the best teams are listed last.

Ranker NFL Worst Team Predictions 2014

The value proposition of Ranker is that we believe the combined judgments of many individuals are smarter than even the most informed individual experts.  Our predictions were based on over 27,000 votes from 2,900+ fans, taking into account both positive and negative sentiment by combining the raw magnitude of positive votes with the ratio of positive to negative votes.  As research on the wisdom of crowds predicts, the crowdsourced judgments from Ranker should outperform those from the experts.  Of course, there is a lot of luck and randomness throughout the NFL season, so our results, good or bad, should be taken with a grain of salt.  What is perhaps more interesting is the proposition that crowdsourced data can approximate the results of a betting market like Betfair, because the real value of Ranker data is in predicting things where there is no betting market (e.g. what content should Netflix pursue?).
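The post doesn’t spell out the exact formula, so the snippet below is only a hypothetical illustration of combining the raw magnitude of positive votes with the positive-to-negative ratio; the function name and weighting are assumptions, not Ranker’s actual scoring.

```python
import math

def crowd_score(up_votes: int, down_votes: int) -> float:
    """Hypothetical score blending vote volume with vote ratio.

    Not Ranker's real formula: it just illustrates giving weight both to how
    many positive votes an item received and to how positive the voting was
    overall.
    """
    total = up_votes + down_votes
    if total == 0:
        return 0.0
    ratio = up_votes / total           # share of votes that are positive
    magnitude = math.log1p(up_votes)   # diminishing returns on raw volume
    return magnitude * ratio

# Made-up examples: heavy but mixed voting vs. light, mostly negative voting.
print(crowd_score(up_votes=900, down_votes=300))   # ~5.1
print(crowd_score(up_votes=200, down_votes=500))   # ~1.5
```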

Stay tuned until the end of the season for results.

– Ravi Iyer


Ranker World Cup Predictions Outperform Betfair & FiveThirtyEight

Former England international player turned broadcaster Gary Lineker famously said “Football is a simple game; 22 men chase a ball for 90 minutes and at the end, the Germans always win.” That proved true for the 2014 World Cup, with a late German goal securing a 1-0 win over Argentina.

Towards the end of March, we posted predictions for the final ordering of teams in the World Cup, based on Ranker’s re-ranks and voting data. During the tournament, we posted an update, including comparisons with predictions made by FiveThirtyEight and Betfair. With the dust settled in Brazil (and the fireworks in Berlin shelved), it is time to do a final evaluation.

Our prediction was a little different from many others, in that we tried to predict the entire final ordering of all 32 teams. This is different from sites like Betfair, which provided an ordering in terms of the predicted probability each team would be the overall winner. To assess our order against the true final result, we used a standard statistical measure called partial tau. It is basically an error measure — 0 would be a perfect prediction, and the larger the value grows, the worse the prediction — based on how many “swaps” of a predicted order need to be made to arrive at the true order. The “partial” part of partial tau allows for the fact that the final result of the tournament is not a strict ordering. While the final and the third-place play-off determined the order of the first four teams (Germany, Argentina, the Netherlands, and Brazil), the other teams are effectively tied in groups from then on.  All of the teams eliminated in the quarter-finals can be regarded as having finished in equal fifth place, all of the teams eliminated in the round of 16 finished equal sixth, and all of the teams eliminated in group play finished equal last.
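Here is a rough sketch of how an error measure like this can be computed: pairs of teams that are tied in the true (partial) result are skipped, and we count how many of the remaining pairs the prediction orders the wrong way.  The example prediction below is made up for illustration.

```python
from itertools import combinations

def partial_tau(true_levels: dict, predicted_order: list) -> int:
    """Count predicted pairs that disagree with the true partial order.

    true_levels maps each team to its finishing level (1 = champion); teams
    sharing a level are tied, and tied pairs are not counted. predicted_order
    is a strict ranking, best first.
    """
    pos = {team: i for i, team in enumerate(predicted_order)}
    errors = 0
    for a, b in combinations(true_levels, 2):
        if true_levels[a] == true_levels[b]:
            continue                      # tied in the true result: ignore
        truth_says_a_first = true_levels[a] < true_levels[b]
        prediction_says_a_first = pos[a] < pos[b]
        if truth_says_a_first != prediction_says_a_first:
            errors += 1
    return errors

# Tiny worked example: the top four plus two tied quarter-final losers.
true_levels = {"Germany": 1, "Argentina": 2, "Netherlands": 3, "Brazil": 4,
               "France": 5, "Belgium": 5}
prediction = ["Brazil", "Germany", "Argentina", "Netherlands", "France", "Belgium"]
print(partial_tau(true_levels, prediction))   # 3: Brazil is predicted three places too high
```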

The model we used to make our predictions involved three sources of information. The first was the ranks and re-ranks provided by users. The second was the up and down votes provided by users. The third was the bracket structure of the tournament itself. As we emphasized in our original post, the initial group stage structure of the World Cup provides strong constraints on where teams can and cannot finish in the final order. Thus, we were interested to test how our model predictions depended on each source of information. This led to a total of 8 separate models:

  • Random: Using no information, but just placing all 32 teams in a random order.
  • Bracket: Using no information beyond the bracket structure, placing all the teams in an order that was a possible finish, but treating each game as a coin toss (sketched in the code after this list).
  • Rank: Using just the ranking data.
  • Vote: Using just the voting data.
  • Rank+Vote: Using the ranking and voting data, but not the bracket structure.
  • Bracket+Vote: Using the voting data and bracket structure, but not the ranking data.
  • Bracket+Rank: Using the ranking data and bracket structure, but not the voting data.
  • Rank+Vote+Bracket: Using all of the information, as per the predictions made in our March blog post.
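As a concrete illustration of the Bracket baseline described above, here is a minimal sketch that treats every game as a coin toss and produces one finishing order that is possible under the tournament structure.  The round-of-16 pairings and group handling are simplified, and the team names are placeholders; this is not the code behind our actual predictions.

```python
import random

def play_round(teams):
    """Coin-toss each adjacent pair; return (winners, losers)."""
    winners, losers = [], []
    for i in range(0, len(teams), 2):
        a, b = teams[i], teams[i + 1]
        w, l = (a, b) if random.random() < 0.5 else (b, a)
        winners.append(w)
        losers.append(l)
    return winners, losers

def bracket_only_prediction(groups):
    """One finishing order implied purely by the bracket, every game a coin toss.

    groups is a list of 8 lists of 4 team names. Adjacent groups are paired
    for the round of 16, which is a simplification of the actual draw.
    """
    group_losers, round_of_16 = [], []
    for i in range(0, 8, 2):
        a = random.sample(groups[i], 4)       # random group order = coin-toss group play
        b = random.sample(groups[i + 1], 4)
        group_losers.extend(a[2:] + b[2:])
        round_of_16.extend([a[0], b[1], b[0], a[1]])   # group winner vs other runner-up

    quarter, r16_losers = play_round(round_of_16)
    semi, qf_losers = play_round(quarter)
    final, sf_losers = play_round(semi)
    (third,), (fourth,) = play_round(sf_losers)        # third-place play-off
    (champion,), (runner_up,) = play_round(final)

    # Best first; teams knocked out in the same round are effectively tied.
    return [champion, runner_up, third, fourth] + qf_losers + r16_losers + group_losers

# Usage with made-up group labels:
groups = [[f"Team {g}{i}" for i in range(1, 5)] for g in "ABCDEFGH"]
print(bracket_only_prediction(groups))
```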

We also considered the Betfair and FiveThirtyEight rankings, as well as the Ranker Ultimate List at the start of the tournament, as interesting (but maybe slightly unfair, given their different goals) comparisons. The partial taus for all these predictions, with those based on less information on the left, and those based on more information on the right, are shown in the graph below. Remember, lower is better.

The prediction we made using the votes, ranks, and bracket structure out-performed Betfair, FiveThirtyEight, and the Ranker Ultimate List. This is almost certainly because of the use of the bracket information. Interestingly, just using the ranking and bracket structure information, but not the votes, resulted in a slightly better prediction. It seems our modeling needs to improve how it benefits from using both ranking and voting data, since the Rank+Vote prediction was worse than either source alone. It is also interesting to note that the Bracket information by itself is not useful — it performs almost as poorly as a random order — but it is powerful when combined with people’s opinions, as the improvements from Rank to Bracket+Rank and from Vote to Bracket+Vote show.


Comparing World Cup Prediction Algorithms – Ranker vs. FiveThirtyEight

Like most Americans, I pay attention to soccer/football once every four years.  But I think about prediction almost daily, so this year’s World Cup will be especially interesting to me: I have a dog in this fight.  Specifically, UC-Irvine Professor Michael Lee put together a prediction model based on the combined wisdom of Ranker users who voted on our Who will win the 2014 World Cup list, plus the structure of the tournament itself.  The methodology runs in contrast to the FiveThirtyEight model, which uses entirely different data (national team results plus the results of players who will be playing for the national team in league play) to make predictions.  As such, the battle lines are clearly drawn.  Will the wisdom of crowds outperform algorithmic analyses based on match results?  Put another way, this is a test of whether human beings notice things that aren’t picked up in the box scores and statistics that form the core of FiveThirtyEight’s predictions or sabermetrics.

So who will I be rooting for?  Both methodologies agree that Brazil, Germany, Argentina, and Spain are the teams to beat.  But the crowds believe that those four teams are relatively evenly matched while the FiveThirtyEight statistical model puts Brazil as having a 45% chance to win.  After those first four, the models diverge quite a bit with the crowd picking the Netherlands, Italy, and Portugal amongst the next few (both models agree on Colombia), while the FiveThirtyEight model picks Chile, France, and Uruguay.  Accordingly, I’ll be rooting for the Netherlands, Italy, and Portugal and against Chile, France, and Uruguay.

In truth, the best model would combine the signal from both methodologies, similar to how the Netflix Prize was won or how baseball teams combine scouting and sabermetric opinions.  I’m pretty sure that Nate Silver would agree that his model would be improved by adding our data (or similar data from betting markets like Betfair, which similarly thought that FiveThirtyEight was underrating Italy and Portugal), and vice versa.  Still, even knowing that chance will play a big part in the outcome, I’m hoping Ranker data wins in this year’s World Cup.

– Ravi Iyer

Ranker’s Pre-Tournament Predictions:

FiveThirtyEight’s Pre-Tournament Predictions:


Predicting the Movie Box Office

The North American market for films totaled about US$11 billion in 2013, with over 1.3 billion admissions. The film industry is a big business that not even Ishtar, Jaws: The Revenge, or the 1989 Australian film “Houseboat Horror” managed to derail. (Check out Houseboat Horror next time you’re low on self-esteem and need to be reminded that there are many people in the world much less talented than you.)

Given the importance of the film industry, we were interested in using Ranker data to make predictions about box office grosses for different movies. The Ranker list dealing with the Most Anticipated 2013 Films gave us some opinions — both in the form of re-ranked lists and up and down votes — on which to base predictions. We used the same cognitive modeling approach previously applied to make Football (Soccer) World Cup predictions, trying to combine the wisdom of the Ranker crowd.

Our basic results are shown in the figure below. The movies people had ranked are listed from the heavily anticipated Iron Man 3, Star Trek: Into Darkness, and Thor: The Dark World down to less anticipated films like Simon Killer, The Conjuring, and Alan Partridge: Alpha Papa. The voting information is shown in the middle panel, with the light bar showing the number of up-votes and the dark bar showing the number of down-votes for each movie. The ranking information is shown in the right panel, with the size of the circles showing how often each movie was placed in each ranking position by a user.

This analysis gives us an overall crowd rank order of the movies, but that is still a step away from making direct predictions about the number of dollars a movie will gross. To bridge this gap, we consulted historical data. The Box Office Mojo site provides movie gross totals for the top 100 movies each year for about the last 20 years. There is a fairly clear relationship between the ranking of a movie in a year and the money it grosses. As the figure below shows, the few highest-grossing movies return a lot more than the rest, following a “U-shaped” pattern that is often found in real-world statistics. If a movie is the 5th top grossing in a given year, for example, it grosses between about 100 and 300 million dollars. If it is the 50th highest grossing, it makes between about 10 and 80 million.

We used this historical relationship between ranking and dollars to map our predictions about ranking onto predictions about dollars. The resulting predictions for the 2013 movies are shown below. These predictions are naturally uncertain, and so cover a range of possible values, for two reasons: we do not know exactly where the crowd believed each movie would finish in the ranking, and we only know a range of possible historical grosses for each rank. Our predictions acknowledge both of those sources of uncertainty, and the blue bars in the figure below show the region in which we predicted the final outcome was 95% likely to lie. To assess our predictions, we looked up the answers (again at Box Office Mojo) and overlayed them as red crosses.
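A minimal sketch of that mapping step, assuming we have (a) historical grosses by within-year rank, of the kind Box Office Mojo provides, and (b) samples of each movie’s plausible crowd rank from the aggregation model.  The numbers below are placeholders rather than the real historical data, and the percentile approach is just one simple way to propagate both sources of uncertainty.

```python
import numpy as np

# historical_gross[r] = grosses (in $ millions) of the movie ranked r-th in its
# year, across several past years. Placeholder numbers, not the real data.
historical_gross = {
    1: [400, 480, 620, 530],
    5: [160, 220, 280, 190],
    20: [90, 120, 150, 110],
    50: [25, 45, 70, 60],
}

def gross_interval(rank_samples, historical_gross, level=0.95):
    """Prediction interval for a movie's gross, given uncertain rank.

    rank_samples holds plausible final ranks for the movie (e.g. samples from
    the crowd model). For each sampled rank we draw a historical gross observed
    at the nearest available rank, so the interval reflects both the rank
    uncertainty and the rank-to-dollars uncertainty.
    """
    known_ranks = np.array(sorted(historical_gross))
    draws = []
    for r in rank_samples:
        nearest = known_ranks[np.abs(known_ranks - r).argmin()]
        draws.append(np.random.choice(historical_gross[nearest]))
    lo, hi = np.percentile(draws, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

# Example: a movie the crowd thinks will finish somewhere around 4th to 8th.
print(gross_interval([4, 5, 5, 6, 7, 8, 5, 6], historical_gross))
```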

Many of our predictions are good, for both high grossing (Iron Man 3, Star Trek) and more modest grossing (Percy Jackson, Hansel and Gretel) movies. Forecasting social behavior, though, is very difficult, and we missed a few high grossing movies (Gravity) and over-estimated some relative flops (47 Ronin, Kick Ass 2). One interesting finding came from contrasting an analysis based on ranking and voting data with similar analyses based on just ranking or just voting. Combining both sorts of data led to more accurate predictions than using either alone.

We’re repeating this analysis for 2014, waiting for user re-ranks and votes for the Most Anticipated Films of 2014. The X-men and Hunger Games franchises are currently favored, but we’d love to incorporate your opinion. Just don’t up-vote Houseboat Horror.


World Cup 2014 Predictions

An octopus called Paul was one of the media stars of the 2010 soccer world cup. Paul correctly predicted 11 out of 13 matches, including the final in which Spain defeated the Netherlands. The 2014 world cup is in Brazil and, in an attempt to avoid eating mussels painted with national flags, we made predictions by analyzing data from Ranker’s “Who Will Win The 2014 World Cup?” list.

Ranker lists provide two sources of information, and we used both to make our predictions. One source is the original ranking, and the re-ranks provided by other users. For the world cup list, some users were very thorough, ranking all (or nearly all) of the 32 teams who qualified for the world cup. Other users were more selective, listing just the teams they thought would finish in the top places. An interesting question for data analysis is how much weight should be given to different rankings, depending on how complete they are.

The second source of information on Ranker is the thumbs-up and thumbs-down votes other users make in response to the master list of rankings. Ranker lists often have many more votes than they have re-ranks, so the voting data are potentially very valuable. Another interesting question for data analysis, then, is how the voting information should be combined with the ranking information.

A special feature of making world cup predictions is that there is very useful information provided by the structure of the competition itself. The 32 teams have been drawn in 8 brackets with 4 teams each. Within a bracket, every team plays every other team once in initial group play. The top two teams from each bracket then advance to a series of elimination games. This system places strong constraints on possible outcomes, which a good prediction should follow. For example, although Group B contains Spain, the Netherlands, and Chile — all strong teams, currently ranked in the top 16 in the world according to FIFA rankings — only two of them can progress from group play and finish in the top 16 for the world cup.
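One concrete way to encode that kind of constraint is to check that a predicted top 16 never takes more than two teams from the same group.  A tiny sketch (only the Group B teams are filled in, as in the example above; a full check would use the whole draw):

```python
from collections import Counter

def violated_groups(predicted_top16, group_of):
    """Return the groups that contribute more than two teams to a predicted top 16."""
    counts = Counter(group_of[team] for team in predicted_top16 if team in group_of)
    return [group for group, n in counts.items() if n > 2]

# Only Group B shown; the other seven groups would come from the actual draw.
group_of = {"Spain": "B", "Netherlands": "B", "Chile": "B", "Australia": "B"}

predicted_top16 = ["Brazil", "Spain", "Netherlands", "Chile", "Germany", "Argentina"]  # truncated
print(violated_groups(predicted_top16, group_of))   # ['B'] -> an impossible prediction
```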

We developed a model that accounts for all three of these sources of information. It uses the ranking and re-ranking data, the voting data, and the constraints coming from the brackets, to make an overall prediction. The results of this analysis are shown in the figure. The left panel shows the thumbs-up (to the right, lighter) and thumbs-down (to the left, darker) votes for each team. The middle panel summarizes the ranking data, with the area of the circles corresponding to how often each team was ranked in each position. The right hand panel shows the inferred “strength” of each team on which we based our predicted order.

Our overall prediction has host-nation Brazil winning. But the distribution of strengths shown in the model inferences panel suggests it is possible Germany, Argentina, or Spain could win. There is little to separate the remainder of the top 16, with any country from the Netherlands to Algeria capable of doing well in the finals. The impact of the drawn brackets on our predictions is clear, with a raft of strong countries — England, the USA, Uruguay, and Chile — predicted to miss the finals, because they have been drawn in difficult brackets.

– Michael Lee

How Netflix’s AltGenre Movie Grammar Illustrates the Future of Search Personalization

A few contacts recently sent me this Atlantic article on how Netflix reverse engineered Hollywood, and it happens to mirror my long-term vision for how Ranker’s data fits into the future of search personalization.  Netflix’s goal, to put “the right title in front of the right person at the right time,” is very similar to what Apple, Bing, Google, and Facebook are attempting to do with personalized contextual search.  Rather than you having to type in “best kitchen gadgets for mothers”, applications like Google Now and Cue (bought by Apple) hope to eventually surface this information to you in real time, knowing not only when your mother’s birthday is, but also that you tend to buy kitchen gadgets for her, and knowing which of the best-rated kitchen gadgets aren’t too complex and are in your price range.  If the application were good enough, a lot of us would trust it to simply charge our credit card and send the right gift.  But obviously we are a long way from that reality.

Netflix’s altgenre movie grammar (e.g. Irreverent Werewolf Movies Of The 1960s) gives us a glimpse of the level of specificity that would be required to get us there.  Consider what you need to know to buy the right gift for your mom.  You aren’t just looking for a kitchen gadget, but one with specific attributes.  In altgenre terminology, you might be looking for “best simple, beautifully designed kitchen gadgets of 2014 that cost between $25 and $100” or “best kitchen gadgets for vegetarian technophobes”.  Google knows that simple text matching is not going to get it the level of precision necessary to provide such answers, which is why semantic search, where the precise meaning of pages is mapped, has become a strategic priority.

However, the universe of altgenre equivalents in the non-movie world is nearly endless (e.g. Netflix has thousands of ways just to classify movies), which is where Ranker comes in, as one of the world’s largest sources for collecting explicit cross-domain altgenre-like opinions.  Semantic data from sources like Wikipedia, DBpedia, and Freebase can help you put together factual altgenres like “of the 60s” or “that starred Brad Pitt“, but you need opinion ratings to put together subtler data like “guilty pleasures” or “toughest movie badasses“.  Netflix’s success is proof of the power of this level of specificity in personalizing movies, and it is worth considering how they produced this knowledge: not by running machine learning algorithms on their endless stream of user behavior data, but by soliciting explicit ratings along these dimensions, paying “people to watch films and tag them with all kinds of metadata” using a “36-page training document that teaches them how to rate movies on their suggestive content, goriness, romance levels, and even narrative elements like plot conclusiveness.”  Some people may think that with enough data, TripAdvisor should be able to tell you which cities are “cool”, but big data is not always better data.  Most data scientists will tell you about the importance of defining the features in any recommendation task (see this article for technical detail), rather than assuming that a large amount of data will reveal all of the right dimensions.  The wrong level of abstraction can make prediction akin to trying to predict who will win the Super Bowl by knowing the precise position and status of every cell in every player on every NFL team.  Netflix’s system allows them to make predictions at the right level of abstraction.

The future of search needs a Netflix grammar that goes beyond movies.  It needs to be able to understand not only which movies are dark versus gritty, but also which cities are better babymoon destinations versus party cities and which rock singers are great vocalists versus great frontmen.  Ranker lists actually have a similar grammar to Netflix movies, except that we apply this grammar beyond the movie domain.  In a subsequent post, I’ll go into more detail about this, but suffice it to say for now that I’m hopeful that our data will eventually play a role in the personalization of non-movie content similar to the one Netflix’s microtagging plays in film recommendations.

– Ravi Iyer

 

Why Topsy/Twitter Data may never predict what matters to the rest of us

Recently Apple paid a reported $200 million for Topsy and some speculate that the reason for this purchase is to improve recommendations for products consumed using Apple devices, leveraging the data that Topsy has from Twitter.  This makes perfect sense to me, but the utility of Twitter data in predicting what people want is easy to overstate, largely because people often confuse bigger data with better data.  There are at least 2 reasons why there is a fairly hard ceiling on how much Twitter data will ever allow one to predict about what regular people want.

1. Sampling – Twitter has a ton of data, with daily usage of around 10%.  Sample size isn’t the issue here, as there is plenty of data; rather, the people who use Twitter are a very specific set of people.  Even if you correct for demographics, the psychographic of people who want to share their opinions publicly and regularly (far more people have heard of Twitter than actually use it) is too unique to generalize to the average person, in the same way that surveys of landline users cannot be used to predict what psychographically distinct cellphone users think.

2. Domain Comprehensiveness – The opinions that people share on Twitter are biased by the medium, such that they do not represent the spectrum of things many people care about.  There are tons of opinions on entertainment, pop culture, and links that people want to promote, since they are easy to share quickly, but very little information on people’s important life goals or the qualities we admire most in a person or anything where people’s opinions are likely to be more nuanced.  Even where we have opinions in those domains, they are likely to be skewed by the 140 character limit.

Twitter (and by extension, companies that use their data like Topsy and DataSift) has a treasure trove of information, but people working on next generation recommendations and semantic search should realize that it is a small part of the overall puzzle given the above limitations.  The volume of information gives you a very precise measure of a very specific group of people’s opinions about very specific things, leaving out the vast majority of people’s opinions about the vast majority of things.  When you add in the bias introduced by analyzing 140 character natural language, there is a great deal of variance in recommendations that likely will have to be provided by other sources.

At Ranker, we have similar sampling issues, in that we collect much of our data at Ranker.com, but we are actively broadening our reach through our widget program, which now collects data on thousands of partner sites.  Our ranked-list methodology certainly has bias too, which we attempt to mitigate by combining voting and ranking data.  The key is not the volume of data, but the diversity of data, which helps mitigate the bias inherent in any particular sampling/data collection method.

Similarly, people using Twitter data would do well to consider issues of data diversity and not be blinded by large numbers of users and data points.  Certainly Twitter is bound to be a part of understanding consumer opinions, but the size of the dataset alone will not guarantee that it will be a central part.  Given these issues, either Twitter will start to diversify the ways that it collects consumer sentiment data or the best semantic search algorithms will eventually use Twitter data as but one narrowly targeted input of many.

– Ravi Iyer


Combining Preferences for Pizza Toppings to Predict Sales

The world’s most expensive pizza, auctioned for $4,200 as a charity gift in 2007, was topped with edible gold, lobster marinated in cognac, champagne-soaked caviar, smoked salmon, and medallions of venison. While most of us prefer (or can only afford to prefer) more humble ingredients, our preferences are similarly diverse.  Ranker has a Tastiest Pizza Toppings list that asks people to express their preferences. At the time of writing there are 29 re-ranks of this list, and a total of 64 different ingredients mentioned. Edible gold, by the way, is not one of them.

Equipped with this data about popular pizza toppings, we were interested in finding out whether pizzerias were actually selling the toppings that people say they want. We also wanted to see if we could predict sales for individual ingredients by looking at one list that combined all of the responses about pizza topping preferences. This “Ultimate List” contains all of the toppings that were listed in individual lists (known as re-ranks) and is ordered in a way that reflects how many times each ingredient was mentioned and where it ranked on individual lists. Many of the re-ranks only list a few ingredients, so it is fitting to combine lists and rely on the “wisdom of the crowd” to get a more complete ranking of the many possible ingredients.
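As a simple illustration of the idea (not the actual “Top-N” cognitive model mentioned later in this post), here is a Borda-style sketch that scores each topping by how often it appears and how high it is placed across incomplete re-ranks.  The re-ranks below are made up.

```python
from collections import defaultdict

def combine_partial_rankings(reranks, list_length=10):
    """Combine partial ranked lists into one order (a simple Borda-style sketch).

    Each re-rank is a list of toppings, best first, and may be incomplete.
    An item scores more the higher it is placed and the more lists mention it.
    This is only an illustration, not Ranker's actual aggregation model.
    """
    scores = defaultdict(float)
    for rerank in reranks:
        for position, topping in enumerate(rerank):
            scores[topping] += list_length - position   # higher placement = more points
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical re-ranks: short, incomplete, and not always in agreement.
reranks = [
    ["pepperoni", "cheese", "mushrooms"],
    ["cheese", "bacon", "pepperoni", "onion"],
    ["bacon", "pineapple"],
]
print(combine_partial_rankings(reranks))
```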

As a real-world test of how people’s preferences correspond to sales, we used Strombolini’s New York Pizzeria’s list of their top 10 best-selling ingredients. Pepperoni, cheese, sausage, and mushrooms topped the list, followed by pineapple, bacon, ham, shrimp, onion, and green peppers. All of these ingredients, save for shrimp, are included in the Ranker lists, so we considered the 9 overlapping ingredients and measured how close each user’s preference list was to the pizzeria’s sales list.

To compare lists, we used a standard statistical measure known as Kendall’s tau, which counts how many times we would need to swap one item for another (known as a pair-wise swap) before two lists are identical. A Kendall’s tau of zero means the two lists are exactly the same. The larger the Kendall’s tau value becomes, the further one list is from another.

The figure shows, using little stick people, the Kendall’s tau distances between users’ lists and Strombolini’s sales list. The green dot corresponds to a perfect tau of zero, and the red dot is the highest possible tau (if two lists are the exact opposite of each other). The dotted line is provided as a reference to show how likely each Kendall’s tau value is by chance (that is, how often different Kendall’s tau values occur for random lists of the ingredients). It is clear that there are large differences in how close individual users’ lists came to the sales-based list. It is also clear that many users produced rankings that were quite different from the sales-based list.

Using this model, the combined list came out to be: cheese, pepperoni, bacon, mushrooms, sausage, onion, pineapple, ham, and green peppers. This is a Kendall’s tau of 7 pair-wise swaps from the Strombolini list, as shown in the figure by the blue dot representing the crowd. This means the combined list is closer to the sales list than all but one of the individual users.
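As a quick check on that count, here is a minimal sketch that computes the number of pair-wise swaps between the two nine-ingredient lists described above.

```python
from itertools import combinations

def kendall_tau_distance(list_a, list_b):
    """Number of pair-wise swaps needed to turn one ranking into the other."""
    pos = {item: i for i, item in enumerate(list_b)}
    return sum(1 for x, y in combinations(list_a, 2) if pos[x] > pos[y])

sales = ["pepperoni", "cheese", "sausage", "mushrooms", "pineapple",
         "bacon", "ham", "onion", "green peppers"]       # Strombolini's list, shrimp removed
crowd = ["cheese", "pepperoni", "bacon", "mushrooms", "sausage",
         "onion", "pineapple", "ham", "green peppers"]   # combined Ranker list
print(kendall_tau_distance(crowd, sales))                # 7
```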

Our “wisdom of the crowd” analysis, combining all the users’ lists, used the same approach we previously applied to predicting celebrity deaths using Ranker data. It is a “Top-N” variant of the psychological approach developed in our work modeling decision-making and individual differences for ranking lists, and has the nice property of naturally incorporating individual differences.

This analysis is an initial example of a couple of interesting ideas. One is that it is possible to extract relatively complete information from a set of incomplete opinions provided by many people. The other is that this combined knowledge can be compared to, and possibly be predictive of, real-world ground truths, like whether more pizzas have bacon or green peppers on them.  It may never explain, however, why someone would waste champagne-soaked caviar as a pizza topping.
