Collecting and Connecting Millions of Opinions


Ranker Insights is the Most Precise Data for Entertainment, Personalities, Sports, Brands and More

Advertisers, Marketers, Publishers, Retailers and the Media now have the ability to target audiences with a level of granularity that wasn’t previously possible. Each votable Ranker poll collects data in a very specific context that allows us to differentiate people who like an actor’s talent versus those who are fans of their appearance, or who appreciate a college for its educational reputation or its sports prowess.

HOW RANKER COLLECTS DATA:

A leading digital media company for opinion-based, crowdsourced rankings on just about everything, Ranker has a platform built entirely for optimizing real-time data collected from constantly updated polls. A Quantcast top-100 site, Ranker has millions of visitors looking for crowdsourced answers. Ranker users vote on multiple items across multiple lists, allowing us to cross-correlate votes across items and domains. This has enabled Ranker to develop highly targeted, first-party audience segments and an opinion graph covering more than 50,000 items, connected by over 50 million affinity edges.

Ranker Insights:

  • Provides Connected Data – cross-vertical, psychographic profile built by relating opinion data
  • Collects quantitative data about qualitative attributes such as “Best,” “Attractive” and “Trustworthy”
  • Generates Interest Breakdown and Taste Correlations – data on what fans also like and dislike, including brand preferences
  • Knows Demographic Preferences by Gender, Age, Location and more.

Ranker Insights ensures you’re making the most informed decisions for your company, brand and clients based on real time consumer opinions.

  • Advertising/Marketing – Get higher engagement than ever before by knowing the real audiences to target and where to spend your budget
  • Social – Get significantly increased social engagement; our data generates 3-7x more reach than Facebook Insights
  • Pre-production/Casting – Use our data to make informed decisions on talent, storyline and marketing
  • Film, TV, Music, Talent, Food and Beverage, Lifestyle Brands – Stop guessing and see who your real fans are, and what else they truly care about
by Ranker Staff in Popular Lists

Why do Ranker voters think Ellen should be president?

Yesterday, Ellen talked about being voted #1 on our list of Celebrities Who Should Run for President.

What is it that makes a celebrity “president”-worthy?  Because Ranker polls about each person along dozens of dimensions (e.g. cool vs. hot vs. good actor vs. trustworthy vs. ?), we can see how ratings on other lists relate to being voted as someone who should run for president.  For example, below we can see that being seen as “cool” is only weakly related to being seen as presidential, with actors like Tom Hanks and Clint Eastwood scoring as relatively cool, but not relatively presidential.

[Chart: cool vs. presidential ratings]

Being good at your job seems to relate moderately to being seen as presidential.  For example, below you can see how being seen as a good actor positively relates to being seen as presidential, with people like Meryl Streep, Leonardo DiCaprio, and Morgan Freeman scoring well on both fronts.

[Chart: good actor vs. presidential ratings]

It also relates well to likability.  Below you can see how the men who people want to have a beer with, like Johnny Depp, Morgan Freeman, and DiCaprio, also tend to be people they rate well as potential presidential candidates.

[Chart: beer companion vs. presidential ratings]

It seems to relate best to trust, as people like Ellen, Meryl Streep, and Morgan Freeman are rated both as trustworthy and as someone who should run for president.  Notice how the items below form a fairly straight line going up and to the right.

[Chart: trustworthy vs. presidential ratings]
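The list-to-list comparisons above boil down to correlating item scores across two polls. A minimal sketch in Python, with made-up scores for illustration (the real analysis runs over Ranker's actual list data):

```python
# Correlate two Ranker lists' item scores to see how strongly one trait
# (e.g. "trustworthy") relates to another ("should run for president").
# All names and scores below are illustrative, not real Ranker data.

def pearson(xs, ys):
    """Pearson correlation between two equal-length score vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-celebrity scores on two lists (0-100 scale).
trustworthy = {"Ellen": 92, "Meryl Streep": 88, "Morgan Freeman": 90,
               "Tom Hanks": 85, "Clint Eastwood": 60}
presidential = {"Ellen": 95, "Meryl Streep": 80, "Morgan Freeman": 85,
                "Tom Hanks": 70, "Clint Eastwood": 55}

# Only celebrities appearing on both lists can be compared.
shared = sorted(set(trustworthy) & set(presidential))
r = pearson([trustworthy[c] for c in shared],
            [presidential[c] for c in shared])
print(f"trustworthy vs. presidential: r = {r:.2f}")
```

A strongly positive r corresponds to the "straight line going up and to the right" described above.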

In all, looking at the relationship between Ranker lists yields comparable results to what political scientists find drives evaluations of presidential candidates.  People want a president who is competent, likable, and trustworthy.  And clearly Ellen fits all three buckets as she ranks as one of the best comedians of all-time, someone people would want to have a beer with, and as trustworthy.  Hence, Ranker users vote her as the #1 Celebrity Who Should Run for President.

Ravi Iyer

by Ranker Staff in Popular Lists

Ranker Users Accurately Predict Final Four Teams with Limited Bias

In 2015, Ranker’s voters predicted seven teams in the NCAA tournament’s Elite Eight. With the field for the Sweet 16 now set, we can see how well our rankings can predict how far a particular team will go in this year’s tournament. This has been a historically tumultuous season of college basketball: top-10 teams lost regularly, upsets were commonplace, and no team was safe.

We can use Ranker’s data to see which teams are having a year that matches their historical reputation as powerhouses, and which are not.  Ranker visitors drew a clear line around North Carolina, Michigan State, Kansas and Villanova as favorites to make the Final Four.  Kentucky is notable because it ranks highest in the overall best college programs poll, but is not predicted by our voters to end up in the Final Four.  Villanova is the outlier in the other direction: it is not ranked among the top historical programs, yet is expected to have a strong tournament showing.

The rankings suggest that our voting data reflects the current season rather than a bias toward teams’ longstanding reputations.

 

Here are our results from the 2015 tournament:

[Chart: Ranker’s 2015 tournament predictions]

 

Here are our results for this year’s tournament:

[Chart: Ranker’s predictions for this year’s tournament]

 

 

by Ranker Staff in Popular Lists

Duke and Kentucky Among Teams with the Most Annoying Fans

With March Madness tipping off, we turn to Ranker’s voters to learn more about college basketball and what to expect in this year’s tournament!

Which college basketball fan base wears its pride best?  We all know the traditional powers in college basketball, but sometimes their gloating can be a bit much.  In two separate lists, Ranker visitors ranked which college basketball team was the best, and which had the most annoying fans.  When we combine these two lists, we can see which teams are best respected for their prowess on the court and how this relates to how annoying their fans are to the rest of the world.  As it happens, powerhouse bluebloods Duke and Kentucky rank among the top teams both for being historically successful and for having annoying fans.  The most successful team with only moderately annoying fans is North Carolina.  The least annoying fan base among still-respected teams belongs to Villanova.  Ohio State and Florida stand out for having annoying fans while not being particularly respected as programs overall.

 

[Chart: best programs vs. most annoying fans]

 

by Ranker Staff in Popular Lists

Combining Best and Worst Lists to find Polarizing TV Shows

Ranker lists are expressions of people’s opinions, and it is possible for people to have opposite views. The same movie, television show, song, or celebrity can be loved or hated by different groups of people. (If this is not immediately obvious, think about Donald Trump for a moment). Social psychology has long been interested in differences of opinion, and has gathered all sorts of evidence that people will take more extreme views in an argument (attitude polarization), that they will focus on evidence that reinforces what they already believe (confirmation bias), and that they tend to judge new items and experiences based on their previous knowledge (apperception).

Ranker can provide evidence of polarization, since people’s ranks can express different opinions about the same items. This polarization can be especially clear when looking at “best” and “worst” lists on the same general topic. At the moment, it is easy to imagine Donald Trump at the top of both a “Best Presidential Candidates” and a “Worst Presidential Candidates” list. About the only way to explain this pattern of opinions is to identify Trump as a polarizing person. He doesn’t lead to one opinion or attitude. He polarizes people into “lovers” and “haters”.

Previously, we have developed cognitive models to analyze Ranker lists as diverse as the Soccer World Cup, movie box office takings, and how people feel about pizza toppings. None of these models, however, allowed for polarization. The assumption has always been that each item was perceived in a similar way by everybody. So, we extended our cognitive modeling approach to allow for polarizing items, perceived by some users with a “positive spin” and by others with a “negative spin”.

Not wanting to give Trump any more publicity, we decided to test the new model by looking at people’s opinions of recent TV shows. The two lists we looked at were The Best New TV Series of 2015 and The Most Disappointing New TV Shows of 2015. Together these lists involve 22 users (17 on the best list and 5 on the worst list) ranking a total of 67 shows, with 14 shows appearing on both the best and worst lists. Some of the lists had as few as 3 shows, while others had as many as 27, with an average of about 9 shows per list.

Our new model assumes each TV show is represented in one of two ways. One possibility is that everybody has the same opinion, and the show is not polarizing. This means that if a TV show is good, for example, people put it high in their best list and low in their worst list, or don’t list it on their worst list at all. On the other hand, if a TV show is bad, people put it high in their worst list and low in their best list, or don’t mention it in their best list at all. The new possibility in our model is that a show is polarizing, and so some people believe it is good while others believe it is bad. These polarizing shows need two separate representations: one for the “lovers”, and one for the “haters”.
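As a rough intuition for what the model picks up (a deliberate simplification of the cognitive model, not the model itself, and with made-up rankings), a show that sits near the top of both lists is a natural polarization candidate:

```python
# Simplified heuristic sketch, not the full cognitive model: flag a show
# as a polarization candidate when it ranks near the top of both the
# "best" and the "worst" list. Rankings here are made up.

best = ["Better Call Saul", "Last Man on Earth", "Empire", "Ballers"]
worst = ["Backstrom", "Better Call Saul", "Empire", "Schitt's Creek"]

def top_positions(ranking, cutoff):
    """Map each show in the top `cutoff` positions to its 1-based rank."""
    return {show: i + 1 for i, show in enumerate(ranking[:cutoff])}

def polarization_candidates(best_list, worst_list, cutoff=3):
    """Shows appearing in the top `cutoff` of both lists."""
    b = top_positions(best_list, cutoff)
    w = top_positions(worst_list, cutoff)
    return sorted(set(b) & set(w))

print(polarization_candidates(best, worst))
# -> ['Better Call Saul', 'Empire']
```

The actual model goes further, inferring a separate "lover" and "hater" representation for each flagged show rather than just intersecting the lists.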

TVShowBlog

The model we created determined which shows were polarizing and which were not, and how each should be represented on a scale from best to worst. The results are summarized in the graph. The shows are listed from best at the top to worst at the bottom. If a show is not polarizing, it is listed once in gray. If a show is polarizing, it is listed twice: once in green for its positive form, and once in red for its negative form. The graph also summarizes the Ranker data that led to these conclusions. The green circles indicate when a show was included in the “best” list, starting from rank 1 on the left and moving to lower ranks on the right. The larger the area of the circle, the more people ranked the show in that position. The red crosses indicate when a show was included in the “worst” list, again starting from rank 1 on the left, and again with the size of the cross indicating how often it was ranked in that position.

It is clear from the figure that shows identified as polarizing — Better Call Saul, Empire, Ballers, Backstrom, and so on — generally were included in high positions on both the “best” and “worst” lists. Other shows are not polarizing: Last Man on Earth is consistently highly rated, and Schitt’s Creek seems to review itself with its name. A good question for the producers, marketers, and consumers of these TV shows is why some are polarizing. Better Call Saul, which is perhaps the most polarizing show in our results, is a nice example. It has a “lover” representation at the top of the overall list, and a “hater” representation near the bottom. One possibility is that the polarization arises because Better Call Saul was created as a spin-off prequel to Breaking Bad, and many people would argue that Breaking Bad is one of the greatest television series of all time (and we’d agree). We guess that the people who had a negative opinion of Better Call Saul were die-hard fans of Breaking Bad, and found it didn’t match their lofty expectations. On the other hand, people with positive opinions of Better Call Saul probably evaluated it largely independently of Breaking Bad, as a good new crime television series.

Whatever the causes of polarization, it seems clear that Ranker data provide useful measures, and we think our modeling approach can lead to deeper insights. Finding what is polarizing, and identifying the “lovers” and “haters” should apply not just to TV shows, but to rappers, directors, songs, and everything else where not everybody feels the same way about everything. There is lots for us to do. Or, as Donald Trump has it: “If you’re going to be thinking, you may as well think big.”

– Crystal Velazquez and Michael Lee

by Ranker Staff in Press Releases

New Poll by Ranker Reveals Top 10 Tequila Brands Based on more than 18,500 Votes

Ranker, the #1 online destination for crowdsourced rankings of everything, has released the results of its public poll asking voters to rank the Best Tequila Brands to determine their favorites in celebration of National Tequila Day (July 24).

Tequilas come in many varieties, and can have very distinct tastes depending on how they are produced. Whether you’re a connoisseur or just a casual drinker, everyone has their own opinion when it comes to Mexico’s signature liquor.

The poll was open to voters until July 21, 2015 and included 36 varieties to rank based on taste and preference. Don Julio dominated the #1 spot, ranking first across nearly all demographics. The Top 10 Tequilas, as determined by 18,567 votes from 6,031 voters on Ranker, are as follows:

1. Don Julio
2. Patron
3. 1800 Tequila
4. Cabo Wabo
5. Herradura
6. Corralejo
7. Tres Generaciones
8. Milagro
9. El Jimador
10. Cazadores

Ranker’s poll also reveals:

  • Millennials and Generation X voters like Patron.
  • Men and Baby Boomers prefer Casa Noble and 1800 Tequila.
  • Women’s preferences include Herradura and Tres Generaciones.
  • International and Northeastern voters ranked Casa Noble among their favorites.
  • West Coast voters picked Corralejo, Tres Generaciones and Cazadores among their top five.

About Ranker:
Ranker is the #1 online destination for broad, opinion-based, crowdsourced rankings of everything. The company’s technology is centered on user engagement, turning its lists into the “best possible rankings” based on the wisdom of crowds.

A Quantcast Top 150 site, Ranker attracts more than 20 million monthly unique visitors. As a result, Ranker has one of the world’s largest databases of opinions with more than 100 million votes gathered on 50,000 items. Follow Ranker on Facebook at facebook.com/Ranker and on Twitter @Ranker.

by Ranker Staff in Data, Data Science, Popular Lists

Applying Machine Learning to the Diversity within our Worst Presidents List

Ranker visitors come from a diverse array of backgrounds, perspectives and opinions.  The diversity of the visitors, however, is often lost when we look at the overall rankings of the lists, because the rankings reflect a raw average of all the votes on a given item, regardless of how voters behave on multiple other items.  It would be useful, then, to figure out more about how users are voting across a range of items, and to recreate some of the diversity inherent in how people vote on the lists.

Take, for instance, one of our most popular lists: Ranking the Worst U.S. Presidents, which has been voted on by over 60,000 people and comprises over half a million votes.

In this partisan age, it is easy to imagine that such a list would create some discord, and when we look at the average voting behavior of all the voters, the list itself has some inconsistencies.  For instance, the five worst-rated presidents alternate along party lines, which is unlikely to represent a historically accurate account of which presidents are actually the worst.  The result is a list that represents our partisan opinions about our nation’s presidents:

 

[Image: the overall Worst U.S. Presidents ranking]

 

The list itself provides an interesting glimpse of what happens when two parties collide in voting for the worst presidents, but we are missing interesting data that can inform us about how diverse our visitors are.  So how can we reconstruct the diverse groups of voters on the list such that we can see how clusters of voters might be ranking the list?

To solve this, we turn to a common machine learning technique referred to as “k-means clustering.”  K-means clustering summarizes each user’s voting data into a vector and then finds other users with similar voting patterns.  The k-means algorithm is not given any information whatsoever from me as the data scientist, and has no real idea what the data mean at all.  It simply looks at each Ranker visitor’s votes, finds people who vote similarly, and clusters the patterns according to the data itself.  K-means can be run with as many clusters as you like, and there are ways to determine how many clusters should be used.  Once the clusters are drawn, I re-rank the presidents for each cluster using Ranker’s algorithm, and then we can see how the different clusters ranked the presidents.
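The pipeline just described (votes in, clusters out, then a per-cluster re-ranking) can be sketched in a few lines. This is a toy stand-in: the voters are fabricated, and a simple mean vote substitutes for Ranker's actual re-ranking algorithm.

```python
# Toy sketch of the clustering pipeline. Each voter is a vector of votes
# on the "worst presidents" list (+1 = voted worst, -1 = downvote,
# 0 = no vote). Voters are fabricated, not real Ranker data, and mean
# vote stands in for Ranker's re-ranking algorithm.

PRESIDENTS = ["Obama", "G.W. Bush", "Nixon", "Harding", "Buchanan"]

# Three stylized voting blocs.
voters = (
    [[-1, 1, 1, 0, 0]] * 6 +   # bloc A: "worst" = Bush, Nixon
    [[1, -1, -1, 0, 0]] * 6 +  # bloc B: "worst" = Obama
    [[0, 0, 1, 1, 1]] * 4      # bloc C: votes on historical names
)

def dist(a, b):
    """Squared Euclidean distance between two vote vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    """Tiny Lloyd's algorithm; deterministic init from first k distinct points."""
    centers = []
    for p in points:
        if p not in centers:
            centers.append(p)
        if len(centers) == k:
            break
    for _ in range(iters):
        # Assign each voter to the nearest center, then move each center
        # to the mean of its assigned voters.
        labels = [min(range(k), key=lambda c: dist(p, centers[c])) for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

labels = kmeans(voters, k=3)
for c in sorted(set(labels)):
    members = [v for v, l in zip(voters, labels) if l == c]
    mean_vote = [sum(col) / len(members) for col in zip(*members)]
    # Higher mean "worst" vote = ranked worse by this cluster.
    ranking = sorted(PRESIDENTS, key=lambda p: -mean_vote[PRESIDENTS.index(p)])
    print(f"cluster {c}: {ranking}")
```

With real data the blocs are far noisier, which is why choosing the number of clusters (two vs. five below) changes what structure emerges.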

As it happens, there are some differences in how clusters of Ranker visitors voted on the list.  In a two-cluster analysis, we find two groups of people with almost completely opposite voting behavior.

(Note that since this is a list voting on the worst president, voters are not being asked to rank the presidents from best to worst; the rankings reflect how much worse each president is considered compared to the others.)

The k-means analysis found one cluster that appears to think Republican presidents are worst:

[Image: Cluster 1 rankings]

Here is the other cluster, with opposite voting behavior:

[Image: Cluster 2 rankings]

In this two-cluster analysis, the shape of the data is pretty clear, and fits our preconceived picture of how partisan voters might behave on the list.  But there is a bias toward recent presidents, and the lists do not mimic academic lists and polls ranking the worst presidents.

To explore the data further, I used a five cluster analysis–in other words, looking for five different types of voters in the data.

Here is what the five cluster analysis returned:

[Image: five-cluster rankings]

The results show a little more diversity in how the clusters ranked the presidents.  Again, we see some clusters that are more or less voting along party lines based on recent presidents (Clusters 5 and 4).  Clusters 1 and 3 are also interesting in that the algorithm seems to be picking up clusters of visitors who are voting for people who have not been president (Hillary Clinton, Ben Carson), or who thankfully never were (Adolf Hitler).  Clusters 2 and 3 are most interesting to me, however, as they show a greater resemblance to academic lists of the worst presidents (for reference, see Wikipedia’s rankings of presidents), but with a more historical bent on how we think of these presidents; I think of this as a more informed partisanship.

By understanding the diverse sets of users that make up our crowdranked lists, we are able to improve our overall rankings and provide a more nuanced understanding of how different group opinions compare, beyond the demographic groups we currently expose on our Ultimate Lists.  Such analyses help us identify outliers and agenda pushers in the voting patterns, as well as allowing us to rebalance our sample to make lists that more closely resemble a national average.

– Glenn Fox

 

 

by Ranker Staff in Data Science, Popular Lists, Rankings

In Good Company: Varieties of Women We Would Like to Drink With


They say you’re defined by the company you keep.  But how are you defined by the company you want to keep?

The list “Famous Women You’d Want to Have a Beer With”  provides an interesting way to examine this idea.  In other words, how people vote on this list can define something about what kind of person is doing the voting.

We can think of people as having many traits, or dimensions, and voters will rank highest the celebrities who stand out on the dimensions that matter most to them.  For instance, some people may rank the list thinking about how funny the person is, and so may be more inclined to rate comedians higher than dramatic actresses.  Others may vote purely on attractiveness, or on singing talent, and so on.  It may be that some people rank comedians and singers in a certain way, whereas others would only spend time with models and actresses.  By examining how people rank the various celebrities along these dimensions, we can learn something about the people doing the voting.

The rankings on the site, however, are based on the sum of all of the voters’ behavior on the list, so the final rankings do not tell us how certain types of people are voting.  While we could manually sort the celebrities according to their traits (putting comedians with comedians, singers with singers), we would risk using our own biases to put voters into categories where they do not naturally belong.  It would be much better to let the voters’ own votes decide how the celebrities should be clustered.  To do this, we can use techniques from machine learning, called clustering algorithms, to let a computer examine the voting patterns and tell us which patterns are similar across voters.  In other words, we use an algorithm to find patterns in the voting data, put similar patterns together into groups of voters, and then examine how the different groups of voters ranked the celebrities.  How each group ranked the celebrities tells us something about the group, and about the type of people they would like to have keep them company.
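The first step described above, turning each visitor's votes into a comparable pattern, amounts to building one fixed-length vector per voter. A minimal sketch with made-up voters and vote events:

```python
# Turn raw (voter, item, vote) events into one fixed-length vector per
# voter: the input a clustering algorithm needs. Voters and votes below
# are fabricated for illustration.

raw_votes = [
    ("u1", "Ellen DeGeneres", +1),
    ("u1", "Carol Burnett", +1),
    ("u2", "Emma Stone", +1),
    ("u2", "Carol Burnett", -1),
    ("u3", "Ellen DeGeneres", +1),
]

# Fix a consistent column order over every celebrity seen in the data.
items = sorted({item for _, item, _ in raw_votes})

def vote_vectors(raw_votes, items):
    """One row per voter: +1 up, -1 down, 0 if the voter skipped the item."""
    rows = {}
    for voter, item, vote in raw_votes:
        rows.setdefault(voter, [0] * len(items))[items.index(item)] = vote
    return rows

vectors = vote_vectors(raw_votes, items)
print(items)
print(vectors["u1"])
```

Because every voter's row shares the same column order, any off-the-shelf clustering algorithm can then measure how similar two voters' patterns are.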

As it happens, using this approach actually finds unique clusters, or groups, in the voting data, and we can then guess for ourselves how the voters from each group can be defined based on the company they wish to keep.

Here are the results:

Cluster 1:

[Image: Cluster 1 rankings]

Cluster 1 includes women known for being funny, including established comedians like Carol Burnett and Ellen DeGeneres.  What is interesting is that Emma Stone and Jennifer Lawrence are also included: while both are highly ranked on lists based on physical attractiveness, they also have a reputation for being funny, and the clustering algorithm is showing us that they are often categorized alongside other funny women.  Among the clusters, this one has the highest proportion of female voters, which may explain why the celebrities are ranked along dimensions other than attractiveness.

 

Cluster 2:

[Image: Cluster 2 rankings]

Cluster 2 appears to consist of celebrities more in the nerdy camp, such as Yvonne Strahovski and Morena Baccarin, both of whom play roles on shows popular with science fiction fans.  At the bottom of this list we see something of a contrarian streak as well, with downvotes handed out to some of the best-known celebrities who rank highly on the list overall.

Cluster 3:

[Image: Cluster 3 rankings]

Cluster 3 is a bit more of a puzzle.  The celebrities tend to be a bit older, and come from a wide variety of backgrounds that are less known for a single role or attribute.  This cluster could be basing its votes more on a celebrity’s degree of uniqueness, which stands in contrast with the bottom-ranked celebrities, who are among the most common and regularly listed female celebrities on Ranker.

Cluster 4:

[Image: Cluster 4 rankings]

We would also expect a list such as this to be heavily correlated with physical attractiveness, or perhaps with a celebrity’s role as a model.  Cluster 4 is perhaps the best example of this, and likely represents our youngest cluster.  The top-ranked women are from the entertainment sector and are known for their looks, whereas the bottom-ranked people are from politics or comedy, or are older and probably less well known to the younger voters.  As we might expect, this cluster also has a high proportion of younger voters.

Here is the list of the top and bottom ten for each cluster (note that the order within these lists is not particularly important, since the celebrities’ scores will be very close to one another):

[Table: top and bottom ten celebrities per cluster]

 

 

In the end, the adage that we are defined by the company we keep appears to have some merit, and that company can be detected with machine learning approaches.  Though the split among the groups is not perfect, each cluster shows trends that draw its members together.  We are using these approaches to help improve the site and to provide better content to our visitors.

 

–Glenn R. Fox, PhD

 

 

by Ranker Staff in About Ranker

Ranker is the YouTube of Opinions for Influencers

YouTube has created stars who are more popular than their mainstream counterparts and who leverage their videos into millions of dollars in annual revenue.  The success of many popular social media channels is based on providing opinions about topics like toys, video games, or outfits, whether on YouTube, Twitter, or Instagram.  Most influencers have a presence on multiple online media channels, and Ranker offers a unique way to broadcast opinions, via the ranked list, which would otherwise be an awkward fit for existing channels.  With over 20 million unique visitors each month consuming Ranker lists, Ranker is a unique platform for extending an influencer’s online presence.  Here are a few specific ways that an influencer can leverage Ranker into even greater influence.

1) The simplest use of Ranker is to post a ranked list of your opinions.  Where else could the Iron Sheik post his 8 favorite places in New York or 5 biggest jabronis?  For far less effort than would be necessary for a YouTube video or Tumblr post, and in a far easier and more cohesive format to digest than a series of Twitter or Instagram posts, influencers can post a ranked list of their favorite musicians or most ridiculous movie scenes.

2) Many influencers are actually items on specific Ranker lists, and a great community-building exercise is to ask one’s audience to help influence the list, which also implicitly tells your audience about the quality of your work and explicitly tells other visitors to that list that you should indeed be ranked higher.  Examples include: Fans of Outlander working together to move Outlander up on our list of current tv shows or Ice T promoting his ranking on Best West Coast Rappers.

3) Lastly, fans often appreciate that you care about their opinions and so another unique way to use Ranker is to ask your fans a specific question, which naturally generates a ton of engagement, data, and comments.  For example, you can see here how Tim Howard generates organic engagement by asking his fans who they think is the Best Soccer Player of All-time.

As the world’s biggest source of crowdsourced opinions, Ranker is a natural place for influencers to make their opinions known, promote positive opinions about the influencers themselves, and solicit their fans’ opinions, and we would especially love to work with influencers who would like to take advantage of our unique platform in these ways.

– Ravi Iyer

by Ranker Staff in Popular Lists

Tracking Votes to Measure Changing Opinions

A key part of any Ranker list is the set of votes associated with each item, counting how often users have given that item the “thumbs up” or “thumbs down”. These votes measure people’s opinions about politics, movies, celebrities, music, sports, and all of the other issues Ranker lists cover.

A natural question is how the opinions that votes measure relate to external assessments. As an example, we considered The Most Dangerous Cities in America list. Forbes magazine lists the top 10 as Detroit, St. Louis, Oakland, Memphis, Birmingham, Atlanta, Baltimore, Stockton, Cleveland, and Buffalo.

The graph below shows the proportion of up-votes, evolving over time through the end of last year, for all of the cities voted on by Ranker users. Eight of the cities on Forbes’ list are included, and are highlighted. They are all in the top half of the list, and Detroit is correctly placed as the overall worst city. Only Stockton and Buffalo, at positions 8 and 10 on the Forbes list, are missing. There is considerable agreement between the expert opinion from Forbes’ analysis and the voting patterns of Ranker users.

[Chart: up-vote proportions over time for the Most Dangerous Cities in America list]
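The curves in a chart like this amount to a running proportion of up-votes per item as vote events arrive in time order. A minimal sketch with fabricated votes:

```python
# For one list item, track the running proportion of up-votes as vote
# events arrive in time order. The vote stream below is fabricated,
# not real Ranker data.

def upvote_proportion_over_time(votes):
    """votes: +1/-1 events in time order -> running share of up-votes."""
    ups = 0
    series = []
    for i, v in enumerate(votes, start=1):
        if v > 0:
            ups += 1
        series.append(ups / i)
    return series

# A fabricated vote stream for one city on the list.
city_votes = [+1, +1, -1, +1, +1, +1, -1, +1]
print([round(p, 2) for p in upvote_proportion_over_time(city_votes)])
```

Plotting one such series per item, against vote timestamps, gives exactly the kind of evolving-opinion curves discussed here; sudden jumps or drops flag opinion shifts worth investigating.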

Because Ranker votes are recorded as they happen, they can potentially also track changes in people’s opinions. To test this possibility, we turned to a pop-culture topic that has generated a lot of votes. The Walking Dead is the most watched drama series telecast in basic cable history, with 17.3 million viewers tuning in to watch the season 5 premiere. With such a large fan base of zombie lovers and characters regularly dying left and right, there is a lot of interest in The Walking Dead Season 5 Death Pool list.

The figure below shows the pattern of change in the proportion of up-votes for the characters in this list, and highlights three people. Gareth had been one of the main antagonists and survivors on the show. His future became unclear in an October 13th episode, in which Rick vowed to kill Gareth with a machete and Gareth, undeterred, simply laughed at the threat. Two episodes later, on October 26th, Rick fulfilled his promise and killed Gareth using the machete. While Gareth apparently did not take the threat seriously, the increase in up-votes for Gareth during this time makes it clear many viewers did.

[Chart: up-vote proportions over time for The Walking Dead Season 5 Death Pool list]

A second highlighted character, Gabriel, is a priest introduced in the latest season, in the October 19th episode. Upon his arrival, Rick expressed his distrust of the priest and threatened that, if Gabriel’s own sins end up hurting Rick’s family, it will be Gabriel who faces the consequences. Since Rick is a man of many sins himself, the threat seems to be real. Ranker voters agree, as shown by the jump in up-votes around mid-October, coinciding with Gabriel’s arrival on the show.

The votes also sometimes tell us who has a good chance of surviving. Carol Peletier had been a mainstay in the season, but was kidnapped in the October 19th episode and did not appear in the following episode. She briefly appeared in the subsequent episode, only to be rendered unconscious. Despite the ambiguity surrounding her survival, her proportion of up-votes decreased significantly, perhaps driven by another character’s mention of her, which provided a sort of “spoiler” hinting at her survival.

While these two examples are just suggestive, the enormous number of votes made by Ranker users, and the variety of topics they cover, make the possibility of measuring opinions, and detecting and understanding changes in opinion, an intriguing one. If there were a list of “Research Uses for Ranker Data”, we would give this item a clear thumbs up.

– Emily Liu & Michael Lee