A Ranker Opinion Graph of the Domains of the World of Comedy

One unique aspect of Ranker data is that people rank a wide variety of lists, allowing us to look at connections beyond the scope of any individual topic.  We compiled data from all of the lists on Ranker with the word “funny” to get a bigger picture of the interconnected world of comedy.  Using Gephi layout algorithms, we were able to create an Opinion Graph which categorizes comedy domains and identifies points of intersection between them (click to make larger).


In the following graphs, colors indicate different comedic categories that emerged from a cluster analysis, and the connecting lines indicate correlations between different nodes with thicker lines indicating stronger relationships.  Circles (or nodes) that are closest together are most similar.  The classification algorithm produced 7 comedy domains:


American TV Shows and Characters: 26% of comedy, central nodes = It’s Always Sunny in Philadelphia, ALF, The Daily Show, Chappelle’s Show, and Friends.

Contemporary Comedians on American Television: 25% of nodes, includes Dave Chappelle, Eddie Izzard, Ricky Gervais, Billy Connolly, and Bill Hicks.


Classic Comedians: 15% of comedy, central nodes = John Cleese, Eric Idle, Michael Palin, Charlie Chaplin, and George Carlin.

Classic TV Shows and Characters: 14% of comedy, central nodes = The Muppet Show, Monty Python’s Flying Circus, In Living Color, WKRP in Cincinnati, and The Carol Burnett Show.

British Comedians: 9% of comedy, central nodes = Rowan Atkinson, Jennifer Saunders, Stephen Fry, Hugh Laurie, and Dawn French.

Animated TV Shows and Characters: 9% of comedy, central nodes = South Park, Family Guy, Futurama, The Simpsons, and Moe Szyslak.

Classic Comedy Movies: 1.5% of comedy, central nodes = National Lampoon’s Christmas Vacation, Ghostbusters, Airplane!, Vacation, and Caddyshack.


Clusters that are the most similar (most overlap/closest together):

  • Classic TV Shows and Contemporary TV Shows
  • British Comedians and Classic TV shows
  • British Comedians and Contemporary Comedians on American Television
  • Animated TV Shows and Contemporary TV Shows

Clusters that are the most distinct (least overlap/furthest apart):

  • Classic Comedy Movies do not overlap with any other comedy domains
  • Animated TV Shows and British Comedians
  • Contemporary Comedians on American Television and Classic TV Shows


Take a look at our follow-up post on the individuals who connect the comedic universe.

– Kate Johnson


Why Topsy/Twitter Data may never predict what matters to the rest of us

Recently Apple paid a reported $200 million for Topsy and some speculate that the reason for this purchase is to improve recommendations for products consumed using Apple devices, leveraging the data that Topsy has from Twitter.  This makes perfect sense to me, but the utility of Twitter data in predicting what people want is easy to overstate, largely because people often confuse bigger data with better data.  There are at least 2 reasons why there is a fairly hard ceiling on how much Twitter data will ever allow one to predict about what regular people want.

1.  Sampling – Twitter has a ton of data, with daily usage of around 10%.  Sample size isn’t the issue here as there is plenty of data, but rather the people who use Twitter are a very specific set of people.  Even if you correct for demographics, the psychographic of people who want to share their opinion publicly and regularly (far more people have heard of Twitter than actually use it) is way too unique to generalize to the average person, in the same way that surveys of landline users cannot be used to predict what psychographically distinct cellphone users think.

2. Domain Comprehensiveness – The opinions that people share on Twitter are biased by the medium, such that they do not represent the spectrum of things many people care about.  There are tons of opinions on entertainment, pop culture, and links that people want to promote, since they are easy to share quickly, but very little information on people’s important life goals or the qualities we admire most in a person or anything where people’s opinions are likely to be more nuanced.  Even where we have opinions in those domains, they are likely to be skewed by the 140 character limit.

Twitter (and by extension, companies that use their data like Topsy and DataSift) has a treasure trove of information, but people working on next generation recommendations and semantic search should realize that it is a small part of the overall puzzle given the above limitations.  The volume of information gives you a very precise measure of a very specific group of people’s opinions about very specific things, leaving out the vast majority of people’s opinions about the vast majority of things.  When you add in the bias introduced by analyzing 140 character natural language, there is a great deal of variance in recommendations that likely will have to be provided by other sources.

At Ranker, we have similar sampling issues, in that we collect much of our data at Ranker.com, but we are actively broadening our reach through our widget program, which now collects data on thousands of partner sites.  Our ranked list methodology certainly has bias too, which we attempt to mitigate through combining voting and ranking data.  The key is not in the volume of data, but rather in the diversity of data, which helps mitigate the bias inherent in any particular sampling/data collection method.

Similarly, people using Twitter data would do well to consider issues of data diversity and not be blinded by large numbers of users and data points.  Certainly Twitter is bound to be a part of understanding consumer opinions, but the size of the dataset alone will not guarantee that it will be a central part.  Given these issues, either Twitter will start to diversify the ways that it collects consumer sentiment data or the best semantic search algorithms will eventually use Twitter data as but one narrowly targeted input of many.

– Ravi Iyer


Combining Preferences for Pizza Toppings to Predict Sales

The world’s most expensive pizza, auctioned for $4,200 as a charity gift in 2007, was topped with edible gold, lobster marinated in cognac, champagne-soaked caviar, smoked salmon, and medallions of venison. While most of us prefer (or can only afford to prefer) more humble ingredients, our preferences are similarly diverse.  Ranker has a Tastiest Pizza Toppings list that asks people to express their preferences. At the time of writing there are 29 re-ranks of this list, and a total of 64 different ingredients mentioned. Edible gold, by the way, is not one of them.

Equipped with this data about popular pizza toppings, we were interested in finding out if pizzerias were actually selling the toppings that people say that they want. We also wanted to see if we could predict sales for individual ingredients by looking at one list that combined all of the responses about pizza topping preferences. This “Ultimate List” contains all of the toppings that were listed in individual lists (known as re-ranks) and is ordered in a way that reflects how many times each ingredient was mentioned and where it ranked on individual lists. Many of the re-ranks only list a few ingredients, so it is fitting to combine lists and rely on the “wisdom of the crowd” to get a more complete ranking of many possible ingredients.

As a real-world test of how people’s preferences correspond to sales, we used Strombolini’s New York Pizzeria’s list of their top 10 selling ingredients. Pepperoni, cheese, sausage and mushrooms topped the list, followed by: pineapple, bacon, ham, shrimp, onion, and green peppers. All of these ingredients, save for shrimp, are included in the Ranker lists so we considered the 9 overlapping ingredients and measured how close each user’s preference list was to the pizzeria’s sales list.

To compare lists, we used a standard statistical measure known as Kendall’s tau, which counts how many times we would need to swap one item for another (known as a pair-wise swap) before two lists are identical. A Kendall’s tau of zero means the two lists are exactly the same. The larger the Kendall’s tau value becomes, the further one list is from another.
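That pair-wise swap count can be computed directly by checking every pair of items. A minimal sketch in Python, with the ingredient ordering used only for illustration:

```python
from itertools import combinations

def kendall_tau_distance(rank_a, rank_b):
    """Count the pair-wise swaps needed to turn one ranking into the other."""
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    discordant = 0
    for x, y in combinations(rank_a, 2):
        # A pair is discordant if the two rankings order it differently.
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0:
            discordant += 1
    return discordant

sales = ["pepperoni", "cheese", "sausage", "mushrooms"]
print(kendall_tau_distance(sales, sales))        # identical lists -> 0
print(kendall_tau_distance(sales, sales[::-1]))  # full reversal -> 6, the max for 4 items
```

For n items the distance ranges from 0 (identical) to n(n-1)/2 (exact opposites), which is what the green and red dots in the figure mark.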

The figure shows, using little stick people, the Kendall’s tau distances between users’ lists, and the Strombolini’s sales list. The green dot corresponds to a perfect tau of zero, and the red dot is the highest possible tau (if two lists are the exact opposite of the other). The dotted line is provided as a reference to show how likely each Kendall’s tau value is by chance (that is, how often different Kendall’s tau values occur for random lists of the ingredients). It is clear that there are large differences in how close individual users’ lists came to the sales-based list. It is also clear that many users produced rankings that were quite different from the sales-based list.

Using this model, the combined list came out to be: cheese, pepperoni, bacon, mushrooms, sausage, onion, pineapple, ham, and green peppers. This is a Kendall’s tau of 7 pair-wise swaps from the Strombolini list, as shown in the figure by the blue dot representing the crowd. This means the combined list is closer to the sales list than all but one of the individual users.

Our “wisdom of the crowd” analysis, combining all the users’ lists, used the same approach we previously applied to predicting celebrity deaths using Ranker data. It is a “Top-N” variant of the psychological approach developed in our work modeling decision-making and individual differences for ranking lists, and has the nice property of naturally incorporating individual differences.
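The actual model is the Thurstonian “Top-N” one described above; purely to illustrate the idea of combining incomplete preference lists into one ranking, here is a much simpler Borda-style sketch (the toppings, lists, and scoring rule are illustrative assumptions, not the model from our paper):

```python
def aggregate_top_n(lists, all_items):
    """Combine partial ("Top-N") rankings into one list, Borda-style.

    Ranked items earn points by position; items a user omitted split the
    leftover points evenly, i.e. they are treated as tied at the bottom.
    """
    scores = {item: 0.0 for item in all_items}
    n = len(all_items)
    for ranking in lists:
        for rank, item in enumerate(ranking):
            scores[item] += n - rank  # higher positions earn more points
        unranked = set(all_items) - set(ranking)
        if unranked:
            leftover = sum(range(1, n - len(ranking) + 1))
            for item in unranked:
                scores[item] += leftover / len(unranked)
    return sorted(all_items, key=lambda item: -scores[item])

combined = aggregate_top_n(
    [["pepperoni", "cheese"], ["cheese", "bacon"]],
    ["pepperoni", "cheese", "bacon", "ham"],
)
print(combined)  # -> ['cheese', 'pepperoni', 'bacon', 'ham']
```

The Thurstonian approach improves on this by modeling individual differences and uncertainty, rather than treating every list as equally reliable.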

This analysis is a beginning example of a couple of interesting ideas. One is that it is possible to extract relatively complete information from a set of incomplete opinions provided by many people. The other is that this combined knowledge can be compared to, and possibly be predictive of, real-world ground truths, like whether more pizzas have bacon or green peppers on them.  It may never begin to explain, however, why someone would waste champagne-soaked caviar on pizza, as a topping.

Why We Still Play Board Games: An Opinion Graph Analysis

It’s hard reading studies about people my age when research scientists haven’t agreed upon a term for us yet. In one study I’m a member of “Gen Y” (lazy), in another I’m from the “iGeneration” (Orwellian), or worse still, a “Millennial” (…no). You beleaguered and cynical 30-somethings had things easy with the “Generation X” thing. Let the record reflect that no one from my generation is even remotely okay with any of these terms. Furthermore, we all collectively check out whenever we hear the term “aughties”.

I’m whining about the nomenclature only because there’s a clear need for distinction between my generation and those who have/will come before/after us. This isn’t just from a cultural standpoint (although calling us “Generation Spongebob” might be the most ubiquitous touchstone you could get), but from a technical one. If this Kaiser Family Foundation study is to be believed (via NYT), 8-18 year olds today are the first to spend the majority of their waking hours interacting with the internet.

Yet despite this monumental change, there are still many childhood staples that have not been forsaken by an increasingly digital generation. One of the most compelling examples of this anomaly lies in board games. In a day and age where Apple is selling two billion apps a month (Apple), companies peddling games for our increasingly elusive away-from-keyboard time are still holding their own. For example, Hasbro’s board-and-card game based revenue grew to $1.19 billion over the course of the last fiscal year (a 2% gain over the prior year).

What drove this growth? Hasbro’s earnings reports primarily attribute it to three products: Magic: The Gathering, Twister, and Battleship. All of these products have been mainstays of their line-up for quite some time (prepare to feel old: if Magic: The Gathering was a child, it could buy booze this year), so what’s compelling people to keep buying? Fortunately, Ranker has some pretty in-depth data on all of these products, based on people who vote on its best board games list, which receives thousands of opinions each month, as well as voting on other Ranker lists.

Twister’s continued sales were the easiest to explain: users who expressed interest in the game were most likely to be a fan of other board games (Candy Land, Chutes and Ladders, Monopoly and so forth). Twister also correlated with many other programs/products with fairly universal appeal (Friends, Gremlins). This would seem to indicate that the chief reason for Twister’s continued high sales lies in its simplicity and ubiquity. The game is a cultural touchstone for that reason: more than any other game on the list, it’s the one hardest to picture a childhood without.

Battleship’s success lies in the same roots: our data shows great overlap between fans of the game and fans of Mouse Trap, Monopoly, etc. But Battleship has attracted fans of a different stripe: interest in films such as Doom, Independence Day, and Terminator was highly correlated with the game. In all likelihood, this is due to the recent silver-screen adaptation of the game. Although the movie only fared modestly within the United States, the film clearly did propel the game back into the public consciousness, which translated nicely into sales.

Finally, Magic: The Gathering’s success came from support of another nature. Interest in Magic correlated primarily with other role-play and strategy games (Settlers of Catan, Dominion, Heroscape). Simply put, most fans of Magic are likely to enjoy other traditionally “nerdy” games. The large correlation overlap between Magic and other role-playing games is a testament to how voraciously this group consumes these products.

The crowd-sourced information we have here neatly divides the consumers of each game into three pools. With this sort of individualized knowledge, targeting and marketing to each archetype of consumer is a far easier task.

– Eamon Levesque


Recent Celebrity Deaths as Predicted by the Wisdom of Ranker Crowds

At the end of each year, there are usually media stories that compile lists of famous people who have passed away. These lists usually cause us to pause and reflect. Lists like Celebrity Death Pool 2013 on Ranker, however, give us an opportunity to make (macabre) predictions about recent celebrity deaths.

We were interested in whether “wisdom of the crowd” methods could be applied to aggregate the individual predictions. The wisdom of the crowd is about making more complete and more accurate predictions, and both completeness and accuracy seem relevant here. Being complete means building an aggregate list that identifies as many celebrity deaths as possible. Being accurate means, in a list where only some predictions are borne out, placing those who do die near the top of the list.

Our Ranker data involved the lists provided by a total of 27 users up until early in 2013. (Some of them were done after at least one celebrity, Patti Page, had passed away, but we thought they still provided useful predictions about other celebrities). Some users predicted as many as 25 deaths, while some made a single prediction. The median number of predictions was eight, and, in total, 99 celebrities were included in at least one list. At the time of posting, six of the 99 celebrities have passed away.

One way to measure how well a user made predictions is to work down their list, keeping track of every time they correctly predicted a recent celebrity death. This approach to scoring is shown for all 27 users in the graph below. Each blue circle corresponds to a user, and represents their final tally. The location of the circle on the x-axis corresponds to the total length of their list, and the location on the y-axis corresponds to the total number of correct predictions they made. The blue lines leading up to the circles track the progress for each user, working down their ranked lists. We can see that the best any user did was predict two of the current six deaths, and most users currently have zero or one correct predictions in their list.
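That scoring procedure is just a cumulative hit count computed while walking down a ranked list. For instance, with hypothetical names and outcomes:

```python
def hit_curve(predicted_list, actual_deaths):
    """Cumulative correct predictions while working down a ranked list."""
    hits, curve = 0, []
    for name in predicted_list:
        hits += name in actual_deaths
        curve.append(hits)
    return curve

# Hypothetical four-name list where the 2nd and 4th predictions came true:
print(hit_curve(["A", "B", "C", "D"], {"B", "D"}))  # -> [0, 1, 1, 2]
```

Plotting each user's curve against list position gives exactly the blue lines in the graph.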

To try and find some wisdom in this crowd of users, we applied an approach to combining rank data developed as part of our general research into human decision-making, memory, and individual differences. The approach is based on classic models in psychology that go all the way back to the work of Thurstone in 1931, but has some modern tweaks. Our approach allows for individual differences, and naturally identifies expert users, upweighting their opinions in determining the aggregated crowd list. A paper describing the nuts and bolts of our modeling approach can be found here (but note we used a modified version for this problem, because users only provide their “Top-N” responses, and they get to choose N, which is the length of their list).

The net result of our modeling is a list of all 99 celebrities, in an order that combines the rankings provided by everybody. The top 5 in our aggregated list, for the morbidly curious, are Hugo Chavez (already a correct prediction), Fidel Castro, Zsa Zsa Gabor, Abe Vigoda, and Kirk Douglas. We can assess the wisdom of the crowd in the same way we did individuals, by working down the list, and keeping track of correct predictions. This assessment is shown by the green line in the graph below. Because the list includes all 99 celebrities, it will always find the six who have already recently passed away, and the names of those celebrities are shown at the top, in the place they occur in the aggregated list.

Recent Celebrity Deaths and Predictions

The interesting part of assessing the wisdom of the crowd is how early in the list it makes correct predictions about recent celebrity deaths. Thus, the more quickly the green line goes up as it moves to the right, the better the predictions of the crowd. From the graph, we can see that the crowd is currently performing quite well, and is certainly above the “chance” line, represented by the dotted diagonal. (This line corresponds to the average performance of a randomly-ordered list).

We can also see that the crowd is performing as well as, or better than, all but one of the individual users. Their blue circles are shown again along with crowd performance. Circles that lie above and to the left of the green line indicate users outperforming the crowd, and there is only one of these. Interestingly, predicting celebrity deaths by using age, and starting with the oldest celebrity first, does not perform well. This seemingly sensible heuristic is assessed by the red line, but is outperformed by the crowd and many users.

Of course, it is only May, so the predictions made by users on Ranker have time to be borne out. Our wisdom of the crowd predictions are locked in, and we will continue to update the assessment graphs.

– Michael Lee


A Look Inside the Ranker Data Tool

You may have looked through some of the more fascinating, insightful posts here on the Ranker Data Blog and thought… how can he possibly come up with some of these connections?

Well, to be perfectly fair, the Ranker data tool does a lot of the heavy lifting. It allows me to quickly look through topics that have received a lot of up or down votes on Ranker, and make quick comparisons to other topics easily.

And here’s a quick look at how it all works…

We start by picking a general category we want and a specific item (or “node” in this case) from that topic. So under the category of TV, I’m going to pick the item “Boardwalk Empire.”

Now the tool knows that I only want to look at people who voted on “Boardwalk Empire.” The next step involves the tool looking for correlations – that is, relationships between “Boardwalk Empire” votes and other votes cast on Ranker. I could compare votes cast for or against “Boardwalk Empire” with votes cast on pretty much any other subject – films, foods, people, gadgets… you name it. Sometimes, this can be very interesting, as in this post, where we correlated people’s taste in breakfast cereals vs. films and tv shows.

But for the sake of explanation, let’s look at a more direct comparison, which usually yields more interesting results. So we’ll compare votes on “Boardwalk Empire” to votes on other TV shows, to see how well we can predict what fans of HBO’s Prohibition drama might also enjoy on the tube.

The results are pretty standard, and really show off exactly what the tool can do. When searching “Boardwalk Empire” correlated with other TV shows, here’s what I see:

Those percentages to the right represent what we call the “Lift %,” which basically just means “how much more likely is a “Boardwalk Empire” fan to enjoy X show, over a random person who does not have an opinion about Boardwalk Empire”? I’d ask Ravi to explain it to you directly, but his answer would likely involve fractals, and I don’t want to put you through that.

Trust me on this part, though… The higher the Lift %, the MORE likely a “Boardwalk Empire” fan will also enjoy whatever show we’re discussing.
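Put another way, lift compares the rate at which “Boardwalk Empire” fans upvote a show to the overall base rate for that show. A minimal sketch, with made-up counts rather than real Ranker data:

```python
def lift_percent(n_both, n_fans, n_target, n_total):
    """Lift of liking a target show given a base-show upvote, as a percent.

    lift = P(target | base fan) / P(target overall), reported as % change.
    """
    p_given_fan = n_both / n_fans   # share of base-show fans who like the target
    p_overall = n_target / n_total  # base rate among all voters
    return (p_given_fan / p_overall - 1) * 100

# Made-up counts: 400 of 1,000 Boardwalk Empire fans upvoted The Shield,
# versus 2,000 of 20,000 voters overall.
print(round(lift_percent(400, 1000, 2000, 20000), 1))  # -> 300.0
```

A lift of 0% would mean fans behave just like everyone else; negative lift (as with the Olivia Munn cases mentioned later in this blog) means fans are less likely than average to upvote the target.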

Keeping that in mind, most of the results seem fairly predictable and straightforward. A “Boardwalk Empire” fan would naturally be likely to enjoy “The Shield” or “The Killing,” two different hard-edged crime dramas with occasionally similar themes. Similarly, “Deadwood” seems an obvious fit – both are violent HBO series exploring crime in different periods of American history. In fact, there are really only two outliers that make this list kind of compelling… What the hell are “Thundercats” and “Police Squad!” doing there?

There’s probably a very reasonable explanation for this. Maybe a big chunk of people went to the “Boardwalk Empire” page and then immediately voted on their favorite ’80s cartoon series as well? It’s possible, but seems unlikely, as there aren’t any other animated shows in the Top 10 (or even 20!) of this group. Maybe people who like “Boardwalk Empire” – or crime shows more generally – also enjoy occasionally making light of a very serious subject by throwing on the adventures of Detective Frank Drebin of “Police Squad!” To investigate this, I’d probably look at a similar chart for the show “Police Squad!” and see if a lot of more serious crime fare appeared.

And what do you know? It does! Along with the expected other comedy series from the same era – “Welcome Back Kotter,” “WKRP in Cincinnati” and so on, sure enough we see that “Police Squad!” fans have also voted positively on “The Sopranos,” “Boardwalk Empire” and even “Miami Vice.” We could certainly do more research to confirm, but this definitely points me towards a preliminary hypothesis – fans of crime shows don’t really differentiate between funny or serious content. They just like the topic of crime and criminals.

To keep investigating, I’d probably look at some other crime dramas and comedies to see if I also got similar results. If, say, “The Wire” fans also tended to enjoy “Pink Panther” movies, or fans of “Hackers” also cited “Sneakers” as a favorite film, we’d be on our way to a full-fledged theory. But that’s a blog post for a different day, kids. Now it’s time for bed.

– Lon


The Long Tail of Opinion Data

If you want to find out what the best restaurant in your area is, what the best printer under $80 is, or what the best movie of 2010 was, there are many websites out there that can help you.  Sites like Yelp, Rotten Tomatoes, and Engadget have built sustainable businesses by providing opinions in these vertical domains.  Ranker also has a best movies of all time list and while I might argue that our list is better than Rotten Tomatoes’ list (is Man on Wire really the best movie ever?), there isn’t anything particularly novel about having a list of best movies.  At the point where Ranker is the go-to site for opinions about restaurants, electronics, and movies, it will be a very big business indeed.

We are actually competitive already for movies, but where Ranker has unique value is in the long tail of opinions.  There are lots of domains where opinions are valuable, but are rarely systematically polled.  As this Motley Fool writer points out, we are one of the few places with opinions about companies with the worst customer service, and the only one that updates in real time.  Memes are arguably some of the most valuable things to know about, yet there is little data oriented competition for our funniest memes lists.  As inherently social creatures, opinions about people are obviously of tremendous value, yet outside of Gallup polls about politicians, there is little systematic knowledge of people’s opinions about people in the news, outside of our votable opinions about people lists.

Not only are there countless domains where systematic opinions are not collected, but even in the domains that exist, opinions tend to be unidimensionally focused on “best”, with little differentiation for other adjectives.  What if you want to identify the funniest, most annoying, dumbest, worst, or hottest item in a domain?  “Best” searches far outnumber “worst” searches on Google (about 50 to 1 according to Google Trends), but if you take all the adjectives (e.g. funniest, dumbest) and combine them with all the qualifiers (e.g. of 2011, that remind you of college, that you love to hate), there is a long tail of opinions even in the most popular domains that is unserved.  Where else is data systematically collected on British Comedians?

When you combine the opportunities available in the long tail of domains plus the long tail of adjectives and qualifiers, you get a truly large set of opinions that make up the long tail of opinions on the internet.  There are myriad companies trying to mine Twitter for this data, which somewhat validates my intuition that there is opportunity here, but clever algorithms will never make up for the imperfections of mining 140 character text.  Many companies will try and compete by squeezing the last bit of signal from imperfect data, but my experience in academia and in technology has taught me that there is no substitute for collecting better data. If my previous assertion that the knowledge graph is more than just facts is true, then there will be great demand for this long tail of opinions, just as there is great demand for the long tail of niche searches.  And Ranker is one of the few companies empirically sampling this long tail.

– Ravi Iyer


Battle of the Sexiests: Maxim vs. Ranker

Maxim Magazine is at it again, recently publishing the 2012 edition of its annual Hot 100 list of the year’s sexiest lady types. And while most of the talk surrounding the list has centered on the inclusion of sexy non-lady type Stephen Colbert, here at Ranker, we’re far more interested in digging through the data looking for interesting tidbits. (Seriously, we just read Maxim for the articles, guys.)

Fortunately, Ranker user Greg had the foresight to ask our readers who they thought SHOULD have made the cut for the 2012 Maxim Hot 100 list. You can find his list, and the Ranker community’s results, right here, and rather surprisingly, the “Ranker Hot 100” differs greatly from the Maxim version (which was also based in part on Maxim reader polling.)


Here are Maxim’s picks for the year’s 10 sexiest women:

1. Bar Refaeli
2. Olivia Munn
3. Mila Kunis
4. Katy Perry
5. Olivia Wilde
6. Jennifer Lawrence
7. Emma Stone
8. Megan Fox
9. Malin Akerman
10. Adrianne Palicki

And here is the Top 10 chosen by Ranker voters:

1. Olivia Wilde
2. Kate Upton
3. Mila Kunis
4. Adriana Lima
5. Kristen Bell
6. Kate Beckinsale
7. Natalie Portman
8. Brooklyn Decker
9. Blake Lively
10. Megan Fox

Kind of surprisingly, the lists only have 3 out of 10 women in common. Olivia Wilde was first choice among Ranker voters, and came in 5th in the Maxim list. Megan Fox just eked into the Ranker Top 10 in 10th place, while Maxim readers had her a bit higher at 8. And Mila Kunis landed in 3rd place on BOTH lists, clearly the safest possible pick for the bronze medal.


So that’s KIND of interesting, but we thought a deeper look at some of the Maxim picks and Ranker picks might give us even more insight. Specifically, we sort of wondered if we could tell the differences between the tastes of Maxim and Ranker readers and voters. And not just taste in women, but movies and TV as well.

Step One was to do some quick data analysis of votes for women who made the Ranker list and the Maxim list, and compare and contrast the results. To do this, we plugged all of the ladies listed above – the Ranker Top 10 and Maxim Top 10 – into our data comparison tool. This tells us all sorts of deep information about how users who have voted these people “up” in Ranker’s “Hot 100” (meaning they find them attractive) feel about other women, about TV shows, about movies, even their picks for favorite foods and bands.

(Just to clarify, we’re always talking about “odds” here, not certainty. If you like Katy Perry, we think there’s a 784% increased chance you’ll also like Kate Upton vs. a random person with no interest in Katy Perry. But it’s not a guarantee. You’re still your own person with free, independent will. For now.)

In fact, the first and most obvious thing we learned is that EVERYONE, and we mean EVERYONE, loves them some Kate Upton. We barely looked up any women at all whose fans hadn’t also identified Kate Upton as a personal favorite. It’s pretty rare to find anything approaching a “unanimous” decision when it comes to votes in Ranker. We’re talking thousands of people individually voting on thousands of lists containing tens of thousands of items. Yet the acclaim for Ms. Upton approaches the popularity of things like oxygen and drinkable water.


Some other women were popular among both Maxim favorites AND Ranker favorites. She didn’t make the Ranker Top 10, but Adrianne Palicki certainly has some fans on our site. Jennifer Lawrence – #6 on Maxim’s list but #16 on Ranker – was also popular with pretty much everyone. And up-and-coming TV star Krysten Ritter of “Don’t Trust the B—- in Apartment 23” semi-fame also showed up a decent amount. (Other women popular with just about everyone included Alison Brie, Bar Refaeli, Kristen Bell, Kate Beckinsale and Brooklyn Decker.)

There were some STRONG disagreements, however, to complement the universal love for The Upton. Maxim fans STRONGLY took issue with the popular Ranker picks Adriana Lima, Blake Lively and Natalie Portman. In fact, NONE of the women in Maxim’s Top 10 had fans who also liked Adriana Lima. NOT ONE.


As well, Ranker fans took issue with Maxim’s selection of Katy Perry and Olivia Munn. Neither of these women was particularly popular with fans of the Ranker Top 10, and in fact, Olivia Munn actually had negative results in a few cases. (Meaning people who like Kate Beckinsale and Natalie Portman are significantly less likely than an average person to be a Munn fan.) People who tend to agree with the Ranker Top 10 also aren’t too crazy for Emma Stone or Malin Akerman, which is kind of surprising, given their overall sexiness and popularity. (Who doesn’t like Emma Stone? COME ON!)

Some more quick observations to justify spending hours researching the tastes of people who like hot ladies…

– Maxim fans love Martin Scorsese films and TV shows. “Boardwalk Empire” was STAGGERINGLY popular with people who liked Olivia Munn and Jennifer Lawrence. (You’re about 400% more likely to enjoy it if you also enjoy looking at these women.) “Casino” and “Goodfellas” also showed up frequently in this group. True, fans of some of the Ranker picks – especially Natalie Portman – also cited Casino as a favorite – but not in numbers that were as overwhelming.


– Maxim readers seem to have a strong affinity for ’90s animation nostalgia. “Tiny Toon Adventures” and “Alvin and the Chipmunks” were apparently large, significant cultural touchstones for this group. (Your random fact of the day: People who like Katy Perry are nearly 2000% more likely to love “Tiny Toon Adventures” than non-Katy Perry fans. Do with that information what you will.) Voters who preferred the Ranker Top 10 list also dug ’90s nostalgia – “Gremlins” seemed to pop up a lot – but didn’t share the love of “Tiny Toons.”

– Lots of voters from both groups enjoyed “House” and “Louie” and NBC’s sitcoms, “30 Rock” and “Community” in particular.

– Lon

Posted in About Ranker

Everybody’s Ranking on the Weekend

Random observation looking over some of our Ranker pageview trends today. And I figured, why not share?

Here’s a graph showing the traffic to all Ranker “filmography” pages from March 14th to May 14th of this year. These would be lists of all films made by a certain actor or director, like this collection of Goldie Hawn movies or this rundown of the films of Martin Scorsese.

Those “peaks” you see are Saturdays (or sometimes Saturdays and Sundays together.) Search traffic for “filmographies” and lists of movies goes way way up over the weekend. Which makes sense – that’s when most people have some free time to rent or stream films, and research new stuff to throw on.

In and of itself, probably not blog post-worthy. But I’ve brought you here for a reason! Here’s what traffic looks like for the same time period to BIBLIOGRAPHY or author pages. (These are pages listing all the books written by a given author.)

The peaks on this list are in the beginning of the week (usually Monday, but sometimes Tuesday.) So unlike movie fans, book fans are doing most of their research for new titles mid-week. Could it be that book people are putting in some of this time… WHILE AT WORK?!?! Perish the thought. Perhaps it’s easier getting away with loading some new titles on your Kindle during office hours than, say, figuring out which “Hellraiser” films are missing from your Netflix queue? Or people are just finishing up their books over the weekend and then figuring out what to read next once they get to the office.
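(If you wanted to confirm which weekday the peaks fall on rather than eyeball the graph, it’s a one-pass aggregation over the daily series. The sketch below is a minimal, dependency-free version; the `peak_weekday` helper, the date range, and the pageview numbers are all hypothetical stand-ins for the real traffic data.)

```python
from datetime import date, timedelta

def peak_weekday(daily_views):
    """Given a {date: pageviews} mapping, return the weekday name with the
    highest average traffic -- a crude way to spot weekend/weekday peaks."""
    names = ["Monday", "Tuesday", "Wednesday", "Thursday",
             "Friday", "Saturday", "Sunday"]
    totals = [0.0] * 7
    counts = [0] * 7
    for day, views in daily_views.items():
        totals[day.weekday()] += views   # weekday(): Monday=0 ... Sunday=6
        counts[day.weekday()] += 1
    averages = [t / c if c else 0.0 for t, c in zip(totals, counts)]
    return names[averages.index(max(averages))]

# Toy series mimicking the filmography pages: traffic spikes on Saturdays.
start = date(2012, 3, 14)
views = {
    start + timedelta(days=i):
        500 if (start + timedelta(days=i)).weekday() == 5 else 200
    for i in range(60)
}
print(peak_weekday(views))  # -> Saturday
```

Run the same function over the bibliography-page series and, per the pattern above, you’d expect it to come back with Monday instead.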

I guess Loverboy had it right all along.

– Lon

Posted in Market Research

The Darker Side of “30 Rock”

NBC’s trend-setting ensemble sitcom “30 Rock” is wrapping up its sixth season, and remains one of the most discussed shows on all of Ranker. The series currently ranks 17th on our list of History’s Greatest Sitcoms as well as having a strong showing on the Funniest Shows of 2011 round-up. As well, main characters Tracy Jordan and Jack Donaghy BOTH cracked the Top 20 on our Funniest TV Characters Of All Time list. (No Liz Lemon until #35? Come on, gang!)

This many votes on “30 Rock” spanning this many lists gives us A LOT of data to sift through for interesting correlations. And wouldn’t you know, we found one. Namely, fans of “30 Rock” by and large seem to enjoy surprisingly dark film entertainment. More so than you’d think from a show about the wacky behind-the-scenes hijinks on a sketch comedy show that contains this many fart jokes and Werewolf Bar Mitzvahs.

The two films “30 Rock” aficionados are most likely to enjoy? You guessed it, “The Deer Hunter” and “Raging Bull.” “30 Rock” fans are… get this… nearly 2000% more likely to enjoy “The Deer Hunter” than some schmo off the street, and almost as enthusiastic about Scorsese’s boxing biopic.

Aside from the presence of Robert De Niro, and generally being really really awesome, these films have in common an unsettling, gritty outlook, not to mention protagonists who may not always be relatable. It’s kind of hard picturing people sympathizing with Jake LaMotta’s violent temper and fits of jealous rage, then switching over to chuckle at Jack Donaghy’s doppelganger, El Comandante. Yet that’s apparently just what’s happening.


Grab some Sabor de Soledad, niños, cause we’re gonna watch Bobby D get tortured in a Vietnamese POW camp.

It doesn’t stop there. We also noticed that some of “30 Rock” fans’ favorite film and TV characters are not what you’d expect from people who can’t get enough of Kenneth Parcell’s down-home folksy wisdom. For example, aside from Liz Lemon herself, and “Arrested Development’s” Buster Bluth, the most popular fictional character among “30 Rock” fans is Kurt Russell’s Snake Plissken from the John Carpenter “Escape” movies. Now, granted, those movies are sort of funny, but not quite in the same way that “30 Rock” is funny. Although both projects do involve a love of shoddy greenscreen effects:

Sarah Connor from “The Terminator” films also wins surprisingly favorable reviews from loyal “30 Rock” viewers. No word on whether they like the more girly mall-rat version from “The Terminator” or her later, tormented and also super-buff self. Maybe we’ll dig that up for a future post.

– Lon
