Netflix Prize
The Netflix Prize was an open competition for the best collaborative filtering algorithm to predict user ratings for films, based on previous ratings.

The competition was held by Netflix, an online DVD-rental service, and was open to anyone who was neither connected with Netflix (current and former employees, agents, close relatives of Netflix employees, etc.) nor a resident of Cuba, Iran, Syria, North Korea, Myanmar, or Sudan. On 21 September 2009, the grand prize of $1,000,000 was given to the BellKor's Pragmatic Chaos team, which bested Netflix's own algorithm for predicting ratings by more than 10%.

Problem and data sets

Netflix provided a training data set of 100,480,507 ratings that 480,189 users gave to 17,770 movies. Each training rating is a quadruplet of the form ⟨user, movie, date of grade, grade⟩. The user and movie fields are integer IDs, while grades are from 1 to 5 (integral) stars.

The qualifying data set contains 2,817,131 triplets of the form ⟨user, movie, date of grade⟩, with grades known only to the jury. A participating team's algorithm must predict grades on the entire qualifying set, but teams are informed of the score only for half of the data: the quiz set of 1,408,342 ratings. The other half is the test set of 1,408,789 ratings, and performance on this half is used by the jury to determine potential prize winners. Only the judges know which ratings are in the quiz set and which are in the test set; this arrangement is intended to make it difficult to hill climb on the test set. Submitted predictions are scored against the true grades in terms of root mean squared error (RMSE), and the goal is to reduce this error as much as possible. Note that while the actual grades are integers in the range 1 to 5, submitted predictions need not be. Netflix also identified a probe subset of 1,408,395 ratings within the training data set. The probe, quiz, and test data sets were chosen to have similar statistical properties.
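The RMSE criterion itself is simple to state. A minimal Python sketch (illustrative only, not Netflix's actual scoring code) might look like this:

```python
import math

def rmse(predictions, actuals):
    """Root mean squared error between predicted and true grades.

    Predictions may be real-valued even though the true grades are
    integers from 1 to 5.
    """
    assert len(predictions) == len(actuals)
    total = sum((p - a) ** 2 for p, a in zip(predictions, actuals))
    return math.sqrt(total / len(predictions))

# Example: a constant prediction of 3.6 against a few true grades.
print(rmse([3.6, 3.6, 3.6, 3.6], [4, 3, 5, 2]))  # about 1.1225
```

Because the penalty grows with the square of each error, a predictor is punished much more for being two stars off once than for being one star off twice.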

In summary, the data used in the Netflix Prize looks as follows:
  • Training set (99,072,112 ratings)
  • Probe set (1,408,395 ratings)
  • Qualifying set (2,817,131 ratings) consisting of:
    • Test set (1,408,789 ratings), used to determine winners
    • Quiz set (1,408,342 ratings), used to calculate leaderboard scores


For each movie, title and year of release are provided in a separate dataset. No information at all is provided about users. In order to protect the privacy of customers, "some of the rating data for some customers in the training and qualifying sets have been deliberately perturbed in one or more of the following ways: deleting ratings; inserting alternative ratings and dates; and modifying rating dates".

The training set is such that the average user rated over 200 movies, and the average movie was rated by over 5,000 users. But there is wide variance in the data: some movies in the training set have as few as 3 ratings, while one user rated over 17,000 movies.

There was some controversy as to the choice of RMSE as the defining metric: would a reduction of the RMSE by 10% really benefit the users? It has been claimed that even an improvement as small as 1% in RMSE results in a significant difference in the ranking of the "top-10" most recommended movies for a user.

Prizes

Prizes were based on improvement over Netflix's own algorithm, called Cinematch, or over the previous year's score if a team had already improved beyond a certain threshold. A trivial algorithm that predicts, for each movie in the quiz set, its average grade from the training data produces an RMSE of 1.0540. Cinematch uses "straightforward statistical linear models with a lot of data conditioning".
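The trivial movie-average baseline mentioned above can be sketched as follows. The data layout (lists of tuples) is an assumption for illustration, not the contest's actual file format:

```python
from collections import defaultdict

def movie_average_baseline(training, qualifying):
    """Predict each movie's mean training grade; a baseline of this
    kind scored an RMSE of about 1.0540 in the contest.

    `training` is a list of (user_id, movie_id, grade) triples;
    `qualifying` is a list of (user_id, movie_id) pairs.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for _, movie, grade in training:
        sums[movie] += grade
        counts[movie] += 1
    overall = sum(sums.values()) / sum(counts.values())
    # Fall back to the global mean for movies unseen in training.
    return [sums[m] / counts[m] if counts[m] else overall
            for _, m in qualifying]

train = [(1, 10, 4), (2, 10, 2), (1, 20, 5)]
print(movie_average_baseline(train, [(3, 10), (3, 20), (3, 30)]))
```

Note that the baseline ignores the user entirely; every improvement beyond it had to come from modeling how individual users deviate from movie averages.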

Using only the training data, Cinematch scores an RMSE of 0.9514 on the quiz data, roughly a 10% improvement over the trivial algorithm. Cinematch has a similar performance on the test set, 0.9525. In order to win the grand prize of $1,000,000, a participating team had to improve this by another 10%, to achieve 0.8572 on the test set. Such an improvement on the quiz set corresponds to an RMSE of 0.8563.
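The percentage-improvement arithmetic behind these thresholds is easy to check; this small Python sketch is illustrative only:

```python
def improvement(baseline_rmse, rmse):
    """Percentage improvement of `rmse` over `baseline_rmse`."""
    return 100.0 * (baseline_rmse - rmse) / baseline_rmse

# Cinematch's test-set RMSE was 0.9525; the grand prize required a
# 10% improvement, i.e. an RMSE of at most 0.9525 * 0.90 = 0.85725,
# quoted in the rules as 0.8572.
print(improvement(0.9525, 0.8567))  # the winning entry's test score
```

By this formula the winning test RMSE of 0.8567 works out to just over the required 10% improvement.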

As long as no team won the grand prize, a progress prize of $50,000 was awarded every year for the best result thus far. However, in order to win this prize, an algorithm had to improve the RMSE on the quiz set by at least 1% over the previous progress prize winner (or over Cinematch, the first year). If no submission succeeded, the progress prize was not to be awarded for that year.

To win a progress or grand prize a participant had to provide source code and a description of the algorithm to the jury within one week of being contacted by them. Following verification, the winner also had to provide a non-exclusive license to Netflix. Netflix would publish only the description, not the source code, of the system. A team could choose not to claim a prize in order to keep its algorithm and source code secret. The jury also kept their predictions secret from other participants. A team could submit as many attempts to predict grades as it wished. Originally submissions were limited to once a week, but the interval was quickly changed to once a day. A team's best submission so far counted as its current submission.

Once one of the teams succeeded in improving the RMSE by 10% or more, the jury would issue a last call, giving all teams 30 days to send their submissions. Only then was the team with the best submission asked for the algorithm description, source code, and non-exclusive license and, after successful verification, declared a grand prize winner.

The contest would last until the grand prize winner was declared. Had no one received the grand prize, it would have lasted for at least five years (until October 2, 2011). After that date, the contest could have been terminated at any time at Netflix's sole discretion.

Progress over the years

The competition began on October 2, 2006. By October 8, a team called WXYZConsulting had already beaten Cinematch's results.

By October 15, there were three teams who had beaten Cinematch, one of them by 1.06%, enough to qualify for the annual progress prize. By June 2007 over 20,000 teams had registered for the competition from over 150 countries. 2,000 teams had submitted over 13,000 prediction sets.

Over the first year of the competition, a handful of front-runners traded first place. The more prominent ones were:
  • WXYZConsulting, a team by Yi Zhang and Wei Xu. (A front-runner during Nov-Dec 2006.)
  • ML@UToronto A, a team from the University of Toronto led by Prof. Geoffrey Hinton. (A front-runner during parts of Oct-Dec 2006.)
  • Gravity, a team of four scientists from the Budapest University of Technology. (A front-runner during Jan-May 2007.)
  • BellKor, a group of scientists from AT&T Labs. (A front-runner since May 2007.)


On August 12, 2007, many contestants gathered at the KDD Cup and Workshop 2007, held in San Jose, California. During the workshop all four of the top teams on the leaderboard at that time presented their techniques. A team from IBM Research (Yan Liu, Saharon Rosset, Claudia Perlich, and Zhenzhen Kou) won third place in Task 1 and first place in Task 2.

On September 2, 2007, the competition entered the "last call" period for the 2007 Progress Prize. Teams had thirty days to tender submissions for consideration. At the beginning of this period the leading team was BellKor, with an RMSE of 0.8728 (8.26% improvement), followed by Dinosaur Planet (RMSE = 0.8769; 7.83% improvement) and Gravity (RMSE = 0.8785; 7.66% improvement). In the last hour of the last call period, an entry by "KorBell" took first place. This turned out to be an alternate name for Team BellKor.

Over the second year of the competition, only three teams reached the leading position:
  • BellKor, a group of scientists from AT&T Labs. (A front-runner during May 2007 - Sept 2008.)
  • BigChaos, a team of Austrian scientists from commendo research & consulting. (A single-team front-runner since Oct 2008.)
  • BellKor in BigChaos, a joint team of the two leading single teams. (A front-runner since Sept 2008.)

2007 Progress Prize

On November 13, 2007, team KorBell (aka BellKor) was declared the winner of the $50,000 Progress Prize with an RMSE of 0.8712 (8.43% improvement). The team consisted of three researchers from AT&T Labs: Yehuda Koren, Robert Bell, and Chris Volinsky. As required, they published a description of their algorithm.

2008 Progress Prize

The 2008 Progress Prize was awarded to the team BellKor in BigChaos. The winning submission achieved a 9.44% improvement over Cinematch (an RMSE of 0.8616).
The joint team consisted of two researchers from commendo research & consulting GmbH, Andreas Töscher and Michael Jahrer (originally team BigChaos), and three researchers from AT&T Labs, Yehuda Koren, Robert Bell, and Chris Volinsky (originally team BellKor). As required, they published a description of their algorithm.

This was the final Progress Prize because obtaining the required 1% improvement over the 2008 Progress Prize would be sufficient to qualify for the Grand Prize.

2009

On June 26, 2009, the team "BellKor's Pragmatic Chaos", a merger of teams "BellKor in BigChaos" and "Pragmatic Theory", achieved a 10.05% improvement over Cinematch (a quiz RMSE of 0.8558). The Netflix Prize competition then entered the "last call" period for the Grand Prize. In accordance with the rules, teams had thirty (30) days, until July 26, 2009, 18:42:37 UTC, to make submissions that would be considered for this prize.

On July 25, 2009, the team "The Ensemble", a merger of the teams "Grand Prize Team" and "Opera Solutions and Vandelay United", achieved a 10.09% improvement over Cinematch (a quiz RMSE of 0.8554).

On July 26, 2009, Netflix stopped gathering submissions for the Netflix Prize contest.

The final standing of the Leaderboard at that time showed that two teams met the minimum requirements for the Grand Prize. "The Ensemble" with a 10.10% improvement over Cinematch on the Qualifying set (a Quiz RMSE of 0.8553), and "BellKor's Pragmatic Chaos" with a 10.09% improvement over Cinematch on the Qualifying set (a Quiz RMSE of 0.8554). The Grand Prize winner was to be the one with the better performance on the Test set.

On September 18, 2009, Netflix announced team "BellKor's Pragmatic Chaos" as the prize winner (a test RMSE of 0.8567), and the prize was awarded to the team in a ceremony on September 21, 2009. "The Ensemble" team had in fact matched BellKor's result, but since BellKor submitted their results 20 minutes earlier, the rules awarded the prize to them.

The joint team "BellKor's Pragmatic Chaos" consisted of two Austrian researchers from Commendo Research & Consulting GmbH, Andreas Töscher and Michael Jahrer (originally team BigChaos); two researchers from AT&T Labs, Robert Bell and Chris Volinsky, together with Yehuda Koren from Yahoo! (originally team BellKor); and two researchers from Pragmatic Theory, Martin Piotte and Martin Chabbert. As required, they published a description of their algorithm.

The team that Netflix reported to have achieved the "dubious honor" of the worst RMSEs on the quiz and test data sets, among the 44,014 submissions made by 5,169 teams, was "Lanterne Rouge", led by J.M. Linacre, who was also a member of "The Ensemble" team.

Cancelled sequel

On March 12, 2010, Netflix announced that it would not pursue a second Prize competition that it had announced the previous August. The decision was in response to a lawsuit and Federal Trade Commission privacy concerns.

Privacy concerns

Although the data sets were changed to preserve customer privacy, the Prize has been criticized by privacy advocates. In 2007 two researchers from the University of Texas were able to identify individual users by matching the data sets with film ratings on the Internet Movie Database.

In December 2009, an anonymous Netflix user sued Netflix in Doe v. Netflix, alleging that Netflix had violated U.S. fair trade laws and the Video Privacy Protection Act by releasing the datasets.

The source of this article is wikipedia, the free encyclopedia.  The text of this article is licensed under the GFDL.