The focus is on data mining of very large amounts of data, that is, data so large it does not fit in main memory. Because of this emphasis on size, many of the examples concern the Web or data derived from the Web. Data mining, in this sense, means applying algorithms to data, rather than using data to “train” a machine-learning engine of some sort.
Statisticians were the first to use the term “data mining.” Originally, “data mining” or “data dredging” was a derogatory term referring to attempts to extract information that was not supported by the data. Now, statisticians view data mining as the construction of a statistical model, that is, an underlying distribution from which the visible data is drawn.
Knowledge Discovery: Principal Techniques
· Distributed file systems and map-reduce as a tool for creating parallel algorithms that succeed on very large amounts of data.
· Similarity search, including the key techniques of minhashing and locality sensitive hashing.
· Data-stream processing and specialized algorithms for dealing with data that arrives so fast it must be processed immediately or lost.
· The technology of search engines, including Google’s PageRank, link-spam detection, and the hubs-and-authorities approach.
· Frequent-itemset mining, including association rules, market-baskets, the A-Priori Algorithm and its improvements.
· Algorithms for clustering very large, high-dimensional datasets.
· Two key problems for Web applications: managing advertising and recommendation systems, the latter built on a technique known as collaborative filtering.
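To make the similarity-search bullet concrete, here is a minimal sketch of minhashing: each set is summarized by the minimum value of several random hash functions over its elements, and the fraction of matching signature components estimates the Jaccard similarity of the two sets. The linear hash functions and the sets below are illustrative assumptions, not taken from the source.

```python
import random

def make_hash_funcs(n, prime=2_147_483_647, seed=42):
    """Random linear hash functions h(x) = (a*x + b) mod prime (illustrative choice)."""
    rng = random.Random(seed)
    return [
        (lambda x, a=rng.randrange(1, prime), b=rng.randrange(prime): (a * x + b) % prime)
        for _ in range(n)
    ]

def minhash_signature(s, hash_funcs):
    """Signature component i = minimum of hash function i over the set's elements."""
    return [min(h(x) for x in s) for h in hash_funcs]

def estimate_jaccard(sig1, sig2):
    """Fraction of matching components; converges to the true Jaccard similarity."""
    return sum(a == b for a, b in zip(sig1, sig2)) / len(sig1)

hash_funcs = make_hash_funcs(200)
s1 = set(range(0, 100))
s2 = set(range(50, 150))   # true Jaccard similarity = 50/150 = 1/3
sig1 = minhash_signature(s1, hash_funcs)
sig2 = minhash_signature(s2, hash_funcs)
est = estimate_jaccard(sig1, sig2)
```

Locality-sensitive hashing builds on this: signatures are split into bands, and only pairs that agree in at least one band become candidates for a full comparison.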
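The PageRank bullet can likewise be sketched as power iteration on a tiny link graph: each page's rank is repeatedly redistributed along its out-links, with a small "taxation" (teleport) share that guards against dead ends and spider traps. The three-page graph and the damping factor 0.85 are illustrative assumptions.

```python
def pagerank(links, beta=0.85, iters=50):
    """Power iteration with taxation: rank flows along out-links, scaled by beta,
    and a (1 - beta) share is spread evenly over all pages each round."""
    nodes = sorted(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - beta) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if out:
                share = beta * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:
                # Dead end: redistribute this page's rank evenly.
                for v in nodes:
                    new[v] += beta * rank[u] / n
        rank = new
    return rank

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(links)
```

Here C, which receives links from both A and B, ends up with the highest rank, and the ranks always sum to 1.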
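For the frequent-itemset bullet, a minimal sketch of the A-Priori Algorithm: it exploits monotonicity (an itemset can be frequent only if all of its subsets are frequent) to prune candidates between passes over the baskets. The toy baskets and the support threshold are illustrative assumptions.

```python
from itertools import combinations

def apriori(baskets, support):
    """Level-wise frequent-itemset mining; one pass over the baskets per set size."""
    # Pass 1: count singleton itemsets.
    counts = {}
    for basket in baskets:
        for item in basket:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s for s, c in counts.items() if c >= support}
    all_frequent = {s: counts[s] for s in frequent}
    k = 2
    while frequent:
        # Candidate k-sets: unions of frequent (k-1)-sets of the right size...
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        # ...pruned by monotonicity: every (k-1)-subset must itself be frequent.
        candidates = {
            c for c in candidates
            if all(frozenset(sub) in frequent for sub in combinations(c, k - 1))
        }
        counts = {c: 0 for c in candidates}
        for basket in baskets:
            items = set(basket)
            for c in candidates:
                if c <= items:
                    counts[c] += 1
        frequent = {c for c in candidates if counts[c] >= support}
        for c in frequent:
            all_frequent[c] = counts[c]
        k += 1
    return all_frequent

baskets = [["milk", "bread"], ["milk", "bread", "beer"],
           ["milk", "beer"], ["bread", "beer"]]
result = apriori(baskets, support=2)
```

With threshold 2, every singleton and every pair is frequent here, but the triple appears in only one basket and is correctly pruned; association rules are then read off the frequent itemsets and their counts.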