Saturday, July 6, 2013

What's New at Hadoop Summit 2013?

YARN

YARN, the new cluster resource manager in Hadoop 2.0, was a major theme at last week's Hadoop Summit. Although the project itself is not new (it has been in development for several years), what is new is its growing adoption by the Hadoop community. YARN (Yet Another Resource Negotiator) plays a central role in Hadoop 2.0: it lets you run multiple computing frameworks, such as Storm or Spark, alongside MapReduce, all on the same Hadoop cluster.

There was evidence of community adoption of YARN throughout Hadoop Summit: (1) a keynote by Yahoo! describing their production analytics stack built on YARN (video), (2) talks about the Stinger Initiative to speed up Hive by 100x, which relies on YARN through the new Tez framework, and (3) the announcement of HDP 2.0, Hortonworks' latest Hadoop distribution, which ships with YARN.

YARN at Yahoo!

According to the Yahoo! keynote, YARN has been undergoing serious load testing in Yahoo!'s production systems for personalization and ad targeting. Their YARN clusters run Storm, Spark, and HBase in addition to MapReduce, including a 320-node Storm-on-YARN cluster for stream processing, and handle an overall total of 400k YARN jobs per day. This Strata blog post has more details about the talk.

Tez

Tez is a new compute framework that runs on YARN. It improves on MapReduce by supporting the execution of a complex DAG (directed acyclic graph) of tasks, rather than the rigid two-stage map-and-reduce pattern. This makes Tez better suited for expressing SQL queries, and it will be leveraged to speed up Hive jobs.
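
To make the DAG idea concrete, here is a toy sketch in Python (this is not the Tez API, just an illustration): a query plan is modeled as a graph of named stages, and a simple scheduler runs each stage only after its inputs have finished, instead of forcing every step into separate map-reduce rounds with intermediate results written out in between.

    # Toy illustration (not the Tez API): a query plan as a DAG of stages.
    from collections import defaultdict

    # Hypothetical stages of a SQL-like query: two scans feed a join,
    # whose output feeds an aggregation.
    dag = {
        "scan_orders": ["join"],
        "scan_users":  ["join"],
        "join":        ["aggregate"],
        "aggregate":   [],
    }

    def topological_order(dag):
        """Return the stages in an order where each runs after its inputs."""
        indegree = defaultdict(int)
        for src, dsts in dag.items():
            indegree.setdefault(src, 0)
            for dst in dsts:
                indegree[dst] += 1
        ready = [s for s, deg in indegree.items() if deg == 0]
        order = []
        while ready:
            stage = ready.pop()
            order.append(stage)
            for dst in dag.get(stage, []):
                indegree[dst] -= 1
                if indegree[dst] == 0:
                    ready.append(dst)
        return order

    print(topological_order(dag))
    # e.g. ['scan_users', 'scan_orders', 'join', 'aggregate']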

Hadoop on Raspberry Pi

Hadoop Summit has traditionally been the place to brag about who has the biggest clusters; however, this LinkedIn demo goes to the opposite extreme :) 

Friday, July 5, 2013

Holding a Hackathon at Your Company

Are you interested in holding a hackathon at your company? An internal hackathon for employees can be a lot of fun. Cathy Polinsky (Salesforce.com) gave a nice introduction to the subject at last year's Grace Hopper Conference, with advice on how to run your own. Another way to get started is to hold a "ShipIt Day", modeled on the internal hackathons at Atlassian (the software tools company). This ShipIt Day FAQ provides a suggested timeline for planning and running a hackathon. For more ideas, take a look at what happens at LinkedIn Hackdays.

Sunday, May 12, 2013

Real-time Big Data Analytics with Solr

You have heard of Solr as a search engine, but did you know it can be used for real-time analytics as well? I first encountered this idea at a tutorial on Solr at Strata Conference. The ideal use case is when you want real-time ingestion and analytics on streaming data in an all-in-one solution that is horizontally scalable and fault-tolerant. And because Solr is built on Lucene, you also get text-search queries for free.

Example: Stock Tick

Stock tick data is an example of a fast data stream: every second, price updates come in as different stocks are traded on an exchange. Suppose you want to build a dashboard that displays real-time updates of stock prices along with other statistics, such as a moving average, and you also want to support ad-hoc queries that analyze the historical performance of various stocks. Your denormalized data rows may look something like:

symbol:AAPL, time:2013-05-10 08:15:23, price:415.92, volume:900, ...

Real Time

Real-time ingestion is possible because as soon as a new data row comes in, it is immediately added and indexed in Solr. Internally, Lucene handles high-volume inserts to its indexes with a log-structured merge-tree, in which new, small index segments get merged as an index grows. To further support high insert rates, a Solr index can be split into shards across multiple nodes of a cluster. Real-time queries are answered with fast lookups on the index. Sharding also helps speed up large queries by spreading the query processing across multiple nodes.
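
As a sketch of what ingestion looks like, the stock tick row above could be posted to Solr's JSON update handler with a commitWithin hint, so that it becomes searchable within about a second without forcing a commit on every insert. This assumes a core or collection named "stocks" at localhost:8983 whose schema defines these fields; the names and URL are illustrative.

    # Minimal sketch: index one stock tick into a hypothetical "stocks" collection.
    import json
    import urllib.request

    tick = {
        "id": "AAPL-2013-05-10T08:15:23Z",   # unique key for the document
        "symbol": "AAPL",
        "time": "2013-05-10T08:15:23Z",
        "price": 415.92,
        "volume": 900,
    }

    # commitWithin=1000 asks Solr to make the document visible to searches
    # within roughly one second.
    req = urllib.request.Request(
        "http://localhost:8983/solr/stocks/update?commitWithin=1000",
        data=json.dumps([tick]).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)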

Analytics

Solr supports many of the same queries as SQL: aggregation, boolean filters and range queries, grouping, and sorting (or, in SQL terms, COUNT/SUM/AVG, WHERE, GROUP BY, and ORDER BY). As a tradeoff for horizontal scalability, there are no joins; you would need to denormalize the data and reframe those queries as aggregations. Because of its search roots, Solr also supports text search in queries (for text fields), custom scoring (ranking) of query results, and facets.
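
To illustrate the mapping, here is a sketch of a single query against the same hypothetical "stocks" collection, combining a filter, a range query, sorting, field statistics, and a facet. The query parameters (q, fq, sort, stats, facet) are standard Solr; the fields and URL are made up for this example.

    # Sketch: filter on symbol, restrict to a time range, sort by time,
    # and ask for stats on price plus per-symbol facet counts.
    import json
    import urllib.parse
    import urllib.request

    params = {
        "q": "symbol:AAPL",                                            # WHERE symbol = 'AAPL'
        "fq": "time:[2013-05-10T00:00:00Z TO 2013-05-10T23:59:59Z]",   # range filter
        "sort": "time desc",                                           # ORDER BY time DESC
        "rows": "10",
        "stats": "true",                                               # COUNT/SUM/AVG-style stats
        "stats.field": "price",
        "facet": "true",                                               # GROUP BY symbol (counts)
        "facet.field": "symbol",
        "wt": "json",
    }

    url = "http://localhost:8983/solr/stocks/select?" + urllib.parse.urlencode(params)
    response = json.loads(urllib.request.urlopen(url).read())
    print(response["response"]["numFound"])             # number of matching ticks
    print(response["stats"]["stats_fields"]["price"])   # min/max/sum/mean of price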

Scalable, Fault-tolerant

SolrCloud, Solr's support for distributed indexes, makes it easy to grow a cluster horizontally. It takes care of load balancing, replication, and automatic failover of shards. Internally, ZooKeeper is used for distributed coordination.
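
For example, a sharded and replicated collection can be created through the SolrCloud Collections API. The sketch below assumes a running SolrCloud cluster (with ZooKeeper) and an already-uploaded config set; all names are illustrative.

    # Sketch: create a collection split into 4 shards, with 2 replicas per shard.
    import urllib.parse
    import urllib.request

    params = urllib.parse.urlencode({
        "action": "CREATE",
        "name": "stocks",
        "numShards": 4,             # split the index across 4 shards
        "replicationFactor": 2,     # keep 2 copies of each shard for failover
        "collection.configName": "stocks_conf",
    })
    urllib.request.urlopen("http://localhost:8983/solr/admin/collections?" + params)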

Gotchas

One drawback of an indexing system is handling schema changes. Adding new fields is easy, but changing the datatype or (text) analyzer of an existing field requires a full reindex of the raw data, which can be time consuming. Another drawback is storing large blocks of text. While text fields are well supported, Solr does not efficiently store large bodies of text, such as the full text of a news article or web page. In those cases, a separate NoSQL data store such as HBase or Cassandra may be brought in to provide that storage.

Related Systems

I only discussed Solr above, but there are a couple of other open-source, Lucene-based search engines that have similar analytics capabilities. Elasticsearch was built specifically with a focus on ease of distributed deployment, and has a JSON-based API. SenseiDB features a SQL-like query language and Hadoop integration.

More Info

Slides from the Strata tutorial.



Wednesday, March 6, 2013

Strata Conference 2013 Wrap-up

Introduction

Strata has been one of the best conferences for data science, and this year's conference did not disappoint. It brought together developers, data scientists, startups, and business people who are interested in "making data work". The conference was divided into seven tracks, including Design and "Hadoop in Practice". Spending most of my time in the "Beyond Hadoop" and "Data Science" tracks, I noticed that one of the themes this year was real-time data processing.

Tutorial: Search and Real Time Analytics (slides)

This was a really good tutorial presented by Ryan Tabora (Think Big Analytics) and Jason Rutherglen (DataStax). I learned that, in addition to search, Solr supports real-time analytics: the equivalents of SQL's sort and group-by queries (you can't do joins, however). An example application would be ad-hoc queries on streaming stock tick data. The second half of the tutorial was an in-depth look at Lucene and some of its use cases (an O'Reilly book is coming soon). Rutherglen also talked about the DataStax Enterprise platform, which integrates Solr with Cassandra for scalability: Cassandra is the NoSQL data store for the raw data, and each Cassandra row maps to a Solr document.

Tutorial: Core Data Science Skills (code & slides)

This was an interesting tutorial that introduced the basic methods and tools of supervised machine learning. It was led by William Cukierski and Ben Hamner, both from Kaggle. They covered decision trees, random forests, and naive Bayes classifiers as the basic algorithms, and they demoed analyses in R with RStudio and in Python with IPython Notebook. The coolest part was the last hour, when all attendees practiced these skills by participating in a real Kaggle competition.
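
For a flavor of the Python side, here is a minimal sketch of that supervised-learning workflow using scikit-learn on synthetic data (this is not the tutorial's actual code or dataset):

    # Minimal supervised-learning workflow: fit a random forest, check accuracy.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic binary-classification data standing in for a Kaggle dataset.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))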

Keynotes

All of the keynote speakers were really good. I'll just highlight:

  • Human Fault-tolerance (slides & video): Nathan Marz (Twitter) talked about the importance of immutability in distributed system design. I'm hoping to read more about it in his book on Big Data.
  • Hidden Biases of Big Data (video): Kate Crawford (Microsoft Research) warned us that big data often does not tell the whole story, that context and small data are also needed.

Sketching Techniques for Real-time Big Data (slides)

Bahman Bahmani (Stanford) explained that sketches of data are useful in streaming computation because they take up little memory and allow for fast updates and queries. One example of a sketching data structure is the Bloom filter. Bahmani also described sketches for fast approximate counting as well as on-the-fly PageRank computation.
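
As a concrete illustration, here is a minimal Bloom filter in Python: a fixed-size bit array plus a few hash functions, which answers "possibly present" or "definitely not present" using very little memory. This is a toy sketch, not a production implementation.

    # Toy Bloom filter: no false negatives, occasional false positives.
    import hashlib

    class BloomFilter:
        def __init__(self, num_bits=1024, num_hashes=3):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = [False] * num_bits

        def _positions(self, item):
            # Derive k bit positions from salted hashes of the item.
            for i in range(self.num_hashes):
                digest = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.num_bits

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = True

        def __contains__(self, item):
            return all(self.bits[pos] for pos in self._positions(item))

    bf = BloomFilter()
    bf.add("user-123")
    print("user-123" in bf)   # True
    print("user-456" in bf)   # False (with high probability)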

Sight, a Short Film (video)

This was an amazing short film depicting a futuristic augmented reality. It envisions a future that brings together many of the ideas from the conference: mobile, a connected world, recommendations, gamification, and a ubiquitous Internet of Things.

Real Time Systems

There were a number of talks about ingesting and processing big data in real time. I'll cover them in a future post.