The Hadoop Ecosystem
Posted by J Singh | Filed under Map Reduce, Hadoop, NoSQL
Here's a more detailed outline of my talk on March 12.
Introduction
- What Hadoop is, and what it's not
- Origins and History
- Hello Hadoop: how to get started
The Hadoop Bestiary
- Core: Hadoop Map Reduce and Hadoop Distributed File System (HDFS)
- Data Access: HBase, Pig and Hive
- Algorithms: Mahout
- Data Import: Flume, Sqoop and Nutch
The Hadoop Providers
- Apache
- Cloudera
- What to do if your data is in a database
The Hadoop Alternatives
- Amazon EMR
- Google App Engine
Hands on with Hadoop
Posted by J Singh | Filed under Amazon EC2, Hadoop
This post begins a series of exercises on Hadoop and its ecosystem. Here is a rough outline of the series:
- Hello Hadoop World, including loading data into and copying results out of HDFS (see the sketch after this list).
- Hadoop Programming environments with Python, Pig and Hive.
- Importing data into HDFS.
- Importing data into HBase.
- Other topics TBD.
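As a preview of that first exercise, here is a minimal sketch of moving a file into HDFS and copying job results back out. It assumes a working hadoop command on your PATH; the paths and file names are hypothetical stand-ins.

```python
import subprocess

def hdfs_put(local_path, hdfs_path):
    # Copy a local file into HDFS via the standard `hadoop fs -put`.
    subprocess.check_call(["hadoop", "fs", "-put", local_path, hdfs_path])

def hdfs_get(hdfs_path, local_path):
    # Copy a file (e.g., job output) out of HDFS to local disk.
    subprocess.check_call(["hadoop", "fs", "-get", hdfs_path, local_path])

if __name__ == "__main__":
    # Hypothetical paths, for illustration only.
    hdfs_put("ulysses.txt", "/user/hduser/input/ulysses.txt")
    hdfs_get("/user/hduser/output/part-00000", "wordcounts.txt")
```

The same two operations can of course be run directly from the shell; wrapping them in Python just makes them easy to script alongside the rest of an exercise.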
Hello Hadoop World
- Make sure Python 2.6 ...
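The excerpt above is truncated, but the classic Hello-Hadoop-World in Python uses Hadoop Streaming with a pair of scripts along these lines (a sketch under that assumption; mapper.py and reducer.py are my own file names, not necessarily those used in the full post):

```python
#!/usr/bin/env python
# mapper.py -- read lines from stdin, emit one (word, 1) pair per word.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print "%s\t%d" % (word, 1)  # Python 2 syntax, per the prerequisite above
```

```python
#!/usr/bin/env python
# reducer.py -- sum the counts for each word; Hadoop delivers input sorted by key.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print "%s\t%d" % (current_word, current_count)
        current_word, current_count = word, int(count)
if current_word is not None:
    print "%s\t%d" % (current_word, current_count)
```

The pair is submitted with the hadoop-streaming jar, passing the scripts as -mapper and -reducer along with -input and -output HDFS paths.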
Stop the Data Warehouse Creep
Posted by J Singh | Filed under ETL process, Hadoop, Big Data, Data Warehouse
- Your data warehouse takes about 8 hours to load the cube. The cube is updated weekly; 8 hours per week is a small price to pay and everyone is happy.
- It is missing a few data elements that another group of analysts really needs. You add those, and now the cube takes 9 hours to load. No big deal.
- Time passes, the business is doing well, the size of the data quadruples. It now takes 36 hours per week to load the cube.
- Time passes, some of the data elements are not needed any more but it is too hard to take them out of the process — it continues to take 36 hours.
- You add yet another group of analysts as users, plus a few more data elements, and it now takes 44 hours per week!
- You get the picture… the situation gets more and more precarious over time.
One example … was a credit card company working to implement fraud detection functionality. A traditional SQL data warehouse is more than likely already in place, and it may work well, but without the granularity needed for ...
Big data: Does size matter?
Posted by J Singh | Filed under big data, nosql, hadoop, map reduce, statistical analysis, numerical methods
Big data is about so many things:
- Size, of course, but you don't have to be Google-scale to need big data technologies. Heck, a few hundred gigabytes will suffice.
- Ad hoc. Big Data platforms enable ad-hoc analytics on non-relational (i.e., unmodeled) data. That lets you uncover answers to questions you never thought to ask.
- Streaming. You cannot deliver true Big Data analytics from batch insights alone; you must also deliver streaming, real-time analytics.
- Inconsistent. Air and water quality are measured in impurities per million. Perhaps we should have similar consistency metrics for data? (A rough sketch of one follows this list.)
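To make the impurities-per-million analogy concrete, here is a back-of-envelope sketch (my own illustration, not an established metric): count the records that fail basic validation and report defects per million records.

```python
def defects_per_million(records, is_valid):
    # Data-quality analogue of impurities-per-million:
    # invalid records per million records seen.
    total = invalid = 0
    for record in records:
        total += 1
        if not is_valid(record):
            invalid += 1
    return 1e6 * invalid / total if total else 0.0

# Example: flag rows whose (hypothetical) 'ppm' field is missing or non-numeric.
def numeric_ppm(row):
    try:
        float(row.get("ppm", ""))
        return True
    except ValueError:
        return False

readings = [{"ppm": "12.5"}, {"ppm": "n/a"}, {"ppm": "7.1"}]
print(defects_per_million(readings, numeric_ppm))  # one bad row in three: ~333333 DPM
```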
Hands on Hadoop with Amazon EC2
Posted by J Singh | Filed under Amazon AWS, Map Reduce, Hadoop
A few months ago, I gave a talk at the Chelmsford Technology Skill Share Group. It was focused on the whys and wherefores of NoSQL and Map/Reduce. If you are interested in a copy of the presentation, please contact me.
Next week (9/14/11, 4:00 pm, Chelmsford Public Library, McCarthy Meeting Room), I'll be giving a hands-on introduction to running Hadoop on Amazon EC2.
It will be as hands-on as the previous talk was conceptual. For the actual material, we will use the excellent tutorial by Michael Noll, who first wrote it in 2007 and has kept it up to date. The tutorial walks you through installing Hadoop and using MapReduce to count every word in a large text. We will use Ulysses by James Joyce as our sample text. Can we do this in 90 minutes? Yes, we can. But the goal is to get the most out of the journey.
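Before spending EC2 time, it's worth sanity-checking the word-count logic locally. The harness below is my own sketch, not part of Michael Noll's tutorial; it simulates the map and reduce phases of streaming scripts like the ones sketched under Hello Hadoop World above, in a single process.

```python
# local_wordcount.py -- simulate the map and reduce phases in one process,
# roughly equivalent to: cat ulysses.txt | mapper.py | sort | reducer.py
import sys
from collections import defaultdict

def wordcount(path):
    counts = defaultdict(int)
    with open(path) as f:
        for line in f:                   # "map": tokenize each line
            for word in line.strip().split():
                counts[word] += 1        # grouping here stands in for shuffle + reduce
    return counts

if __name__ == "__main__":
    # Usage: python local_wordcount.py ulysses.txt
    top = sorted(wordcount(sys.argv[1]).items(), key=lambda kv: -kv[1])[:20]
    for word, n in top:                  # print the top 20 words by count
        print "%s\t%d" % (word, n)
```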
To get the most out of the talk, you should be prepared to sign up for an Amazon account. They require a credit card, but it won't actually be charged, because our usage will be a few ...