We all know that both MapReduce and Hadoop have a lot to do with the genius that is Google. The former was developed within Google itself to process large volumes of raw data. The latter was inspired by MapReduce but adds more components, such as a distributed file system, data management tools, and a scripting language.
MapR, a company that rebuilt Hadoop and offers the new code as proprietary software, continues the trend by attempting to mimic Google’s Dremel. The Wired.com article “Google’s Mind-Blowing Big-Data Tool Grows Open Source Twin” tells us more about the new project:
MapR opened up the development of Drill because it hopes to turn the platform into the de facto standard for rapidly analyzing data stored in Hadoop.
…Hadoop already serves as a data analysis tool, thanks to sister projects such as Hive and Pig, but it’s a “batch” tool, meaning that data query takes a fair amount of time. Drill is meant to analyze large amounts of data almost instantly, following in the footsteps of Dremel.
Similar to Google’s goals with Dremel, MapR wishes to create an open source tool that cuts the time needed to analyze petabytes of data to a few seconds. And because the project is open source, more contributors will hopefully join over time, establishing an architecture that supports various languages, data formats, and data sources.
Also, it’s good to know that Drill isn’t meant to replace Hadoop. In fact, it is designed to work alongside it. This is very good news for LucidWorks, which recently partnered with MapR to integrate MapR’s commercial Hadoop distribution into LucidWorks Search for better performance and reliability.
Lauren Llamanzares, September 14, 2012