Monday, November 19, 2012

c-Through: Part-time Optics in Data Centers

- Motivation
Nowadays applications commonly handle data-intensive workloads such as those generated by Hadoop and Dryad. Problems can arise when applications use massive data distributed across different server racks in a data center, i.e., the traditional hierarchical topologies built from tree-structured Ethernet can become a bottleneck. There are solutions to this problem, such as fat trees, DCell, and BCube, but they require a large number of links and switches and complex structured wiring, and expanding the networks after construction is challenging. So this paper proposes a hybrid architecture prototype, “c-Through”, which mixes the advantages of an optical circuit-switched network and a traditional hierarchical packet-switched network. The goal of the paper is to demonstrate answers to the fundamental feasibility and applicability questions of a hybrid data-center network.

- System Architecture
Optical circuit switching provides much higher bandwidth, but it is limited to paired communication, and reconfiguring a pairing takes time. The paper gives experimental evidence, e.g., “only a few ToRs are hot and most of their traffic goes to a few other ToRs,” to explain why it is still worth considering optical circuits to solve these problems. The paper proposes a new hybrid packet- and circuit-switched data-center network that combines the fast switching of traditional packet switching with the high bandwidth of an optical circuit-switched network. So in this system there is a traditional packet network plus an optical network that connects pairs of racks instead of connecting all of the servers.

- How it works
To operate both the optical and electrical networks, each end host (server) runs a monitoring program in its kernel that estimates traffic demand by simply enlarging the output buffer limits of sockets and observing their occupancy. The optical manager receives this traffic information from every server. Given the resulting cross-rack traffic matrix, the optical manager determines how to connect the server racks with optical paths so as to maximize the amount of traffic offloaded to the optical network.
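Since each rack can be paired with at most one other rack at a time, I think of the manager's decision as a maximum-weight matching over racks. Here is a minimal Python sketch of that idea; the demand dictionary and rack names are hypothetical, and networkx stands in for whatever matching algorithm the paper actually uses:

import networkx as nx

def configure_optical_circuits(traffic):
    # traffic: dict mapping (rack_a, rack_b) -> estimated queued bytes.
    g = nx.Graph()
    for (a, b), demand in traffic.items():
        # Weight each candidate circuit by the demand it would offload.
        w = g.edges[a, b]["weight"] + demand if g.has_edge(a, b) else demand
        g.add_edge(a, b, weight=w)
    # Max-weight matching: each rack joins at most one circuit, which
    # mirrors the pairing constraint of the optical switch.
    return nx.max_weight_matching(g)

demands = {("r1", "r3"): 900, ("r1", "r2"): 50, ("r2", "r4"): 300}
print(configure_optical_circuits(demands))  # {('r1', 'r3'), ('r2', 'r4')}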
Each server makes the multiplexing decision using two virtual interfaces, VLAN-s and VLAN-c, which map to the electrical and optical networks respectively. Every time the optical network is reconfigured, the servers are informed, and the de-multiplexer (De-MUX) in each server tags packets with the appropriate VLAN ID.
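As a toy illustration of that tagging decision (the class, method names, and VLAN IDs below are my own, not from the paper):

VLAN_S, VLAN_C = 100, 200  # made-up IDs for the electrical and optical VLANs

class Demux:
    def __init__(self):
        self.optical_peers = set()  # racks currently reachable via a circuit

    def on_reconfigure(self, peers):
        # Called when the optical manager announces a new rack pairing.
        self.optical_peers = set(peers)

    def vlan_for(self, dst_rack):
        # Use the circuit when one exists to the destination rack,
        # otherwise fall back to the packet-switched network.
        return VLAN_C if dst_rack in self.optical_peers else VLAN_S

d = Demux()
d.on_reconfigure(["r3"])
assert d.vlan_for("r3") == VLAN_C and d.vlan_for("r2") == VLAN_S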

- Related work
There are two similar works: Flyways, which de-congests data center networks, and Helios. Helios, which explores a similar hybrid electrical/optical data center architecture, is very close to c-Through. The main difference between the two systems is where the traffic estimation and demultiplexing features are implemented: c-Through implements them in the end hosts (servers), while Helios does so in the switches. The supercomputing community also tries to use circuit-switched optics, but with a substantially different goal: within a supercomputer they use node-to-node circuit switching, while c-Through uses rack-to-rack circuits. The paper says its approach is more appropriate for a commodity data center because c-Through deliberately amortizes the potentially high cost of optical ports.

- Advantages
1. No need to change the existing networking system. Moreover, no applications need to be modified, because the system modifies only the kernel; yet this can also be a disadvantage, as I note below in the criticism.
2. Uses both electrical and optical switching at the same time, depending on an analysis of the data flow between racks.
3. Several experiments show good performance.
4. As the authors mention, even though the current system design is not the best way to reach the goal, the system demonstrates fundamental feasibility and gives us valuable research topics.

- Criticism
1. Is using kernel memory for buffering safe for the server system? Can c-Through be applied to any kind of server system, i.e., can the kernel of a server system always be modified? Since the paper claims this system is appropriate for commodity data centers, we should consider these questions.
2. If there are very many servers in a data center, do the numerous optical managers become a problem, i.e., is the design scalable?
3. This paper gives results from simulated experiments instead of a real system. This can make us hesitate to adopt the system, even though the paper says it works well with several data-intensive applications such as Hadoop and MPI.
4. I am not sure it is worthwhile to use hybrid packet and circuit switching all the time, since many current data centers seem to handle today's data-intensive applications pretty well without it. We need to weigh the new complexity of the hybrid structure against the traditional complexity of a large number of links and switches, i.e., which design handles the problems more efficiently?

- Conclusions
I think such a hybrid design is a compromise point in a transitional period of technology improvement, and a leading network technology will emerge from such experimental designs in the near future.
The paper tries a new approach to problems that arise under specific conditions, namely current network architectures and data-intensive applications. Optical and electrical networks each have their own pros and cons: optics is good for bandwidth and electrical switching is good for low latency. By mixing the strengths of both, the paper introduces a new type of hybrid architecture. It does not seem easy to conclude right now that such a hybrid architecture is the best way forward or better than previous architectures. However, one thing is clear: this new architecture and approach give us a very valuable research topic and a different way of thinking about how to solve the problems.

Wednesday, November 14, 2012

Hive - A Petabyte Scale Data Warehouse Using Hadoop


- Motivation
Facebook has to take care of a huge amount of data every day for applications that may need large computing power, such as analyzing petabytes of data. So they decided to use Hadoop, a software library that allows distributed processing of large data sets across clusters of computers using simple programming models, to handle their huge data in a scalable and reliable way. However, Hadoop does not provide an explicit structured data processing framework. So the basic reason they built the Hive system, with its SQL-like language HiveQL, is to make Hadoop easy to use for those who are not familiar with it, by generating map/reduce jobs from HiveQL.

- System Architecture

Hive runs on top of Hadoop. By providing interfaces to communicate with Hadoop, Hive hides the complicated pipelining of multiple map-reduce jobs from programmers to make their lives easier. With the SQL-like language, programmers can write simple and complicated queries without huge effort spent analyzing or optimizing the map-reduce jobs. Its main components are listed below; a small sketch of how they fit together follows the list.
- Metastore – stores the system catalog and metadata about tables, columns, partitions, etc.
- Driver – manages the lifecycle of a HiveQL statement and maintains a session handle.
- Query Compiler – compiles HiveQL into a directed acyclic graph (DAG) of map/reduce tasks.
- Execution Engine – interacts with Hadoop and executes the tasks produced by the compiler.
- HiveServer – provides a Thrift interface and a JDBC/ODBC server.
- CLI – the command line interface; a web UI is also provided.
- Extensible Interfaces – include the SerDe and ObjectInspector interfaces.
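Here is a highly simplified Python sketch of how these components fit together; every class and method name below is my invention, not Hive's actual API:

class QueryCompiler:
    def compile(self, hiveql):
        # Stand-in for parsing, planning, and optimization: Hive turns the
        # statement into a DAG of map/reduce task descriptions.
        return ["stage-1 map/reduce", "stage-2 map/reduce"]  # placeholder DAG

class ExecutionEngine:
    def run(self, dag):
        for task in dag:  # tasks submitted in topological order over the DAG
            print("submitting to Hadoop:", task)

class Driver:
    """Manages the lifecycle of one HiveQL statement (compile, then run)."""
    def __init__(self):
        self.compiler = QueryCompiler()
        self.engine = ExecutionEngine()

    def execute(self, hiveql):
        self.engine.run(self.compiler.compile(hiveql))

Driver().execute("SELECT status, COUNT(1) FROM logs GROUP BY status")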

- How it works
The basic idea of Hive is to provide users with a SQL-like language, HiveQL. A program written in HiveQL is entered through the CLI or web UI, and the system sends it to the query compiler. The program is then compiled into map-reduce jobs that are executed on Hadoop through the execution engine, as in the toy sketch below.
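As a toy illustration, here is roughly what a simple GROUP BY count conceptually compiles to: one map/reduce job with a mapper emitting (key, 1) pairs and a reducer summing them. This in-process Python imitation is only a sketch; Hive of course emits real Hadoop jobs:

from itertools import groupby
from operator import itemgetter

def mapper(row):
    yield (row["status"], 1)        # emit (group-by key, partial count)

def reducer(key, values):
    return (key, sum(values))       # aggregate the counts for one key

# Toy "shuffle" phase: sort all mapper output by key, then group.
rows = [{"status": "ok"}, {"status": "err"}, {"status": "ok"}]
shuffled = sorted((kv for r in rows for kv in mapper(r)), key=itemgetter(0))
for key, group in groupby(shuffled, key=itemgetter(0)):
    print(reducer(key, [v for _, v in group]))  # ('err', 1) then ('ok', 2)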

- Related work
They mention SCOPE, an SQL-like language on top of Microsoft's proprietary Cosmos map/reduce platform, and Pig, which allows users to write declarative scripts to process data. The main difference from these is that Hive provides a system catalog, the Metastore, which is used for data exploration, query optimization, and query compilation. Even though Hive seems very much influenced by these related systems, the authors do not describe the relationship to the other systems in detail.

- Advantages
1. The first SQL-like system on top of Hadoop.
2. Supports not only primitive data types but also multiple customizable data types via SerDe.
3. Actively worked on by Facebook, so the system is likely to keep improving.
4. As an open-source system, it can be improved by the community.
5. By supporting SQL syntax, the system can integrate with existing commercial BI tools.
6. I think this system could be used with other back ends by replacing the Driver component.

- Criticism
1. They do not give an exact comparison with competing systems through analysis or benchmarks; they only give their own result that the system works 20% better than another system. How can we estimate and trust its performance, and how long does a whole single map/reduce job take? We need such information to decide whether or not to use this system.
2. Can any of the components cause a performance issue? How long does each component take, such as the compiler, the Thrift server, etc.? Has Facebook faced any problems with this system?
3. The optimizer is only rule-based and does not support cost-based optimization. Moreover, programmers have to provide query hints to do MAPJOIN on small tables and to use 2-stage map-reduce for GROUP BY aggregates where the group-by columns have highly skewed data.
4. Some operations are not supported, such as INSERT INTO, UPDATE, and DELETE.
5. This paper may not be written for academic purposes. It seems to focus more on giving examples of HiveQL queries and reads more like a general-purpose introduction to Hive.

- Conclusions
Actually, I think there is nothing new in this system; the parser, graph generator, and optimizer are familiar ideas. Although Hive gives us an easy way to use Hadoop without huge effort, we may still need to learn to write and run our own map/reduce jobs to get the best performance. However, by giving users a SQL-like query language, the system provides many benefits to programmers and non-programmers who want to use Hadoop but are not familiar with it. Although this paper is not written well and we cannot get detailed information about Hive from it, the Hive system seems likely to improve greatly. The authors are interested in and working on optimizing Hive and subsuming more SQL syntax, and it has now become an open-source system. So if problems such as the limited optimizer and limited SQL expressiveness are solved, the system seems likely to become more popular and widely accepted.