Hadoop provides a framework for distributed processing of large data sets across clusters of computers. Rather than relying on hardware for high availability, it is designed to detect and handle failures at the application layer.
Hadoop applications use HDFS (the Hadoop Distributed File System) as the primary storage system. HDFS splits files into blocks, replicates each block, and distributes the replicas across the nodes of a cluster so that data survives the loss of individual nodes.
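A minimal conceptual sketch of that idea, in Python: a file is cut into fixed-size blocks and each block is assigned to several distinct nodes. The block size, replication factor, and round-robin placement here are illustrative assumptions, not HDFS's actual placement policy (which is rack-aware and far more involved).

```python
# Conceptual sketch only -- NOT the HDFS implementation.
# HDFS defaults are a 128 MB block size and replication factor 3;
# the tiny values below are just for illustration.
BLOCK_SIZE = 4
REPLICATION = 3

def place_blocks(data: bytes, nodes: list) -> dict:
    """Split data into blocks and map each block to REPLICATION distinct nodes."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    placement = {}
    for b, block in enumerate(blocks):
        # Rotate the starting node per block so replicas spread across the cluster.
        chosen = [nodes[(b + r) % len(nodes)] for r in range(REPLICATION)]
        placement[b] = {"data": block, "replicas": chosen}
    return placement

layout = place_blocks(b"hello hdfs!", ["node1", "node2", "node3", "node4"])
for block_id, info in layout.items():
    print(block_id, info["replicas"])
```

Losing any single node still leaves two replicas of every block, which is the property the real system's replication is designed to provide.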
Hive is a data warehouse system built on Hadoop. You can use Hive to project structure onto datasets stored in Hadoop-compatible file systems and query them with a SQL-like language, without changing the underlying files.
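This "schema on read" idea can be sketched in plain Python: the raw file stays untouched, and a declared schema (column names and types) is applied only when rows are read. The sample data and schema below are hypothetical, standing in for what a Hive external table definition would declare.

```python
import csv
import io

# Hypothetical tab-delimited file already sitting in storage; it is never rewritten.
raw = "1\talice\t34\n2\tbob\t29\n"

# Declared schema: column names and type casts, applied at read time.
# Roughly analogous to declaring (id INT, name STRING, age INT) in Hive.
schema = [("id", int), ("name", str), ("age", int)]

def read_with_schema(text: str, schema: list) -> list:
    """Apply column names and types to delimited rows at read time."""
    rows = []
    for fields in csv.reader(io.StringIO(text), delimiter="\t"):
        rows.append({name: cast(value) for (name, cast), value in zip(schema, fields)})
    return rows

for row in read_with_schema(raw, schema):
    print(row)
```

The point of the sketch: the structure lives in the schema, not in the file, so the same raw data can be read under different schemas by different consumers.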