Data Migration Using Apache Sqoop


Sqoop:

SQL + Hadoop = Sqoop

Sqoop is a component of the Hadoop ecosystem that was initially developed and maintained by Cloudera. It is a data pipeline tool used to transfer data between an RDBMS (Relational Database Management System) and Hadoop, and it works with any RDBMS that provides a JDBC driver.
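Before running any import, it is worth confirming that Sqoop can actually reach the database over JDBC. A minimal check, assuming MySQL is running locally with the same root/123 credentials used in the commands below:

bin/sqoop list-databases --connect jdbc:mysql://localhost --username root --password 123

If the connection works, this prints the databases visible to that user.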

There are three ways to import data from an RDBMS into Hadoop:

1) RDBMS to HDFS

2) RDBMS to Hive

3) RDBMS to Hbase

Export to an RDBMS is possible only from HDFS and Hive:

1) HDFS to RDBMS

2) Hive to RDBMS

Here we use MySQL as the RDBMS.
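The import commands below assume the source tables already exist in MySQL. As a hypothetical example (the column names are assumptions for illustration, not part of the original commands), the Persons table used in the first import could be created like this:

mysql -u root -p123 -e "CREATE DATABASE IF NOT EXISTS db; CREATE TABLE db.Persons (pid INT PRIMARY KEY, name VARCHAR(50), city VARCHAR(50)); INSERT INTO db.Persons VALUES (1, 'Arun', 'Chennai'), (2, 'Priya', 'Bangalore');"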

Commands for import:

1) MySQL to HDFS

bin/sqoop import --connect jdbc:mysql://localhost/db --username root --password 123 --table Persons -m 1
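With no --target-dir given, Sqoop writes the table under the running user's HDFS home directory, and -m 1 produces a single output file. A quick sanity check (the path below is an assumption based on those defaults):

hdfs dfs -ls /user/$USER/Persons
hdfs dfs -cat /user/$USER/Persons/part-m-00000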

2) MySQL to Hive

bin/sqoop import --connect jdbc:mysql://localhost/db --username root --password 123 --table test --hive-table mysqltohive --create-hive-table --hive-import -m 1
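Since no Hive database is specified, the table mysqltohive is created in Hive's default database. To verify the rows landed:

hive -e "SELECT * FROM mysqltohive LIMIT 5;"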

3) MySQL to HBase

bin/sqoop import --connect jdbc:mysql://localhost/test --username root --password root --table demo \
--hbase-table testhbase --column-family cf --hbase-row-key pid --hbase-create-table -m 1
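Here each MySQL row becomes one HBase row keyed by the pid column, with the remaining columns stored under the cf column family. To verify from the HBase shell:

echo "scan 'testhbase'" | hbase shell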

Commands for export:

1) HDFS to MySQL

bin/sqoop export --connect jdbc:mysql://localhost/db --username root --password 123 --table htom --export-dir /usr_data.txt \
--driver com.mysql.jdbc.Driver --input-fields-terminated-by ',' -m 1
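Note that sqoop export does not create the target table; htom must already exist in MySQL with columns matching the comma-separated fields in /usr_data.txt. A hypothetical schema (column names assumed for illustration) and a post-export check:

mysql -u root -p123 db -e "CREATE TABLE htom (pid INT, name VARCHAR(50), city VARCHAR(50));"
mysql -u root -p123 db -e "SELECT COUNT(*) FROM htom;"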

2) Hive to MySQL

bin/sqoop export --connect jdbc:mysql://localhost/db --username root --password 123 \
--table hivetomysql --export-dir /user/hive/warehouse/data.db/hive -m 1
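One caveat: Hive text tables use Ctrl-A (\001) as the default field delimiter, so if the Hive table was created without a custom ROW FORMAT, the export needs to be told about it. A sketch of the same command with the delimiter made explicit:

bin/sqoop export --connect jdbc:mysql://localhost/db --username root --password 123 \
--table hivetomysql --export-dir /user/hive/warehouse/data.db/hive \
--input-fields-terminated-by '\001' -m 1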


