Notes on an HBase data migration and the problems encountered

Because the two clusters cannot reach each other over the network, the data was migrated manually.

1、Download the data from the source cluster

hadoop fs -get /apps/hbase/data/data/default/*c4be21d3000064c0 /mnt/data

2、Copy the data to the target cluster over scp; compressing it first reduces transfer time.

scp ***
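The post doesn't show the compression step it mentions; a small sketch (the helper name and paths are my own placeholders, not from the post) packs the exported files into one gzip archive before the scp:

```shell
# pack_export: bundle a local HFile export directory into one gzip archive,
# so scp transfers a single compressed file instead of many small HFiles.
# (Hypothetical helper; directory and archive paths are placeholders.)
pack_export() {
  src_dir="$1"      # e.g. /mnt/data from step 1
  archive="$2"      # e.g. /mnt/hbase_export.tar.gz
  tar -czf "$archive" -C "$src_dir" .
}

# pack_export /mnt/data /mnt/hbase_export.tar.gz
# scp /mnt/hbase_export.tar.gz user@target-host:/app/
# then on the target host: tar -xzf /app/hbase_export.tar.gz -C /app/hbase
```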

 

3、Upload the data to HDFS as the hbase user; uploading as any other user leaves the files with the wrong owner and causes errors later.

su hbase
hadoop fs -put /app/hbase/* /apps/hbase/data/data/default/
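Uploading as the wrong user is exactly the problem that bites later in this post, so a tiny guard (my own addition, assuming the service account is literally named `hbase`) can catch it before the put:

```shell
# require_user: succeed only if the current user matches the expected one.
# Run before `hadoop fs -put` to avoid writing files with the wrong owner.
require_user() {
  [ "$(id -un)" = "$1" ]
}

# require_user hbase || { echo "switch first: su hbase" >&2; exit 1; }
```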

 

4、Restore the metadata

hbase hbck                        # check only, report inconsistencies
hbase hbck -fixMeta               # regenerate meta table entries from the .regioninfo files in the region directories
hbase hbck -fixAssignments        # assign the regions recorded in the meta table to region servers
hbase hbck -fixHdfsOrphans        # repair missing .regioninfo files
hbase hbck -repair <table_name>
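The hbck sequence in step 4 can be wrapped in a dry-run script; this sketch (my own, not from the post) echoes each command instead of executing it, so the order can be reviewed before touching a live cluster:

```shell
# print_hbck_plan: print the hbck repair sequence for a table without running it.
# Drop the leading "echo" on each line to actually execute the commands.
print_hbck_plan() {
  table="$1"
  echo "hbase hbck"                   # check only
  echo "hbase hbck -fixMeta"          # rebuild meta from .regioninfo files
  echo "hbase hbck -fixAssignments"   # assign regions to region servers
  echo "hbase hbck -fixHdfsOrphans"   # repair missing .regioninfo files
  echo "hbase hbck -repair $table"
}

# print_hbck_plan my_table
```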

 

Problems encountered:

There is a hole in the region chain between  and .  You need to create a new .regioninfo and region dir in hdfs to plug the hole

Found inconsistency in table

In the end, the cause was that the data had been uploaded to HDFS without switching to the hbase user. Changing the file owner and re-running the repair fixed it:

hadoop fs -chown -R hbase:hdfs /apps/hbase/data/data/default

hbase hbck -repair <table_name>

 
