hdfs dfs -put takes 5 hours to load a single 1.5 TB file into an HDFS dir. What can I do?


Hi, I need to load a large file (1.5 TB) into HDFS on a 6-node cluster with a
replication factor of 3. The load takes about 5 hours from the client machine
when using hadoop fs -put. Is there a better way to do this in parallel? I
guess my question is: what is the standard way of loading large files into
HDFS? Should I split the file into smaller chunks from a bash script and load
the splits in parallel, each with its own hadoop fs -put running as a
background child process?
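In case it helps clarify what I mean, here is a rough sketch of the split-and-parallel-put idea (all paths, the chunk size, and the HDFS destination are made-up placeholders, and DRY_RUN=1 just prints the hadoop commands instead of executing them, so the logic can be checked without a cluster):

```shell
#!/bin/sh
# Sketch only: split a large local file into chunks, then upload each
# chunk to HDFS with its own hadoop fs -put in the background.
SRC=${SRC:-/data/bigfile.dat}          # placeholder source file
CHUNK_DIR=${CHUNK_DIR:-/tmp/chunks}    # local staging dir for the pieces
DEST=${DEST:-/user/me/bigfile_parts}   # placeholder HDFS target dir
DRY_RUN=${DRY_RUN:-1}                  # 1 = print commands, 0 = run them

# Helper: either echo the command (dry run) or actually execute it.
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

mkdir -p "$CHUK_DIR" 2>/dev/null || true
mkdir -p "$CHUNK_DIR"

# Chunk size is a guess; with 6 datanodes, a few dozen chunks keeps
# every node busy without flooding the client's uplink.
run split -b 64G "$SRC" "$CHUNK_DIR/part_"

run hadoop fs -mkdir -p "$DEST"

# One background -put per chunk; wait for all of them to finish.
for f in "$CHUNK_DIR"/part_*; do
  [ -e "$f" ] || continue   # skip if split produced nothing (dry run)
  run hadoop fs -put "$f" "$DEST/" &
done
wait
```

The concern with this approach is whether many concurrent -put streams from one client actually go faster, or whether the client NIC stays the bottleneck either way.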