Hi @amit0900, when you import a table, a MapReduce job runs and emits one output file per mapper. If you specify 4 mappers, you will find 4 files in the output folder; if you specify just one mapper, there will be only 1 file.
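A sketch of what that looks like with the Sqoop CLI. The connection string, credentials, table name, and target directory below are all placeholders for illustration, not values from your setup:

```shell
# Hypothetical import: table "orders" with 4 mappers.
# Replace the JDBC URL, credentials, and paths with your own.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/shop \
  --username dbuser -P \
  --table orders \
  --target-dir /user/hadoop/orders \
  --num-mappers 4

# The target directory then holds one part file per mapper
# (part-m-00000 through part-m-00003), plus a _SUCCESS marker:
hdfs dfs -ls /user/hadoop/orders
```

With `--num-mappers 1` you would see a single `part-m-00000` file instead.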
When you put data into an HDFS location, it is split underneath into blocks according to the configured HDFS block size. The HDFS path is just a logical abstraction that makes things easy for users; that is why you see only one file, even though underneath it is stored as many blocks.
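You can see this split for yourself with `hdfs fsck`, which reports the blocks behind what looks like a single file. The path here is a hypothetical example:

```shell
# Ask the NameNode to list the blocks backing one logical file.
# A file larger than the block size (128 MB by default in Hadoop 2.x)
# will show multiple block entries, each possibly on a different node.
hdfs fsck /user/hadoop/orders/part-m-00000 -files -blocks
```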
2. Yes, depending upon the available nodes in the cluster, any machine can process the partition, but it is up to the master JobTracker to schedule tasks on the available slave machines.