hive> insert into student values(1, 'laiyy');
Query ID = root_20191223102633_3f3c7996-6af7-4718-9634-7b0e13adc979
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1577067217027_0001, Tracking URL = http://hadoop03:8088/proxy/application_1577067217027_0001/
Kill Command = /opt/module/hadoop-2.7.2/bin/hadoop job -kill job_1577067217027_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2019-12-23 10:26:48,722 Stage-1 map = 0%, reduce = 0%
2019-12-23 10:26:57,020 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.4 sec
MapReduce Total cumulative CPU time: 2 seconds 400 msec
Ended Job = job_1577067217027_0001
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: hdfs://hadoop02:9000/user/hive/warehouse/student/.hive-staging_hive_2019-12-23_10-26-33_045_4325354072781105943-1/-ext-10000
Loading data to table default.student
Table default.student stats: [numFiles=1, numRows=1, totalSize=8, rawDataSize=7]
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1   Cumulative CPU: 2.4 sec   HDFS Read: 3551 HDFS Write: 79 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 400 msec
OK
Time taken: 26.301 seconds
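The insert above is compiled into a map-only MapReduce job (no reduce operator is needed). The transcript never shows how the student table was created; a minimal DDL sketch that matches the observed data (an int id and a string name) might look like the following. The field delimiter is an assumption, not something the log confirms.

-- Hypothetical DDL, inferred from the insert/select output above;
-- the actual table definition is not shown in the transcript.
create table if not exists student(
    id int,
    name string
)
row format delimited fields terminated by '\t';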
hive> select * from student;
OK
1	laiyy
2	laiyy1
Time taken: 0.039 seconds, Fetched: 2 row(s)
hive> select name from student;
OK
laiyy
laiyy1
Time taken: 0.069 seconds, Fetched: 2 row(s)
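Unlike the insert, these two simple projections (select * and a single-column select) return in well under a second because Hive serves them as fetch tasks straight from HDFS rather than launching a MapReduce job. Whether a given statement will compile to a MapReduce job can be checked up front with explain; this is an illustrative command, not part of the original session:

-- Shows the execution plan without running the query;
-- a plan containing only a Fetch Operator means no MapReduce job is launched.
explain select name from student;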
hive> select count(*) from student;
Query ID = root_20191223103741_2b3573f9-5941-49c7-b4b7-dfd6e402e3d6
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1577067217027_0003, Tracking URL = http://hadoop03:8088/proxy/application_1577067217027_0003/
Kill Command = /opt/module/hadoop-2.7.2/bin/hadoop job -kill job_1577067217027_0003
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-12-23 10:37:53,751 Stage-1 map = 0%, reduce = 0%
2019-12-23 10:37:58,136 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.99 sec
2019-12-23 10:38:02,253 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.52 sec
MapReduce Total cumulative CPU time: 2 seconds 520 msec
Ended Job = job_1577067217027_0003
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 2.52 sec   HDFS Read: 6633 HDFS Write: 2 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 520 msec
OK
2
Time taken: 22.124 seconds, Fetched: 1 row(s)
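The count(*) query does launch a full MapReduce job (1 mapper, 1 reducer), and the log itself lists the three settings that control reducer parallelism. They can be overridden per session before running the query; the values below are arbitrary placeholders for illustration, not recommendations.

-- Average input per reducer, in bytes (here 256 MB, an example value)
set hive.exec.reducers.bytes.per.reducer=268435456;
-- Upper bound on how many reducers Hive may request
set hive.exec.reducers.max=32;
-- Force an exact reducer count (takes precedence over the two settings above)
set mapreduce.job.reduces=2;
select count(*) from student;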