In the past few days, this issue has happened twice. Here is a log fragment from one of the failed runs:
[LOG-PATH]: /home/hadoop/module/dolphinscheduler/worker-server/logs/20240607/13421671136576_15-3879-13096.log, [HOST]: Host{address='10.122.130.109:1234', ip='10.122.130.109', port=1234}
[INFO] 2024-06-07 16:12:38.889 +0800 - Begin to pulling task
[INFO] 2024-06-07 16:12:38.890 +0800 - Begin to initialize task
[INFO] 2024-06-07 16:12:38.890 +0800 - Set task startTime: Fri Jun 07 16:12:38 CST 2024
[INFO] 2024-06-07 16:12:38.890 +0800 - Set task envFile: /home/hadoop/module/dolphinscheduler/worker-server/conf/dolphinscheduler_env.sh
[INFO] 2024-06-07 16:12:38.890 +0800 - Set task appId: 3879_13096
[INFO] 2024-06-07 16:12:38.890 +0800 - End initialize task
[INFO] 2024-06-07 16:12:38.890 +0800 - Set task status to TaskExecutionStatus{code=1, desc='running'}
[INFO] 2024-06-07 16:12:38.891 +0800 - TenantCode:hadoop check success
[INFO] 2024-06-07 16:12:38.891 +0800 - ProcessExecDir:/home/hadoop/module/dolphinscheduler/data/exec/process/hadoop/12835742987584/13421671136576_15/3879/13096 check success
[INFO] 2024-06-07 16:12:38.891 +0800 - get resource file from path:/dolphinscheduler2/hadoop/resources/xxx.sql
[INFO] 2024-06-07 16:12:38.916 +0800 - Resources:{xxx.sql=hadoop} check success
[INFO] 2024-06-07 16:12:38.917 +0800 - Task plugin: SHELL create success
[INFO] 2024-06-07 16:12:38.917 +0800 - shell task params {"localParams":[],"rawScript":"#!/bin/bash\r\n\r\n/home/hadoop/spark-3.1.1-bin-hadoop3.2/bin/beeline -u jdbc:hive2://node2:11240/default -n hive -p hive -f xxx.sql --hivevar ads ${ads} --hivevar dim ${dim} --hivevar partition_version ${partition_version} --hivevar dws ${dws} --hivevar dwd ${dwd}","resourceList":[{"id":40,"resourceName":"ads_sub_pn_sub_region_ym_ur.sql","res":"xxx.sql"}]}
[INFO] 2024-06-07 16:12:38.917 +0800 - Success initialized task plugin instance success
[INFO] 2024-06-07 16:12:38.917 +0800 - Success set taskVarPool: null
[INFO] 2024-06-07 16:12:38.917 +0800 - raw script : #!/bin/bash
/home/hadoop/spark-3.1.1-bin-hadoop3.2/bin/beeline -u jdbc:hive2://node2:11240/default -n hive -p hive -f ads_sub_pn_sub_region_ym_ur.sql --hivevar ads ads_test --hivevar dim dim_tmp --hivevar partition_version default_latest --hivevar dws dws_test --hivevar dwd dwd_tmp
#################################################
Error: org.apache.hive.service.cli.HiveSQLException: Error running query: org.apache.spark.sql.AnalysisException: `ads_partition_test`.`ads_sub_pn_sub_region_ym_ur` requires that the data to be inserted have the same number of columns as the target table: target table has 43 column(s) but the inserted data has 42 column(s), including 1 partition column(s) having constant value(s).
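For context on what the AnalysisException above is checking: with a static partition, Spark SQL counts the constant partition value as one of the inserted columns. Below is a minimal sketch with a hypothetical table (demo_target is made up) that trips the same rule; it assumes only that the tables in ads_test and ads_partition_test differ in column count, as described next.

```bash
# Hypothetical repro of the column-count check (demo_target is made up).
# The SELECT must supply exactly (target columns - constant partition columns)
# values, because the constant partition value counts as an inserted column.
/home/hadoop/spark-3.1.1-bin-hadoop3.2/bin/beeline \
  -u jdbc:hive2://node2:11240/default -n hive -p hive -e "
CREATE TABLE demo_target (a INT, b INT) PARTITIONED BY (dt STRING);
-- OK: 2 selected columns + 1 constant partition value = 3 target columns
INSERT OVERWRITE TABLE demo_target PARTITION (dt='2024-06-07') SELECT 1, 2;
-- Fails like the log above: 1 + 1 = 2 inserted columns vs 3 target columns
INSERT OVERWRITE TABLE demo_target PARTITION (dt='2024-06-07') SELECT 1;
"
```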
Actually, the table structures are not the same across the two databases ads_test and ads_partition_test. And in the log above, the hivevar value is correct (it's ads_test), but during execution it turns into the wrong value (ads_partition_test).
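For comparison, here is what the same command looks like with beeline's documented --hivevar name=value form (a sketch, not a confirmed fix), assuming xxx.sql references the variables as ${ads}, ${dim}, and so on:

```bash
# Sketch of the invocation using beeline's documented variable syntax,
# --hivevar name=value (one token containing '='); the logged raw script
# passes the name and the value as two separate tokens instead. The values
# here are the ones shown in the raw script above.
/home/hadoop/spark-3.1.1-bin-hadoop3.2/bin/beeline \
  -u jdbc:hive2://node2:11240/default -n hive -p hive \
  --hivevar ads=ads_test \
  --hivevar dim=dim_tmp \
  --hivevar partition_version=default_latest \
  --hivevar dws=dws_test \
  --hivevar dwd=dwd_tmp \
  -f xxx.sql
```

Whether the space-separated form is related to the wrong database name is unclear; checking the exact command line the worker renders at execution time might narrow it down.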
Any idea about this issue?