Selected Cloud Server Content

  • Answer: No; a HoodieKeyException is thrown (see the sketch after the stack trace):

        Caused by: org.apache.hudi.exception.HoodieKeyException: recordKey value: "null" for field: "name" cannot be null or empty.
          at org.apache.hudi.keygen.SimpleKeyGenerator.getKey(SimpleKeyGenerator.java:58)
          at org.apache.hudi.HoodieSparkSqlWriter$$anonfun$1.apply(HoodieSparkSqlWriter.scala:104)
          at org.apache.hudi.HoodieSparkSqlWriter$$anonfun$1.apply(HoodieSparkSqlWriter.scala:100)
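    A minimal Spark/Scala sketch of the point above (the table name demo_table, base path, precombine field ts, and sample data are assumptions for illustration): any row whose record key field is null triggers the HoodieKeyException, so such rows must be filtered out or given a non-null key before the write.

        import org.apache.spark.sql.{SaveMode, SparkSession}

        val spark = SparkSession.builder().appName("hudi-null-key-demo").getOrCreate()
        import spark.implicits._

        // Illustrative data: the second row carries a null in the record key field "name".
        val df = Seq((Some("a1"), 1, 1000L), (None, 2, 1001L)).toDF("name", "value", "ts")

        df.filter($"name".isNotNull)                                    // drop rows with a null key
          .write.format("hudi")
          .option("hoodie.table.name", "demo_table")                    // assumed table name
          .option("hoodie.datasource.write.recordkey.field", "name")    // key field from the error above
          .option("hoodie.datasource.write.precombine.field", "ts")     // assumed precombine field
          .mode(SaveMode.Append)
          .save("/tmp/hudi/demo_table")                                 // assumed base path

    Replacing nulls with a surrogate key value would also avoid the exception, but silently dropping or rewriting keys changes upsert semantics, so the filter above is only a sketch of the constraint, not a recommendation.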
  • Question: After dropping a Hudi MOR table with Spark SQL and then recreating it and writing data, the ro and rt tables are not synchronized in real time, and the following error is reported (a reconstruction of the scenario follows the stack trace):

        WARN HiveSyncTool: Got runtime exception when hive syncing, but continuing as ignoreExceptions config is set
        java.lang.IllegalArgumentException: Failed to get schema for table hudi_table2_ro does not exist
          at org.apache.hudi.hive.HoodieHiveClient.getTableSchema(HoodieHiveClient.java:183)
          at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:286)
          at org.apache.hudi.hive.HiveSyncTool.doSync(HiveSyncTool.java:213)
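    A hedged reconstruction of the failing sequence, driven from Scala via spark.sql: the table name hudi_table2 is taken from the hudi_table2_ro view in the error, while the columns, table properties, and values are illustrative assumptions.

        // spark is an active SparkSession with Hudi's Spark SQL extensions enabled.
        spark.sql("DROP TABLE IF EXISTS hudi_table2")

        spark.sql(
          """CREATE TABLE hudi_table2 (id INT, name STRING, ts BIGINT)
            |USING hudi
            |TBLPROPERTIES (
            |  type = 'mor',
            |  primaryKey = 'id',
            |  preCombineField = 'ts'
            |)""".stripMargin)

        // After the recreate, this write is no longer synchronized to the
        // hudi_table2_ro / hudi_table2_rt views, and HiveSyncTool logs the warning above.
        spark.sql("INSERT INTO hudi_table2 VALUES (1, 'a', 1000)")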
  • Question: Hive data synchronization fails with the following error:

        Caused by: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. The following columns have types incompatible with the existing columns in their respective positions : __col1,__col2
  • Answer:
    Cause: The Hudi table contains Decimal-type data. The initial BULK_INSERT load writes through Spark's internal parquet writer, and Spark handles Decimal values of different precisions differently. A later UPSERT writes through Hudi's Avro-compatible parquet writer, which is incompatible with the files Spark wrote, so Hive sync reports the conflicting column types.
    Solution: Set "hoodie.datasource.write.row.writer.enable = false" when running the BULK_INSERT (see the sketch below), so that Hudi uses the Avro-compatible parquet writer from the start.
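    A minimal Spark/Scala sketch of the workaround (the table name decimal_table, record key id, precombine field ts, base path, and sample data are assumptions): disabling the row writer makes the BULK_INSERT go through the same Avro-compatible parquet writer that UPSERT uses, so the Decimal encoding stays consistent across writes.

        import org.apache.spark.sql.{SaveMode, SparkSession}

        val spark = SparkSession.builder().appName("hudi-bulk-insert-decimal").getOrCreate()
        import spark.implicits._

        // Illustrative data with a Decimal column, matching the scenario in the answer.
        val df = Seq((1, BigDecimal("12.34"), 1000L), (2, BigDecimal("56.78"), 1001L))
          .toDF("id", "amount", "ts")

        df.write.format("hudi")
          .option("hoodie.table.name", "decimal_table")                  // assumed table name
          .option("hoodie.datasource.write.operation", "bulk_insert")
          .option("hoodie.datasource.write.recordkey.field", "id")       // assumed record key
          .option("hoodie.datasource.write.precombine.field", "ts")      // assumed precombine field
          .option("hoodie.datasource.write.row.writer.enable", "false")  // the fix from the answer
          .mode(SaveMode.Overwrite)
          .save("/tmp/hudi/decimal_table")                               // assumed base path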