Huawei Cloud User Guide

  • Type mapping

    Table 1 Collection type mapping

    MySQL collection type | MySQL INPUT | GaussDB(DWS) OUTPUT
    ENUM                  | ENUM        | VARCHAR
    SET                   | SET         | VARCHAR

    An ENUM type is converted to VARCHAR, with the precision set to 4 times the length of the longest enumeration value; a CHECK() constraint is added to ensure that only valid enumeration values are accepted.
    A SET type is converted to VARCHAR, with the precision set to 4 times the sum of the lengths of all enumeration values plus the number of separators.

    Input example (ENUM):

    CREATE TABLE IF NOT EXISTS `runoob_dataType_test`(
        id int(2) PRIMARY KEY,
        `dataType_17` ENUM('dws-1', 'dws-2', 'dws-3')
    );

    Output example:

    CREATE TABLE IF NOT EXISTS "public"."runoob_datatype_test"
    (
        "id" INTEGER(2) PRIMARY KEY,
        "datatype_17" VARCHAR(20) CHECK (dataType_17 IN('dws-1','dws-2','dws-3','', null))
    )
    WITH (
        ORIENTATION = ROW,
        COMPRESSION = NO
    )
    NOCOMPRESS
    DISTRIBUTE BY HASH ("id");

    Input example (SET):

    CREATE TABLE IF NOT EXISTS `runoob_tbl_test`(
        `dataType_18` SET('dws-1', 'dws-2', 'dws-3')
    );

    Output example:

    CREATE TABLE IF NOT EXISTS "public"."runoob_tbl_test"
    (
        "datatype_18" VARCHAR(68)
    )
    WITH (
        ORIENTATION = ROW,
        COMPRESSION = NO
    )
    NOCOMPRESS
    DISTRIBUTE BY HASH ("datatype_18");
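    The two precision rules above can be sketched as a small calculation. This is only an illustration of the documented rules; the helper names are hypothetical and not part of DSC.

    ```python
    # Sketch of the documented ENUM/SET -> VARCHAR precision rules.
    # Function names are illustrative; DSC does not expose such an API.

    def enum_varchar_precision(values):
        """ENUM -> VARCHAR: 4 x the length of the longest enumeration value."""
        return 4 * max(len(v) for v in values)

    def set_varchar_precision(values):
        """SET -> VARCHAR: 4 x (sum of all value lengths + number of separators)."""
        return 4 * (sum(len(v) for v in values) + len(values) - 1)

    values = ['dws-1', 'dws-2', 'dws-3']
    print(enum_varchar_precision(values))  # 20 -> VARCHAR(20), as in the ENUM example
    print(set_varchar_precision(values))   # 68 -> VARCHAR(68), as in the SET example
    ```

    For the examples above: the longest ENUM value has 5 characters, giving 4 × 5 = 20; the SET values total 15 characters plus 2 separators, giving 4 × 17 = 68.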
  • Type mapping

    Table 1 Large object type mapping

    MySQL large object type | MySQL INPUT | GaussDB(DWS) OUTPUT
    TINYBLOB                | TINYBLOB    | BLOB
    BLOB                    | BLOB        | BLOB
    MEDIUMBLOB              | MEDIUMBLOB  | BLOB
    LONGBLOB                | LONGBLOB    | BLOB

    Input example ([TINY|MEDIUM|LONG]BLOB):

    CREATE TABLE IF NOT EXISTS `runoob_dataType_test`(
        `dataType_1` BIGINT,
        `dataType_2` TINYBLOB,
        `dataType_3` BLOB,
        `dataType_4` MEDIUMBLOB,
        `dataType_5` LONGBLOB
    );

    Output example:

    CREATE TABLE IF NOT EXISTS "public"."runoob_datatype_test"
    (
        "datatype_1" BIGINT,
        "datatype_2" BLOB,
        "datatype_3" BLOB,
        "datatype_4" BLOB,
        "datatype_5" BLOB
    )
    WITH (
        ORIENTATION = ROW,
        COMPRESSION = NO
    )
    NOCOMPRESS
    DISTRIBUTE BY HASH ("datatype_1");
  • Type mapping

    Table 1 String type mapping

    MySQL string type | MySQL INPUT                    | GaussDB(DWS) OUTPUT
    CHAR              | CHAR[(0)], CHAR[(n)]           | CHAR[(1)], CHAR[(4n)]
    CHARACTER         | CHARACTER[(0)], CHARACTER[(n)] | CHAR[(1)], CHAR[(4n)]
    NCHAR             | NCHAR[(0)], NCHAR[(n)]         | CHAR[(1)], CHAR[(4n)]
    LONGTEXT          | LONGTEXT                       | TEXT
    MEDIUMTEXT        | MEDIUMTEXT                     | TEXT
    TEXT              | TEXT                           | TEXT
    TINYTEXT          | TINYTEXT                       | TEXT
    VARCHAR           | VARCHAR[(0)], VARCHAR[(n)]     | VARCHAR[(1)], VARCHAR[(4n)]
    NVARCHAR          | NVARCHAR[(0)], NVARCHAR[(n)]   | VARCHAR[(1)], VARCHAR[(4n)]
    CHARACTER VARYING | CHARACTER VARYING              | VARCHAR

    When converting CHAR/CHARACTER/NCHAR, a precision less than or equal to 0 is converted to CHAR(1); a precision greater than 0 is converted to a CHAR with 4 times that precision.
    When converting VARCHAR/NVARCHAR, a precision less than or equal to 0 is converted to VARCHAR(1); a precision greater than 0 is converted to a VARCHAR with 4 times that precision.

    Input example (CHAR):

    In MySQL, the length of a CHAR column is fixed to the length declared when the table is created; the length can be any value from 0 to 255. When CHAR values are stored, they are padded with spaces to the specified length.

    CREATE TABLE IF NOT EXISTS `runoob_dataType_test`(
        `dataType_1` CHAR NOT NULL,
        `dataType_2` CHAR(0) NOT NULL,
        `dataType_3` CHAR(255) NOT NULL
    );

    Output example:

    CREATE TABLE IF NOT EXISTS "public"."runoob_datatype_test"
    (
        "datatype_1" CHAR NOT NULL,
        "datatype_2" CHAR(1) NOT NULL,
        "datatype_3" CHAR(1020) NOT NULL
    )
    WITH (
        ORIENTATION = ROW,
        COMPRESSION = NO
    )
    NOCOMPRESS
    DISTRIBUTE BY HASH ("datatype_1");

    Input example ([LONG|MEDIUM|TINY]TEXT):

    CREATE TABLE IF NOT EXISTS `runoob_dataType_test`(
        `dataType_1` LONGTEXT,
        `dataType_2` MEDIUMTEXT,
        `dataType_3` TEXT,
        `dataType_4` TINYTEXT
    );

    Output example:

    CREATE TABLE IF NOT EXISTS "public"."runoob_datatype_test"
    (
        "datatype_1" TEXT,
        "datatype_2" TEXT,
        "datatype_3" TEXT,
        "datatype_4" TEXT
    )
    WITH (
        ORIENTATION = ROW,
        COMPRESSION = NO
    )
    NOCOMPRESS
    DISTRIBUTE BY HASH ("datatype_1");

    Input example (VARCHAR):

    Values in MySQL VARCHAR columns are variable-length strings. The length can be specified as a value from 0 to 65,535.

    CREATE TABLE IF NOT EXISTS `runoob_dataType_test`(
        `dataType_1` VARCHAR(0),
        `dataType_2` VARCHAR(1845)
    );

    Output example:

    CREATE TABLE IF NOT EXISTS "public"."runoob_datatype_test"
    (
        "datatype_1" VARCHAR(1),
        "datatype_2" VARCHAR(7380)
    )
    WITH (
        ORIENTATION = ROW,
        COMPRESSION = NO
    )
    NOCOMPRESS
    DISTRIBUTE BY HASH ("datatype_1");
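    The precision rule above is the same for CHAR-family and VARCHAR-family types and can be sketched as one helper; the function name is illustrative, not a DSC API.

    ```python
    # Sketch of the documented CHAR/VARCHAR precision rule:
    # a declared precision <= 0 becomes 1; a positive precision is multiplied by 4.

    def converted_precision(n):
        return 1 if n <= 0 else 4 * n

    print(converted_precision(0))     # 1    -> CHAR(0)       becomes CHAR(1)
    print(converted_precision(255))   # 1020 -> CHAR(255)     becomes CHAR(1020)
    print(converted_precision(1845))  # 7380 -> VARCHAR(1845) becomes VARCHAR(7380)
    ```

    The factor of 4 leaves room for multi-byte characters when lengths are reinterpreted on the GaussDB(DWS) side.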
  • Type mapping

    Table 1 Date and time type mapping

    MySQL date/time type | MySQL INPUT      | GaussDB(DWS) OUTPUT
    DATETIME             | DATETIME[(fsp)]  | TIMESTAMP[(fsp)] WITHOUT TIME ZONE
    TIME                 | TIME[(fsp)]      | TIME[(fsp)] WITHOUT TIME ZONE
    TIMESTAMP            | TIMESTAMP[(fsp)] | TIMESTAMP[(fsp)] WITH TIME ZONE
    YEAR                 | YEAR[(4)]        | SMALLINT(4)

    If an fsp value is given, it must be in the range 0 to 6. A value of 0 means there is no fractional part. If omitted, the default precision is 0.

    Input example (DATETIME):

    CREATE TABLE IF NOT EXISTS `runoob_dataType_test`(
        `dataType_1` DATETIME,
        `dataType_2` DATETIME(0),
        `dataType_3` DATETIME(6),
        `dataType_4` DATETIME DEFAULT NULL,
        `dataType_5` DATETIME DEFAULT '2018-10-12 15:27:33.999999'
    );

    Output example:

    CREATE TABLE IF NOT EXISTS "public"."runoob_datatype_test"
    (
        "datatype_1" TIMESTAMP WITHOUT TIME ZONE,
        "datatype_2" TIMESTAMP(0) WITHOUT TIME ZONE,
        "datatype_3" TIMESTAMP(6) WITHOUT TIME ZONE,
        "datatype_4" TIMESTAMP WITHOUT TIME ZONE DEFAULT NULL,
        "datatype_5" TIMESTAMP WITHOUT TIME ZONE DEFAULT '2018-10-12 15:27:33.999999'
    )
    WITH (
        ORIENTATION = ROW,
        COMPRESSION = NO
    )
    NOCOMPRESS
    DISTRIBUTE BY HASH ("datatype_1");

    Input example (TIME):

    CREATE TABLE IF NOT EXISTS `runoob_dataType_test`(
        `dataType_1` TIME DEFAULT '20:58:10',
        `dataType_2` TIME(3) DEFAULT '20:58:10',
        `dataType_3` TIME(6) DEFAULT '20:58:10',
        `dataType_4` TIME(6) DEFAULT '2018-10-11 20:00:00',
        `dataType_5` TIME(6) DEFAULT '20:58:10.01234',
        `dataType_6` TIME(6) DEFAULT '2018-10-11 20:00:00.01234',
        `dataType_7` TIME DEFAULT NULL,
        `dataType_8` TIME(6) DEFAULT NULL,
        PRIMARY KEY (dataType_1)
    );

    Output example:

    CREATE TABLE IF NOT EXISTS "public"."runoob_datatype_test"
    (
        "datatype_1" TIME WITHOUT TIME ZONE DEFAULT '20:58:10',
        "datatype_2" TIME(3) WITHOUT TIME ZONE DEFAULT '20:58:10',
        "datatype_3" TIME(6) WITHOUT TIME ZONE DEFAULT '20:58:10',
        "datatype_4" TIME(6) WITHOUT TIME ZONE DEFAULT '2018-10-11 20:00:00',
        "datatype_5" TIME(6) WITHOUT TIME ZONE DEFAULT '20:58:10.01234',
        "datatype_6" TIME(6) WITHOUT TIME ZONE DEFAULT '2018-10-11 20:00:00.01234',
        "datatype_7" TIME WITHOUT TIME ZONE DEFAULT NULL,
        "datatype_8" TIME(6) WITHOUT TIME ZONE DEFAULT NULL,
        PRIMARY KEY ("datatype_1")
    )
    WITH (
        ORIENTATION = ROW,
        COMPRESSION = NO
    )
    NOCOMPRESS
    DISTRIBUTE BY HASH ("datatype_1");

    Input example (TIMESTAMP):

    CREATE TABLE IF NOT EXISTS `runoob_dataType_test`(
        `dataType_1` TIMESTAMP,
        `dateType_4` TIMESTAMP DEFAULT '2018-10-12 15:27:33',
        `dateType_5` TIMESTAMP DEFAULT '2018-10-12 15:27:33.999999',
        `dateType_6` TIMESTAMP DEFAULT '2018-10-12 15:27:33',
        `dateType_7` TIMESTAMP DEFAULT '2018-10-12 15:27:33',
        `dataType_8` TIMESTAMP(0) DEFAULT '2018-10-12 15:27:33',
        `dateType_9` TIMESTAMP(6) DEFAULT '2018-10-12 15:27:33'
    );

    Output example:

    CREATE TABLE IF NOT EXISTS "public"."runoob_datatype_test"
    (
        "datatype_1" TIMESTAMP WITH TIME ZONE,
        "datetype_4" TIMESTAMP WITH TIME ZONE DEFAULT '2018-10-12 15:27:33',
        "datetype_5" TIMESTAMP WITH TIME ZONE DEFAULT '2018-10-12 15:27:33.999999',
        "datetype_6" TIMESTAMP WITH TIME ZONE DEFAULT '2018-10-12 15:27:33',
        "datetype_7" TIMESTAMP WITH TIME ZONE DEFAULT '2018-10-12 15:27:33',
        "datatype_8" TIMESTAMP(0) WITH TIME ZONE DEFAULT '2018-10-12 15:27:33',
        "datetype_9" TIMESTAMP(6) WITH TIME ZONE DEFAULT '2018-10-12 15:27:33'
    )
    WITH (
        ORIENTATION = ROW,
        COMPRESSION = NO
    )
    NOCOMPRESS
    DISTRIBUTE BY HASH ("datatype_1");

    Input example (YEAR):

    CREATE TABLE IF NOT EXISTS `runoob_dataType_test`(
        `dataType_1` YEAR,
        `dataType_2` YEAR(4),
        `dataType_3` YEAR DEFAULT '2018',
        `dataType_4` TIME DEFAULT NULL
    );

    Output example:

    CREATE TABLE IF NOT EXISTS "public"."runoob_datatype_test"
    (
        "datatype_1" SMALLINT,
        "datatype_2" SMALLINT,
        "datatype_3" VARCHAR(4) DEFAULT '2018',
        "datatype_4" TIME WITHOUT TIME ZONE DEFAULT NULL
    )
    WITH (
        ORIENTATION = ROW,
        COMPRESSION = NO
    )
    NOCOMPRESS
    DISTRIBUTE BY HASH ("datatype_1");
  • Type mapping

    Table 1 Numeric type mapping

    MySQL numeric type | MySQL INPUT | GaussDB(DWS) OUTPUT
    DEC                | DEC; DEC[(M[,D])] [UNSIGNED] [ZEROFILL] | DECIMAL; DECIMAL[(M[,D])]
    DECIMAL            | DECIMAL[(M[,D])] [UNSIGNED] [ZEROFILL] | DECIMAL[(M[,D])]
    DOUBLE PRECISION   | DOUBLE PRECISION; DOUBLE PRECISION[(M[,D])] [UNSIGNED] [ZEROFILL] | DOUBLE PRECISION
    DOUBLE             | DOUBLE[(M[,D])] [UNSIGNED] [ZEROFILL] | DOUBLE PRECISION
    FIXED              | FIXED; FIXED[(M[,D])] [UNSIGNED] [ZEROFILL] | DECIMAL; DECIMAL[(M[,D])]
    FLOAT              | FLOAT; FLOAT[(M[,D])] [UNSIGNED] [ZEROFILL]; FLOAT(p) [UNSIGNED] [ZEROFILL] | REAL
    INT                | INT; INT(p) [UNSIGNED] [ZEROFILL] | INTEGER; INTEGER(p)
    INTEGER            | INTEGER; INTEGER(p) [UNSIGNED] [ZEROFILL] | INTEGER; INTEGER(p)
    MEDIUMINT          | MEDIUMINT; MEDIUMINT(p) [UNSIGNED] [ZEROFILL] | INTEGER; INTEGER(p)
    NUMERIC            | NUMERIC; NUMERIC[(M[,D])] [UNSIGNED] [ZEROFILL] | DECIMAL; DECIMAL[(M[,D])]
    REAL               | REAL[(M[,D])] | REAL/DOUBLE PRECISION
    SMALLINT           | SMALLINT; SMALLINT(p) [UNSIGNED] [ZEROFILL] | SMALLINT
    TINYINT            | TINYINT; TINYINT(n); TINYINT(n) ZEROFILL; TINYINT(n) UNSIGNED ZEROFILL | SMALLINT; SMALLINT; SMALLINT; TINYINT

    When converting TINYINT, if the UNSIGNED modifier is present, the type is converted to TINYINT; otherwise it is converted to SMALLINT.
    When converting REAL, the default target type is DOUBLE PRECISION; if the table.database.realAsFlag flag in the configuration file (features-mysql.properties) is set to true (default: false), the type is converted to REAL.

    Input example (TINYINT):

    CREATE TABLE IF NOT EXISTS `runoob_dataType_test`(
        `dataType_1` TINYINT,
        `dataType_2` TINYINT(0),
        `dataType_3` TINYINT(255),
        `dataType_4` TINYINT(255) UNSIGNED ZEROFILL,
        `dataType_5` TINYINT(255) ZEROFILL
    );

    Output example:

    CREATE TABLE IF NOT EXISTS "public"."runoob_datatype_test"
    (
        "datatype_1" SMALLINT,
        "datatype_2" SMALLINT,
        "datatype_3" SMALLINT,
        "datatype_4" TINYINT,
        "datatype_5" SMALLINT
    )
    WITH (
        ORIENTATION = ROW,
        COMPRESSION = NO
    )
    NOCOMPRESS
    DISTRIBUTE BY HASH ("datatype_1");
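    The two special-case rules (TINYINT and REAL) can be sketched as follows; the helper names are illustrative, not a DSC API.

    ```python
    # Sketch of the documented TINYINT and REAL conversion rules.

    def convert_tinyint(unsigned):
        """TINYINT -> TINYINT when UNSIGNED is present, otherwise SMALLINT."""
        return 'TINYINT' if unsigned else 'SMALLINT'

    def convert_real(real_as_flag=False):
        """REAL -> REAL only when table.database.realAsFlag=true (default false)."""
        return 'REAL' if real_as_flag else 'DOUBLE PRECISION'

    print(convert_tinyint(True))   # TINYINT  (e.g. TINYINT(255) UNSIGNED ZEROFILL)
    print(convert_tinyint(False))  # SMALLINT (e.g. TINYINT(255) ZEROFILL)
    print(convert_real())          # DOUBLE PRECISION
    ```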
  • Database operations are executed first during type conversion

    Input:

    REPLACE VIEW SC.VIEW_1
    ( col_1 )
    LOCKING TABLE sc.tab FOR ACCESS
    AS
    SEL
        (COALESCE(TRIM(TGT.col_1),'')) || '_' || (COALESCE(TRIM(TGT.col_1),'')) (CHAR(22)) AS col_1
    FROM sc.tab TGT ;

    Output:

    CREATE OR REPLACE VIEW SC.VIEW_1 (col_1)
    /*LOCKING TABLE sc.tab FOR ACCESS */
    AS (
    SELECT
        CAST( ( COALESCE( TRIM( TGT.col_1 ) ,'' ) ) || '_' || ( COALESCE( TRIM( TGT.col_1 ) ,'' ) ) AS CHAR( 22 ) ) AS col_1
    FROM sc.tab TGT
    ) ;
  • Dates entered in YYYYMMDD format

    Input:

    SELECT 1 FROM tb_dt_fmtyyyymmdd
    WHERE JobName ='${JOB_NAME}'
    AND TXDATE = ${TX_DATE} - 19000000;

    Output:

    SELECT 1 FROM tb_dt_fmtyyyymmdd
    WHERE JobName ='${JOB_NAME}'
    AND TXDATE = TO_DATE(${TX_DATE}, 'YYYYMMDD');
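    The rewritten predicate treats an integer such as 20181012 as a date encoded in YYYYMMDD form, which is what TO_DATE(..., 'YYYYMMDD') parses. A quick illustration of that parsing, sketched in Python (the helper name is hypothetical):

    ```python
    # Illustration of parsing a YYYYMMDD-encoded integer, as TO_DATE(n, 'YYYYMMDD') does.
    from datetime import datetime

    def parse_yyyymmdd(n):
        return datetime.strptime(str(n), '%Y%m%d').date()

    d = parse_yyyymmdd(20181012)
    print(d)  # 2018-10-12
    ```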
  • Dates entered in YYYYDDD format

    Input:

    REPLACE VIEW SC.VIEW_1
    ( col_1 )
    LOCKING TABLE sc.tab FOR ACCESS
    AS
    SEL
        --tgt.col_1 is date type
        CAST( CAST(TGT.col_1 AS DATE FORMAT 'YYYYDDD') AS CHAR(7) ) AS col_1
    FROM sc.tab TGT ;

    Output:

    CREATE OR REPLACE VIEW SC.VIEW_1 (col_1)
    /*LOCKING TABLE sc.tab FOR ACCESS */
    AS (
    SELECT
        /* tgt.col_1 is date type */
        CAST( TO_DATE(TGT.col_1, 'YYYYDDD') AS CHAR( 7 ) ) AS col_1
    FROM sc.tab TGT
    ) ;
  • Column names beginning with #

    Input:

    REPLACE VIEW SC.VIEW_1
    (
        ,col_1
        ,#_col_2
        ,#_col_3
    )
    LOCKING TABLE sc.tab FOR ACCESS
    AS
    SEL
        Tgt.col1
        ,Tgt.#_col_2
        ,Tgt.#_col_3
    FROM sc.tab TGT ;

    Output:

    CREATE OR REPLACE VIEW SC.VIEW_1
    (
        ,col_1
        ,"#_COL_2"
        ,"#_COL_3"
    )
    /*LOCKING TABLE sc.tab FOR ACCESS */
    AS (
    SELECT
        Tgt.col1
        ,Tgt."#_COL_2"
        ,Tgt."#_COL_3"
    FROM sc.tab TGT
    ) ;
  • MERGE

    MERGE is an ANSI-standard SQL operator that selects rows from one or more sources and updates or inserts them into a table or view; conditions can be specified to decide whether the target table or view is updated or inserted into.

    Input (MERGE):

    MERGE INTO tab1 A
    using ( SELECT c1, c2, ... FROM tab2 WHERE ...) AS B
    ON A.c1 = B.c1
    WHEN MATCHED THEN
        UPDATE SET c2 = c2 , c3 = c3
    WHEN NOT MATCHED THEN
        INSERT VALUES (B.c1, B.c2, B.c3);

    Output:

    WITH B AS (
        SELECT
            c1
            ,c2
            ,...
        FROM tab2
        WHERE ...
    )
    ,UPD_REC AS (
        UPDATE tab1 A
        SET c2 = c2
            ,c3 = c3
        FROM B
        WHERE A.c1 = B.c1
        returning A. *
    )
    INSERT INTO tab1
    SELECT
        B.c1
        ,B.c2
        ,B.c3
    FROM B
    WHERE NOT EXISTS (
        SELECT 1
        FROM UPD_REC A
        WHERE A.c1 = B.c1
    ) ;

    Parent topic: Data manipulation statements (DML)
  • Specifying a subset of columns

    DSC supports specifying some (rather than all) columns during INSERT. This occurs when an input INSERT statement does not include all the columns mentioned in the corresponding CREATE statement. During migration, the specified default values are added for the omitted columns.

    This feature is supported when session_mode is set to Teradata.

    The SELECT statement in INSERT-INTO-SELECT must not contain any of the following:

    - SET operators
    - MERGE, TOP with PERCENT, TOP PERCENT with TIES

    Input (TABLE, with an INSERT statement that does not specify all columns of the CREATE):

    CREATE VOLATILE TABLE Convert_Data3 ,NO LOG
    (
        zoneno CHAR( 6 )
        ,brno CHAR( 6 )
        ,currtype CHAR( 4 )
        ,Commuteno CHAR( 4 )
        ,Subcode CHAR( 12 )
        ,accdate DATE format 'YYYY-MM-DD' NOT NULL
        ,acctime INTEGER
        ,quoteno CHAR( 1 )
        ,quotedate DATE FORMAT 'YYYY-MM-DD'
        ,lddrbaL DECIMAL( 18 ,0 ) DEFAULT 0
        ,ldcrbal DECIMAL( 18 ,0 )
        ,tddramt DECIMAL( 18 ,0 ) DEFAULT 25
        ,tdcramt DECIMAL( 18 ,0 )
        ,tddrbal DECIMAL( 18 ,2 )
        ,tdcrbal DECIMAL( 18 ,2 )
    )
    PRIMARY INDEX ( BRNO ,CURRTYPE ,SUBCODE )
    ON COMMIT PRESERVE ROWS ;

    INSERT INTO Convert_Data3 (
        zoneno
        ,brno
        ,currtype
        ,commuteno
        ,subcode
        ,accdate
        ,acctime
        ,quoteno
        ,quotedate
        ,tddrbal
        ,tdcrbal
    )
    SELECT
        A.zoneno
        ,A.brno
        ,'014' currtype
        ,'2' commuteno
        ,A.subcode
        ,A.Accdate
        ,A.Acctime
        ,'2' quoteno
        ,B.workdate quoteDate
        ,CAST( ( CAST( SUM ( CAST( A.tddrbal AS FLOAT ) * CAST( B.USCVRATE AS FLOAT ) ) AS FLOAT ) ) AS DEC ( 18 ,2 ) ) AS tddrbal
        ,CAST( ( CAST( SUM ( CAST( A.tdcrbal AS FLOAT ) * CAST( B.USCVRATE AS FLOAT ) ) AS FLOAT ) ) AS DEC ( 18 ,2 ) ) AS tdcrbal
    FROM table2 A ;

    Output:

    CREATE LOCAL TEMPORARY TABLE Convert_Data3
    (
        zoneno CHAR( 6 )
        ,brno CHAR( 6 )
        ,currtype CHAR( 4 )
        ,Commuteno CHAR( 4 )
        ,Subcode CHAR( 12 )
        ,accdate DATE NOT NULL
        ,acctime INTEGER
        ,quoteno CHAR( 1 )
        ,quotedate DATE
        ,lddrbaL DECIMAL( 18 ,0 ) DEFAULT 0
        ,ldcrbal DECIMAL( 18 ,0 )
        ,tddramt DECIMAL( 18 ,0 ) DEFAULT 25
        ,tdcramt DECIMAL( 18 ,0 )
        ,tddrbal DECIMAL( 18 ,2 )
        ,tdcrbal DECIMAL( 18 ,2 )
    )
    ON COMMIT PRESERVE ROWS
    DISTRIBUTE BY HASH ( BRNO ,CURRTYPE ,SUBCODE ) ;

    INSERT INTO Convert_Data3 (
        lddrbaL
        ,ldcrbal
        ,tddramt
        ,tdcramt
        ,zoneno
        ,brno
        ,currtype
        ,commuteno
        ,subcode
        ,accdate
        ,acctime
        ,quoteno
        ,quotedate
        ,tddrbal
        ,tdcrbal
    )
    SELECT
        0
        ,NULL
        ,25
        ,NULL
        ,A.zoneno
        ,A.brno
        ,'014' currtype
        ,'2' commuteno
        ,A.subcode
        ,A.Accdate
        ,A.Acctime
        ,'2' quoteno
        ,B.workdate quoteDate
        ,CAST( ( CAST( SUM ( CAST( A.tddrbal AS FLOAT ) * CAST( B.USCVRATE AS FLOAT ) ) AS FLOAT ) ) AS DECIMAL( 18 ,2 ) ) AS tddrbal
        ,CAST( ( CAST( SUM ( CAST( A.tdcrbal AS FLOAT ) * CAST( B.USCVRATE AS FLOAT ) ) AS FLOAT ) ) AS DECIMAL( 18 ,2 ) ) AS tdcrbal
    FROM table2 A
    MINUS
    SELECT
        lddrbaL
        ,ldcrbal
        ,tddramt
        ,tdcramt
        ,zoneno
        ,brno
        ,currtype
        ,commuteno
        ,subcode
        ,accdate
        ,acctime
        ,quoteno
        ,quotedate
        ,tddrbal
        ,tdcrbal
    FROM CONVERT_DATA3 ;

    Parent topic: Table migration
  • COLUMN STORE

    The storage mode of a table can be changed from row store to column store using WITH(ORIENTATION=COLUMN) in the CREATE TABLE statement. This feature can be enabled or disabled with the rowstoreToColumnstore parameter.

    Input (CREATE TABLE, with the storage mode changed to COLUMN STORE):

    CREATE MULTISET VOLATILE TABLE tab1
    (
        c1 VARCHAR(30) CHARACTER SET UNICODE
        , c2 DATE
        , ...
    )
    PRIMARY INDEX (c1, c2)
    ON COMMIT PRESERVE ROWS;

    Output:

    CREATE LOCAL TEMPORARY TABLE tab1
    (
        c1 VARCHAR(30)
        , c2 DATE
        , ...
    )
    WITH (ORIENTATION = COLUMN)
    ON COMMIT PRESERVE ROWS
    DISTRIBUTE BY HASH (c1, c2);

    Parent topic: Table migration
  • Creating a partitioned table with an index

    Applies when the configuration parameter tdMigrateRANGE_N is set to true.

    Input:

    CREATE SET TABLE SC.TAB ,
        NO FALLBACK,
        NO BEFORE JOURNAL,
        NO AFTER JOURNAL,
        CHECKSUM=DEFAULT,
        DEFAULT MERGEBLOCKRATIO
    (
        ACCOUNT_NUM VARCHAR(255) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL
        ,ACCOUNT_MODIFIER_NUM CHAR(18) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL
        ,END_DT DATE FORMAT 'YYYY-MM-DD'
        ,UPD_TXF_BATCHTD INTEGER COMPRESS
    )
    PRIMARY INDEX XPKT0300_AGREEMENT (ACCOUNT_NUM,ACCOUNT_MODIFIER_NUM)
    PARTITION BY RANGE_N(END_DT BETWEEN '2001-01-01' AND '2020-12-31' EACH INTERVAL '1' DAY, NO RANGE ,UNKNOWN)
    INDEX (UPD_TXF_BATCHTD) ;

    Output:

    CREATE TABLE SC.TAB
    (
        ACCOUNT_NUM VARCHAR( 255 ) /* CHARACTER SET LATIN*/ /* NOT CASESPECIFIC*/ NOT NULL
        ,ACCOUNT_MODIFIER_NUM CHAR( 18 ) /* CHARACTER SET LATIN*/ /* NOT CASESPECIFIC*/ NOT NULL
        ,END_DT DATE
        ,UPD_TXF_BATCHTD INTEGER /* COMPRESS */
    )
    DISTRIBUTE BY HASH ( ACCOUNT_NUM ,ACCOUNT_MODIFIER_NUM )
    PARTITION BY RANGE (END_DT)
    (
        PARTITION TAB_1 start ('2001-01-01') END ('2020-12-31') EVERY ( INTERVAL '1' DAY )
    ) ;
    CREATE INDEX ON SC.TAB (UPD_TXF_BATCHTD) LOCAL;
  • TITLE and CREATE VIEW

    Input:

    REPLACE VIEW ${STG_VIEW}.B971_AUMSUMMARY${TABLE_SUFFIX_INC}
    AS
    LOCK TABLE ${STG_DATA}.B971_AUMSUMMARY${TABLE_SUFFIX_INC} FOR ACCESS
    SELECT AUM_DATE (TITLE ' ')
        ,CLNTCODE (TITLE ' ')
        ,ACCTYPE (TITLE ' ')
        ,CCY (TITLE ' ')
        ,BAL_AMT (TITLE ' ')
        ,MON_BAL_AMT (TITLE ' ')
        ,HK_CLNTCODE (TITLE ' ')
        ,MNT_DATE (TITLE ' ')
    FROM ${STG_DATA}.B971_AUMSUMMARY${TABLE_SUFFIX_INC};

    Output:

    CREATE OR REPLACE VIEW ${STG_VIEW}.B971_AUMSUMMARY${TABLE_SUFFIX_INC}
    AS
    /*LOCK TABLE ${STG_DATA}.B971_AUMSUMMARY${TABLE_SUFFIX_INC} FOR ACCESS */
    SELECT AUM_DATE /* (TITLE ' ') */
        ,CLNTCODE /* (TITLE ' ') */
        ,ACCTYPE /* (TITLE ' ') */
        ,CCY /* (TITLE ' ') */
        ,BAL_AMT /* (TITLE ' ') */
        ,MON_BAL_AMT /* (TITLE ' ') */
        ,HK_CLNTCODE /* (TITLE ' ') */
        ,MNT_DATE /* (TITLE ' ') */
    FROM ${STG_DATA}.B971_AUMSUMMARY${TABLE_SUFFIX_INC};
  • MULTISET

    A MULTISET table is an ordinary table, supported by all databases. The migration tool supports both MULTISET and SET tables.

    MULTISET tables can be used together with VOLATILE.

    Input (CREATE MULTISET TABLE):

    CREATE VOLATILE MULTISET TABLE T1 (c1 int ,c2 int);

    Output:

    CREATE LOCAL TEMPORARY TABLE T1
    (
        c1 INTEGER
        ,c2 INTEGER
    )
    ;

    Parent topic: Table migration
  • SET

    SET is a Teradata-specific feature. A SET table does not allow duplicate records; the behavior is reproduced using the MINUS set operator. DSC supports both MULTISET and SET tables. SET tables can be used together with VOLATILE.

    Input (SET TABLE):

    CREATE SET VOLATILE TABLE tab1 … ;
    INSERT INTO tab1
    SELECT expr1, expr2, …
    FROM tab1, …
    WHERE ….;

    Output:

    CREATE LOCAL TEMPORARY TABLE tab1 … ;
    INSERT INTO tab1
    SELECT expr1, expr2, …
    FROM tab1, …
    WHERE ….
    MINUS
    SELECT * FROM tab1 ;

    Parent topic: Table migration
  • VOLATILE

    The input file contains the table-specific keyword VOLATILE, which is not supported by GaussDB(DWS). DSC therefore replaces it with the keyword LOCAL TEMPORARY during migration. Depending on the configuration, a VOLATILE table is marked as a local temporary table or an unlogged table in the migrated output.

    Input (CREATE VOLATILE TABLE):

    CREATE VOLATILE TABLE T1 (c1 int ,c2 int);

    Output:

    CREATE LOCAL TEMPORARY TABLE T1
    (
        c1 INTEGER
        ,c2 INTEGER
    )
    ;

    Input (CREATE VOLATILE TABLE AS WITH DATA, session_mode=Teradata):

    If the source table has a PRIMARY KEY or UNIQUE CONSTRAINT, the table cannot contain duplicate records. In that case, the MINUS operator does not need to be added to remove duplicates.

    CREATE VOLATILE TABLE tabV1
    (
        C1 INTEGER DEFAULT 99
        ,C2 INTEGER
        ,C3 INTEGER
        ,C4 NUMERIC (20,0) DEFAULT NULL (BIGINT)
        ,CONSTRAINT XX1 PRIMARY KEY ( C1, C2 )
    )
    PRIMARY INDEX (C1, C3 );

    CREATE TABLE tabV2 AS tabV1 WITH DATA
    PRIMARY INDEX (C1)
    ON COMMIT PRESERVE ROWS;

    Output:

    CREATE LOCAL TEMPORARY TABLE tabV1
    (
        C1 INTEGER DEFAULT 99
        ,C2 INTEGER
        ,C3 INTEGER
        ,C4 NUMERIC (20,0) DEFAULT CAST( NULL AS BIGINT )
        ,CONSTRAINT XX1 PRIMARY KEY ( C1, C2 )
    )
    DISTRIBUTE BY HASH (C1);

    BEGIN
    CREATE TABLE tabV2 (
        LIKE tabV1 INCLUDING ALL EXCLUDING PARTITION EXCLUDING RELOPTIONS EXCLUDING DISTRIBUTION
    )
    DISTRIBUTE BY HASH (C1);
    INSERT INTO tabV2 SELECT * FROM tabV1;
    END ;
    /

    Parent topic: Table migration
  • Table operators

    A function can be invoked in the FROM clause of a query; the function is contained inside a table operator.

    Input (table operator, using RETURNS):

    SELECT *
    FROM TABLE( sales_retrieve (9005) RETURNS ( store INTEGER, item CLOB, quantity BYTEINT) ) AS ret;

    Output:

    SELECT *
    FROM sales_retrieve(9005) AS ret (store, item, quantity);

    Parent topic: Functions and operators
  • TIME FORMAT

    Input:

    COALESCE(t3.Crt_Tm , CAST('00:00:00' AS TIME FORMAT 'HH:MI:SS'))
    COALESCE(LI07_F3EABCTLP.CTLREGTIM,CAST('${NULL_TIME}' AS TIME FORMAT 'HH:MI:sS'))
    trim(cast(cast(a.Ases_Orig_Tm as time format'hhmiss') as varchar(10)))
  • TIMESTAMP FORMAT

    Input:

    select a.Org_Id as Brn_Org_Id /* */
        ,a.Evt_Id as Vst_Srl_Nbr /* */
        ,a.EAC_Id as EAC_Id /* */
        ,cast(cast(cast(Prt_Tm as timestamp format 'YYYY-MM-DDBHH:MI:SS' ) as varchar(19) ) as timestamp(0)) as Tsk_Start_Tm /* */
    from ${BRTL_VCOR}.BRTL_BC_SLF_TMN_RTL_PRT_JNL as a /* BC_ */
    where a.DW_Dat_Dt = CAST('${v_Trx_Dt}' AS DATE FORMAT 'YYYY-MM-DD') ;
  • TIMESTAMP(n) FORMAT

    Input:

    select cast('${v_Trx_Dt}' as date format 'yyyy-mm-dd') as DW_Snsh_Dt /* */
        ,coalesce(a.CRE_DAT,cast('0001-01-01 00:00:01' as timestamp(6) format 'yyyy-mm-ddbhh:mi:ssds(6)')) as Crt_Tm /* */
        ,cast('${v_Trx_Dt}' as date format 'yyyy-mm-dd') as DW_ETL_Dt /* */
        ,cast(current_date as date format 'yyyy-mm-dd') as DW_Upd_Dt /* */
        ,current_time(0) as DW_Upd_Tm /* */
        ,1 as DW_Job_Seq /* */
    from ${NDS_VIEW}.NLV65_MGM_GLDCUS_INF_NEW as a /* MGM */ ;

    Conversion pairs (input expression, followed by its migrated form):

    -----------
    cast('0001-01-01 00:00:00' as timestamp(6) format 'yyyy-mm-ddbhh:mi:ssds(6)')
    TO_TIMESTAMP('0001-01-01 00:00:00', 'yyyy-mm-dd HH24:MI:SS.US' )
    ----------
    cast('0001-01-01 00:00:00.000000' as timestamp(6))
    cast('0001-01-01 00:00:00.000000' as timestamp(6))
    ----------
    CAST('0001-01-01 00:00:00.000000' AS TIMESTAMP(6) FORMAT 'YYYY-MM-DDBHH:MI:SS.S(6)')
    TO_TIMESTAMP('0001-01-01 00:00:00.000000', 'yyyy-mm-dd HH24:MI:SS.US' )
    ----------
    cast(LA02_USERLOG_M.LOGTIME as TIMESTAMP(6) FORMAT 'YYYY-MM-DD HH:MI:SS.S(0)' )
    TO_TIMESTAMP(LA02_USERLOG_M.LOGTIME, 'YYYY-MM-DD HH24:MI:SS' )
    ----------
    cast('0001-01-01 00:00:00' as timestamp(3) format 'yyyy-mm-ddbhh:mi:ssds(3)')
    TO_TIMESTAMP('0001-01-01 00:00:00', 'yyyy-mm-dd HH24:MI:SS.MS' )
    -----------
    CAST( '0001-01-01 00:00:01.000000' AS TIMESTAMP ( 6 ) format 'yyyy-mm-ddbhh:mi:ssds(6)' )
    TO_TIMESTAMP('0001-01-01 00:00:01.000000', 'yyyy-mm-dd HH24:MI:SS.US' )
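    The conversion pairs above suggest a pattern: the Teradata B separator becomes a space, HH becomes HH24, and the fractional-second element maps to .US for 6 digits, .MS for 3, and nothing for 0. A sketch of that fractional-second mapping, under the assumption that it holds in general (this is an illustration derived from the examples, not DSC's actual conversion code):

    ```python
    # Sketch of the fractional-second mapping implied by the conversion pairs above.
    # Derived from the examples only; not DSC's actual implementation.

    def to_timestamp_pattern(frac_digits):
        base = 'yyyy-mm-dd HH24:MI:SS'
        if frac_digits == 6:
            return base + '.US'   # microseconds
        if frac_digits == 3:
            return base + '.MS'   # milliseconds
        return base               # no fractional part

    print(to_timestamp_pattern(6))  # yyyy-mm-dd HH24:MI:SS.US
    print(to_timestamp_pattern(3))  # yyyy-mm-dd HH24:MI:SS.MS
    print(to_timestamp_pattern(0))  # yyyy-mm-dd HH24:MI:SS
    ```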
  • Declaring a hexadecimal binary literal value

    Input:

    CREATE MULTISET TABLE bvalues (IDVal INTEGER, CodeVal BYTE(2));
    INSERT INTO bvalues VALUES (112193, '7879'XB) ;
    SELECT IDVal, CodeVal FROM bvalues WHERE CodeVal = '7879'XB ;

    Output:
  • Declaring a hexadecimal character literal value

    Input:

    SELECT
        (COALESCE(TRIM(BOTH FROM VTX_D_RPT_0017_WMSE12_01_01.ID),''))
        ||'7E'xc||(COALESCE(TRIM(BOTH FROM VTX_D_RPT_0017_WMSE12_01_01.Code),''))
        ||'7E'xc||(COALESCE(TRIM(BOTH FROM VTX_D_RPT_0017_WMSE12_01_01.Description),''))
        ||'7E'xc||(COALESCE(TRIM(BOTH FROM VTX_D_RPT_0017_WMSE12_01_01.Name),''))
        ||'7E'xc||(COALESCE(TRIM(BOTH FROM VTX_D_RPT_0017_WMSE12_01_01.Host_Product_Id),''))
    FROM DP_VTXEDW.VTX_D_RPT_0017_WMSE12_01_01 VTX_D_RPT_0017_WMSE12_01_01
    WHERE 1=1 ;

    Output:
  • Teradata

    If a case statement containing the FORMAT parameter is not enclosed in parentheses "()", the statement is not processed.

    For example:

    case when column1='0' then column1='value' end (FORMAT 'YYYYMMDD')as alias1

    In this example, `case when column1='0' then column1='value' end` is not enclosed in parentheses, so the statement is not processed.
  • Activity log

    DSC saves all log and error information to the DSC.log file, located in the log folder. DSC.log contains details such as the user who performed the migration, the files migrated, and timestamps. The logging level of the activity log is INFO.

    The structure of DSC.log is as follows:

    2020-01-22 09:35:10,769 INFO CLMigrationUtility:159 DSC is initiated by xxxxx
    2020-01-22 09:35:10,828 INFO CLMigrationUtility:456 Successfully changed permission of files in D:\Migration\Gauss_Tools_18_Migration\code\migration\config
    2020-01-22 09:35:10,832 INFO PropertyLoader:90 Successfully loaded Property file : D:\Migration\Gauss_Tools_18_Migration\code\migration\config\application.properties
    2020-01-22 09:35:10,833 INFO ApplicationPropertyLoader:42 Application properties have been loaded Successfully
    2020-01-22 09:35:10,917 INFO MigrationValidatorService:549 Files in output directory has been overwritten as configured by xxxxx
    2020-01-22 09:35:10,920 INFO PropertyLoader:90 Successfully loaded Property file : D:\Migration\Gauss_Tools_18_Migration\code\migration\config\features-oracle.properties
    2020-01-22 09:35:10,921 INFO FeatureLoader:41 Features have been loaded Successfully
    2020-01-22 09:35:10,926 INFO MigrationService:80 DSC process start time : Wed Jan 22 09:35:10 GMT+05:30 2020
    2020-01-22 09:35:10,933 INFO FileHandler:179 File is not supported. D:\Migration_Output\Source\ARRYTYPE.sql-
    2020-01-22 09:35:10,934 INFO FileHandler:179 File is not supported. D:\Migration_Output\Source\varray.sql-
    2020-01-22 09:35:12,816 INFO PropertyLoader:90 Successfully loaded Property file : D:\Migration\Gauss_Tools_18_Migration\code\migration\config\global-temp-tables.properties
    2020-01-22 09:35:12,830 INFO PropertyLoader:90 Successfully loaded Property file : D:\Migration\Gauss_Tools_18_Migration\code\migration\config\create-types-UDT.properties
    2020-01-22 09:35:12,834 INFO PropertyLoader:90 Successfully loaded Property file : D:\Migration\Gauss_Tools_18_Migration\code\migration\config\package-names-oracle.properties
    2020-01-22 09:35:12,849 INFO DBMigrationService:76 Number of Available Processors: 4
    2020-01-22 09:35:12,850 INFO DBMigrationService:78 Configured simultaneous processes in the Tool : 3
    2020-01-22 09:35:13,032 INFO MigrationProcessor:94 File name: D:\Migration_Output\Source\Input.sql is started
    2020-01-22 09:35:13,270 INFO FileHandler:606 guessencoding command output = Error: Unable to access jarfile D:\Migration\Gauss_Tools_18_Migration\code\migration\RE_migration\target\dsctool.jar , for file= D:\Migration_Output\Source\Input.sql
    2020-01-22 09:35:13,272 INFO FileHandler:625 couldn't get the encoding format, so using the default charset for D:\Migration_Output\Source\Input.sql
    2020-01-22 09:35:13,272 INFO FileHandler:310 File D:\Migration_Output\Source\Input.sql will be read with charset : UTF-8
    2020-01-22 09:35:13,390 INFO FileHandler:668 D:\Migration_Output\target\output\Input.sql - File already exists/Failed to create target file
    2020-01-22 09:35:13,562 INFO FileHandler:606 guessencoding command output = Error: Unable to access jarfile D:\Migration\Gauss_Tools_18_Migration\code\migration\RE_migration\target\dsctool.jar , for file= D:\Migration_Output\Source\Input.sql
    2020-01-22 09:35:13,563 INFO FileHandler:625 couldn't get the encoding format, so using the default charset for D:\Migration_Output\Source\Input.sql
    2020-01-22 09:35:13,563 INFO FileHandler:675 File D:\Migration_Output\Source\Input.sql will be written with charset : UTF-8
    2020-01-22 09:35:13,604 INFO MigrationProcessor:139 File name: D:\Migration_Output\Source\Input.sql is processed successfully
    2020-01-22 09:35:13,605 INFO MigrationService:147 Total number of files in Input folder : 3
    2020-01-22 09:35:13,605 INFO MigrationService:148 Total number of queries : 1
    2020-01-22 09:35:13,607 INFO PropertyLoader:164 Successfully updated Property file : D:\Migration\Gauss_Tools_18_Migration\code\migration\config\global-temp-tables.properties
    2020-01-22 09:35:13,630 INFO PropertyLoader:164 Successfully updated Property file : D:\Migration\Gauss_Tools_18_Migration\code\migration\config\create-types-UDT.properties
    2020-01-22 09:35:13,631 INFO PropertyLoader:164 Successfully updated Property file : D:\Migration\Gauss_Tools_18_Migration\code\migration\config\package-names-oracle.properties
    2020-01-22 09:35:13,632 INFO CLMigrationUtility:305 Log file : dsc.log and the file is present in the path : D:\Migration_Output\log
    2020-01-22 09:35:13,632 INFO CLMigrationUtility:312 DSC process end time : Wed Jan 22 09:35:13 GMT+05:30 2020
    2020-01-22 09:35:13,632 INFO CLMigrationUtility:217 Total process time : 2842 seconds
  • Successful reads

    After DSC reads a file, the read is logged for tracking. In some cases, users can use these logs to learn the execution status of a file. The log file is located in the log folder and includes details such as the date, time, and file name. The logging level of this log file is INFO.

    The structure of successRead.log is as follows:

    2017-07-21 14:13:00,461 INFO readlogger:213 /home/testmigration/Documentation/is not in.sql is read successfully.
    2017-07-21 14:13:00,957 INFO readlogger:213 /home/testmigration/Documentation/date quotes.sql is read successfully.
    2017-07-21 14:13:01,509 INFO readlogger:213 /home/testmigration/Documentation/column alias replace.sql is read successfully.
    2017-07-21 14:13:02,034 INFO readlogger:213 /home/testmigration/Documentation/sampleRownum.sql is read successfully.
    2017-07-21 14:13:02,578 INFO readlogger:213 /home/testmigration/Documentation/samp.sql is read successfully.
    2017-07-21 14:13:03,145 INFO readlogger:213 /home/testmigration/Documentation/2.6BuildInputs/testWithNodataSamples.sql is read successfully.
  • Successful writes

    DSC reads and processes files and writes the output to disk. This process is recorded in the successful-write log file. In some cases, users can use this file to learn which files have been processed successfully; in a rerun, they can skip those files and run only the remaining ones. The log file is located in the log folder and includes details such as the date, time, and file name. The logging level of this log file is INFO.

    The structure of successWrite.log is as follows:

    2017-07-21 14:13:00,616 INFO writelogger:595 /home/testmigration/Documentation/is not in.sql has written successfully.
    2017-07-21 14:13:01,055 INFO writelogger:595 /home/testmigration/Documentation/date quotes.sql has written successfully.
    2017-07-21 14:13:01,569 INFO writelogger:595 /home/testmigration/Documentation/column alias replace.sql has written successfully.
    2017-07-21 14:13:02,055 INFO writelogger:595 /home/testmigration/Documentation/sampleRownum.sql has written successfully.
    2017-07-21 14:13:02,597 INFO writelogger:595 /home/testmigration/Documentation/samp.sql has written successfully.
    2017-07-21 14:13:03,178 INFO writelogger:595 /home/testmigration/Documentation/testWithNodataSamples.sql has written successfully.
  • Error log

    DSC records only the errors that occur during migration, in the DSCError.log file, located in the log folder. DSCError.log contains the date and time of these errors, file details (such as the file name), and the query position. The logging level of the error log is ERROR.

    The structure of DSCError.log is as follows:

    2017-06-29 14:07:39,585 ERROR TeradataBulkHandler:172 Error occurred during processing of input in Bulk Migration. PreQueryValidation failed in not proper termination or exclude keyword. /home/testmigration/Documentation/Input/c005.sql for Query in position : 4
    2017-06-29 14:07:39,962 ERROR TeradataBulkHandler:172 Error occurred during processing of input in Bulk Migration. PreQueryValidation failed in not proper termination or exclude keyword. /home/testmigration/Documentation/Input/c013.sql for Query in position : 11
    2017-06-29 14:07:40,136 ERROR QueryConversionUtility:250 Query is not converted as it contains unsupported keyword: join select
    2017-06-29 14:07:40,136 ERROR TeradataBulkHandler:172 Error occurred during processing of input in Bulk Migration. PreQueryValidation failed in not proper termination or exclude keyword. /home/testmigration/Documentation/Input/sample.sql for Query in position : 1
    2017-06-29 14:07:40,136 ERROR TeradataBulkHandler:172 Error occurred during processing of input in Bulk Migration. PreQueryValidation failed in not proper termination or exclude keyword. /home/testmigration/Documentation/Input/sample.sql for Query in position : 3
  • Overview

    This section describes the Teradata Perl file migration process in detail.

    Use the runDSC.sh or runDSC.bat command with --application-lang=perl to migrate Teradata BTEQ or SQL_LANG scripts embedded in Perl files to GaussDB(DWS)-compatible Perl files. After migrating the Perl files, you can verify the result by comparing the input and output files with a comparison tool.

    The Perl file migration process is as follows:

    1. Complete the steps in the prerequisites.
    2. Create an input folder and copy the Perl files to be migrated into it, for example: /migrationfiles/perlfiles
    3. Run DSC to migrate the Perl scripts, with db-bteq-tag-name set to BTEQ or db-tdsql-tag-name set to SQL_LANG.
    4. DSC extracts the BTEQ or SQL_LANG scripts from the Perl files.
       BTEQ is a tag name containing a set of BTEQ scripts; it can be configured with the db-bteq-tag-name parameter in the perl-migration.properties file.
       SQL_LANG is also a tag name, containing Teradata SQL statements; it can be configured with the db-tdsql-tag-name parameter.
    5. DSC migrates the extracted SQL scripts by invoking the Teradata SQL migration. For details about Teradata SQL migration, see Teradata SQL migration.
    6. The migrated scripts are embedded back into the Perl files.
    7. The migrated Perl files are created in the specified output folder. If no output folder is specified, the tool creates an output folder named converted inside the input folder, for example: /migrationfiles/perlfiles/converted

    Perl variables containing SQL commands can also be migrated as SQL using the migrate-variables parameter.

    Compatibility is provided for Perl v5.10.0 and later.
  • Task example

    Example: migrate SQL files from a Teradata database to GaussDB(DWS) SQL scripts on a Linux system.

    ./runDSC.sh --source-db Teradata --input-folder D:\test\conversion\input --output-folder D:\test\conversion\output --log-folder D:\test\conversion\log --conversion-type ddl --targetdb gaussdb

    Example: run the following command to migrate SQL files from a Teradata database to GaussDB(DWS) SQL scripts on a Windows operating system.

    runDSC.bat --source-db Teradata --input-folder D:\test\conversion\input --output-folder D:\test\conversion\output --log-folder D:\test\conversion\log --conversion-type ddl --targetdb gaussdb

    The console displays the migration details (including progress and completion status):

    ********************** Schema Conversion Started *************************
    DSC process start time : Mon Jan 20 17:24:49 IST 2020
    Statement count progress 100% completed [FILE(1/1)]
    Schema Conversion Progress 100% completed
    **************************************************************************
    Total number of files in input folder : 1
    Total number of valid files in input folder : 1
    **************************************************************************
    Log file path :....../DSC/DSC/log/dsc.log
    Error Log file :
    DSC process end time : Mon Jan 20 17:24:49 IST 2020
    DSC total process time : 0 seconds
    ********************* Schema Conversion Completed ************************