Package google.cloud.bigquery.migration.v2alpha

MigrationService

Service to handle EDW migrations.

CreateMigrationWorkflow

rpc CreateMigrationWorkflow(CreateMigrationWorkflowRequest) returns (MigrationWorkflow)

Creates a migration workflow.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • bigquerymigration.workflows.create

For more information, see the IAM documentation.
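
A minimal sketch of calling this RPC from Python, assuming the google-cloud-bigquery-migration client library exposes this v2alpha surface as bigquery_migration_v2alpha; the project ID, bucket paths, and the "translation-task" key are placeholders:

from google.cloud import bigquery_migration_v2alpha as bq_migration

def create_translation_workflow(project_id: str, gcs_input: str, gcs_output: str):
    """Creates a DRAFT workflow containing one Teradata-to-BigQuery translation task."""
    client = bq_migration.MigrationServiceClient()

    # The source and target dialects are members of the Dialect union (see Dialect below).
    source_dialect = bq_migration.Dialect(
        teradata_dialect=bq_migration.TeradataDialect(
            mode=bq_migration.TeradataDialect.Mode.SQL
        )
    )
    target_dialect = bq_migration.Dialect(
        bigquery_dialect=bq_migration.BigQueryDialect()
    )

    translation_config = bq_migration.TranslationConfigDetails(
        gcs_source_path=gcs_input,    # e.g. "gs://my-bucket/input"
        gcs_target_path=gcs_output,   # e.g. "gs://my-bucket/output"
        source_dialect=source_dialect,
        target_dialect=target_dialect,
    )

    # The "type" field is exposed as type_ by the generated Python client.
    task = bq_migration.MigrationTask(
        type_="Translation_Teradata2BQ",
        translation_config_details=translation_config,
    )

    workflow = bq_migration.MigrationWorkflow(display_name="teradata2bq-demo")
    workflow.tasks["translation-task"] = task  # the map key is an arbitrary label

    request = bq_migration.CreateMigrationWorkflowRequest(
        parent=f"projects/{project_id}/locations/us",
        migration_workflow=workflow,
    )
    return client.create_migration_workflow(request=request)

The returned MigrationWorkflow carries the server-generated name, which can later be passed to StartMigrationWorkflow, GetMigrationWorkflow, and DeleteMigrationWorkflow.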

DeleteMigrationWorkflow

rpc DeleteMigrationWorkflow(DeleteMigrationWorkflowRequest) returns (Empty)

Deletes a migration workflow by name.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • bigquerymigration.workflows.delete

For more information, see the IAM documentation.

GetMigrationSubtask

rpc GetMigrationSubtask(GetMigrationSubtaskRequest) returns (MigrationSubtask)

Gets a previously created migration subtask.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • bigquerymigration.subtasks.get

For more information, see the IAM documentation.

GetMigrationWorkflow

rpc GetMigrationWorkflow(GetMigrationWorkflowRequest) returns (MigrationWorkflow)

Gets a previously created migration workflow.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • bigquerymigration.workflows.get

For more information, see the IAM documentation.

ListMigrationSubtasks

rpc ListMigrationSubtasks(ListMigrationSubtasksRequest) returns (ListMigrationSubtasksResponse)

Lists previously created migration subtasks.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • bigquerymigration.subtasks.list

For more information, see the IAM documentation.

ListMigrationWorkflows

rpc ListMigrationWorkflows(ListMigrationWorkflowsRequest) returns (ListMigrationWorkflowsResponse)

Lists previously created migration workflows.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • bigquerymigration.workflows.list

For more information, see the IAM documentation.

StartMigrationWorkflow

rpc StartMigrationWorkflow(StartMigrationWorkflowRequest) returns (Empty)

Starts a previously created migration workflow, i.e. the state transitions from DRAFT to RUNNING. This is a no-op if the state is already RUNNING. An error is signaled if the state is anything other than DRAFT or RUNNING.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • bigquerymigration.workflows.update

For more information, see the IAM documentation.
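
A short sketch of starting a DRAFT workflow through the Python client (the workflow name is a placeholder; the client module name is an assumption):

from google.cloud import bigquery_migration_v2alpha as bq_migration

client = bq_migration.MigrationServiceClient()
name = "projects/123/locations/us/workflows/1234"  # placeholder workflow name

# Transitions the workflow from DRAFT to RUNNING; a no-op if it is already RUNNING.
client.start_migration_workflow(
    request=bq_migration.StartMigrationWorkflowRequest(name=name)
)

workflow = client.get_migration_workflow(
    request=bq_migration.GetMigrationWorkflowRequest(name=name)
)
print(workflow.state)  # expected: MigrationWorkflow.State.RUNNING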

AssessmentOrchestrationResultDetails

Details for an assessment task orchestration result.

Fields
output_tables_schema_version

string

Optional. The version used for the output table schemas.

report_uri

string

Optional. The URI of the Data Studio report.

AssessmentTaskDetails

Assessment task config.

Fields
input_path

string

Required. The Cloud Storage path for assessment input files.

output_dataset

string

Required. The BigQuery dataset for output.

querylogs_path

string

Optional. A Cloud Storage path to write the query logs to (which is then used as an input path on the translation task).

data_source

string

Required. The data source or data warehouse type (e.g. TERADATA or REDSHIFT) from which the input data is extracted.

AzureSynapseDialect

This type has no fields.

The dialect definition for Azure Synapse.

BigQueryDialect

This type has no fields.

The dialect definition for BigQuery.

BteqOptions

Settings related to BTEQ translation tasks.

Fields
project_dataset

DatasetReference

Specifies the project and dataset in BigQuery that will be used for external table creation during the translation.

default_path_uri

string

The Cloud Storage location to be used as the default path for files that are not otherwise specified in the file replacement map.

file_replacement_map

map<string, string>

Maps the local paths used in BTEQ scripts (the keys) to the paths in Cloud Storage that should be used in their place in the translation (the values).

CreateMigrationWorkflowRequest

Request to create a migration workflow resource.

Fields
parent

string

Required. The name of the project to which this migration workflow belongs. Example: projects/foo/locations/bar

migration_workflow

MigrationWorkflow

Required. The migration workflow to create.

DB2Dialect

This type has no fields.

The dialect definition for DB2.

DatasetReference

Reference to a BigQuery dataset.

Fields
dataset_id

string

A unique ID for this dataset, without the project name. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.

project_id

string

The ID of the project containing this dataset.

DeleteMigrationWorkflowRequest

A request to delete a previously created migration workflow.

Fields
name

string

Required. The unique identifier for the migration workflow. Example: projects/123/locations/us/workflows/1234

Dialect

The possible dialect options for translation.

Fields
Union field dialect_value. The possible dialect options that this message represents. dialect_value can be only one of the following:
bigquery_dialect

BigQueryDialect

The BigQuery dialect

hiveql_dialect

HiveQLDialect

The HiveQL dialect

redshift_dialect

RedshiftDialect

The Redshift dialect

teradata_dialect

TeradataDialect

The Teradata dialect

oracle_dialect

OracleDialect

The Oracle dialect

sparksql_dialect

SparkSQLDialect

The SparkSQL dialect

snowflake_dialect

SnowflakeDialect

The Snowflake dialect

netezza_dialect

NetezzaDialect

The Netezza dialect

azure_synapse_dialect

AzureSynapseDialect

The Azure Synapse dialect

vertica_dialect

VerticaDialect

The Vertica dialect

sql_server_dialect

SQLServerDialect

The SQL Server dialect

postgresql_dialect

PostgresqlDialect

The Postgresql dialect

presto_dialect

PrestoDialect

The Presto dialect

mysql_dialect

MySQLDialect

The MySQL dialect
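
Because dialect_value is a union (oneof) field, exactly one member can be set at a time; assigning a different member replaces the previous one. A small illustration using the Python client's generated types (module name assumed):

from google.cloud import bigquery_migration_v2alpha as bq_migration

dialect = bq_migration.Dialect()
dialect.teradata_dialect = bq_migration.TeradataDialect(
    mode=bq_migration.TeradataDialect.Mode.SQL
)

# Setting another member of the dialect_value union clears teradata_dialect.
dialect.bigquery_dialect = bq_migration.BigQueryDialect()
print(dialect)  # only bigquery_dialect is populated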

ErrorDetail

Provides details for errors, e.g. issues that were encountered when processing a subtask.

Fields
location

ErrorLocation

Optional. The exact location within the resource (if applicable).

error_info

ErrorInfo

Required. Describes the cause of the error with structured detail.

ErrorLocation

Holds information about where the error is located.

Fields
line

int32

Optional. If applicable, denotes the line where the error occurred. A zero value means that there is no line information.

column

int32

Optional. If applicable, denotes the column where the error occurred. A zero value means that there is no column information.

Filter

The filter applied to fields of translation details.

Fields
input_file_exclusion_prefixes[]

string

The list of prefixes used to exclude processing for input files.

GcsReportLogMessage

A record in the aggregate CSV report for a migration workflow.

Fields
severity

string

Severity of the translation record.

category

string

Category of the error/warning. Example: SyntaxError

file_path

string

The file path in which the error occurred.

filename

string

The file name in which the error occurred.

source_script_line

int32

Specifies the row from the source text where the error occurred (0-based; -1 for messages without line location). Example: 2

source_script_column

int32

Specifies the column from the source text where the error occurred (0-based; -1 for messages without column location). Example: 6

message

string

Detailed message of the record.

script_context

string

The script context (obfuscated) in which the error occurred.

action

string

Category of the error/warning. Example: SyntaxError

effect

string

Category of the error/warning. Example: SyntaxError

object_name

string

Name of the affected object in the log message.

GetMigrationSubtaskRequest

A request to get a previously created migration subtask.

Fields
name

string

Required. The unique identifier for the migration subtask. Example: projects/123/locations/us/workflows/1234/subtasks/543

read_mask

FieldMask

Optional. The list of fields to be retrieved.
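
A sketch of restricting the returned fields with read_mask, using the standard protobuf FieldMask type (the subtask name and field paths are illustrative; the Python module name is an assumption):

from google.cloud import bigquery_migration_v2alpha as bq_migration
from google.protobuf import field_mask_pb2

client = bq_migration.MigrationServiceClient()

request = bq_migration.GetMigrationSubtaskRequest(
    name="projects/123/locations/us/workflows/1234/subtasks/543",
    # Only the listed fields are populated in the response.
    read_mask=field_mask_pb2.FieldMask(paths=["name", "state", "processing_error"]),
)
subtask = client.get_migration_subtask(request=request)
print(subtask.state)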

GetMigrationWorkflowRequest

A request to get a previously created migration workflow.

Fields
name

string

Required. The unique identifier for the migration workflow. Example: projects/123/locations/us/workflows/1234

read_mask

FieldMask

The list of fields to be retrieved.

HiveQLDialect

This type has no fields.

The dialect definition for HiveQL.

IdentifierSettings

Settings related to SQL identifiers.

Fields
output_identifier_case

IdentifierCase

The setting to control output queries' identifier case.

identifier_rewrite_mode

IdentifierRewriteMode

Specifies the rewrite mode for SQL identifiers.

IdentifierCase

The identifier case type.

Enums
IDENTIFIER_CASE_UNSPECIFIED The identifier case is not specified.
ORIGINAL Identifiers keep their original case.
UPPER Identifiers will be in upper case.
LOWER Identifiers will be in lower case.

IdentifierRewriteMode

The SQL identifier rewrite mode.

Enums
IDENTIFIER_REWRITE_MODE_UNSPECIFIED SQL Identifier rewrite mode is unspecified.
NONE SQL identifiers won't be rewritten.
REWRITE_ALL All SQL identifiers will be rewritten.

ListMigrationSubtasksRequest

A request to list previously created migration subtasks.

Fields
parent

string

Required. The migration workflow whose subtasks are to be listed. Example: projects/123/locations/us/workflows/1234

read_mask

FieldMask

Optional. The list of fields to be retrieved.

page_size

int32

Optional. The maximum number of migration subtasks to return. The service may return fewer than this number.

page_token

string

Optional. A page token, received from a previous ListMigrationSubtasks call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to ListMigrationSubtasks must match the call that provided the page token.

filter

string

Optional. The filter to apply. This can be used to get the subtasks of a specific task in a workflow, e.g. migration_task = "ab012" where "ab012" is the task ID (not the name in the named map).
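
A sketch of listing only the subtasks of one task by using the filter field (the parent and task ID are placeholders; the pager returned by the Python client handles page_token automatically):

from google.cloud import bigquery_migration_v2alpha as bq_migration

client = bq_migration.MigrationServiceClient()

request = bq_migration.ListMigrationSubtasksRequest(
    parent="projects/123/locations/us/workflows/1234",
    filter='migration_task = "ab012"',  # restrict to subtasks of one task ID
)
for subtask in client.list_migration_subtasks(request=request):
    print(subtask.name, subtask.state)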

ListMigrationSubtasksResponse

Response object for a ListMigrationSubtasks call.

Fields
migration_subtasks[]

MigrationSubtask

The migration subtasks for the specified task.

next_page_token

string

A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

ListMigrationWorkflowsRequest

A request to list previously created migration workflows.

Fields
parent

string

Required. The project and location of the migration workflows to list. Example: projects/123/locations/us

read_mask

FieldMask

The list of fields to be retrieved.

page_size

int32

The maximum number of migration workflows to return. The service may return fewer than this number.

page_token

string

A page token, received from a previous ListMigrationWorkflows call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to ListMigrationWorkflows must match the call that provided the page token.
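
A sketch of paging through workflows with the Python client; the generated pager re-issues the request with next_page_token as page_token while keeping the other parameters unchanged (the parent is a placeholder; the module name is an assumption):

from google.cloud import bigquery_migration_v2alpha as bq_migration

client = bq_migration.MigrationServiceClient()

request = bq_migration.ListMigrationWorkflowsRequest(
    parent="projects/123/locations/us",
    page_size=50,
)
# Iterating the pager fetches subsequent pages transparently.
for workflow in client.list_migration_workflows(request=request):
    print(workflow.name, workflow.state)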

ListMigrationWorkflowsResponse

Response object for a ListMigrationWorkflows call.

Fields
migration_workflows[]

MigrationWorkflow

The migration workflows for the specified project / location.

next_page_token

string

A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

Literal

Literal data.

Fields
relative_path

string

Required. The identifier of the literal entry.

Union field literal_data. The literal SQL contents. literal_data can be only one of the following:
literal_string

string

Literal string data.

literal_bytes

bytes

Literal byte data.

MigrationSubtask

A subtask for a migration which carries details about the configuration of the subtask. The content of the details should not matter to the end user, but is a contract between the subtask creator and subtask worker.

Fields
name

string

Output only. Immutable. The resource name for the migration subtask. The ID is server-generated.

Example: projects/123/locations/us/workflows/345/subtasks/678

task_id

string

The unique ID of the task to which this subtask belongs.

type

string

The type of the Subtask. The migration service does not check whether this is a known type. It is up to the task creator (i.e. orchestrator or worker) to ensure it only creates subtasks for which there are compatible workers polling for Subtasks.

state

State

Output only. The current state of the subtask.

processing_error

ErrorInfo

Output only. An explanation that may be populated when the task is in FAILED state.

resource_error_details[]

ResourceErrorDetail

Output only. Provides details of errors and issues encountered while processing the subtask. Presence of error details does not mean that the subtask failed.

resource_error_count

int32

The number of resources with errors. Note: This is not the total number of errors, as each resource can have more than one error. This is used to indicate truncation by having a resource_error_count that is higher than the size of resource_error_details.

create_time

Timestamp

Time when the subtask was created.

last_update_time

Timestamp

Time when the subtask was last updated.

metrics[]

TimeSeries

The metrics for the subtask.

State

Possible states of a migration subtask.

Enums
STATE_UNSPECIFIED The state is unspecified.
ACTIVE The subtask is ready, i.e. it is ready for execution.
RUNNING The subtask is running, i.e. it is assigned to a worker for execution.
SUCCEEDED The subtask finished successfully.
FAILED The subtask finished unsuccessfully.
PAUSED The subtask is paused, i.e., it will not be scheduled. If it was already assigned, it might still finish, but no new lease renewals will be granted.
PENDING_DEPENDENCY The subtask is pending a dependency. It will be scheduled once its dependencies are done.

MigrationTask

A single task for a migration which has details about the configuration of the task.

Fields
id

string

Output only. Immutable. The unique identifier for the migration task. The ID is server-generated.

type

string

The type of the task. This must be one of the supported task types: Translation_Teradata2BQ, Translation_Redshift2BQ, Translation_Bteq2BQ, Translation_Oracle2BQ, Translation_HiveQL2BQ, Translation_SparkSQL2BQ, Translation_Snowflake2BQ, Translation_Netezza2BQ, Translation_AzureSynapse2BQ, Translation_Vertica2BQ, Translation_SQLServer2BQ, Translation_Presto2BQ, Translation_MySQL2BQ, Translation_Postgresql2BQ.

details

Any

DEPRECATED! Use one of the task_details below. The details of the task. The type URL must be one of the supported task details messages and correspond to the Task's type.

state

State

Output only. The current state of the task.

processing_error

ErrorInfo

Output only. An explanation that may be populated when the task is in FAILED state.

create_time

Timestamp

Time when the task was created.

last_update_time

Timestamp

Time when the task was last updated.

orchestration_result

MigrationTaskOrchestrationResult

Output only. Additional information about the orchestration.

resource_error_details[]

ResourceErrorDetail

Output only. Provides details of errors and issues encountered while processing the task. Presence of error details does not mean that the task failed.

resource_error_count

int32

The number of resources with errors. Note: This is not the total number of errors, as each resource can have more than one error. This is used to indicate truncation by having a resource_error_count that is higher than the size of resource_error_details.

metrics[]

TimeSeries

The metrics for the task.

Union field task_details. The details of the task. task_details can be only one of the following:
assessment_task_details

AssessmentTaskDetails

Task configuration for Assessment.

translation_task_details

TranslationTaskDetails

Task configuration for Batch SQL Translation.

translation_config_details

TranslationConfigDetails

Task configuration for CW Batch/Offline SQL Translation.

translation_details

TranslationDetails

Task details for unified SQL Translation.

State

Possible states of a migration task.

Enums
STATE_UNSPECIFIED The state is unspecified.
PENDING The task is waiting for orchestration.
ORCHESTRATING The task is assigned to an orchestrator.
RUNNING The task is running, i.e. its subtasks are ready for execution.
PAUSED The task is paused. Assigned subtasks can continue, but no new subtasks will be scheduled.
SUCCEEDED The task finished successfully.
FAILED The task finished unsuccessfully.

MigrationTaskOrchestrationResult

Additional information from the orchestrator when it is done with the task orchestration.

Fields
Union field details. Details specific to the task type. details can be only one of the following:
assessment_details

AssessmentOrchestrationResultDetails

Details specific to assessment task types.

translation_task_result

TranslationTaskResult

Details specific to translation task types.

MigrationWorkflow

A migration workflow which specifies what needs to be done for an EDW migration.

Fields
name

string

Output only. Immutable. Identifier. The unique identifier for the migration workflow. The ID is server-generated.

Example: projects/123/locations/us/workflows/345

display_name

string

The display name of the workflow. This can be set to give a workflow a descriptive name. There is no guarantee or enforcement of uniqueness.

tasks

map<string, MigrationTask>

The tasks in a workflow in a named map. The name (i.e. key) has no meaning and is merely a convenient way to address a specific task in a workflow.

state

State

Output only. The state of the workflow.

create_time

Timestamp

Time when the workflow was created.

last_update_time

Timestamp

Time when the workflow was last updated.

State

Possible migration workflow states.

Enums
STATE_UNSPECIFIED Workflow state is unspecified.
DRAFT Workflow is in draft status, i.e. tasks are not yet eligible for execution.
RUNNING Workflow is running (i.e. tasks are eligible for execution).
PAUSED Workflow is paused. Tasks currently in progress may continue, but no further tasks will be scheduled.
COMPLETED Workflow is complete. No task should be in a non-terminal state, but any that are (e.g., after forced termination) will not be scheduled.

MySQLDialect

This type has no fields.

The dialect definition for MySQL.

NetezzaDialect

This type has no fields.

The dialect definition for Netezza.

OracleDialect

This type has no fields.

The dialect definition for Oracle.

Point

A single data point in a time series.

Fields
interval

TimeInterval

The time interval to which the data point applies. For GAUGE metrics, the start time does not need to be supplied, but if it is supplied, it must equal the end time. For DELTA metrics, the start and end time should specify a non-zero interval, with subsequent points specifying contiguous and non-overlapping intervals. For CUMULATIVE metrics, the start and end time should specify a non-zero interval, with subsequent points specifying the same start time and increasing end times, until an event resets the cumulative value to zero and sets a new start time for the following points.

value

TypedValue

The value of the data point.

PostgresqlDialect

This type has no fields.

The dialect definition for Postgresql.

PrestoDialect

This type has no fields.

The dialect definition for Presto.

RedshiftDialect

This type has no fields.

The dialect definition for Redshift.

ResourceErrorDetail

Provides details for errors and the corresponding resources.

Fields
resource_info

ResourceInfo

Required. Information about the resource where the error is located.

error_details[]

ErrorDetail

Required. The error details for the resource.

error_count

int32

Required. How many errors there are in total for the resource. Truncation can be indicated by having an error_count that is higher than the size of error_details.
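
Since error_count can exceed the number of entries in error_details, a consumer can detect truncation by comparing the two. A sketch, where detail is assumed to be a ResourceErrorDetail taken from a MigrationSubtask:

def summarize_resource_errors(detail) -> str:
    """Formats one ResourceErrorDetail, noting when the error list was truncated."""
    lines = [f"resource: {detail.resource_info.resource_name}"]
    for err in detail.error_details:
        loc = err.location  # line/column are 0 when no location information exists
        lines.append(f"  line {loc.line}, col {loc.column}: {err.error_info.reason}")
    hidden = detail.error_count - len(detail.error_details)
    if hidden > 0:
        lines.append(f"  ... plus {hidden} more error(s) not returned")
    return "\n".join(lines)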

SQLServerDialect

This type has no fields.

The dialect definition for SQL Server.

SnowflakeDialect

This type has no fields.

The dialect definition for Snowflake.

SourceLocation

Represents one path to the location that holds source data.

Fields
Union field location. The location of the source data. location can be only one of the following:
gcs_path

string

The Cloud Storage path for a directory of files.

SourceSpec

Represents one path to the location that holds source data.

Fields
encoding

string

Optional. Specifies the encoding of the SQL bytes.

Union field source. The specific source SQL. source can be only one of the following:
base_uri

string

The base URI for all files to be read in as sources for translation.

literal

Literal

Source literal.

SourceTargetLocationMapping

Represents one mapping from a source location path to an optional target location path.

Fields
source_location

SourceLocation

The path to the location of the source data.

target_location

TargetLocation

The path to the location of the target data.

SourceTargetMapping

Represents one mapping from a source SQL to a target SQL.

Fields
source_spec

SourceSpec

The source SQL or the path to it.

target_spec

TargetSpec

The target SQL or the path for it.

SparkSQLDialect

This type has no fields.

The dialect definition for SparkSQL.

StartMigrationWorkflowRequest

A request to start a previously created migration workflow.

Fields
name

string

Required. The unique identifier for the migration workflow. Example: projects/123/locations/us/workflows/1234

TargetLocation

Represents one path to the location that holds target data.

Fields
Union field location. The location of the target data. location can be only one of the following:
gcs_path

string

The Cloud Storage path for a directory of files.

TargetSpec

Represents one path to the location that holds target data.

Fields
relative_path

string

The relative path for the target data. Given source file base_uri/input/sql, the output would be target_base_uri/sql/relative_path/input.sql.

TeradataDialect

The dialect definition for Teradata.

Fields
mode

Mode

Which Teradata sub-dialect mode the user specifies.

Mode

The sub-dialect options for Teradata.

Enums
MODE_UNSPECIFIED Unspecified mode.
SQL Teradata SQL mode.
BTEQ BTEQ mode (which includes SQL).

TeradataOptions

This type has no fields.

Settings related to Teradata SQL translation tasks.

TimeInterval

A time interval extending just after a start time through an end time. If the start time is the same as the end time, then the interval represents a single point in time.

Fields
start_time

Timestamp

Optional. The beginning of the time interval. The default value for the start time is the end time. The start time must not be later than the end time.

end_time

Timestamp

Required. The end of the time interval.

TimeSeries

The metrics object for a SubTask.

Fields
metric

string

Required. The name of the metric.

If the metric is not known by the service yet, it will be auto-created.

value_type

ValueType

Required. The value type of the time series.

metric_kind

MetricKind

Optional. The metric kind of the time series.

If present, it must be the same as the metric kind of the associated metric. If the associated metric's descriptor must be auto-created, then this field specifies the metric kind of the new descriptor and must be either GAUGE (the default) or CUMULATIVE.

points[]

Point

Required. The data points of this time series. When listing time series, points are returned in reverse time order.

When creating a time series, this field must contain exactly one point and the point's type must be the same as the value type of the associated metric. If the associated metric's descriptor must be auto-created, then the value type of the descriptor is determined by the point's type, which must be BOOL, INT64, DOUBLE, or DISTRIBUTION.

TranslationConfigDetails

The translation config to capture necessary settings for a translation task and subtask.

Fields
source_dialect

Dialect

The dialect of the input files.

target_dialect

Dialect

The target dialect for the engine to translate the input to.

source_env

SourceEnv

The default source environment values for the translation.

source_target_location_mapping[]

SourceTargetLocationMapping

The mapping from source location paths to target location paths.

request_source

string

Indicates the initiator of the translation request.

Union field source_location. The chosen path where the source for input files will be found. source_location can be only one of the following:
gcs_source_path

string

The Cloud Storage path for a directory of files to translate in a task.

Union field target_location. The chosen path where the destination for output files will be found. target_location can be only one of the following:
gcs_target_path

string

The Cloud Storage path to write back the corresponding input files to.

Union field output_name_mapping. The mapping of full SQL object names from their current state to the desired output. output_name_mapping can be only one of the following:
name_mapping_list

ObjectNameMappingList

The mapping of objects to their desired output names in list form.
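
A sketch of building a TranslationConfigDetails that maps individual source locations to target locations instead of using the single gcs_source_path/gcs_target_path pair (the bucket paths and dialects are placeholders; the Python module name is an assumption):

from google.cloud import bigquery_migration_v2alpha as bq_migration

config = bq_migration.TranslationConfigDetails(
    source_dialect=bq_migration.Dialect(
        teradata_dialect=bq_migration.TeradataDialect(
            mode=bq_migration.TeradataDialect.Mode.SQL
        )
    ),
    target_dialect=bq_migration.Dialect(bigquery_dialect=bq_migration.BigQueryDialect()),
    source_target_location_mapping=[
        bq_migration.SourceTargetLocationMapping(
            source_location=bq_migration.SourceLocation(gcs_path="gs://my-bucket/ddl"),
            target_location=bq_migration.TargetLocation(gcs_path="gs://my-bucket/out/ddl"),
        ),
        bq_migration.SourceTargetLocationMapping(
            source_location=bq_migration.SourceLocation(gcs_path="gs://my-bucket/queries"),
            target_location=bq_migration.TargetLocation(gcs_path="gs://my-bucket/out/queries"),
        ),
    ],
)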

TranslationDetails

The translation details to capture the necessary settings for a translation job.

Fields
source_target_mapping[]

SourceTargetMapping

The mapping from source to target SQL.

target_base_uri

string

The base URI for all writes to persistent storage.

source_environment

SourceEnvironment

The default source environment values for the translation.

target_return_literals[]

string

The list of literal targets that will be directly returned to the response. Each entry consists of the constructed path, EXCLUDING the base path. Not providing a target_base_uri will prevent writing to persistent storage.

target_types[]

string

The types of output to generate, e.g. sql, sqlx, lineage, analysis, etc. If not specified, a default set of targets will be generated. Some additional target types may be slower to generate. See the documentation for the set of available target types.

TranslationFileMapping

Mapping between an input and output file to be translated in a subtask.

Fields
input_path

string

The Cloud Storage path for a file to be translated in a subtask.

output_path

string

The Cloud Storage path to write back the corresponding input file to.

TranslationTaskDetails

The translation task config to capture necessary settings for a translation task and subtask.

Fields
input_path

string

The Cloud Storage path for translation input files.

output_path

string

The Cloud Storage path for translation output files.

file_paths[]

TranslationFileMapping

Cloud Storage files to be processed for translation.

schema_path

string

The Cloud Storage path to DDL files as table schema to assist semantic translation.

file_encoding

FileEncoding

The file encoding type.

identifier_settings

IdentifierSettings

The settings for SQL identifiers.

special_token_map

map<string, TokenType>

The map capturing special tokens to be replaced during translation. The key is the special token string; the value is the token's data type. This is used to translate SQL query templates that contain special tokens as placeholders, which would otherwise make the query invalid to parse. The map annotates those special tokens with types so that the parser understands how to parse them into the proper structure with type information.

filter

Filter

The filter applied to translation details.

translation_exception_table

string

Specifies the exact name of the BigQuery table ("dataset.table") to be used for surfacing raw translation errors. If the table does not exist, it will be created. If it already exists and the schema is the same, it will be reused. If the table exists with a different schema, an error will be thrown.

Union field language_options. The language specific settings for the translation task. language_options can be only one of the following:
teradata_options

TeradataOptions

The Teradata SQL specific settings for the translation task.

bteq_options

BteqOptions

The BTEQ specific settings for the translation task.

FileEncoding

The file encoding types.

Enums
FILE_ENCODING_UNSPECIFIED File encoding setting is not specified.
UTF_8 File encoding is UTF_8.
ISO_8859_1 File encoding is ISO_8859_1.
US_ASCII File encoding is US_ASCII.
UTF_16 File encoding is UTF_16.
UTF_16LE File encoding is UTF_16LE.
UTF_16BE File encoding is UTF_16BE.

TokenType

The special token data type.

Enums
TOKEN_TYPE_UNSPECIFIED Token type is not specified.
STRING Token type as string.
INT64 Token type as integer.
NUMERIC Token type as numeric.
BOOL Token type as boolean.
FLOAT64 Token type as float.
DATE Token type as date.
TIMESTAMP Token type as timestamp.

TranslationTaskResult

Translation specific result details from the migration task.

Fields
translated_literals[]

Literal

The list of the translated literals.

report_log_messages[]

GcsReportLogMessage

The records from the aggregate CSV report for a migration workflow.

TypedValue

A single strongly-typed value.

Fields
Union field value. The typed value field. value can be only one of the following:
bool_value

bool

A Boolean value: true or false.

int64_value

int64

A 64-bit integer. Its range is approximately +/-9.2x10^18.

double_value

double

A 64-bit double-precision floating-point number. Its magnitude is approximately +/-10^(+/-300) and it has 16 significant digits of precision.

string_value

string

A variable-length string value.

distribution_value

Distribution

A distribution value.

VerticaDialect

This type has no fields.

The dialect definition for Vertica.