Google Cloud Native is in preview. Google Cloud Classic is fully supported.
Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi
google-native.aiplatform/v1.getModelDeploymentMonitoringJob
Gets a ModelDeploymentMonitoringJob.
Using getModelDeploymentMonitoringJob
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getModelDeploymentMonitoringJob(args: GetModelDeploymentMonitoringJobArgs, opts?: InvokeOptions): Promise<GetModelDeploymentMonitoringJobResult>
function getModelDeploymentMonitoringJobOutput(args: GetModelDeploymentMonitoringJobOutputArgs, opts?: InvokeOptions): Output<GetModelDeploymentMonitoringJobResult>

def get_model_deployment_monitoring_job(location: Optional[str] = None,
                                        model_deployment_monitoring_job_id: Optional[str] = None,
                                        project: Optional[str] = None,
                                        opts: Optional[InvokeOptions] = None) -> GetModelDeploymentMonitoringJobResult
def get_model_deployment_monitoring_job_output(location: Optional[pulumi.Input[str]] = None,
                                        model_deployment_monitoring_job_id: Optional[pulumi.Input[str]] = None,
                                        project: Optional[pulumi.Input[str]] = None,
                                        opts: Optional[InvokeOptions] = None) -> Output[GetModelDeploymentMonitoringJobResult]

func LookupModelDeploymentMonitoringJob(ctx *Context, args *LookupModelDeploymentMonitoringJobArgs, opts ...InvokeOption) (*LookupModelDeploymentMonitoringJobResult, error)
func LookupModelDeploymentMonitoringJobOutput(ctx *Context, args *LookupModelDeploymentMonitoringJobOutputArgs, opts ...InvokeOption) LookupModelDeploymentMonitoringJobResultOutput

> Note: This function is named LookupModelDeploymentMonitoringJob in the Go SDK.

public static class GetModelDeploymentMonitoringJob
{
    public static Task<GetModelDeploymentMonitoringJobResult> InvokeAsync(GetModelDeploymentMonitoringJobArgs args, InvokeOptions? opts = null)
    public static Output<GetModelDeploymentMonitoringJobResult> Invoke(GetModelDeploymentMonitoringJobInvokeArgs args, InvokeOptions? opts = null)
}

public static CompletableFuture<GetModelDeploymentMonitoringJobResult> getModelDeploymentMonitoringJob(GetModelDeploymentMonitoringJobArgs args, InvokeOptions options)
public static Output<GetModelDeploymentMonitoringJobResult> getModelDeploymentMonitoringJob(GetModelDeploymentMonitoringJobArgs args, InvokeOptions options)
fn::invoke:
  function: google-native:aiplatform/v1:getModelDeploymentMonitoringJob
  arguments:
    # arguments dictionary

The following arguments are supported:

- Location string
- ModelDeploymentMonitoringJobId string
- Project string
- Location string
- ModelDeploymentMonitoringJobId string
- Project string
- location String
- modelDeploymentMonitoringJobId String
- project String
- location string
- modelDeploymentMonitoringJobId string
- project string
- location String
- modelDeploymentMonitoringJobId String
- project String
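As an illustration, the YAML invoke form can be filled in with concrete argument values. The project, location, job ID, and variable names below are placeholders, not values from this page:

```yaml
variables:
  # Look up an existing ModelDeploymentMonitoringJob (all values are placeholders).
  monitoringJob:
    fn::invoke:
      function: google-native:aiplatform/v1:getModelDeploymentMonitoringJob
      arguments:
        location: us-central1
        modelDeploymentMonitoringJobId: "1234567890"
        project: my-project
outputs:
  # Expose a field from the result, e.g. the job state.
  monitoringJobState: ${monitoringJob.state}
```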
getModelDeploymentMonitoringJob Result
The following output properties are available:
- AnalysisInstanceSchemaUri string
- YAML schema file uri describing the format of a single instance that you want Tensorflow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format(data type) as prediction request/response. If there are any data type differences between predict instance and TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set as all the fields in predict instance formatted as string.
- BigqueryTables List<Pulumi.GoogleNative.Aiplatform.V1.Outputs.GoogleCloudAiplatformV1ModelDeploymentMonitoringBigQueryTableResponse>
- The created bigquery tables for the job under customer project. Customer could do their own query & analysis. There could be 4 log tables in maximum: 1. Training data logging predict request/response 2. Serving data logging predict request/response
- CreateTime string
- Timestamp when this ModelDeploymentMonitoringJob was created.
- DisplayName string
- The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. Display name of a ModelDeploymentMonitoringJob.
- EnableMonitoringPipelineLogs bool
- If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Please note the logs incur cost, which are subject to Cloud Logging pricing.
- EncryptionSpec Pulumi.GoogleNative.Aiplatform.V1.Outputs.GoogleCloudAiplatformV1EncryptionSpecResponse
- Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
- Endpoint string
- Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
- Error Pulumi.GoogleNative.Aiplatform.V1.Outputs.GoogleRpcStatusResponse
- Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- Labels Dictionary<string, string>
- The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- LatestMonitoringPipelineMetadata Pulumi.GoogleNative.Aiplatform.V1.Outputs.GoogleCloudAiplatformV1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse
- Latest triggered monitoring pipeline metadata.
- LogTtl string
- The TTL of BigQuery tables in user projects which stores logs. A day is the basic unit of the TTL and we take the ceil of TTL/86400(a day). e.g. { second: 3600} indicates ttl = 1 day.
- LoggingSamplingStrategy Pulumi.GoogleNative.Aiplatform.V1.Outputs.GoogleCloudAiplatformV1SamplingStrategyResponse
- Sample Strategy for logging.
- ModelDeploymentMonitoringObjectiveConfigs List<Pulumi.GoogleNative.Aiplatform.V1.Outputs.GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfigResponse>
- The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
- ModelDeploymentMonitoringScheduleConfig Pulumi.GoogleNative.Aiplatform.V1.Outputs.GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfigResponse
- Schedule config for running the monitoring job.
- ModelMonitoringAlertConfig Pulumi.GoogleNative.Aiplatform.V1.Outputs.GoogleCloudAiplatformV1ModelMonitoringAlertConfigResponse
- Alert config for model monitoring.
- Name string
- Resource name of a ModelDeploymentMonitoringJob.
- NextScheduleTime string
- Timestamp when this monitoring pipeline will be scheduled to run for the next round.
- PredictInstanceSchemaUri string
- YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
- SamplePredictInstance object
- Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
- ScheduleState string
- Schedule state when the monitoring job is in Running state.
- State string
- The detailed state of the monitoring job. When the job is still creating, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. Pause the job, the state will be 'PAUSED'. Resume the job, the state will return to 'RUNNING'.
- StatsAnomaliesBaseDirectory Pulumi.GoogleNative.Aiplatform.V1.Outputs.GoogleCloudAiplatformV1GcsDestinationResponse
- Stats anomalies base folder path.
- UpdateTime string
- Timestamp when this ModelDeploymentMonitoringJob was updated most recently.
- AnalysisInstanceSchemaUri string
- YAML schema file uri describing the format of a single instance that you want Tensorflow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format(data type) as prediction request/response. If there are any data type differences between predict instance and TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set as all the fields in predict instance formatted as string.
- BigqueryTables []GoogleCloudAiplatformV1ModelDeploymentMonitoringBigQueryTableResponse
- The created bigquery tables for the job under customer project. Customer could do their own query & analysis. There could be 4 log tables in maximum: 1. Training data logging predict request/response 2. Serving data logging predict request/response
- CreateTime string
- Timestamp when this ModelDeploymentMonitoringJob was created.
- DisplayName string
- The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. Display name of a ModelDeploymentMonitoringJob.
- EnableMonitoringPipelineLogs bool
- If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Please note the logs incur cost, which are subject to Cloud Logging pricing.
- EncryptionSpec GoogleCloudAiplatformV1EncryptionSpecResponse
- Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
- Endpoint string
- Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
- Error GoogleRpcStatusResponse
- Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- Labels map[string]string
- The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- LatestMonitoringPipelineMetadata GoogleCloudAiplatformV1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse
- Latest triggered monitoring pipeline metadata.
- LogTtl string
- The TTL of BigQuery tables in user projects which stores logs. A day is the basic unit of the TTL and we take the ceil of TTL/86400(a day). e.g. { second: 3600} indicates ttl = 1 day.
- LoggingSamplingStrategy GoogleCloudAiplatformV1SamplingStrategyResponse
- Sample Strategy for logging.
- ModelDeploymentMonitoringObjectiveConfigs []GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfigResponse
- The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
- ModelDeploymentMonitoringScheduleConfig GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfigResponse
- Schedule config for running the monitoring job.
- ModelMonitoringAlertConfig GoogleCloudAiplatformV1ModelMonitoringAlertConfigResponse
- Alert config for model monitoring.
- Name string
- Resource name of a ModelDeploymentMonitoringJob.
- NextScheduleTime string
- Timestamp when this monitoring pipeline will be scheduled to run for the next round.
- PredictInstanceSchemaUri string
- YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
- SamplePredictInstance interface{}
- Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
- ScheduleState string
- Schedule state when the monitoring job is in Running state.
- State string
- The detailed state of the monitoring job. When the job is still creating, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. Pause the job, the state will be 'PAUSED'. Resume the job, the state will return to 'RUNNING'.
- StatsAnomaliesBaseDirectory GoogleCloudAiplatformV1GcsDestinationResponse
- Stats anomalies base folder path.
- UpdateTime string
- Timestamp when this ModelDeploymentMonitoringJob was updated most recently.
- analysisInstanceSchemaUri String
- YAML schema file uri describing the format of a single instance that you want Tensorflow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format(data type) as prediction request/response. If there are any data type differences between predict instance and TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set as all the fields in predict instance formatted as string.
- bigqueryTables List<GoogleCloudAiplatformV1ModelDeploymentMonitoringBigQueryTableResponse>
- The created bigquery tables for the job under customer project. Customer could do their own query & analysis. There could be 4 log tables in maximum: 1. Training data logging predict request/response 2. Serving data logging predict request/response
- createTime String
- Timestamp when this ModelDeploymentMonitoringJob was created.
- displayName String
- The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. Display name of a ModelDeploymentMonitoringJob.
- enableMonitoringPipelineLogs Boolean
- If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Please note the logs incur cost, which are subject to Cloud Logging pricing.
- encryptionSpec GoogleCloudAiplatformV1EncryptionSpecResponse
- Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
- endpoint String
- Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
- error GoogleRpcStatusResponse
- Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- labels Map<String,String>
- The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- latestMonitoringPipelineMetadata GoogleCloudAiplatformV1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse
- Latest triggered monitoring pipeline metadata.
- logTtl String
- The TTL of BigQuery tables in user projects which stores logs. A day is the basic unit of the TTL and we take the ceil of TTL/86400(a day). e.g. { second: 3600} indicates ttl = 1 day.
- loggingSamplingStrategy GoogleCloudAiplatformV1SamplingStrategyResponse
- Sample Strategy for logging.
- modelDeploymentMonitoringObjectiveConfigs List<GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfigResponse>
- The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
- modelDeploymentMonitoringScheduleConfig GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfigResponse
- Schedule config for running the monitoring job.
- modelMonitoringAlertConfig GoogleCloudAiplatformV1ModelMonitoringAlertConfigResponse
- Alert config for model monitoring.
- name String
- Resource name of a ModelDeploymentMonitoringJob.
- nextScheduleTime String
- Timestamp when this monitoring pipeline will be scheduled to run for the next round.
- predictInstanceSchemaUri String
- YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
- samplePredictInstance Object
- Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
- scheduleState String
- Schedule state when the monitoring job is in Running state.
- state String
- The detailed state of the monitoring job. When the job is still creating, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. Pause the job, the state will be 'PAUSED'. Resume the job, the state will return to 'RUNNING'.
- statsAnomaliesBaseDirectory GoogleCloudAiplatformV1GcsDestinationResponse
- Stats anomalies base folder path.
- updateTime String
- Timestamp when this ModelDeploymentMonitoringJob was updated most recently.
- analysisInstanceSchemaUri string
- YAML schema file uri describing the format of a single instance that you want Tensorflow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format(data type) as prediction request/response. If there are any data type differences between predict instance and TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set as all the fields in predict instance formatted as string.
- bigqueryTables GoogleCloudAiplatformV1ModelDeploymentMonitoringBigQueryTableResponse[]
- The created bigquery tables for the job under customer project. Customer could do their own query & analysis. There could be 4 log tables in maximum: 1. Training data logging predict request/response 2. Serving data logging predict request/response
- createTime string
- Timestamp when this ModelDeploymentMonitoringJob was created.
- displayName string
- The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. Display name of a ModelDeploymentMonitoringJob.
- enableMonitoringPipelineLogs boolean
- If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Please note the logs incur cost, which are subject to Cloud Logging pricing.
- encryptionSpec GoogleCloudAiplatformV1EncryptionSpecResponse
- Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
- endpoint string
- Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
- error GoogleRpcStatusResponse
- Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- labels {[key: string]: string}
- The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- latestMonitoringPipelineMetadata GoogleCloudAiplatformV1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse
- Latest triggered monitoring pipeline metadata.
- logTtl string
- The TTL of BigQuery tables in user projects which stores logs. A day is the basic unit of the TTL and we take the ceil of TTL/86400(a day). e.g. { second: 3600} indicates ttl = 1 day.
- loggingSamplingStrategy GoogleCloudAiplatformV1SamplingStrategyResponse
- Sample Strategy for logging.
- modelDeploymentMonitoringObjectiveConfigs GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfigResponse[]
- The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
- modelDeploymentMonitoringScheduleConfig GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfigResponse
- Schedule config for running the monitoring job.
- modelMonitoringAlertConfig GoogleCloudAiplatformV1ModelMonitoringAlertConfigResponse
- Alert config for model monitoring.
- name string
- Resource name of a ModelDeploymentMonitoringJob.
- nextScheduleTime string
- Timestamp when this monitoring pipeline will be scheduled to run for the next round.
- predictInstanceSchemaUri string
- YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
- samplePredictInstance any
- Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
- scheduleState string
- Schedule state when the monitoring job is in Running state.
- state string
- The detailed state of the monitoring job. When the job is still creating, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. Pause the job, the state will be 'PAUSED'. Resume the job, the state will return to 'RUNNING'.
- statsAnomaliesBaseDirectory GoogleCloudAiplatformV1GcsDestinationResponse
- Stats anomalies base folder path.
- updateTime string
- Timestamp when this ModelDeploymentMonitoringJob was updated most recently.
- analysis_instance_schema_uri str
- YAML schema file uri describing the format of a single instance that you want Tensorflow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format(data type) as prediction request/response. If there are any data type differences between predict instance and TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set as all the fields in predict instance formatted as string.
- bigquery_tables Sequence[GoogleCloudAiplatformV1ModelDeploymentMonitoringBigQueryTableResponse]
- The created bigquery tables for the job under customer project. Customer could do their own query & analysis. There could be 4 log tables in maximum: 1. Training data logging predict request/response 2. Serving data logging predict request/response
- create_time str
- Timestamp when this ModelDeploymentMonitoringJob was created.
- display_name str
- The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. Display name of a ModelDeploymentMonitoringJob.
- enable_monitoring_pipeline_logs bool
- If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Please note the logs incur cost, which are subject to Cloud Logging pricing.
- encryption_spec GoogleCloudAiplatformV1EncryptionSpecResponse
- Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
- endpoint str
- Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
- error GoogleRpcStatusResponse
- Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- labels Mapping[str, str]
- The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- latest_monitoring_pipeline_metadata GoogleCloudAiplatformV1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse
- Latest triggered monitoring pipeline metadata.
- log_ttl str
- The TTL of BigQuery tables in user projects which stores logs. A day is the basic unit of the TTL and we take the ceil of TTL/86400(a day). e.g. { second: 3600} indicates ttl = 1 day.
- logging_sampling_strategy GoogleCloudAiplatformV1SamplingStrategyResponse
- Sample Strategy for logging.
- model_deployment_monitoring_objective_configs Sequence[GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfigResponse]
- The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
- model_deployment_monitoring_schedule_config GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfigResponse
- Schedule config for running the monitoring job.
- model_monitoring_alert_config GoogleCloudAiplatformV1ModelMonitoringAlertConfigResponse
- Alert config for model monitoring.
- name str
- Resource name of a ModelDeploymentMonitoringJob.
- next_schedule_time str
- Timestamp when this monitoring pipeline will be scheduled to run for the next round.
- predict_instance_schema_uri str
- YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
- sample_predict_instance Any
- Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
- schedule_state str
- Schedule state when the monitoring job is in Running state.
- state str
- The detailed state of the monitoring job. When the job is still creating, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. Pause the job, the state will be 'PAUSED'. Resume the job, the state will return to 'RUNNING'.
- stats_anomalies_base_directory GoogleCloudAiplatformV1GcsDestinationResponse
- Stats anomalies base folder path.
- update_time str
- Timestamp when this ModelDeploymentMonitoringJob was updated most recently.
- analysisInstanceSchemaUri String
- YAML schema file uri describing the format of a single instance that you want Tensorflow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format(data type) as prediction request/response. If there are any data type differences between predict instance and TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set as all the fields in predict instance formatted as string.
- bigqueryTables List<Property Map>
- The created bigquery tables for the job under customer project. Customer could do their own query & analysis. There could be 4 log tables in maximum: 1. Training data logging predict request/response 2. Serving data logging predict request/response
- createTime String
- Timestamp when this ModelDeploymentMonitoringJob was created.
- displayName String
- The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. Display name of a ModelDeploymentMonitoringJob.
- enableMonitoringPipelineLogs Boolean
- If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Please note the logs incur cost, which are subject to Cloud Logging pricing.
- encryptionSpec Property Map
- Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
- endpoint String
- Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
- error Property Map
- Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- labels Map<String>
- The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- latestMonitoringPipelineMetadata Property Map
- Latest triggered monitoring pipeline metadata.
- logTtl String
- The TTL of BigQuery tables in user projects which stores logs. A day is the basic unit of the TTL and we take the ceil of TTL/86400(a day). e.g. { second: 3600} indicates ttl = 1 day.
- loggingSamplingStrategy Property Map
- Sample Strategy for logging.
- modelDeploymentMonitoringObjectiveConfigs List<Property Map>
- The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
- modelDeploymentMonitoringScheduleConfig Property Map
- Schedule config for running the monitoring job.
- modelMonitoringAlertConfig Property Map
- Alert config for model monitoring.
- name String
- Resource name of a ModelDeploymentMonitoringJob.
- nextScheduleTime String
- Timestamp when this monitoring pipeline will be scheduled to run for the next round.
- predictInstanceSchemaUri String
- YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
- samplePredictInstance Any
- Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
- scheduleState String
- Schedule state when the monitoring job is in Running state.
- state String
- The detailed state of the monitoring job. When the job is still creating, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. Pause the job, the state will be 'PAUSED'. Resume the job, the state will return to 'RUNNING'.
- statsAnomaliesBaseDirectory Property Map
- Stats anomalies base folder path.
- updateTime String
- Timestamp when this ModelDeploymentMonitoringJob was updated most recently.
Supporting Types
GoogleCloudAiplatformV1BigQueryDestinationResponse      
- OutputUri string
- BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- OutputUri string
- BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- outputUri String
- BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- outputUri string
- BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- output_uri str
- BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- outputUri String
- BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
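The three accepted outputUri forms can be told apart by counting dot-separated components after the bq:// prefix. The validator below is an illustrative sketch, not part of the SDK:

```python
def classify_bq_uri(uri: str) -> str:
    # Illustrative only: maps a BigQuery destination URI to the
    # documented form it matches (project, dataset, or table).
    if not uri.startswith("bq://") or len(uri) > 2000:
        raise ValueError("must be a bq:// URI up to 2000 characters")
    parts = uri[len("bq://"):].split(".")
    if len(parts) == 1:
        return "project"   # bq://projectId
    if len(parts) == 2:
        return "dataset"   # bq://projectId.bqDatasetId
    if len(parts) == 3:
        return "table"     # bq://projectId.bqDatasetId.bqTableId
    raise ValueError("too many components for a BigQuery path")

print(classify_bq_uri("bq://my-project.logs.predictions"))  # table
```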
GoogleCloudAiplatformV1BigQuerySourceResponse      
- InputUri string
- BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- InputUri string
- BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- inputUri String
- BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- inputUri string
- BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- input_uri str
- BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- inputUri String
- BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
GoogleCloudAiplatformV1EncryptionSpecResponse     
- KmsKeyName string
- The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- KmsKeyName string
- The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String
- The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName string
- The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kms_key_name str
- The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String
- The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
GoogleCloudAiplatformV1GcsDestinationResponse     
- OutputUriPrefix string
- Google Cloud Storage URI of the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- OutputUriPrefix string
- Google Cloud Storage URI of the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix String
- Google Cloud Storage URI of the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix string
- Google Cloud Storage URI of the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- output_uri_prefix str
- Google Cloud Storage URI of the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix String
- Google Cloud Storage URI of the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
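The trailing-slash behavior described for outputUriPrefix can be mimicked client-side; this helper is hypothetical, since the service performs the normalization itself:

```python
def normalize_gcs_prefix(uri: str) -> str:
    # Mirrors the documented behavior: if the URI doesn't end with '/',
    # one is appended so the value always names a directory.
    return uri if uri.endswith("/") else uri + "/"

print(normalize_gcs_prefix("gs://my-bucket/monitoring"))   # gs://my-bucket/monitoring/
print(normalize_gcs_prefix("gs://my-bucket/monitoring/"))  # gs://my-bucket/monitoring/
```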
GoogleCloudAiplatformV1GcsSourceResponse     
- Uris List<string>
- Google Cloud Storage URI(s) of the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- Uris []string
- Google Cloud Storage URI(s) of the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris List<String>
- Google Cloud Storage URI(s) of the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris string[]
- Google Cloud Storage URI(s) of the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris Sequence[str]
- Google Cloud Storage URI(s) of the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris List<String>
- Google Cloud Storage URI(s) of the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
GoogleCloudAiplatformV1ModelDeploymentMonitoringBigQueryTableResponse         
- BigqueryTablePath string
- The created BigQuery table to store logs. Customers can run their own queries and analysis. Format: bq://.model_deployment_monitoring_._
- LogSource string
- The source of the log.
- LogType string
- The type of the log.
- BigqueryTablePath string
- The created BigQuery table to store logs. Customers can run their own queries and analysis. Format: bq://.model_deployment_monitoring_._
- LogSource string
- The source of the log.
- LogType string
- The type of the log.
- bigqueryTablePath String
- The created BigQuery table to store logs. Customers can run their own queries and analysis. Format: bq://.model_deployment_monitoring_._
- logSource String
- The source of the log.
- logType String
- The type of the log.
- bigqueryTablePath string
- The created BigQuery table to store logs. Customers can run their own queries and analysis. Format: bq://.model_deployment_monitoring_._
- logSource string
- The source of the log.
- logType string
- The type of the log.
- bigquery_table_path str
- The created BigQuery table to store logs. Customers can run their own queries and analysis. Format: bq://.model_deployment_monitoring_._
- log_source str
- The source of the log.
- log_type str
- The type of the log.
- bigqueryTablePath String
- The created BigQuery table to store logs. Customers can run their own queries and analysis. Format: bq://.model_deployment_monitoring_._
- logSource String
- The source of the log.
- logType String
- The type of the log.
GoogleCloudAiplatformV1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse           
- RunTime string
- The time of the most recent monitoring pipeline run.
- Status Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleRpcStatusResponse
- The status of the most recent monitoring pipeline.
- RunTime string
- The time of the most recent monitoring pipeline run.
- Status GoogleRpcStatusResponse
- The status of the most recent monitoring pipeline.
- runTime String
- The time of the most recent monitoring pipeline run.
- status GoogleRpcStatusResponse
- The status of the most recent monitoring pipeline.
- runTime string
- The time of the most recent monitoring pipeline run.
- status GoogleRpcStatusResponse
- The status of the most recent monitoring pipeline.
- run_time str
- The time of the most recent monitoring pipeline run.
- status GoogleRpcStatusResponse
- The status of the most recent monitoring pipeline.
- runTime String
- The time of the most recent monitoring pipeline run.
- status Property Map
- The status of the most recent monitoring pipeline.
GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfigResponse        
- DeployedModelId string
- The DeployedModel ID of the objective config.
- ObjectiveConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigResponse
- The objective config for the model monitoring job of this deployed model.
- DeployedModelId string
- The DeployedModel ID of the objective config.
- ObjectiveConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigResponse
- The objective config for the model monitoring job of this deployed model.
- deployedModelId String
- The DeployedModel ID of the objective config.
- objectiveConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigResponse
- The objective config for the model monitoring job of this deployed model.
- deployedModelId string
- The DeployedModel ID of the objective config.
- objectiveConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigResponse
- The objective config for the model monitoring job of this deployed model.
- deployed_model_id str
- The DeployedModel ID of the objective config.
- objective_config GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigResponse
- The objective config for the model monitoring job of this deployed model.
- deployedModelId String
- The DeployedModel ID of the objective config.
- objectiveConfig Property Map
- The objective config for the model monitoring job of this deployed model.
GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfigResponse        
- MonitorInterval string
- The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- MonitorWindow string
- The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- MonitorInterval string
- The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- MonitorWindow string
- The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- monitorInterval String
- The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- monitorWindow String
- The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- monitorInterval string
- The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- monitorWindow string
- The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- monitor_interval str
- The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- monitor_window str
- The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- monitorInterval String
- The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- monitorWindow String
- The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
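The monitor_window example (cutoff 2022-01-08 14:30:00, window 3600 seconds) works out as follows. This is a sketch of the arithmetic only, not SDK code:

```python
from datetime import datetime, timedelta

def collection_window(cutoff: datetime, monitor_window_seconds: int):
    # The window of prediction data included in a run stretches back
    # monitor_window seconds from the cutoff time.
    return cutoff - timedelta(seconds=monitor_window_seconds), cutoff

start, end = collection_window(datetime(2022, 1, 8, 14, 30), 3600)
print(start)  # 2022-01-08 13:30:00
print(end)    # 2022-01-08 14:30:00
```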
GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfigResponse          
- UserEmails List<string>
- The email addresses to send the alert to.
- UserEmails []string
- The email addresses to send the alert to.
- userEmails List<String>
- The email addresses to send the alert to.
- userEmails string[]
- The email addresses to send the alert to.
- user_emails Sequence[str]
- The email addresses to send the alert to.
- userEmails List<String>
- The email addresses to send the alert to.
GoogleCloudAiplatformV1ModelMonitoringAlertConfigResponse       
- EmailAlertConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfigResponse
- Email alert config.
- EnableLogging bool
- Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further routed to Pub/Sub or any other services supported by Cloud Logging.
- NotificationChannels List<string>
- Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- EmailAlertConfig GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfigResponse
- Email alert config.
- EnableLogging bool
- Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further routed to Pub/Sub or any other services supported by Cloud Logging.
- NotificationChannels []string
- Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- emailAlertConfig GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfigResponse
- Email alert config.
- enableLogging Boolean
- Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further routed to Pub/Sub or any other services supported by Cloud Logging.
- notificationChannels List<String>
- Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- emailAlertConfig GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfigResponse
- Email alert config.
- enableLogging boolean
- Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further routed to Pub/Sub or any other services supported by Cloud Logging.
- notificationChannels string[]
- Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- email_alert_config GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfigResponse
- Email alert config.
- enable_logging bool
- Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further routed to Pub/Sub or any other services supported by Cloud Logging.
- notification_channels Sequence[str]
- Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- emailAlertConfig Property Map
- Email alert config.
- enableLogging Boolean
- Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further routed to Pub/Sub or any other services supported by Cloud Logging.
- notificationChannels List<String>
- Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse           
- Bigquery Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1BigQueryDestinationResponse
- BigQuery location for BatchExplain output.
- Gcs Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1GcsDestinationResponse
- Cloud Storage location for BatchExplain output.
- PredictionFormat string
- The storage format of the predictions generated by the BatchPrediction job.
- Bigquery GoogleCloudAiplatformV1BigQueryDestinationResponse
- BigQuery location for BatchExplain output.
- Gcs GoogleCloudAiplatformV1GcsDestinationResponse
- Cloud Storage location for BatchExplain output.
- PredictionFormat string
- The storage format of the predictions generated by the BatchPrediction job.
- bigquery GoogleCloudAiplatformV1BigQueryDestinationResponse
- BigQuery location for BatchExplain output.
- gcs GoogleCloudAiplatformV1GcsDestinationResponse
- Cloud Storage location for BatchExplain output.
- predictionFormat String
- The storage format of the predictions generated by the BatchPrediction job.
- bigquery GoogleCloudAiplatformV1BigQueryDestinationResponse
- BigQuery location for BatchExplain output.
- gcs GoogleCloudAiplatformV1GcsDestinationResponse
- Cloud Storage location for BatchExplain output.
- predictionFormat string
- The storage format of the predictions generated by the BatchPrediction job.
- bigquery GoogleCloudAiplatformV1BigQueryDestinationResponse
- BigQuery location for BatchExplain output.
- gcs GoogleCloudAiplatformV1GcsDestinationResponse
- Cloud Storage location for BatchExplain output.
- prediction_format str
- The storage format of the predictions generated by the BatchPrediction job.
- bigquery Property Map
- BigQuery location for BatchExplain output.
- gcs Property Map
- Cloud Storage location for BatchExplain output.
- predictionFormat String
- The storage format of the predictions generated by the BatchPrediction job.
GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigResponse         
- EnableFeatureAttributes bool
- Whether to analyze Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
- ExplanationBaseline Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse
- Predictions generated by the BatchPredictionJob using the baseline dataset.
- EnableFeatureAttributes bool
- Whether to analyze Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
- ExplanationBaseline GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse
- Predictions generated by the BatchPredictionJob using the baseline dataset.
- enableFeatureAttributes Boolean
- Whether to analyze Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
- explanationBaseline GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse
- Predictions generated by the BatchPredictionJob using the baseline dataset.
- enableFeatureAttributes boolean
- Whether to analyze Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
- explanationBaseline GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse
- Predictions generated by the BatchPredictionJob using the baseline dataset.
- enable_feature_attributes bool
- Whether to analyze Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
- explanation_baseline GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse
- Predictions generated by the BatchPredictionJob using the baseline dataset.
- enableFeatureAttributes Boolean
- Whether to analyze Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
- explanationBaseline Property Map
- Predictions generated by the BatchPredictionJob using the baseline dataset.
GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigResponse           
- AttributionScoreDriftThresholds Dictionary<string, string>
- Key is the feature name and value is the threshold. The threshold here is against the attribution score distance between different time windows.
- DefaultDriftThreshold Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ThresholdConfigResponse
- Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- DriftThresholds Dictionary<string, string>
- Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against the feature distribution distance between different time windows.
- AttributionScoreDriftThresholds map[string]string
- Key is the feature name and value is the threshold. The threshold here is against the attribution score distance between different time windows.
- DefaultDriftThreshold GoogleCloudAiplatformV1ThresholdConfigResponse
- Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- DriftThresholds map[string]string
- Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against the feature distribution distance between different time windows.
- attributionScoreDriftThresholds Map<String,String>
- Key is the feature name and value is the threshold. The threshold here is against the attribution score distance between different time windows.
- defaultDriftThreshold GoogleCloudAiplatformV1ThresholdConfigResponse
- Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- driftThresholds Map<String,String>
- Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against the feature distribution distance between different time windows.
- attributionScoreDriftThresholds {[key: string]: string}
- Key is the feature name and value is the threshold. The threshold here is against the attribution score distance between different time windows.
- defaultDriftThreshold GoogleCloudAiplatformV1ThresholdConfigResponse
- Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- driftThresholds {[key: string]: string}
- Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against the feature distribution distance between different time windows.
- attribution_score_drift_thresholds Mapping[str, str]
- Key is the feature name and value is the threshold. The threshold here is against the attribution score distance between different time windows.
- default_drift_threshold GoogleCloudAiplatformV1ThresholdConfigResponse
- Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- drift_thresholds Mapping[str, str]
- Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against the feature distribution distance between different time windows.
- attributionScoreDriftThresholds Map<String>
- Key is the feature name and value is the threshold. The threshold here is against the attribution score distance between different time windows.
- defaultDriftThreshold Property Map
- Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- driftThresholds Map<String>
- Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against the feature distribution distance between different time windows.
GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigResponse       
- ExplanationConfig Pulumi.Google Native. Aiplatform. V1. Inputs. Google Cloud Aiplatform V1Model Monitoring Objective Config Explanation Config Response 
- The config for integrating with Vertex Explainable AI.
- PredictionDrift Pulumi.Detection Config Google Native. Aiplatform. V1. Inputs. Google Cloud Aiplatform V1Model Monitoring Objective Config Prediction Drift Detection Config Response 
- The config for drift of prediction data.
- TrainingDataset Pulumi.Google Native. Aiplatform. V1. Inputs. Google Cloud Aiplatform V1Model Monitoring Objective Config Training Dataset Response 
- Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- TrainingPredictionSkewDetectionConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigResponse
- The config for skew between training data and prediction data.
- ExplanationConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigResponse
- The config for integrating with Vertex Explainable AI.
- PredictionDriftDetectionConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigResponse
- The config for drift of prediction data.
- TrainingDataset GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingDatasetResponse
- Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- TrainingPredictionSkewDetectionConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigResponse
- The config for skew between training data and prediction data.
- explanationConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigResponse
- The config for integrating with Vertex Explainable AI.
- predictionDriftDetectionConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigResponse
- The config for drift of prediction data.
- trainingDataset GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingDatasetResponse
- Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- trainingPredictionSkewDetectionConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigResponse
- The config for skew between training data and prediction data.
- explanationConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigResponse
- The config for integrating with Vertex Explainable AI.
- predictionDriftDetectionConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigResponse
- The config for drift of prediction data.
- trainingDataset GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingDatasetResponse
- Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- trainingPredictionSkewDetectionConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigResponse
- The config for skew between training data and prediction data.
- explanation_config GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigResponse
- The config for integrating with Vertex Explainable AI.
- prediction_drift_detection_config GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigResponse
- The config for drift of prediction data.
- training_dataset GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingDatasetResponse
- Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- training_prediction_skew_detection_config GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigResponse
- The config for skew between training data and prediction data.
- explanationConfig Property Map
- The config for integrating with Vertex Explainable AI.
- predictionDriftDetectionConfig Property Map
- The config for drift of prediction data.
- trainingDataset Property Map
- Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- trainingPredictionSkewDetectionConfig Property Map
- The config for skew between training data and prediction data.
GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingDatasetResponse         
- BigquerySource Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1BigQuerySourceResponse
- The BigQuery table of the unmanaged Dataset used to train this Model.
- DataFormat string
- Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
- Dataset string
- The resource name of the Dataset used to train this Model.
- GcsSource Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1GcsSourceResponse
- The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
- LoggingSamplingStrategy Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1SamplingStrategyResponse
- Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
- TargetField string
- The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
- BigquerySource GoogleCloudAiplatformV1BigQuerySourceResponse
- The BigQuery table of the unmanaged Dataset used to train this Model.
- DataFormat string
- Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
- Dataset string
- The resource name of the Dataset used to train this Model.
- GcsSource GoogleCloudAiplatformV1GcsSourceResponse
- The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
- LoggingSamplingStrategy GoogleCloudAiplatformV1SamplingStrategyResponse
- Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
- TargetField string
- The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
- bigquerySource GoogleCloudAiplatformV1BigQuerySourceResponse
- The BigQuery table of the unmanaged Dataset used to train this Model.
- dataFormat String
- Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
- dataset String
- The resource name of the Dataset used to train this Model.
- gcsSource GoogleCloudAiplatformV1GcsSourceResponse
- The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
- loggingSamplingStrategy GoogleCloudAiplatformV1SamplingStrategyResponse
- Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
- targetField String
- The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
- bigquerySource GoogleCloudAiplatformV1BigQuerySourceResponse
- The BigQuery table of the unmanaged Dataset used to train this Model.
- dataFormat string
- Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
- dataset string
- The resource name of the Dataset used to train this Model.
- gcsSource GoogleCloudAiplatformV1GcsSourceResponse
- The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
- loggingSamplingStrategy GoogleCloudAiplatformV1SamplingStrategyResponse
- Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
- targetField string
- The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
- bigquery_source GoogleCloudAiplatformV1BigQuerySourceResponse
- The BigQuery table of the unmanaged Dataset used to train this Model.
- data_format str
- Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
- dataset str
- The resource name of the Dataset used to train this Model.
- gcs_source GoogleCloudAiplatformV1GcsSourceResponse
- The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
- logging_sampling_strategy GoogleCloudAiplatformV1SamplingStrategyResponse
- Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
- target_field str
- The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
- bigquerySource Property Map
- The BigQuery table of the unmanaged Dataset used to train this Model.
- dataFormat String
- Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
- dataset String
- The resource name of the Dataset used to train this Model.
- gcsSource Property Map
- The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
- loggingSamplingStrategy Property Map
- Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
- targetField String
- The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
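For GCS-backed training datasets, `dataFormat` must be one of the three string values the docs above enumerate. A small illustrative helper (not part of the SDK) that picks the value from a file extension, with hypothetical bucket paths:

```python
# Illustrative only: map a training file's extension to the dataFormat value
# documented above ("tf-record", "csv", "jsonl").
_FORMATS = {".tfrecord": "tf-record", ".csv": "csv", ".jsonl": "jsonl"}

def data_format_for(gcs_uri: str) -> str:
    for ext, fmt in _FORMATS.items():
        if gcs_uri.endswith(ext):
            return fmt
    raise ValueError(f"unsupported training data file: {gcs_uri}")
```

The `.tfrecord` extension is an assumption for the sketch; the API only constrains the `dataFormat` string, not the file name.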
GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigResponse            
- AttributionScoreSkewThresholds Dictionary<string, string>
- Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- DefaultSkewThreshold Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ThresholdConfigResponse
- Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- SkewThresholds Dictionary<string, string>
- Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- AttributionScoreSkewThresholds map[string]string
- Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- DefaultSkewThreshold GoogleCloudAiplatformV1ThresholdConfigResponse
- Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- SkewThresholds map[string]string
- Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- attributionScoreSkewThresholds Map<String,String>
- Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- defaultSkewThreshold GoogleCloudAiplatformV1ThresholdConfigResponse
- Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- skewThresholds Map<String,String>
- Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- attributionScoreSkewThresholds {[key: string]: string}
- Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- defaultSkewThreshold GoogleCloudAiplatformV1ThresholdConfigResponse
- Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- skewThresholds {[key: string]: string}
- Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- attribution_score_skew_thresholds Mapping[str, str]
- Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- default_skew_threshold GoogleCloudAiplatformV1ThresholdConfigResponse
- Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- skew_thresholds Mapping[str, str]
- Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- attributionScoreSkewThresholds Map<String>
- Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- defaultSkewThreshold Property Map
- Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- skewThresholds Map<String>
- Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
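The skew threshold maps above are keyed by feature name, and in these response types the threshold values arrive as strings. A hedged sketch of how a client might evaluate per-feature distances against such a map (feature names and distances are made up):

```python
# Illustrative only: flag features whose training/prediction distribution
# distance exceeds its configured skew threshold. Threshold values are
# strings in the response map types above, so parse them to float.
def skewed_features(distances: dict, skew_thresholds: dict) -> list:
    return sorted(
        name for name, dist in distances.items()
        if name in skew_thresholds and dist > float(skew_thresholds[name])
    )
```

Features with no entry in the map are not monitored, matching the note above that a threshold must be configured for each monitored feature.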
GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfigResponse        
- SampleRate double
- Sample rate (0, 1]
- SampleRate float64
- Sample rate (0, 1]
- sampleRate Double
- Sample rate (0, 1]
- sampleRate number
- Sample rate (0, 1]
- sample_rate float
- Sample rate (0, 1]
- sampleRate Number
- Sample rate (0, 1]
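`sampleRate` above is documented as lying in the half-open interval (0, 1]. A sketch of applying such a rate client-side (the SDK applies it server-side; this helper and its seeding are illustrative only):

```python
import random

# Illustrative only: validate and apply a RandomSampleConfig-style
# sample_rate, constrained to (0, 1] per the docs above.
def sample(rows, sample_rate: float, seed: int = 0):
    if not (0 < sample_rate <= 1):
        raise ValueError("sample_rate must be in (0, 1]")
    rng = random.Random(seed)  # fixed seed for reproducibility in this sketch
    return [r for r in rows if rng.random() < sample_rate]
```

With `sample_rate=1.0` every row is kept, since `random()` returns values strictly below 1.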
GoogleCloudAiplatformV1SamplingStrategyResponse     
- RandomSample Pulumi.Config Google Native. Aiplatform. V1. Inputs. Google Cloud Aiplatform V1Sampling Strategy Random Sample Config Response 
- Random sample config. Will support more sampling strategies later.
- RandomSample GoogleConfig Cloud Aiplatform V1Sampling Strategy Random Sample Config Response 
- Random sample config. Will support more sampling strategies later.
- randomSample GoogleConfig Cloud Aiplatform V1Sampling Strategy Random Sample Config Response 
- Random sample config. Will support more sampling strategies later.
- randomSample GoogleConfig Cloud Aiplatform V1Sampling Strategy Random Sample Config Response 
- Random sample config. Will support more sampling strategies later.
- random_sample_ Googleconfig Cloud Aiplatform V1Sampling Strategy Random Sample Config Response 
- Random sample config. Will support more sampling strategies later.
- randomSample Property MapConfig 
- Random sample config. Will support more sampling strategies later.
GoogleCloudAiplatformV1ThresholdConfigResponse     
- Value double
- Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-infinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if they need to be monitored. Otherwise no alert will be triggered for that feature.
- Value float64
- Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-infinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if they need to be monitored. Otherwise no alert will be triggered for that feature.
- value Double
- Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-infinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if they need to be monitored. Otherwise no alert will be triggered for that feature.
- value number
- Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-infinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if they need to be monitored. Otherwise no alert will be triggered for that feature.
- value float
- Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-infinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if they need to be monitored. Otherwise no alert will be triggered for that feature.
- value Number
- Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-infinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if they need to be monitored. Otherwise no alert will be triggered for that feature.
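For categorical features, the docs above state that distribution distance is the L-infinity norm between the training and prediction category distributions. A small worked sketch of that comparison (the distributions are made-up example data):

```python
# Illustrative only: L-infinity distance between two categorical
# distributions, compared against a ThresholdConfig-style value.
def l_infinity(p: dict, q: dict) -> float:
    categories = set(p) | set(q)
    return max(abs(p.get(c, 0.0) - q.get(c, 0.0)) for c in categories)

def breaches(p: dict, q: dict, threshold: float) -> bool:
    return l_infinity(p, q) > threshold
```

For example, training distribution `{"a": 0.5, "b": 0.5}` versus prediction distribution `{"a": 0.8, "b": 0.2}` gives a distance of 0.3, so a 0.25 threshold would trigger an alert while a 0.35 threshold would not.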
GoogleRpcStatusResponse   
- Code int
- The status code, which should be an enum value of google.rpc.Code.
- Details List<ImmutableDictionary<string, string>>
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- Message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- Code int
- The status code, which should be an enum value of google.rpc.Code.
- Details []map[string]string
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- Message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code Integer
- The status code, which should be an enum value of google.rpc.Code.
- details List<Map<String,String>>
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message String
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code number
- The status code, which should be an enum value of google.rpc.Code.
- details {[key: string]: string}[]
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code int
- The status code, which should be an enum value of google.rpc.Code.
- details Sequence[Mapping[str, str]]
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message str
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code Number
- The status code, which should be an enum value of google.rpc.Code.
- details List<Map<String>>
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message String
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
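The `code` field above is an enum value of google.rpc.Code. A hedged sketch of turning a status dict into a readable string, using a handful of well-known code values (the mapping here is a small subset, not the full enum):

```python
# Illustrative only: a few well-known google.rpc.Code values for reading
# the `code` field of a GoogleRpcStatusResponse-shaped dict.
_RPC_CODES = {
    0: "OK",
    3: "INVALID_ARGUMENT",
    5: "NOT_FOUND",
    7: "PERMISSION_DENIED",
    13: "INTERNAL",
}

def describe_status(status: dict) -> str:
    name = _RPC_CODES.get(status.get("code", 0), "UNKNOWN_CODE")
    msg = status.get("message", "")
    return f"{name}: {msg}" if msg else name
```

Per the docs above, `message` is developer-facing English; localized user-facing messages belong in `details`.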
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0