Google Cloud Native is in preview. Google Cloud Classic is fully supported.
Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi
google-native.aiplatform/v1beta1.getEndpoint
Gets an Endpoint.
Using getEndpoint
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getEndpoint(args: GetEndpointArgs, opts?: InvokeOptions): Promise<GetEndpointResult>
function getEndpointOutput(args: GetEndpointOutputArgs, opts?: InvokeOptions): Output<GetEndpointResult>
def get_endpoint(endpoint_id: Optional[str] = None,
                 location: Optional[str] = None,
                 project: Optional[str] = None,
                 opts: Optional[InvokeOptions] = None) -> GetEndpointResult
def get_endpoint_output(endpoint_id: Optional[pulumi.Input[str]] = None,
                 location: Optional[pulumi.Input[str]] = None,
                 project: Optional[pulumi.Input[str]] = None,
                 opts: Optional[InvokeOptions] = None) -> Output[GetEndpointResult]
func LookupEndpoint(ctx *Context, args *LookupEndpointArgs, opts ...InvokeOption) (*LookupEndpointResult, error)
func LookupEndpointOutput(ctx *Context, args *LookupEndpointOutputArgs, opts ...InvokeOption) LookupEndpointResultOutput

> Note: This function is named LookupEndpoint in the Go SDK.
public static class GetEndpoint 
{
    public static Task<GetEndpointResult> InvokeAsync(GetEndpointArgs args, InvokeOptions? opts = null)
    public static Output<GetEndpointResult> Invoke(GetEndpointInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetEndpointResult> getEndpoint(GetEndpointArgs args, InvokeOptions options)
public static Output<GetEndpointResult> getEndpoint(GetEndpointArgs args, InvokeOptions options)
fn::invoke:
  function: google-native:aiplatform/v1beta1:getEndpoint
  arguments:
    # arguments dictionary

The following arguments are supported:
- EndpointId string
- Location string
- Project string
- EndpointId string
- Location string
- Project string
- endpointId String
- location String
- project String
- endpointId string
- location string
- project string
- endpoint_id str
- location str
- project str
- endpointId String
- location String
- project String
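All three arguments are plain strings that together identify the Endpoint being looked up. For intuition, they correspond to the segments of the Endpoint's resource name; a minimal sketch (assuming the standard Vertex AI projects/{project}/locations/{location}/endpoints/{endpoint} naming convention, which is not stated on this page):

```python
def endpoint_resource_name(project: str, location: str, endpoint_id: str) -> str:
    """Build the fully qualified Endpoint resource name from the three
    lookup arguments (standard Vertex AI naming convention assumed)."""
    return f"projects/{project}/locations/{location}/endpoints/{endpoint_id}"

# endpoint_resource_name("my-project", "us-central1", "1234567890")
# -> "projects/my-project/locations/us-central1/endpoints/1234567890"
```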
getEndpoint Result
The following output properties are available:
- CreateTime string
- Timestamp when this Endpoint was created.
- DeployedModels List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleCloudAiplatformV1beta1DeployedModelResponse>
- The models deployed in this Endpoint. To add or remove DeployedModels use EndpointService.DeployModel and EndpointService.UndeployModel respectively.
- Description string
- The description of the Endpoint.
- DisplayName string
- The display name of the Endpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- EnablePrivateServiceConnect bool
- Deprecated: If true, expose the Endpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
- EncryptionSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleCloudAiplatformV1beta1EncryptionSpecResponse
- Customer-managed encryption key spec for an Endpoint. If set, this Endpoint and all sub-resources of this Endpoint will be secured by this key.
- Etag string
- Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
- Labels Dictionary<string, string>
- The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- ModelDeploymentMonitoringJob string
- Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by JobService.CreateModelDeploymentMonitoringJob. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}
- Name string
- The resource name of the Endpoint.
- Network string
- Optional. The full name of the Google Compute Engine network to which the Endpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. Only one of the fields, network or enable_private_service_connect, can be set. Format: projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name.
- PredictRequestResponseLoggingConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleCloudAiplatformV1beta1PredictRequestResponseLoggingConfigResponse
- Configures the request-response logging for online prediction.
- TrafficSplit Dictionary<string, string>
- A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If a DeployedModel's ID is not listed in this map, then it receives no traffic. The traffic percentage values must add up to 100, or the map must be empty if the Endpoint is not to accept any traffic at the moment.
- UpdateTime string
- Timestamp when this Endpoint was last updated.
- CreateTime string
- Timestamp when this Endpoint was created.
- DeployedModels []GoogleCloudAiplatformV1beta1DeployedModelResponse
- The models deployed in this Endpoint. To add or remove DeployedModels use EndpointService.DeployModel and EndpointService.UndeployModel respectively.
- Description string
- The description of the Endpoint.
- DisplayName string
- The display name of the Endpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- EnablePrivateServiceConnect bool
- Deprecated: If true, expose the Endpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
- EncryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpecResponse
- Customer-managed encryption key spec for an Endpoint. If set, this Endpoint and all sub-resources of this Endpoint will be secured by this key.
- Etag string
- Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
- Labels map[string]string
- The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- ModelDeploymentMonitoringJob string
- Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by JobService.CreateModelDeploymentMonitoringJob. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}
- Name string
- The resource name of the Endpoint.
- Network string
- Optional. The full name of the Google Compute Engine network to which the Endpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. Only one of the fields, network or enable_private_service_connect, can be set. Format: projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name.
- PredictRequestResponseLoggingConfig GoogleCloudAiplatformV1beta1PredictRequestResponseLoggingConfigResponse
- Configures the request-response logging for online prediction.
- TrafficSplit map[string]string
- A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If a DeployedModel's ID is not listed in this map, then it receives no traffic. The traffic percentage values must add up to 100, or the map must be empty if the Endpoint is not to accept any traffic at the moment.
- UpdateTime string
- Timestamp when this Endpoint was last updated.
- createTime String
- Timestamp when this Endpoint was created.
- deployedModels List<GoogleCloudAiplatformV1beta1DeployedModelResponse>
- The models deployed in this Endpoint. To add or remove DeployedModels use EndpointService.DeployModel and EndpointService.UndeployModel respectively.
- description String
- The description of the Endpoint.
- displayName String
- The display name of the Endpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- enablePrivateServiceConnect Boolean
- Deprecated: If true, expose the Endpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
- encryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpecResponse
- Customer-managed encryption key spec for an Endpoint. If set, this Endpoint and all sub-resources of this Endpoint will be secured by this key.
- etag String
- Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
- labels Map<String,String>
- The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- modelDeploymentMonitoringJob String
- Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by JobService.CreateModelDeploymentMonitoringJob. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}
- name String
- The resource name of the Endpoint.
- network String
- Optional. The full name of the Google Compute Engine network to which the Endpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. Only one of the fields, network or enable_private_service_connect, can be set. Format: projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name.
- predictRequestResponseLoggingConfig GoogleCloudAiplatformV1beta1PredictRequestResponseLoggingConfigResponse
- Configures the request-response logging for online prediction.
- trafficSplit Map<String,String>
- A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If a DeployedModel's ID is not listed in this map, then it receives no traffic. The traffic percentage values must add up to 100, or the map must be empty if the Endpoint is not to accept any traffic at the moment.
- updateTime String
- Timestamp when this Endpoint was last updated.
- createTime string
- Timestamp when this Endpoint was created.
- deployedModels GoogleCloudAiplatformV1beta1DeployedModelResponse[]
- The models deployed in this Endpoint. To add or remove DeployedModels use EndpointService.DeployModel and EndpointService.UndeployModel respectively.
- description string
- The description of the Endpoint.
- displayName string
- The display name of the Endpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- enablePrivateServiceConnect boolean
- Deprecated: If true, expose the Endpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
- encryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpecResponse
- Customer-managed encryption key spec for an Endpoint. If set, this Endpoint and all sub-resources of this Endpoint will be secured by this key.
- etag string
- Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
- labels {[key: string]: string}
- The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- modelDeploymentMonitoringJob string
- Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by JobService.CreateModelDeploymentMonitoringJob. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}
- name string
- The resource name of the Endpoint.
- network string
- Optional. The full name of the Google Compute Engine network to which the Endpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. Only one of the fields, network or enable_private_service_connect, can be set. Format: projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name.
- predictRequestResponseLoggingConfig GoogleCloudAiplatformV1beta1PredictRequestResponseLoggingConfigResponse
- Configures the request-response logging for online prediction.
- trafficSplit {[key: string]: string}
- A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If a DeployedModel's ID is not listed in this map, then it receives no traffic. The traffic percentage values must add up to 100, or the map must be empty if the Endpoint is not to accept any traffic at the moment.
- updateTime string
- Timestamp when this Endpoint was last updated.
- create_time str
- Timestamp when this Endpoint was created.
- deployed_models Sequence[GoogleCloudAiplatformV1beta1DeployedModelResponse]
- The models deployed in this Endpoint. To add or remove DeployedModels use EndpointService.DeployModel and EndpointService.UndeployModel respectively.
- description str
- The description of the Endpoint.
- display_name str
- The display name of the Endpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- enable_private_service_connect bool
- Deprecated: If true, expose the Endpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
- encryption_spec GoogleCloudAiplatformV1beta1EncryptionSpecResponse
- Customer-managed encryption key spec for an Endpoint. If set, this Endpoint and all sub-resources of this Endpoint will be secured by this key.
- etag str
- Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
- labels Mapping[str, str]
- The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- model_deployment_monitoring_job str
- Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by JobService.CreateModelDeploymentMonitoringJob. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}
- name str
- The resource name of the Endpoint.
- network str
- Optional. The full name of the Google Compute Engine network to which the Endpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. Only one of the fields, network or enable_private_service_connect, can be set. Format: projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name.
- predict_request_response_logging_config GoogleCloudAiplatformV1beta1PredictRequestResponseLoggingConfigResponse
- Configures the request-response logging for online prediction.
- traffic_split Mapping[str, str]
- A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If a DeployedModel's ID is not listed in this map, then it receives no traffic. The traffic percentage values must add up to 100, or the map must be empty if the Endpoint is not to accept any traffic at the moment.
- update_time str
- Timestamp when this Endpoint was last updated.
- createTime String
- Timestamp when this Endpoint was created.
- deployedModels List<Property Map>
- The models deployed in this Endpoint. To add or remove DeployedModels use EndpointService.DeployModel and EndpointService.UndeployModel respectively.
- description String
- The description of the Endpoint.
- displayName String
- The display name of the Endpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- enablePrivateServiceConnect Boolean
- Deprecated: If true, expose the Endpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
- encryptionSpec Property Map
- Customer-managed encryption key spec for an Endpoint. If set, this Endpoint and all sub-resources of this Endpoint will be secured by this key.
- etag String
- Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
- labels Map<String>
- The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- modelDeploymentMonitoringJob String
- Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by JobService.CreateModelDeploymentMonitoringJob. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}
- name String
- The resource name of the Endpoint.
- network String
- Optional. The full name of the Google Compute Engine network to which the Endpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. Only one of the fields, network or enable_private_service_connect, can be set. Format: projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name.
- predictRequestResponseLoggingConfig Property Map
- Configures the request-response logging for online prediction.
- trafficSplit Map<String>
- A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If a DeployedModel's ID is not listed in this map, then it receives no traffic. The traffic percentage values must add up to 100, or the map must be empty if the Endpoint is not to accept any traffic at the moment.
- updateTime String
- Timestamp when this Endpoint was last updated.
Supporting Types
GoogleCloudAiplatformV1beta1AutomaticResourcesResponse     
- MaxReplicaCount int
- Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic will be assumed, though Vertex AI may be unable to scale beyond a certain replica number.
- MinReplicaCount int
- Immutable. The minimum number of replicas this DeployedModel will always be deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
- MaxReplicaCount int
- Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic will be assumed, though Vertex AI may be unable to scale beyond a certain replica number.
- MinReplicaCount int
- Immutable. The minimum number of replicas this DeployedModel will always be deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
- maxReplicaCount Integer
- Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic will be assumed, though Vertex AI may be unable to scale beyond a certain replica number.
- minReplicaCount Integer
- Immutable. The minimum number of replicas this DeployedModel will always be deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
- maxReplicaCount number
- Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic will be assumed, though Vertex AI may be unable to scale beyond a certain replica number.
- minReplicaCount number
- Immutable. The minimum number of replicas this DeployedModel will always be deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
- max_replica_count int
- Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic will be assumed, though Vertex AI may be unable to scale beyond a certain replica number.
- min_replica_count int
- Immutable. The minimum number of replicas this DeployedModel will always be deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
- maxReplicaCount Number
- Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic will be assumed, though Vertex AI may be unable to scale beyond a certain replica number.
- minReplicaCount Number
- Immutable. The minimum number of replicas this DeployedModel will always be deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
GoogleCloudAiplatformV1beta1AutoscalingMetricSpecResponse      
- MetricName string
- The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
- Target int
- The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
- MetricName string
- The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
- Target int
- The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
- metricName String
- The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
- target Integer
- The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
- metricName string
- The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
- target number
- The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
- metric_name str
- The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
- target int
- The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
- metricName String
- The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
- target Number
- The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
GoogleCloudAiplatformV1beta1BigQueryDestinationResponse      
- OutputUri string
- BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- OutputUri string
- BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- outputUri String
- BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- outputUri string
- BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- output_uri str
- BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- outputUri String
- BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
GoogleCloudAiplatformV1beta1BlurBaselineConfigResponse      
- MaxBlurSigma double
- The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
- MaxBlurSigma float64
- The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
- maxBlurSigma Double
- The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
- maxBlurSigma number
- The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
- max_blur_sigma float
- The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
- maxBlurSigma Number
- The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.
GoogleCloudAiplatformV1beta1DedicatedResourcesResponse     
- AutoscalingMetricSpecs List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1AutoscalingMetricSpecResponse>
- Immutable. The metric specifications that override a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics, and will scale up when either metric exceeds its target value and scale down when both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
- MachineSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1MachineSpecResponse
- Immutable. The specification of a single machine used by the prediction.
- MaxReplicaCount int
- Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count will be used as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
- MinReplicaCount int
- Immutable. The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
- AutoscalingMetricSpecs []GoogleCloudAiplatformV1beta1AutoscalingMetricSpecResponse
- Immutable. The metric specifications that override a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics, and will scale up when either metric exceeds its target value and scale down when both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
- MachineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
- Immutable. The specification of a single machine used by the prediction.
- MaxReplicaCount int
- Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count will be used as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
- MinReplicaCount int
- Immutable. The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
- autoscalingMetricSpecs List<GoogleCloudAiplatformV1beta1AutoscalingMetricSpecResponse>
- Immutable. The metric specifications that override a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics, and will scale up when either metric exceeds its target value and scale down when both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
- machineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
- Immutable. The specification of a single machine used by the prediction.
- maxReplicaCount Integer
- Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count will be used as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
- minReplicaCount Integer
- Immutable. The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
- autoscalingMetricSpecs GoogleCloudAiplatformV1beta1AutoscalingMetricSpecResponse[]
- Immutable. The metric specifications that override a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics, and will scale up when either metric exceeds its target value and scale down when both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
- machineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
- Immutable. The specification of a single machine used by the prediction.
- maxReplicaCount number
- Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count will be used as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
- minReplicaCount number
- Immutable. The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
- autoscaling_metric_specs Sequence[GoogleCloudAiplatformV1beta1AutoscalingMetricSpecResponse]
- Immutable. The metric specifications that override a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics, and will scale up when either metric exceeds its target value and scale down when both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
- machine_spec GoogleCloudAiplatformV1beta1MachineSpecResponse
- Immutable. The specification of a single machine used by the prediction.
- max_replica_count int
- Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count will be used as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
- min_replica_count int
- Immutable. The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
- autoscalingMetricSpecs List<Property Map>
- Immutable. The metric specifications that override a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics, and will scale up when either metric exceeds its target value and scale down when both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
- machineSpec Property Map
- Immutable. The specification of a single machine used by the prediction.
- maxReplicaCount Number
- Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count will be used as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
- minReplicaCount Number
- Immutable. The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
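The autoscaling rule described for autoscaling_metric_specs (scale up when either metric exceeds its target, scale down only when every metric is under its target, default target 60) can be sketched as a plain function. This is an illustrative model of the documented behavior, not SDK code; the function and metric names are assumptions.

```python
# Illustrative sketch of the documented DedicatedResources autoscaling rule.
DEFAULT_TARGET = 60  # documented default target value for each metric

def scaling_decision(utilizations, targets=None):
    """Decide a scaling action from per-metric utilizations (percent).

    utilizations: dict mapping metric name -> current utilization.
    targets: optional dict of per-metric target overrides (defaults to 60).
    Returns "scale_up", "scale_down", or "hold".
    """
    targets = targets or {}
    above = [m for m, u in utilizations.items()
             if u > targets.get(m, DEFAULT_TARGET)]
    below = [m for m, u in utilizations.items()
             if u < targets.get(m, DEFAULT_TARGET)]
    if above:                                  # any metric over target -> scale up
        return "scale_up"
    if len(below) == len(utilizations):        # all metrics under target -> scale down
        return "scale_down"
    return "hold"
```

For example, with both CPU utilization and accelerator duty cycle tracked, a single hot metric triggers a scale-up even while the other metric is idle.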
GoogleCloudAiplatformV1beta1DeployedModelResponse     
- AutomaticResources Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1AutomaticResourcesResponse
- A description of resources that to a large degree are decided by Vertex AI, and require only a modest additional configuration.
- CreateTime string
- Timestamp when the DeployedModel was created.
- DedicatedResources Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1DedicatedResourcesResponse
- A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
- DisableExplanations bool
- If true, deploy the model without the explainable feature, regardless of the existence of Model.explanation_spec or explanation_spec.
- DisplayName string
- The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
- EnableAccessLogging bool
- If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request. Note that logs may incur a cost, especially if your project receives prediction requests at a high queries-per-second (QPS) rate. Estimate your costs before enabling this option.
- EnableContainerLogging bool
- If true, the container of the DeployedModel instances will send stderr and stdout streams to Cloud Logging. Only supported for custom-trained Models and AutoML Tabular Models.
- ExplanationSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ExplanationSpecResponse
- Explanation configuration for this DeployedModel. When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.
- Model string
- The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain a version id or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version will be deployed.
- ModelVersionId string
- The version ID of the model that is deployed.
- PrivateEndpoints Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1PrivateEndpointsResponse
- Provide paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured.
- ServiceAccount string
- The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.
- SharedResources string
- The resource name of the shared DeploymentResourcePool to deploy on. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}
- AutomaticResources GoogleCloudAiplatformV1beta1AutomaticResourcesResponse
- A description of resources that to a large degree are decided by Vertex AI, and require only a modest additional configuration.
- CreateTime string
- Timestamp when the DeployedModel was created.
- DedicatedResources GoogleCloudAiplatformV1beta1DedicatedResourcesResponse
- A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
- DisableExplanations bool
- If true, deploy the model without the explainable feature, regardless of the existence of Model.explanation_spec or explanation_spec.
- DisplayName string
- The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
- EnableAccessLogging bool
- If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request. Note that logs may incur a cost, especially if your project receives prediction requests at a high queries-per-second (QPS) rate. Estimate your costs before enabling this option.
- EnableContainerLogging bool
- If true, the container of the DeployedModel instances will send stderr and stdout streams to Cloud Logging. Only supported for custom-trained Models and AutoML Tabular Models.
- ExplanationSpec GoogleCloudAiplatformV1beta1ExplanationSpecResponse
- Explanation configuration for this DeployedModel. When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.
- Model string
- The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain a version id or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version will be deployed.
- ModelVersionId string
- The version ID of the model that is deployed.
- PrivateEndpoints GoogleCloudAiplatformV1beta1PrivateEndpointsResponse
- Provide paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured.
- ServiceAccount string
- The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.
- SharedResources string
- The resource name of the shared DeploymentResourcePool to deploy on. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}
- automaticResources GoogleCloudAiplatformV1beta1AutomaticResourcesResponse
- A description of resources that to a large degree are decided by Vertex AI, and require only a modest additional configuration.
- createTime String
- Timestamp when the DeployedModel was created.
- dedicatedResources GoogleCloudAiplatformV1beta1DedicatedResourcesResponse
- A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
- disableExplanations Boolean
- If true, deploy the model without the explainable feature, regardless of the existence of Model.explanation_spec or explanation_spec.
- displayName String
- The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
- enableAccessLogging Boolean
- If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request. Note that logs may incur a cost, especially if your project receives prediction requests at a high queries-per-second (QPS) rate. Estimate your costs before enabling this option.
- enableContainerLogging Boolean
- If true, the container of the DeployedModel instances will send stderr and stdout streams to Cloud Logging. Only supported for custom-trained Models and AutoML Tabular Models.
- explanationSpec GoogleCloudAiplatformV1beta1ExplanationSpecResponse
- Explanation configuration for this DeployedModel. When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.
- model String
- The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain a version id or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version will be deployed.
- modelVersionId String
- The version ID of the model that is deployed.
- privateEndpoints GoogleCloudAiplatformV1beta1PrivateEndpointsResponse
- Provide paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured.
- serviceAccount String
- The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.
- sharedResources String
- The resource name of the shared DeploymentResourcePool to deploy on. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}
- automaticResources GoogleCloudAiplatformV1beta1AutomaticResourcesResponse
- A description of resources that to a large degree are decided by Vertex AI, and require only a modest additional configuration.
- createTime string
- Timestamp when the DeployedModel was created.
- dedicatedResources GoogleCloudAiplatformV1beta1DedicatedResourcesResponse
- A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
- disableExplanations boolean
- If true, deploy the model without the explainable feature, regardless of the existence of Model.explanation_spec or explanation_spec.
- displayName string
- The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
- enableAccessLogging boolean
- If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request. Note that logs may incur a cost, especially if your project receives prediction requests at a high queries-per-second (QPS) rate. Estimate your costs before enabling this option.
- enableContainerLogging boolean
- If true, the container of the DeployedModel instances will send stderr and stdout streams to Cloud Logging. Only supported for custom-trained Models and AutoML Tabular Models.
- explanationSpec GoogleCloudAiplatformV1beta1ExplanationSpecResponse
- Explanation configuration for this DeployedModel. When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.
- model string
- The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain a version id or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version will be deployed.
- modelVersionId string
- The version ID of the model that is deployed.
- privateEndpoints GoogleCloudAiplatformV1beta1PrivateEndpointsResponse
- Provide paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured.
- serviceAccount string
- The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.
- sharedResources string
- The resource name of the shared DeploymentResourcePool to deploy on. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}
- automatic_resources GoogleCloudAiplatformV1beta1AutomaticResourcesResponse
- A description of resources that to a large degree are decided by Vertex AI, and require only a modest additional configuration.
- create_time str
- Timestamp when the DeployedModel was created.
- dedicated_resources GoogleCloudAiplatformV1beta1DedicatedResourcesResponse
- A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
- disable_explanations bool
- If true, deploy the model without the explainable feature, regardless of the existence of Model.explanation_spec or explanation_spec.
- display_name str
- The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
- enable_access_logging bool
- If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request. Note that logs may incur a cost, especially if your project receives prediction requests at a high queries-per-second (QPS) rate. Estimate your costs before enabling this option.
- enable_container_logging bool
- If true, the container of the DeployedModel instances will send stderr and stdout streams to Cloud Logging. Only supported for custom-trained Models and AutoML Tabular Models.
- explanation_spec GoogleCloudAiplatformV1beta1ExplanationSpecResponse
- Explanation configuration for this DeployedModel. When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.
- model str
- The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain a version id or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version will be deployed.
- model_version_id str
- The version ID of the model that is deployed.
- private_endpoints GoogleCloudAiplatformV1beta1PrivateEndpointsResponse
- Provide paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured.
- service_account str
- The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.
- shared_resources str
- The resource name of the shared DeploymentResourcePool to deploy on. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}
- automaticResources Property Map
- A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration.
- createTime String
- Timestamp when the DeployedModel was created.
- dedicatedResources Property Map
- A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
- disableExplanations Boolean
- If true, deploy the model without explainable feature, regardless the existence of Model.explanation_spec or explanation_spec.
- displayName String
- The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
- enableAccessLogging Boolean
- If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request. Note that logs may incur a cost, especially if your project receives prediction requests at a high queries-per-second (QPS) rate. Estimate your costs before enabling this option.
- enableContainerLogging Boolean
- If true, the container of the DeployedModel instances will send stderr and stdout streams to Cloud Logging. Only supported for custom-trained Models and AutoML Tabular Models.
- explanationSpec Property Map
- Explanation configuration for this DeployedModel. When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.
- model String
- The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain version id or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version will be deployed.
- modelVersionId String
- The version ID of the model that is deployed.
- privateEndpoints Property Map
- Provide paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured.
- serviceAccount String
- The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.
- sharedResources String
- The resource name of the shared DeploymentResourcePool to deploy on. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}
GoogleCloudAiplatformV1beta1EncryptionSpecResponse     
- KmsKeyName string
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- KmsKeyName string
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName string
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kms_key_name str
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
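The kmsKeyName field requires the key to be in the same region as the compute resource. A small sketch of checking that constraint client-side before deploying; `kms_key_matches_region` is a hypothetical helper, not an SDK function:

```python
import re

# Matches the documented form:
# projects/{project}/locations/{location}/keyRings/{ring}/cryptoKeys/{key}
_KMS_KEY_RE = re.compile(
    r"projects/(?P<project>[^/]+)/locations/(?P<location>[^/]+)"
    r"/keyRings/(?P<ring>[^/]+)/cryptoKeys/(?P<key>[^/]+)"
)

def kms_key_matches_region(kms_key_name: str, resource_region: str) -> bool:
    """Return True iff the key's location equals the resource's region."""
    m = _KMS_KEY_RE.fullmatch(kms_key_name)
    if m is None:
        raise ValueError(f"malformed KMS key name: {kms_key_name!r}")
    return m.group("location") == resource_region
```

This only validates the name shape and region; whether the key exists and is usable is still checked server-side.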
GoogleCloudAiplatformV1beta1ExamplesExampleGcsSourceResponse       
- DataFormat string
- The format in which instances are given; if not specified, the JSONL format is assumed. Currently only the JSONL format is supported.
- GcsSource Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1GcsSourceResponse
- The Cloud Storage location for the input instances.
- DataFormat string
- The format in which instances are given; if not specified, the JSONL format is assumed. Currently only the JSONL format is supported.
- GcsSource GoogleCloudAiplatformV1beta1GcsSourceResponse
- The Cloud Storage location for the input instances.
- dataFormat String
- The format in which instances are given; if not specified, the JSONL format is assumed. Currently only the JSONL format is supported.
- gcsSource GoogleCloudAiplatformV1beta1GcsSourceResponse
- The Cloud Storage location for the input instances.
- dataFormat string
- The format in which instances are given; if not specified, the JSONL format is assumed. Currently only the JSONL format is supported.
- gcsSource GoogleCloudAiplatformV1beta1GcsSourceResponse
- The Cloud Storage location for the input instances.
- data_format str
- The format in which instances are given; if not specified, the JSONL format is assumed. Currently only the JSONL format is supported.
- gcs_source GoogleCloudAiplatformV1beta1GcsSourceResponse
- The Cloud Storage location for the input instances.
- dataFormat String
- The format in which instances are given; if not specified, the JSONL format is assumed. Currently only the JSONL format is supported.
- gcsSource Property Map
- The Cloud Storage location for the input instances.
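Since the only supported dataFormat is currently JSONL (one JSON object per line), a minimal reader sketch for such input instances; `read_jsonl` is a hypothetical helper and the instances shown are illustrative:

```python
import json

def read_jsonl(text: str) -> list:
    """Parse JSONL content: one JSON object per non-empty line."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

# Illustrative JSONL payload of the kind a GcsSource object would contain.
instances = read_jsonl('{"feature": 1}\n{"feature": 2}\n')
```

In practice the content would be downloaded from the Cloud Storage URIs named in gcsSource rather than held in a string.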
GoogleCloudAiplatformV1beta1ExamplesResponse    
- ExampleGcsSource Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ExamplesExampleGcsSourceResponse
- The Cloud Storage input instances.
- GcsSource Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1GcsSourceResponse
- The Cloud Storage locations that contain the instances to be indexed for approximate nearest neighbor search.
- NearestNeighborSearchConfig object
- The full configuration for the generated index; the semantics are the same as metadata and should match NearestNeighborSearchConfig.
- NeighborCount int
- The number of neighbors to return when querying for examples.
- Presets Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1PresetsResponse
- Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.
- ExampleGcsSource GoogleCloudAiplatformV1beta1ExamplesExampleGcsSourceResponse
- The Cloud Storage input instances.
- GcsSource GoogleCloudAiplatformV1beta1GcsSourceResponse
- The Cloud Storage locations that contain the instances to be indexed for approximate nearest neighbor search.
- NearestNeighborSearchConfig interface{}
- The full configuration for the generated index; the semantics are the same as metadata and should match NearestNeighborSearchConfig.
- NeighborCount int
- The number of neighbors to return when querying for examples.
- Presets GoogleCloudAiplatformV1beta1PresetsResponse
- Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.
- exampleGcsSource GoogleCloudAiplatformV1beta1ExamplesExampleGcsSourceResponse
- The Cloud Storage input instances.
- gcsSource GoogleCloudAiplatformV1beta1GcsSourceResponse
- The Cloud Storage locations that contain the instances to be indexed for approximate nearest neighbor search.
- nearestNeighborSearchConfig Object
- The full configuration for the generated index; the semantics are the same as metadata and should match NearestNeighborSearchConfig.
- neighborCount Integer
- The number of neighbors to return when querying for examples.
- presets GoogleCloudAiplatformV1beta1PresetsResponse
- Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.
- exampleGcsSource GoogleCloudAiplatformV1beta1ExamplesExampleGcsSourceResponse
- The Cloud Storage input instances.
- gcsSource GoogleCloudAiplatformV1beta1GcsSourceResponse
- The Cloud Storage locations that contain the instances to be indexed for approximate nearest neighbor search.
- nearestNeighborSearchConfig any
- The full configuration for the generated index; the semantics are the same as metadata and should match NearestNeighborSearchConfig.
- neighborCount number
- The number of neighbors to return when querying for examples.
- presets GoogleCloudAiplatformV1beta1PresetsResponse
- Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.
- example_gcs_source GoogleCloudAiplatformV1beta1ExamplesExampleGcsSourceResponse
- The Cloud Storage input instances.
- gcs_source GoogleCloudAiplatformV1beta1GcsSourceResponse
- The Cloud Storage locations that contain the instances to be indexed for approximate nearest neighbor search.
- nearest_neighbor_search_config Any
- The full configuration for the generated index; the semantics are the same as metadata and should match NearestNeighborSearchConfig.
- neighbor_count int
- The number of neighbors to return when querying for examples.
- presets GoogleCloudAiplatformV1beta1PresetsResponse
- Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.
- exampleGcsSource Property Map
- The Cloud Storage input instances.
- gcsSource Property Map
- The Cloud Storage locations that contain the instances to be indexed for approximate nearest neighbor search.
- nearestNeighborSearchConfig Any
- The full configuration for the generated index; the semantics are the same as metadata and should match NearestNeighborSearchConfig.
- neighborCount Number
- The number of neighbors to return when querying for examples.
- presets Property Map
- Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.
GoogleCloudAiplatformV1beta1ExplanationMetadataResponse     
- FeatureAttributionsSchemaUri string
- Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML tabular Models always have this field populated by Vertex AI. Note: the URI given on output may be different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access.
- Inputs Dictionary<string, string>
- Map from feature names to feature input metadata. Keys are the names of the features. Values are the specification of the feature. An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs. The baseline of the empty feature is chosen by Vertex AI. For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions are keyed by this key (if not grouped with another feature). For custom images, the key must match the key in instance.
- LatentSpaceSource string
- Name of the source to generate embeddings for example-based explanations.
- Outputs Dictionary<string, string>
- Map from output names to output metadata. For Vertex AI-provided Tensorflow images, keys can be any user-defined string that consists of any UTF-8 characters. For custom images, keys are the name of the output field in the prediction to be explained. Currently only one key is allowed.
- FeatureAttributionsSchemaUri string
- Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML tabular Models always have this field populated by Vertex AI. Note: the URI given on output may be different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access.
- Inputs map[string]string
- Map from feature names to feature input metadata. Keys are the names of the features. Values are the specification of the feature. An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs. The baseline of the empty feature is chosen by Vertex AI. For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions are keyed by this key (if not grouped with another feature). For custom images, the key must match the key in instance.
- LatentSpaceSource string
- Name of the source to generate embeddings for example-based explanations.
- Outputs map[string]string
- Map from output names to output metadata. For Vertex AI-provided Tensorflow images, keys can be any user-defined string that consists of any UTF-8 characters. For custom images, keys are the name of the output field in the prediction to be explained. Currently only one key is allowed.
- featureAttributionsSchemaUri String
- Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML tabular Models always have this field populated by Vertex AI. Note: the URI given on output may be different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access.
- inputs Map<String,String>
- Map from feature names to feature input metadata. Keys are the names of the features. Values are the specification of the feature. An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs. The baseline of the empty feature is chosen by Vertex AI. For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions are keyed by this key (if not grouped with another feature). For custom images, the key must match the key in instance.
- latentSpaceSource String
- Name of the source to generate embeddings for example-based explanations.
- outputs Map<String,String>
- Map from output names to output metadata. For Vertex AI-provided Tensorflow images, keys can be any user-defined string that consists of any UTF-8 characters. For custom images, keys are the name of the output field in the prediction to be explained. Currently only one key is allowed.
- featureAttributionsSchemaUri string
- Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML tabular Models always have this field populated by Vertex AI. Note: the URI given on output may be different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access.
- inputs {[key: string]: string}
- Map from feature names to feature input metadata. Keys are the names of the features. Values are the specification of the feature. An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs. The baseline of the empty feature is chosen by Vertex AI. For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions are keyed by this key (if not grouped with another feature). For custom images, the key must match the key in instance.
- latentSpaceSource string
- Name of the source to generate embeddings for example-based explanations.
- outputs {[key: string]: string}
- Map from output names to output metadata. For Vertex AI-provided Tensorflow images, keys can be any user-defined string that consists of any UTF-8 characters. For custom images, keys are the name of the output field in the prediction to be explained. Currently only one key is allowed.
- feature_attributions_schema_uri str
- Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML tabular Models always have this field populated by Vertex AI. Note: the URI given on output may be different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access.
- inputs Mapping[str, str]
- Map from feature names to feature input metadata. Keys are the names of the features. Values are the specification of the feature. An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs. The baseline of the empty feature is chosen by Vertex AI. For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions are keyed by this key (if not grouped with another feature). For custom images, the key must match the key in instance.
- latent_space_source str
- Name of the source to generate embeddings for example-based explanations.
- outputs Mapping[str, str]
- Map from output names to output metadata. For Vertex AI-provided Tensorflow images, keys can be any user-defined string that consists of any UTF-8 characters. For custom images, keys are the name of the output field in the prediction to be explained. Currently only one key is allowed.
- featureAttributionsSchemaUri String
- Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML tabular Models always have this field populated by Vertex AI. Note: the URI given on output may be different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access.
- inputs Map<String>
- Map from feature names to feature input metadata. Keys are the names of the features. Values are the specification of the feature. An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs. The baseline of the empty feature is chosen by Vertex AI. For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions are keyed by this key (if not grouped with another feature). For custom images, the key must match the key in instance.
- latentSpaceSource String
- Name of the source to generate embeddings for example-based explanations.
- outputs Map<String>
- Map from output names to output metadata. For Vertex AI-provided Tensorflow images, keys can be any user-defined string that consists of any UTF-8 characters. For custom images, keys are the name of the output field in the prediction to be explained. Currently only one key is allowed.
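An illustrative, minimal ExplanationMetadata-shaped dict built from the field descriptions above; the bucket path and feature/output names are hypothetical, and the empty inner dicts stand in for InputMetadata/OutputMetadata objects:

```python
# Illustrative shape only; actual contents come from the deployed
# Model's schema. The schema URI and names below are made up.
explanation_metadata = {
    "featureAttributionsSchemaUri": "gs://my-bucket/attributions_schema.yaml",
    # An empty InputMetadata is valid (describes a text feature whose
    # name is the key; Vertex AI chooses the baseline).
    "inputs": {"age": {}},
    # Currently only one output key is allowed.
    "outputs": {"scores": {}},
}
```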
GoogleCloudAiplatformV1beta1ExplanationParametersResponse     
- Examples Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ExamplesResponse
- Example-based explanations that return the nearest neighbors from the provided dataset.
- IntegratedGradientsAttribution Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1IntegratedGradientsAttributionResponse
- An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- OutputIndices List<object>
- If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
- SampledShapleyAttribution Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SampledShapleyAttributionResponse
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for more details: https://arxiv.org/abs/1306.4265.
- TopK int
- If populated, returns attributions for the top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.
- XraiAttribution Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1XraiAttributionResponse
- An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
- Examples GoogleCloudAiplatformV1beta1ExamplesResponse
- Example-based explanations that return the nearest neighbors from the provided dataset.
- IntegratedGradientsAttribution GoogleCloudAiplatformV1beta1IntegratedGradientsAttributionResponse
- An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- OutputIndices []interface{}
- If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
- SampledShapleyAttribution GoogleCloudAiplatformV1beta1SampledShapleyAttributionResponse
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for more details: https://arxiv.org/abs/1306.4265.
- TopK int
- If populated, returns attributions for the top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.
- XraiAttribution GoogleCloudAiplatformV1beta1XraiAttributionResponse
- An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
- examples GoogleCloudAiplatformV1beta1ExamplesResponse
- Example-based explanations that return the nearest neighbors from the provided dataset.
- integratedGradientsAttribution GoogleCloudAiplatformV1beta1IntegratedGradientsAttributionResponse
- An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- outputIndices List<Object>
- If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
- sampledShapleyAttribution GoogleCloudAiplatformV1beta1SampledShapleyAttributionResponse
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for more details: https://arxiv.org/abs/1306.4265.
- topK Integer
- If populated, returns attributions for the top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.
- xraiAttribution GoogleCloudAiplatformV1beta1XraiAttributionResponse
- An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
- examples GoogleCloudAiplatformV1beta1ExamplesResponse
- Example-based explanations that return the nearest neighbors from the provided dataset.
- integratedGradientsAttribution GoogleCloudAiplatformV1beta1IntegratedGradientsAttributionResponse
- An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- outputIndices any[]
- If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
- sampledShapleyAttribution GoogleCloudAiplatformV1beta1SampledShapleyAttributionResponse
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for more details: https://arxiv.org/abs/1306.4265.
- topK number
- If populated, returns attributions for the top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.
- xraiAttribution GoogleCloudAiplatformV1beta1XraiAttributionResponse
- An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
- examples GoogleCloudAiplatformV1beta1ExamplesResponse
- Example-based explanations that return the nearest neighbors from the provided dataset.
- integrated_gradients_attribution GoogleCloudAiplatformV1beta1IntegratedGradientsAttributionResponse
- An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- output_indices Sequence[Any]
- If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
- sampled_shapley_attribution GoogleCloudAiplatformV1beta1SampledShapleyAttributionResponse
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for more details: https://arxiv.org/abs/1306.4265.
- top_k int
- If populated, returns attributions for the top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.
- xrai_attribution GoogleCloudAiplatformV1beta1XraiAttributionResponse
- An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
- examples Property Map
- Example-based explanations that return the nearest neighbors from the provided dataset.
- integratedGradientsAttribution Property Map
- An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- outputIndices List<Any>
- If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
- sampledShapleyAttribution Property Map
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for more details: https://arxiv.org/abs/1306.4265.
- topK Number
- If populated, returns attributions for the top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.
- xraiAttribution Property Map
- An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
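The documented selection precedence (outputIndices wins if set; otherwise topK, defaulting to 1 with -1 meaning all outputs; otherwise the argmax of the outputs) can be sketched as a pure function. `select_output_indices` is a hypothetical helper illustrating the rules, not SDK behavior:

```python
from typing import Optional, Sequence

def select_output_indices(
    outputs: Sequence[float],
    output_indices: Optional[Sequence[int]] = None,
    top_k: Optional[int] = None,
) -> list:
    """Return the output indices attributions would be computed for."""
    # output_indices, when populated, takes precedence.
    if output_indices is not None:
        return list(output_indices)
    # top_k defaults to 1 (the argmax); -1 means all outputs.
    k = 1 if top_k is None else top_k
    order = sorted(range(len(outputs)),
                   key=lambda i: outputs[i], reverse=True)
    return order if k == -1 else order[:k]
```

With neither field populated, this reduces to the argmax index, matching the default described above.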
GoogleCloudAiplatformV1beta1ExplanationSpecResponse     
- Metadata
Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ExplanationMetadataResponse
- Optional. Metadata describing the Model's input and output for explanation.
- Parameters
Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ExplanationParametersResponse
- Parameters that configure explaining of the Model's predictions.
- Metadata
GoogleCloudAiplatformV1beta1ExplanationMetadataResponse
- Optional. Metadata describing the Model's input and output for explanation.
- Parameters
GoogleCloudAiplatformV1beta1ExplanationParametersResponse
- Parameters that configure explaining of the Model's predictions.
- metadata
GoogleCloudAiplatformV1beta1ExplanationMetadataResponse
- Optional. Metadata describing the Model's input and output for explanation.
- parameters
GoogleCloudAiplatformV1beta1ExplanationParametersResponse
- Parameters that configure explaining of the Model's predictions.
- metadata
GoogleCloudAiplatformV1beta1ExplanationMetadataResponse
- Optional. Metadata describing the Model's input and output for explanation.
- parameters
GoogleCloudAiplatformV1beta1ExplanationParametersResponse
- Parameters that configure explaining of the Model's predictions.
- metadata
GoogleCloudAiplatformV1beta1ExplanationMetadataResponse
- Optional. Metadata describing the Model's input and output for explanation.
- parameters
GoogleCloudAiplatformV1beta1ExplanationParametersResponse
- Parameters that configure explaining of the Model's predictions.
- Parameters that configure explaining of the Model's predictions.
- metadata Property Map
- Optional. Metadata describing the Model's input and output for explanation.
- parameters Property Map
- Parameters that configure explaining of the Model's predictions.
GoogleCloudAiplatformV1beta1FeatureNoiseSigmaNoiseSigmaForFeatureResponse          
- Name string
- The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
- Sigma double
- This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
- Name string
- The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
- Sigma float64
- This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
- name String
- The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
- sigma Double
- This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
- name string
- The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
- sigma number
- This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
- name str
- The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
- sigma float
- This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
- name String
- The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.
- sigma Number
- This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.
GoogleCloudAiplatformV1beta1FeatureNoiseSigmaResponse      
- NoiseSigma List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1FeatureNoiseSigmaNoiseSigmaForFeatureResponse>
- Noise sigma per feature. No noise is added to features that are not set.
- NoiseSigma []GoogleCloudAiplatformV1beta1FeatureNoiseSigmaNoiseSigmaForFeatureResponse
- Noise sigma per feature. No noise is added to features that are not set.
- noiseSigma List<GoogleCloudAiplatformV1beta1FeatureNoiseSigmaNoiseSigmaForFeatureResponse>
- Noise sigma per feature. No noise is added to features that are not set.
- noiseSigma GoogleCloudAiplatformV1beta1FeatureNoiseSigmaNoiseSigmaForFeatureResponse[]
- Noise sigma per feature. No noise is added to features that are not set.
- noise_sigma Sequence[GoogleCloudAiplatformV1beta1FeatureNoiseSigmaNoiseSigmaForFeatureResponse]
- Noise sigma per feature. No noise is added to features that are not set.
- noiseSigma List<Property Map>
- Noise sigma per feature. No noise is added to features that are not set.
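A FeatureNoiseSigma value is a list of per-feature (name, sigma) pairs, where each name must match an input defined in the explanation metadata. The sketch below builds that shape as a plain dict with the camelCase keys shown above; the helper name is hypothetical, not a Pulumi SDK call.

```python
# Hypothetical helper: per-feature noise sigma as a FeatureNoiseSigma-shaped
# dict. Features not listed receive no noise; sigma defaults to 0.1
# server-side when omitted.

def make_feature_noise_sigma(per_feature: dict) -> dict:
    """per_feature maps an explanation-metadata input name to its sigma."""
    return {
        "noiseSigma": [
            {"name": name, "sigma": sigma}
            for name, sigma in sorted(per_feature.items())
        ]
    }

cfg = make_feature_noise_sigma({"image": 0.2, "tabular_a": 0.1})
```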
GoogleCloudAiplatformV1beta1GcsSourceResponse     
- Uris List<string>
- Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- Uris []string
- Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris List<String>
- Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris string[]
- Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris Sequence[str]
- Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris List<String>
- Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
GoogleCloudAiplatformV1beta1IntegratedGradientsAttributionResponse      
- BlurBaselineConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1BlurBaselineConfigResponse
- Config for IG with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
- SmoothGradConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SmoothGradConfigResponse
- Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
- StepCount int
- The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is within the desired error range. Valid range of its value is [1, 100], inclusively.
- BlurBaselineConfig GoogleCloudAiplatformV1beta1BlurBaselineConfigResponse
- Config for IG with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
- SmoothGradConfig GoogleCloudAiplatformV1beta1SmoothGradConfigResponse
- Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
- StepCount int
- The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is within the desired error range. Valid range of its value is [1, 100], inclusively.
- blurBaselineConfig GoogleCloudAiplatformV1beta1BlurBaselineConfigResponse
- Config for IG with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
- smoothGradConfig GoogleCloudAiplatformV1beta1SmoothGradConfigResponse
- Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
- stepCount Integer
- The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is within the desired error range. Valid range of its value is [1, 100], inclusively.
- blurBaselineConfig GoogleCloudAiplatformV1beta1BlurBaselineConfigResponse
- Config for IG with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
- smoothGradConfig GoogleCloudAiplatformV1beta1SmoothGradConfigResponse
- Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
- stepCount number
- The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is within the desired error range. Valid range of its value is [1, 100], inclusively.
- blur_baseline_config GoogleCloudAiplatformV1beta1BlurBaselineConfigResponse
- Config for IG with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
- smooth_grad_config GoogleCloudAiplatformV1beta1SmoothGradConfigResponse
- Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
- step_count int
- The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is within the desired error range. Valid range of its value is [1, 100], inclusively.
- blurBaselineConfig Property Map
- Config for IG with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
- smoothGradConfig Property Map
- Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
- stepCount Number
- The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is within the desired error range. Valid range of its value is [1, 100], inclusively.
GoogleCloudAiplatformV1beta1MachineSpecResponse     
- AcceleratorCount int
- The number of accelerators to attach to the machine.
- AcceleratorType string
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- MachineType string
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- TpuTopology string
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- AcceleratorCount int
- The number of accelerators to attach to the machine.
- AcceleratorType string
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- MachineType string
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- TpuTopology string
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- acceleratorCount Integer
- The number of accelerators to attach to the machine.
- acceleratorType String
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- machineType String
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- tpuTopology String
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- acceleratorCount number
- The number of accelerators to attach to the machine.
- acceleratorType string
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- machineType string
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- tpuTopology string
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- accelerator_count int
- The number of accelerators to attach to the machine.
- accelerator_type str
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- machine_type str
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- tpu_topology str
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- acceleratorCount Number
- The number of accelerators to attach to the machine.
- acceleratorType String
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- machineType String
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- tpuTopology String
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
GoogleCloudAiplatformV1beta1PredictRequestResponseLoggingConfigResponse        
- BigqueryDestination Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1BigQueryDestinationResponse
- BigQuery table for logging. If only given a project, a new dataset will be created with name logging_<endpoint-display-name>_<endpoint-id>, where <endpoint-display-name> will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores). If no table name is given, a new table will be created with name request_response_logging.
- Enabled bool
- Whether logging is enabled.
- SamplingRate double
- Percentage of requests to be logged, expressed as a fraction in range (0, 1].
- BigqueryDestination GoogleCloudAiplatformV1beta1BigQueryDestinationResponse
- BigQuery table for logging. If only given a project, a new dataset will be created with name logging_<endpoint-display-name>_<endpoint-id>, where <endpoint-display-name> will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores). If no table name is given, a new table will be created with name request_response_logging.
- Enabled bool
- Whether logging is enabled.
- SamplingRate float64
- Percentage of requests to be logged, expressed as a fraction in range (0, 1].
- bigqueryDestination GoogleCloudAiplatformV1beta1BigQueryDestinationResponse
- BigQuery table for logging. If only given a project, a new dataset will be created with name logging_<endpoint-display-name>_<endpoint-id>, where <endpoint-display-name> will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores). If no table name is given, a new table will be created with name request_response_logging.
- enabled Boolean
- Whether logging is enabled.
- samplingRate Double
- Percentage of requests to be logged, expressed as a fraction in range (0, 1].
- bigqueryDestination GoogleCloudAiplatformV1beta1BigQueryDestinationResponse
- BigQuery table for logging. If only given a project, a new dataset will be created with name logging_<endpoint-display-name>_<endpoint-id>, where <endpoint-display-name> will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores). If no table name is given, a new table will be created with name request_response_logging.
- enabled boolean
- Whether logging is enabled.
- samplingRate number
- Percentage of requests to be logged, expressed as a fraction in range (0, 1].
- bigquery_destination GoogleCloudAiplatformV1beta1BigQueryDestinationResponse
- BigQuery table for logging. If only given a project, a new dataset will be created with name logging_<endpoint-display-name>_<endpoint-id>, where <endpoint-display-name> will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores). If no table name is given, a new table will be created with name request_response_logging.
- enabled bool
- Whether logging is enabled.
- sampling_rate float
- Percentage of requests to be logged, expressed as a fraction in range (0, 1].
- bigqueryDestination Property Map
- BigQuery table for logging. If only given a project, a new dataset will be created with name logging_<endpoint-display-name>_<endpoint-id>, where <endpoint-display-name> will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores). If no table name is given, a new table will be created with name request_response_logging.
- enabled Boolean
- If logging is enabled or not.
- samplingRate Number
- Percentage of requests to be logged, expressed as a fraction in range(0,1].
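A quick sketch of the logging config's shape and the (0, 1] constraint on samplingRate. The helper name is hypothetical, and using outputUri as the BigQueryDestination key is an assumption based on that message's shape, not something this page states.

```python
def make_logging_config(bq_output_uri: str,
                        sampling_rate: float = 1.0,
                        enabled: bool = True) -> dict:
    """PredictRequestResponseLoggingConfig-shaped dict (camelCase keys)."""
    # samplingRate is a fraction of requests to log, in (0, 1].
    if not 0.0 < sampling_rate <= 1.0:
        raise ValueError("samplingRate must be in (0, 1]")
    return {
        "enabled": enabled,
        "samplingRate": sampling_rate,
        # Assumption: BigQueryDestination carries a single outputUri field.
        "bigqueryDestination": {"outputUri": bq_output_uri},
    }

cfg = make_logging_config("bq://my-project.my_dataset", sampling_rate=0.05)
```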
GoogleCloudAiplatformV1beta1PresetsResponse    
- Modality string
- The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.
- Query string
- Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to PRECISE.
- Modality string
- The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.
- Query string
- Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to PRECISE.
- modality String
- The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.
- query String
- Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to PRECISE.
- modality string
- The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.
- query string
- Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to PRECISE.
- modality str
- The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.
- query str
- Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to PRECISE.
- modality String
- The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.
- query String
- Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to PRECISE.
GoogleCloudAiplatformV1beta1PrivateEndpointsResponse     
- ExplainHttpUri string
- Http(s) path to send explain requests.
- HealthHttpUri string
- Http(s) path to send health check requests.
- PredictHttpUri string
- Http(s) path to send prediction requests.
- ServiceAttachment string
- The name of the service attachment resource. Populated if private service connect is enabled.
- ExplainHttpUri string
- Http(s) path to send explain requests.
- HealthHttpUri string
- Http(s) path to send health check requests.
- PredictHttpUri string
- Http(s) path to send prediction requests.
- ServiceAttachment string
- The name of the service attachment resource. Populated if private service connect is enabled.
- explainHttpUri String
- Http(s) path to send explain requests.
- healthHttpUri String
- Http(s) path to send health check requests.
- predictHttpUri String
- Http(s) path to send prediction requests.
- serviceAttachment String
- The name of the service attachment resource. Populated if private service connect is enabled.
- explainHttpUri string
- Http(s) path to send explain requests.
- healthHttpUri string
- Http(s) path to send health check requests.
- predictHttpUri string
- Http(s) path to send prediction requests.
- serviceAttachment string
- The name of the service attachment resource. Populated if private service connect is enabled.
- explain_http_uri str
- Http(s) path to send explain requests.
- health_http_uri str
- Http(s) path to send health check requests.
- predict_http_uri str
- Http(s) path to send prediction requests.
- service_attachment str
- The name of the service attachment resource. Populated if private service connect is enabled.
- explainHttpUri String
- Http(s) path to send explain requests.
- healthHttpUri String
- Http(s) path to send health check requests.
- predictHttpUri String
- Http(s) path to send prediction requests.
- serviceAttachment String
- The name of the service attachment resource. Populated if private service connect is enabled.
GoogleCloudAiplatformV1beta1SampledShapleyAttributionResponse      
- PathCount int
- The number of feature permutations to consider when approximating the Shapley values. Valid range of its value is [1, 50], inclusively.
- PathCount int
- The number of feature permutations to consider when approximating the Shapley values. Valid range of its value is [1, 50], inclusively.
- pathCount Integer
- The number of feature permutations to consider when approximating the Shapley values. Valid range of its value is [1, 50], inclusively.
- pathCount number
- The number of feature permutations to consider when approximating the Shapley values. Valid range of its value is [1, 50], inclusively.
- path_count int
- The number of feature permutations to consider when approximating the Shapley values. Valid range of its value is [1, 50], inclusively.
- pathCount Number
- The number of feature permutations to consider when approximating the Shapley values. Valid range of its value is [1, 50], inclusively.
GoogleCloudAiplatformV1beta1SmoothGradConfigResponse      
- FeatureNoise Pulumi.Sigma Google Native. Aiplatform. V1Beta1. Inputs. Google Cloud Aiplatform V1beta1Feature Noise Sigma Response 
- This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
- NoiseSigma double
- This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about normalization. For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
- NoisySample intCount 
- The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
- FeatureNoise GoogleSigma Cloud Aiplatform V1beta1Feature Noise Sigma Response 
- This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
- NoiseSigma float64
- This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about normalization. For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
- NoisySample intCount 
- The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
- featureNoise GoogleSigma Cloud Aiplatform V1beta1Feature Noise Sigma Response 
- This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
- noiseSigma Double
- This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about normalization. For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
- noisySample IntegerCount 
- The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
- featureNoise GoogleSigma Cloud Aiplatform V1beta1Feature Noise Sigma Response 
- This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
- noiseSigma number
- This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about normalization. For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
- noisySample numberCount 
- The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
- feature_noise_ Googlesigma Cloud Aiplatform V1beta1Feature Noise Sigma Response 
- This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
- noise_sigma float
- This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about normalization. For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.
- noisy_sample_ intcount 
- The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
- featureNoiseSigma Property Map
- Similar to noise_sigma, but provides additional flexibility: a separate noise sigma can be provided for each feature, which is useful if their distributions differ. No noise is added to features that are not set. If this field is unset, noise_sigma is used for all features.
- noiseSigma Number
- A single float value used to add noise to all features. Use this field when all features are normalized to the same distribution: scaled to the range [0, 1] or [-1, 1], or z-scored to zero mean and unit variance. Learn more about normalization. For best results, the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution differs per feature, set feature_noise_sigma for each feature instead.
- noisySampleCount Number
- The number of gradient samples used for approximation. A higher number yields a more accurate gradient, but runtime complexity increases by the same factor. The valid range is [1, 50]. Defaults to 3.
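The SmoothGrad fields above describe a general technique rather than anything Pulumi-specific: the gradient is averaged over noisy_sample_count samples, each perturbed by Gaussian noise with standard deviation noise_sigma. A minimal illustrative sketch of that averaging (the function names here are invented for this example; this is not the Vertex AI implementation):

```python
import random

def smoothgrad(grad_fn, x, noise_sigma=0.1, noisy_sample_count=3, seed=0):
    """Average grad_fn over noisy_sample_count samples drawn from
    N(x, noise_sigma^2), mirroring the SmoothGrad approximation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(noisy_sample_count):
        total += grad_fn(x + rng.gauss(0.0, noise_sigma))
    return total / noisy_sample_count

# For the gradient of f(x) = x^2 (i.e. 2x), the average converges to the
# true gradient at x = 1.0 as the sample count grows.
print(smoothgrad(lambda v: 2 * v, 1.0, noise_sigma=0.1, noisy_sample_count=200))
```

More samples tighten the estimate around the true gradient, which is why the field description notes that runtime complexity scales linearly with noisy_sample_count.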
GoogleCloudAiplatformV1beta1XraiAttributionResponse     
- BlurBaselineConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1BlurBaselineConfigResponse
- Config for XRAI with blur baseline. When enabled, a linear path is created from the maximally blurred image to the input image. Using a blurred baseline instead of zero (a black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
- SmoothGradConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SmoothGradConfigResponse
- Config for SmoothGrad approximation of gradients. When enabled, gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
- StepCount int
- The number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range. The valid range is [1, 100], inclusive.
- BlurBaselineConfig GoogleCloudAiplatformV1beta1BlurBaselineConfigResponse
- Config for XRAI with blur baseline. When enabled, a linear path is created from the maximally blurred image to the input image. Using a blurred baseline instead of zero (a black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
- SmoothGradConfig GoogleCloudAiplatformV1beta1SmoothGradConfigResponse
- Config for SmoothGrad approximation of gradients. When enabled, gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
- StepCount int
- The number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range. The valid range is [1, 100], inclusive.
- blurBaselineConfig GoogleCloudAiplatformV1beta1BlurBaselineConfigResponse
- Config for XRAI with blur baseline. When enabled, a linear path is created from the maximally blurred image to the input image. Using a blurred baseline instead of zero (a black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
- smoothGradConfig GoogleCloudAiplatformV1beta1SmoothGradConfigResponse
- Config for SmoothGrad approximation of gradients. When enabled, gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
- stepCount Integer
- The number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range. The valid range is [1, 100], inclusive.
- blurBaselineConfig GoogleCloudAiplatformV1beta1BlurBaselineConfigResponse
- Config for XRAI with blur baseline. When enabled, a linear path is created from the maximally blurred image to the input image. Using a blurred baseline instead of zero (a black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
- smoothGradConfig GoogleCloudAiplatformV1beta1SmoothGradConfigResponse
- Config for SmoothGrad approximation of gradients. When enabled, gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
- stepCount number
- The number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range. The valid range is [1, 100], inclusive.
- blur_baseline_config GoogleCloudAiplatformV1beta1BlurBaselineConfigResponse
- Config for XRAI with blur baseline. When enabled, a linear path is created from the maximally blurred image to the input image. Using a blurred baseline instead of zero (a black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
- smooth_grad_config GoogleCloudAiplatformV1beta1SmoothGradConfigResponse
- Config for SmoothGrad approximation of gradients. When enabled, gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
- step_count int
- The number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range. The valid range is [1, 100], inclusive.
- blurBaselineConfig Property Map
- Config for XRAI with blur baseline. When enabled, a linear path is created from the maximally blurred image to the input image. Using a blurred baseline instead of zero (a black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
- smoothGradConfig Property Map
- Config for SmoothGrad approximation of gradients. When enabled, gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
- stepCount Number
- The number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range. The valid range is [1, 100], inclusive.
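The stepCount field controls a Riemann-sum approximation of a path integral, and the sum-to-diff property mentioned in its description means the summed gradients along the path should approximately equal f(input) - f(baseline). A toy sketch of that check, assuming f(x) = x^2 with a zero baseline (function names invented for illustration; not the Vertex AI implementation):

```python
def integrated_gradient(grad_fn, baseline, x, step_count=50):
    """Approximate the path integral of grad_fn along the straight line
    from baseline to x using step_count Riemann steps."""
    delta = x - baseline
    total = 0.0
    for i in range(1, step_count + 1):
        total += grad_fn(baseline + (i / step_count) * delta)
    return delta * total / step_count

# Sum-to-diff check: for f(x) = x^2 (gradient 2x), the attribution from
# baseline 0 to x = 3 should approach f(3) - f(0) = 9 as step_count grows.
print(integrated_gradient(lambda v: 2 * v, 0.0, 3.0, step_count=50))   # 9.18
print(integrated_gradient(lambda v: 2 * v, 0.0, 3.0, step_count=500))  # 9.018
```

The residual error shrinks as step_count increases, which is why the docs suggest starting at 50 and raising it until the sum-to-diff error falls within the desired range.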
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0