Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.aiplatform/v1beta1.CustomJob
Creates a CustomJob. A newly created CustomJob is immediately attempted to run. Auto-naming is currently not supported for this resource.
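For orientation, the following is a minimal Python sketch of a single-worker CustomJob. The project, region, machine type, and container image are illustrative placeholders only, not values prescribed by this resource.
import pulumi
import pulumi_google_native as google_native

# Minimal single-replica training job. Project, region, machine type, and
# image URI below are placeholders for illustration.
training_job = google_native.aiplatform.v1beta1.CustomJob(
    "example-training-job",
    display_name="example-training-job",
    location="us-central1",
    project="my-gcp-project",
    job_spec={
        "worker_pool_specs": [{
            "replica_count": "1",  # replica counts are strings in this API
            "machine_spec": {
                "machine_type": "n1-standard-4",
            },
            "container_spec": {
                "image_uri": "gcr.io/my-gcp-project/trainer:latest",  # hypothetical image
                "args": ["--epochs", "10"],
            },
        }],
    })

pulumi.export("custom_job_name", training_job.name)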
Create CustomJob Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new CustomJob(name: string, args: CustomJobArgs, opts?: CustomResourceOptions);
@overload
def CustomJob(resource_name: str,
              args: CustomJobArgs,
              opts: Optional[ResourceOptions] = None)
@overload
def CustomJob(resource_name: str,
              opts: Optional[ResourceOptions] = None,
              display_name: Optional[str] = None,
              job_spec: Optional[GoogleCloudAiplatformV1beta1CustomJobSpecArgs] = None,
              encryption_spec: Optional[GoogleCloudAiplatformV1beta1EncryptionSpecArgs] = None,
              labels: Optional[Mapping[str, str]] = None,
              location: Optional[str] = None,
              project: Optional[str] = None)
func NewCustomJob(ctx *Context, name string, args CustomJobArgs, opts ...ResourceOption) (*CustomJob, error)
public CustomJob(string name, CustomJobArgs args, CustomResourceOptions? opts = null)
public CustomJob(String name, CustomJobArgs args)
public CustomJob(String name, CustomJobArgs args, CustomResourceOptions options)
type: google-native:aiplatform/v1beta1:CustomJob
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args CustomJobArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args CustomJobArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args CustomJobArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args CustomJobArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args CustomJobArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var google_nativeCustomJobResource = new GoogleNative.Aiplatform.V1Beta1.CustomJob("google-nativeCustomJobResource", new()
{
    DisplayName = "string",
    JobSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1CustomJobSpecArgs
    {
        WorkerPoolSpecs = new[]
        {
            new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1WorkerPoolSpecArgs
            {
                ContainerSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ContainerSpecArgs
                {
                    ImageUri = "string",
                    Args = new[]
                    {
                        "string",
                    },
                    Command = new[]
                    {
                        "string",
                    },
                    Env = new[]
                    {
                        new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVarArgs
                        {
                            Name = "string",
                            Value = "string",
                        },
                    },
                },
                DiskSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1DiskSpecArgs
                {
                    BootDiskSizeGb = 0,
                    BootDiskType = "string",
                },
                MachineSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1MachineSpecArgs
                {
                    AcceleratorCount = 0,
                    AcceleratorType = GoogleNative.Aiplatform.V1Beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.AcceleratorTypeUnspecified,
                    MachineType = "string",
                    TpuTopology = "string",
                },
                NfsMounts = new[]
                {
                    new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1NfsMountArgs
                    {
                        MountPoint = "string",
                        Path = "string",
                        Server = "string",
                    },
                },
                PythonPackageSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1PythonPackageSpecArgs
                {
                    ExecutorImageUri = "string",
                    PackageUris = new[]
                    {
                        "string",
                    },
                    PythonModule = "string",
                    Args = new[]
                    {
                        "string",
                    },
                    Env = new[]
                    {
                        new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVarArgs
                        {
                            Name = "string",
                            Value = "string",
                        },
                    },
                },
                ReplicaCount = "string",
            },
        },
        PersistentResourceId = "string",
        EnableWebAccess = false,
        Experiment = "string",
        ExperimentRun = "string",
        Network = "string",
        BaseOutputDirectory = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1GcsDestinationArgs
        {
            OutputUriPrefix = "string",
        },
        ProtectedArtifactLocationId = "string",
        ReservedIpRanges = new[]
        {
            "string",
        },
        Scheduling = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SchedulingArgs
        {
            DisableRetries = false,
            RestartJobOnWorkerRestart = false,
            Timeout = "string",
        },
        ServiceAccount = "string",
        Tensorboard = "string",
        EnableDashboardAccess = false,
    },
    EncryptionSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EncryptionSpecArgs
    {
        KmsKeyName = "string",
    },
    Labels = 
    {
        { "string", "string" },
    },
    Location = "string",
    Project = "string",
});
example, err := aiplatformv1beta1.NewCustomJob(ctx, "google-nativeCustomJobResource", &aiplatformv1beta1.CustomJobArgs{
	DisplayName: pulumi.String("string"),
	JobSpec: &aiplatform.GoogleCloudAiplatformV1beta1CustomJobSpecArgs{
		WorkerPoolSpecs: aiplatform.GoogleCloudAiplatformV1beta1WorkerPoolSpecArray{
			&aiplatform.GoogleCloudAiplatformV1beta1WorkerPoolSpecArgs{
				ContainerSpec: &aiplatform.GoogleCloudAiplatformV1beta1ContainerSpecArgs{
					ImageUri: pulumi.String("string"),
					Args: pulumi.StringArray{
						pulumi.String("string"),
					},
					Command: pulumi.StringArray{
						pulumi.String("string"),
					},
					Env: aiplatform.GoogleCloudAiplatformV1beta1EnvVarArray{
						&aiplatform.GoogleCloudAiplatformV1beta1EnvVarArgs{
							Name:  pulumi.String("string"),
							Value: pulumi.String("string"),
						},
					},
				},
				DiskSpec: &aiplatform.GoogleCloudAiplatformV1beta1DiskSpecArgs{
					BootDiskSizeGb: pulumi.Int(0),
					BootDiskType:   pulumi.String("string"),
				},
				MachineSpec: &aiplatform.GoogleCloudAiplatformV1beta1MachineSpecArgs{
					AcceleratorCount: pulumi.Int(0),
					AcceleratorType:  aiplatformv1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeAcceleratorTypeUnspecified,
					MachineType:      pulumi.String("string"),
					TpuTopology:      pulumi.String("string"),
				},
				NfsMounts: aiplatform.GoogleCloudAiplatformV1beta1NfsMountArray{
					&aiplatform.GoogleCloudAiplatformV1beta1NfsMountArgs{
						MountPoint: pulumi.String("string"),
						Path:       pulumi.String("string"),
						Server:     pulumi.String("string"),
					},
				},
				PythonPackageSpec: &aiplatform.GoogleCloudAiplatformV1beta1PythonPackageSpecArgs{
					ExecutorImageUri: pulumi.String("string"),
					PackageUris: pulumi.StringArray{
						pulumi.String("string"),
					},
					PythonModule: pulumi.String("string"),
					Args: pulumi.StringArray{
						pulumi.String("string"),
					},
					Env: aiplatform.GoogleCloudAiplatformV1beta1EnvVarArray{
						&aiplatform.GoogleCloudAiplatformV1beta1EnvVarArgs{
							Name:  pulumi.String("string"),
							Value: pulumi.String("string"),
						},
					},
				},
				ReplicaCount: pulumi.String("string"),
			},
		},
		PersistentResourceId: pulumi.String("string"),
		EnableWebAccess:      pulumi.Bool(false),
		Experiment:           pulumi.String("string"),
		ExperimentRun:        pulumi.String("string"),
		Network:              pulumi.String("string"),
		BaseOutputDirectory: &aiplatform.GoogleCloudAiplatformV1beta1GcsDestinationArgs{
			OutputUriPrefix: pulumi.String("string"),
		},
		ProtectedArtifactLocationId: pulumi.String("string"),
		ReservedIpRanges: pulumi.StringArray{
			pulumi.String("string"),
		},
		Scheduling: &aiplatform.GoogleCloudAiplatformV1beta1SchedulingArgs{
			DisableRetries:            pulumi.Bool(false),
			RestartJobOnWorkerRestart: pulumi.Bool(false),
			Timeout:                   pulumi.String("string"),
		},
		ServiceAccount:        pulumi.String("string"),
		Tensorboard:           pulumi.String("string"),
		EnableDashboardAccess: pulumi.Bool(false),
	},
	EncryptionSpec: &aiplatform.GoogleCloudAiplatformV1beta1EncryptionSpecArgs{
		KmsKeyName: pulumi.String("string"),
	},
	Labels: pulumi.StringMap{
		"string": pulumi.String("string"),
	},
	Location: pulumi.String("string"),
	Project:  pulumi.String("string"),
})
var google_nativeCustomJobResource = new CustomJob("google-nativeCustomJobResource", CustomJobArgs.builder()
    .displayName("string")
    .jobSpec(GoogleCloudAiplatformV1beta1CustomJobSpecArgs.builder()
        .workerPoolSpecs(GoogleCloudAiplatformV1beta1WorkerPoolSpecArgs.builder()
            .containerSpec(GoogleCloudAiplatformV1beta1ContainerSpecArgs.builder()
                .imageUri("string")
                .args("string")
                .command("string")
                .env(GoogleCloudAiplatformV1beta1EnvVarArgs.builder()
                    .name("string")
                    .value("string")
                    .build())
                .build())
            .diskSpec(GoogleCloudAiplatformV1beta1DiskSpecArgs.builder()
                .bootDiskSizeGb(0)
                .bootDiskType("string")
                .build())
            .machineSpec(GoogleCloudAiplatformV1beta1MachineSpecArgs.builder()
                .acceleratorCount(0)
                .acceleratorType("ACCELERATOR_TYPE_UNSPECIFIED")
                .machineType("string")
                .tpuTopology("string")
                .build())
            .nfsMounts(GoogleCloudAiplatformV1beta1NfsMountArgs.builder()
                .mountPoint("string")
                .path("string")
                .server("string")
                .build())
            .pythonPackageSpec(GoogleCloudAiplatformV1beta1PythonPackageSpecArgs.builder()
                .executorImageUri("string")
                .packageUris("string")
                .pythonModule("string")
                .args("string")
                .env(GoogleCloudAiplatformV1beta1EnvVarArgs.builder()
                    .name("string")
                    .value("string")
                    .build())
                .build())
            .replicaCount("string")
            .build())
        .persistentResourceId("string")
        .enableWebAccess(false)
        .experiment("string")
        .experimentRun("string")
        .network("string")
        .baseOutputDirectory(GoogleCloudAiplatformV1beta1GcsDestinationArgs.builder()
            .outputUriPrefix("string")
            .build())
        .protectedArtifactLocationId("string")
        .reservedIpRanges("string")
        .scheduling(GoogleCloudAiplatformV1beta1SchedulingArgs.builder()
            .disableRetries(false)
            .restartJobOnWorkerRestart(false)
            .timeout("string")
            .build())
        .serviceAccount("string")
        .tensorboard("string")
        .enableDashboardAccess(false)
        .build())
    .encryptionSpec(GoogleCloudAiplatformV1beta1EncryptionSpecArgs.builder()
        .kmsKeyName("string")
        .build())
    .labels(Map.of("string", "string"))
    .location("string")
    .project("string")
    .build());
google_native_custom_job_resource = google_native.aiplatform.v1beta1.CustomJob("google-nativeCustomJobResource",
    display_name="string",
    job_spec={
        "worker_pool_specs": [{
            "container_spec": {
                "image_uri": "string",
                "args": ["string"],
                "command": ["string"],
                "env": [{
                    "name": "string",
                    "value": "string",
                }],
            },
            "disk_spec": {
                "boot_disk_size_gb": 0,
                "boot_disk_type": "string",
            },
            "machine_spec": {
                "accelerator_count": 0,
                "accelerator_type": google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.ACCELERATOR_TYPE_UNSPECIFIED,
                "machine_type": "string",
                "tpu_topology": "string",
            },
            "nfs_mounts": [{
                "mount_point": "string",
                "path": "string",
                "server": "string",
            }],
            "python_package_spec": {
                "executor_image_uri": "string",
                "package_uris": ["string"],
                "python_module": "string",
                "args": ["string"],
                "env": [{
                    "name": "string",
                    "value": "string",
                }],
            },
            "replica_count": "string",
        }],
        "persistent_resource_id": "string",
        "enable_web_access": False,
        "experiment": "string",
        "experiment_run": "string",
        "network": "string",
        "base_output_directory": {
            "output_uri_prefix": "string",
        },
        "protected_artifact_location_id": "string",
        "reserved_ip_ranges": ["string"],
        "scheduling": {
            "disable_retries": False,
            "restart_job_on_worker_restart": False,
            "timeout": "string",
        },
        "service_account": "string",
        "tensorboard": "string",
        "enable_dashboard_access": False,
    },
    encryption_spec={
        "kms_key_name": "string",
    },
    labels={
        "string": "string",
    },
    location="string",
    project="string")
const google_nativeCustomJobResource = new google_native.aiplatform.v1beta1.CustomJob("google-nativeCustomJobResource", {
    displayName: "string",
    jobSpec: {
        workerPoolSpecs: [{
            containerSpec: {
                imageUri: "string",
                args: ["string"],
                command: ["string"],
                env: [{
                    name: "string",
                    value: "string",
                }],
            },
            diskSpec: {
                bootDiskSizeGb: 0,
                bootDiskType: "string",
            },
            machineSpec: {
                acceleratorCount: 0,
                acceleratorType: google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.AcceleratorTypeUnspecified,
                machineType: "string",
                tpuTopology: "string",
            },
            nfsMounts: [{
                mountPoint: "string",
                path: "string",
                server: "string",
            }],
            pythonPackageSpec: {
                executorImageUri: "string",
                packageUris: ["string"],
                pythonModule: "string",
                args: ["string"],
                env: [{
                    name: "string",
                    value: "string",
                }],
            },
            replicaCount: "string",
        }],
        persistentResourceId: "string",
        enableWebAccess: false,
        experiment: "string",
        experimentRun: "string",
        network: "string",
        baseOutputDirectory: {
            outputUriPrefix: "string",
        },
        protectedArtifactLocationId: "string",
        reservedIpRanges: ["string"],
        scheduling: {
            disableRetries: false,
            restartJobOnWorkerRestart: false,
            timeout: "string",
        },
        serviceAccount: "string",
        tensorboard: "string",
        enableDashboardAccess: false,
    },
    encryptionSpec: {
        kmsKeyName: "string",
    },
    labels: {
        string: "string",
    },
    location: "string",
    project: "string",
});
type: google-native:aiplatform/v1beta1:CustomJob
properties:
    displayName: string
    encryptionSpec:
        kmsKeyName: string
    jobSpec:
        baseOutputDirectory:
            outputUriPrefix: string
        enableDashboardAccess: false
        enableWebAccess: false
        experiment: string
        experimentRun: string
        network: string
        persistentResourceId: string
        protectedArtifactLocationId: string
        reservedIpRanges:
            - string
        scheduling:
            disableRetries: false
            restartJobOnWorkerRestart: false
            timeout: string
        serviceAccount: string
        tensorboard: string
        workerPoolSpecs:
            - containerSpec:
                args:
                    - string
                command:
                    - string
                env:
                    - name: string
                      value: string
                imageUri: string
              diskSpec:
                bootDiskSizeGb: 0
                bootDiskType: string
              machineSpec:
                acceleratorCount: 0
                acceleratorType: ACCELERATOR_TYPE_UNSPECIFIED
                machineType: string
                tpuTopology: string
              nfsMounts:
                - mountPoint: string
                  path: string
                  server: string
              pythonPackageSpec:
                args:
                    - string
                env:
                    - name: string
                      value: string
                executorImageUri: string
                packageUris:
                    - string
                pythonModule: string
              replicaCount: string
    labels:
        string: string
    location: string
    project: string
CustomJob Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals (see the sketch after the property listings below).
The CustomJob resource accepts the following input properties:
- DisplayName string
- The display name of the CustomJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- JobSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1CustomJobSpec
- Job spec.
- EncryptionSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EncryptionSpec
- Customer-managed encryption key options for a CustomJob. If this is set, then all resources created by the CustomJob will be encrypted with the provided encryption key.
- Labels Dictionary<string, string>
- The labels with user-defined metadata to organize CustomJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- Location string
- Project string
- DisplayName string
- The display name of the CustomJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- JobSpec GoogleCloudAiplatformV1beta1CustomJobSpecArgs
- Job spec.
- EncryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpecArgs
- Customer-managed encryption key options for a CustomJob. If this is set, then all resources created by the CustomJob will be encrypted with the provided encryption key.
- Labels map[string]string
- The labels with user-defined metadata to organize CustomJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- Location string
- Project string
- displayName String
- The display name of the CustomJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- jobSpec GoogleCloudAiplatformV1beta1CustomJobSpec
- Job spec.
- encryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpec
- Customer-managed encryption key options for a CustomJob. If this is set, then all resources created by the CustomJob will be encrypted with the provided encryption key.
- labels Map<String,String>
- The labels with user-defined metadata to organize CustomJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- location String
- project String
- displayName string
- The display name of the CustomJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- jobSpec GoogleCloudAiplatformV1beta1CustomJobSpec
- Job spec.
- encryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpec
- Customer-managed encryption key options for a CustomJob. If this is set, then all resources created by the CustomJob will be encrypted with the provided encryption key.
- labels {[key: string]: string}
- The labels with user-defined metadata to organize CustomJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- location string
- project string
- display_name str
- The display name of the CustomJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- job_spec GoogleCloudAiplatformV1beta1CustomJobSpecArgs
- Job spec.
- encryption_spec GoogleCloudAiplatformV1beta1EncryptionSpecArgs
- Customer-managed encryption key options for a CustomJob. If this is set, then all resources created by the CustomJob will be encrypted with the provided encryption key.
- labels Mapping[str, str]
- The labels with user-defined metadata to organize CustomJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- location str
- project str
- displayName String
- The display name of the CustomJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- jobSpec Property Map
- Job spec.
- encryptionSpec Property Map
- Customer-managed encryption key options for a CustomJob. If this is set, then all resources created by the CustomJob will be encrypted with the provided encryption key.
- labels Map<String>
- The labels with user-defined metadata to organize CustomJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- location String
- project String
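As noted above, Python inputs that are objects can be passed either as argument classes or as dictionary literals. The following sketch shows the same worker pool written with argument classes; the image URI, project, and region are placeholders.
import pulumi_google_native as google_native

# Equivalent job_spec expressed with argument classes instead of dict literals.
job_spec = google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1CustomJobSpecArgs(
    worker_pool_specs=[
        google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1WorkerPoolSpecArgs(
            replica_count="1",
            machine_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1MachineSpecArgs(
                machine_type="n1-standard-4",
            ),
            container_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1ContainerSpecArgs(
                image_uri="gcr.io/my-gcp-project/trainer:latest",  # hypothetical image
            ),
        ),
    ],
)

job = google_native.aiplatform.v1beta1.CustomJob(
    "args-class-job",
    display_name="args-class-job",
    location="us-central1",    # placeholder region
    project="my-gcp-project",  # placeholder project
    job_spec=job_spec)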
Outputs
All input properties are implicitly available as output properties. Additionally, the CustomJob resource produces the following output properties:
- CreateTime string
- Time when the CustomJob was created.
- EndTime string
- Time when the CustomJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
- Error Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleRpcStatusResponse
- Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- Resource name of a CustomJob.
- StartTime string
- Time when the CustomJob for the first time entered the JOB_STATE_RUNNING state.
- State string
- The detailed state of the job.
- UpdateTime string
- Time when the CustomJob was most recently updated.
- WebAccessUris Dictionary<string, string>
- URIs for accessing interactive shells (one URI for each training node). Only available if job_spec.enable_web_access is true. The keys are names of each node in the training job; for example, workerpool0-0 for the primary node, workerpool1-0 for the first node in the second worker pool, and workerpool1-1 for the second node in the second worker pool. The values are the URIs for each node's interactive shell.
- CreateTime string
- Time when the CustomJob was created.
- EndTime string
- Time when the CustomJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
- Error GoogleRpcStatusResponse
- Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- Resource name of a CustomJob.
- StartTime string
- Time when the CustomJob for the first time entered the JOB_STATE_RUNNING state.
- State string
- The detailed state of the job.
- UpdateTime string
- Time when the CustomJob was most recently updated.
- WebAccessUris map[string]string
- URIs for accessing interactive shells (one URI for each training node). Only available if job_spec.enable_web_access is true. The keys are names of each node in the training job; for example, workerpool0-0 for the primary node, workerpool1-0 for the first node in the second worker pool, and workerpool1-1 for the second node in the second worker pool. The values are the URIs for each node's interactive shell.
- createTime String
- Time when the CustomJob was created.
- endTime String
- Time when the CustomJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
- error GoogleRpcStatusResponse
- Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- Resource name of a CustomJob.
- startTime String
- Time when the CustomJob for the first time entered the JOB_STATE_RUNNING state.
- state String
- The detailed state of the job.
- updateTime String
- Time when the CustomJob was most recently updated.
- webAccessUris Map<String,String>
- URIs for accessing interactive shells (one URI for each training node). Only available if job_spec.enable_web_access is true. The keys are names of each node in the training job; for example, workerpool0-0 for the primary node, workerpool1-0 for the first node in the second worker pool, and workerpool1-1 for the second node in the second worker pool. The values are the URIs for each node's interactive shell.
- createTime string
- Time when the CustomJob was created.
- endTime string
- Time when the CustomJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
- error GoogleRpcStatusResponse
- Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- id string
- The provider-assigned unique ID for this managed resource.
- name string
- Resource name of a CustomJob.
- startTime string
- Time when the CustomJob for the first time entered the JOB_STATE_RUNNING state.
- state string
- The detailed state of the job.
- updateTime string
- Time when the CustomJob was most recently updated.
- webAccessUris {[key: string]: string}
- URIs for accessing interactive shells (one URI for each training node). Only available if job_spec.enable_web_access is true. The keys are names of each node in the training job; for example, workerpool0-0 for the primary node, workerpool1-0 for the first node in the second worker pool, and workerpool1-1 for the second node in the second worker pool. The values are the URIs for each node's interactive shell.
- create_time str
- Time when the CustomJob was created.
- end_time str
- Time when the CustomJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
- error GoogleRpcStatusResponse
- Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- id str
- The provider-assigned unique ID for this managed resource.
- name str
- Resource name of a CustomJob.
- start_time str
- Time when the CustomJob for the first time entered the JOB_STATE_RUNNING state.
- state str
- The detailed state of the job.
- update_time str
- Time when the CustomJob was most recently updated.
- web_access_uris Mapping[str, str]
- URIs for accessing interactive shells (one URI for each training node). Only available if job_spec.enable_web_access is true. The keys are names of each node in the training job; for example, workerpool0-0 for the primary node, workerpool1-0 for the first node in the second worker pool, and workerpool1-1 for the second node in the second worker pool. The values are the URIs for each node's interactive shell.
- createTime String
- Time when the CustomJob was created.
- endTime String
- Time when the CustomJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
- error Property Map
- Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- Resource name of a CustomJob.
- startTime String
- Time when the CustomJob for the first time entered the JOB_STATE_RUNNING state.
- state String
- The detailed state of the job.
- updateTime String
- Time when the CustomJob was most recently updated.
- webAccessUris Map<String>
- URIs for accessing interactive shells (one URI for each training node). Only available if job_spec.enable_web_access is true. The keys are names of each node in the training job; for example, workerpool0-0 for the primary node, workerpool1-0 for the first node in the second worker pool, and workerpool1-1 for the second node in the second worker pool. The values are the URIs for each node's interactive shell.
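Continuing the hypothetical training_job from the earlier Python sketch, the output properties listed above can be read and exported like any other Pulumi outputs:
import pulumi

# Assumes the `training_job` resource from the earlier sketch.
pulumi.export("job_resource_name", training_job.name)   # projects/.../customJobs/...
pulumi.export("job_state", training_job.state)          # e.g. JOB_STATE_PENDING
# web_access_uris is only populated when job_spec.enable_web_access is true.
pulumi.export("web_access_uris", training_job.web_access_uris)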
Supporting Types
GoogleCloudAiplatformV1beta1ContainerSpec, GoogleCloudAiplatformV1beta1ContainerSpecArgs          
- ImageUri string
- The URI of a container image in the Container Registry that is to be run on each worker replica.
- Args List<string>
- The arguments to be passed when starting the container.
- Command List<string>
- The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- Env List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVar>
- Environment variables to be passed to the container. Maximum limit is 100.
- ImageUri string
- The URI of a container image in the Container Registry that is to be run on each worker replica.
- Args []string
- The arguments to be passed when starting the container.
- Command []string
- The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- Env []GoogleCloudAiplatformV1beta1EnvVar
- Environment variables to be passed to the container. Maximum limit is 100.
- imageUri String
- The URI of a container image in the Container Registry that is to be run on each worker replica.
- args List<String>
- The arguments to be passed when starting the container.
- command List<String>
- The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- env List<GoogleCloudAiplatformV1beta1EnvVar>
- Environment variables to be passed to the container. Maximum limit is 100.
- imageUri string
- The URI of a container image in the Container Registry that is to be run on each worker replica.
- args string[]
- The arguments to be passed when starting the container.
- command string[]
- The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- env GoogleCloudAiplatformV1beta1EnvVar[]
- Environment variables to be passed to the container. Maximum limit is 100.
- image_uri str
- The URI of a container image in the Container Registry that is to be run on each worker replica.
- args Sequence[str]
- The arguments to be passed when starting the container.
- command Sequence[str]
- The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- env Sequence[GoogleCloudAiplatformV1beta1EnvVar]
- Environment variables to be passed to the container. Maximum limit is 100.
- imageUri String
- The URI of a container image in the Container Registry that is to be run on each worker replica.
- args List<String>
- The arguments to be passed when starting the container.
- command List<String>
- The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- env List<Property Map>
- Environment variables to be passed to the container. Maximum limit is 100.
GoogleCloudAiplatformV1beta1ContainerSpecResponse, GoogleCloudAiplatformV1beta1ContainerSpecResponseArgs            
- Args List<string>
- The arguments to be passed when starting the container.
- Command List<string>
- The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- Env List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVarResponse>
- Environment variables to be passed to the container. Maximum limit is 100.
- ImageUri string
- The URI of a container image in the Container Registry that is to be run on each worker replica.
- Args []string
- The arguments to be passed when starting the container.
- Command []string
- The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- Env []GoogleCloudAiplatformV1beta1EnvVarResponse
- Environment variables to be passed to the container. Maximum limit is 100.
- ImageUri string
- The URI of a container image in the Container Registry that is to be run on each worker replica.
- args List<String>
- The arguments to be passed when starting the container.
- command List<String>
- The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- env List<GoogleCloudAiplatformV1beta1EnvVarResponse>
- Environment variables to be passed to the container. Maximum limit is 100.
- imageUri String
- The URI of a container image in the Container Registry that is to be run on each worker replica.
- args string[]
- The arguments to be passed when starting the container.
- command string[]
- The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- env GoogleCloudAiplatformV1beta1EnvVarResponse[]
- Environment variables to be passed to the container. Maximum limit is 100.
- imageUri string
- The URI of a container image in the Container Registry that is to be run on each worker replica.
- args Sequence[str]
- The arguments to be passed when starting the container.
- command Sequence[str]
- The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- env Sequence[GoogleCloudAiplatformV1beta1EnvVarResponse]
- Environment variables to be passed to the container. Maximum limit is 100.
- image_uri str
- The URI of a container image in the Container Registry that is to be run on each worker replica.
- args List<String>
- The arguments to be passed when starting the container.
- command List<String>
- The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- env List<Property Map>
- Environment variables to be passed to the container. Maximum limit is 100.
- imageUri String
- The URI of a container image in the Container Registry that is to be run on each worker replica.
GoogleCloudAiplatformV1beta1CustomJobSpec, GoogleCloudAiplatformV1beta1CustomJobSpecArgs            
- WorkerPoolSpecs List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1WorkerPoolSpec>
- The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- BaseOutputDirectory Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1GcsDestination
- The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = /model/ * AIP_CHECKPOINT_DIR = /checkpoints/ * AIP_TENSORBOARD_LOG_DIR = /logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = //model/ * AIP_CHECKPOINT_DIR = //checkpoints/ * AIP_TENSORBOARD_LOG_DIR = //logs/
- EnableDashboardAccess bool
- Optional. Whether you want Vertex AI to enable access to the customized dashboard in training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- EnableWebAccess bool
- Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- Experiment string
- Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- ExperimentRun string
- Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- Network string
- Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- PersistentResourceId string
- Optional. The ID of the PersistentResource in the same Project and Location in which to run. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected.
- ProtectedArtifactLocationId string
- The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- ReservedIpRanges List<string>
- Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- Scheduling Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1Scheduling
- Scheduling options for a CustomJob.
- ServiceAccount string
- Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- Tensorboard string
- Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- WorkerPoolSpecs []GoogleCloudAiplatformV1beta1WorkerPoolSpec
- The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- BaseOutputDirectory GoogleCloudAiplatformV1beta1GcsDestination
- The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = /model/ * AIP_CHECKPOINT_DIR = /checkpoints/ * AIP_TENSORBOARD_LOG_DIR = /logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = //model/ * AIP_CHECKPOINT_DIR = //checkpoints/ * AIP_TENSORBOARD_LOG_DIR = //logs/
- EnableDashboardAccess bool
- Optional. Whether you want Vertex AI to enable access to the customized dashboard in training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- EnableWebAccess bool
- Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- Experiment string
- Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- ExperimentRun string
- Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- Network string
- Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- PersistentResourceId string
- Optional. The ID of the PersistentResource in the same Project and Location in which to run. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected.
- ProtectedArtifactLocationId string
- The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- ReservedIpRanges []string
- Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- Scheduling GoogleCloudAiplatformV1beta1Scheduling
- Scheduling options for a CustomJob.
- ServiceAccount string
- Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- Tensorboard string
- Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- workerPoolSpecs List<GoogleCloudAiplatformV1beta1WorkerPoolSpec>
- The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- baseOutputDirectory GoogleCloudAiplatformV1beta1GcsDestination
- The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = /model/ * AIP_CHECKPOINT_DIR = /checkpoints/ * AIP_TENSORBOARD_LOG_DIR = /logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = //model/ * AIP_CHECKPOINT_DIR = //checkpoints/ * AIP_TENSORBOARD_LOG_DIR = //logs/
- enableDashboardAccess Boolean
- Optional. Whether you want Vertex AI to enable access to the customized dashboard in training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- enableWebAccess Boolean
- Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- experiment String
- Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- experimentRun String
- Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- network String
- Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- persistentResourceId String
- Optional. The ID of the PersistentResource in the same Project and Location in which to run. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected.
- protectedArtifactLocationId String
- The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- reservedIpRanges List<String>
- Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- scheduling GoogleCloudAiplatformV1beta1Scheduling
- Scheduling options for a CustomJob.
- serviceAccount String
- Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- tensorboard String
- Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- workerPoolSpecs GoogleCloudAiplatformV1beta1WorkerPoolSpec[]
- The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- baseOutputDirectory GoogleCloudAiplatformV1beta1GcsDestination
- The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = /model/ * AIP_CHECKPOINT_DIR = /checkpoints/ * AIP_TENSORBOARD_LOG_DIR = /logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = //model/ * AIP_CHECKPOINT_DIR = //checkpoints/ * AIP_TENSORBOARD_LOG_DIR = //logs/
- enableDashboardAccess boolean
- Optional. Whether you want Vertex AI to enable access to the customized dashboard in training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- enableWebAccess boolean
- Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- experiment string
- Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- experimentRun string
- Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- network string
- Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- persistentResourceId string
- Optional. The ID of the PersistentResource in the same Project and Location in which to run. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected.
- protectedArtifactLocationId string
- The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- reservedIpRanges string[]
- Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- scheduling GoogleCloudAiplatformV1beta1Scheduling
- Scheduling options for a CustomJob.
- serviceAccount string
- Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- tensorboard string
- Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- worker_pool_specs Sequence[GoogleCloudAiplatformV1beta1WorkerPoolSpec]
- The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- base_output_directory GoogleCloudAiplatformV1beta1GcsDestination
- The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = /model/ * AIP_CHECKPOINT_DIR = /checkpoints/ * AIP_TENSORBOARD_LOG_DIR = /logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = //model/ * AIP_CHECKPOINT_DIR = //checkpoints/ * AIP_TENSORBOARD_LOG_DIR = //logs/
- enable_dashboard_access bool
- Optional. Whether you want Vertex AI to enable access to the customized dashboard in training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- enable_web_access bool
- Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- experiment str
- Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- experiment_run str
- Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- network str
- Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- persistent_resource_id str
- Optional. The ID of the PersistentResource in the same Project and Location in which to run. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected.
- protected_artifact_location_id str
- The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- reserved_ip_ranges Sequence[str]
- Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- scheduling
GoogleCloud Aiplatform V1beta1Scheduling 
- Scheduling options for a CustomJob.
- service_account str
- Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- tensorboard str
- Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- workerPoolSpecs List<Property Map>
- The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- baseOutputDirectory Property Map
- The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or Python modules when this field is set. For CustomJob: AIP_MODEL_DIR = <base_output_directory>/model/, AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/, AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/. For a CustomJob backing a Trial of HyperparameterTuningJob: AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/, AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/, AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/.
- enableDashboardAccess Boolean
- Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- enableWebAccess Boolean
- Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- experiment String
- Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- experimentRun String
- Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- network String
- Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- persistentResourceId String
- Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise the job will be rejected.
- protectedArtifactLocationId String
- The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- reservedIpRanges List<String>
- Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- scheduling Property Map
- Scheduling options for a CustomJob.
- serviceAccount String
- Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- tensorboard String
- Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
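As a concrete illustration of these fields, here is a minimal sketch using the Python overloads shown in the constructor syntax above. It assumes the pulumi_google_native package exposes these types under pulumi_google_native.aiplatform.v1beta1; the image URI, bucket, and service account are hypothetical placeholders.
import pulumi_google_native.aiplatform.v1beta1 as aiplatform

# Hypothetical placeholder values throughout; only the first worker pool is required.
training_job = aiplatform.CustomJob(
    "trainer",
    display_name="trainer",
    location="us-central1",
    job_spec=aiplatform.GoogleCloudAiplatformV1beta1CustomJobSpecArgs(
        worker_pool_specs=[
            aiplatform.GoogleCloudAiplatformV1beta1WorkerPoolSpecArgs(
                machine_spec=aiplatform.GoogleCloudAiplatformV1beta1MachineSpecArgs(
                    machine_type="n1-standard-4",
                ),
                container_spec=aiplatform.GoogleCloudAiplatformV1beta1ContainerSpecArgs(
                    image_uri="us-docker.pkg.dev/my-project/trainers/train:latest",
                ),
            ),
        ],
        # AIP_MODEL_DIR, AIP_CHECKPOINT_DIR and AIP_TENSORBOARD_LOG_DIR are derived
        # from this location and injected into the training containers.
        base_output_directory=aiplatform.GoogleCloudAiplatformV1beta1GcsDestinationArgs(
            output_uri_prefix="gs://my-bucket/training-output/",
        ),
        service_account="trainer@my-project.iam.gserviceaccount.com",
    ),
)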
GoogleCloudAiplatformV1beta1CustomJobSpecResponse, GoogleCloudAiplatformV1beta1CustomJobSpecResponseArgs              
- BaseOutputDirectory Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1GcsDestinationResponse
- The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or Python modules when this field is set. For CustomJob: AIP_MODEL_DIR = <base_output_directory>/model/, AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/, AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/. For a CustomJob backing a Trial of HyperparameterTuningJob: AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/, AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/, AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/.
- EnableDashboardAccess bool
- Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- EnableWebAccess bool
- Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- Experiment string
- Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- ExperimentRun string
- Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- Network string
- Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- PersistentResourceId string
- Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise the job will be rejected.
- ProtectedArtifactLocationId string
- The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- ReservedIpRanges List<string>
- Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- Scheduling Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SchedulingResponse
- Scheduling options for a CustomJob.
- ServiceAccount string
- Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- Tensorboard string
- Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- WorkerPoolSpecs List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse>
- The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- BaseOutputDirectory GoogleCloudAiplatformV1beta1GcsDestinationResponse
- The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or Python modules when this field is set. For CustomJob: AIP_MODEL_DIR = <base_output_directory>/model/, AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/, AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/. For a CustomJob backing a Trial of HyperparameterTuningJob: AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/, AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/, AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/.
- EnableDashboardAccess bool
- Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- EnableWebAccess bool
- Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- Experiment string
- Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- ExperimentRun string
- Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- Network string
- Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- PersistentResourceId string
- Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise the job will be rejected.
- ProtectedArtifactLocationId string
- The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- ReservedIpRanges []string
- Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- Scheduling GoogleCloudAiplatformV1beta1SchedulingResponse
- Scheduling options for a CustomJob.
- ServiceAccount string
- Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- Tensorboard string
- Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- WorkerPoolSpecs []GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse
- The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- baseOutputDirectory GoogleCloudAiplatformV1beta1GcsDestinationResponse
- The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or Python modules when this field is set. For CustomJob: AIP_MODEL_DIR = <base_output_directory>/model/, AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/, AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/. For a CustomJob backing a Trial of HyperparameterTuningJob: AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/, AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/, AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/.
- enableDashboardAccess Boolean
- Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- enableWebAccess Boolean
- Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- experiment String
- Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- experimentRun String
- Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- network String
- Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- persistentResourceId String
- Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise the job will be rejected.
- protectedArtifactLocationId String
- The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- reservedIpRanges List<String>
- Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- scheduling GoogleCloudAiplatformV1beta1SchedulingResponse
- Scheduling options for a CustomJob.
- serviceAccount String
- Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- tensorboard String
- Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- workerPoolSpecs List<GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse>
- The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- baseOutputDirectory GoogleCloudAiplatformV1beta1GcsDestinationResponse
- The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or Python modules when this field is set. For CustomJob: AIP_MODEL_DIR = <base_output_directory>/model/, AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/, AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/. For a CustomJob backing a Trial of HyperparameterTuningJob: AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/, AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/, AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/.
- enableDashboardAccess boolean
- Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- enableWebAccess boolean
- Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- experiment string
- Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- experimentRun string
- Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- network string
- Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- persistentResourceId string
- Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise the job will be rejected.
- protectedArtifactLocationId string
- The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- reservedIpRanges string[]
- Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- scheduling GoogleCloudAiplatformV1beta1SchedulingResponse
- Scheduling options for a CustomJob.
- serviceAccount string
- Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- tensorboard string
- Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- workerPoolSpecs GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse[]
- The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- base_output_directory GoogleCloudAiplatformV1beta1GcsDestinationResponse
- The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or Python modules when this field is set. For CustomJob: AIP_MODEL_DIR = <base_output_directory>/model/, AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/, AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/. For a CustomJob backing a Trial of HyperparameterTuningJob: AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/, AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/, AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/.
- enable_dashboard_access bool
- Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- enable_web_access bool
- Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- experiment str
- Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- experiment_run str
- Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- network str
- Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- persistent_resource_id str
- Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise the job will be rejected.
- protected_artifact_location_id str
- The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- reserved_ip_ranges Sequence[str]
- Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- scheduling GoogleCloudAiplatformV1beta1SchedulingResponse
- Scheduling options for a CustomJob.
- service_account str
- Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- tensorboard str
- Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- worker_pool_specs Sequence[GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse]
- The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- baseOutputDirectory Property Map
- The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or Python modules when this field is set. For CustomJob: AIP_MODEL_DIR = <base_output_directory>/model/, AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/, AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/. For a CustomJob backing a Trial of HyperparameterTuningJob: AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/, AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/, AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/.
- enableDashboardAccess Boolean
- Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- enableWebAccess Boolean
- Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- experiment String
- Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- experimentRun String
- Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- network String
- Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- persistentResourceId String
- Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise the job will be rejected.
- protectedArtifactLocationId String
- The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- reservedIpRanges List<String>
- Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- scheduling Property Map
- Scheduling options for a CustomJob.
- serviceAccount String
- Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- tensorboard String
- Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- workerPoolSpecs List<Property Map>
- The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
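The Response variant above is what the resource exposes as outputs after creation. A small sketch of reading one of those outputs back, continuing the hypothetical training_job resource from the earlier example; nested fields are unwrapped with apply().
import pulumi

# job_spec resolves to the CustomJobSpecResponse once the job has been created.
pulumi.export(
    "base_output_dir",
    training_job.job_spec.apply(
        lambda spec: spec.base_output_directory.output_uri_prefix
    ),
)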
GoogleCloudAiplatformV1beta1DiskSpec, GoogleCloudAiplatformV1beta1DiskSpecArgs          
- BootDiskSizeGb int
- Size in GB of the boot disk (default is 100GB).
- BootDiskType string
- Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- BootDiskSizeGb int
- Size in GB of the boot disk (default is 100GB).
- BootDiskType string
- Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- bootDiskSizeGb Integer
- Size in GB of the boot disk (default is 100GB).
- bootDiskType String
- Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- bootDiskSizeGb number
- Size in GB of the boot disk (default is 100GB).
- bootDiskType string
- Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- boot_disk_size_gb int
- Size in GB of the boot disk (default is 100GB).
- boot_disk_type str
- Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- bootDiskSizeGb Number
- Size in GB of the boot disk (default is 100GB).
- bootDiskType String
- Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
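A sketch of overriding these defaults; in a CustomJob the disk spec is attached to an individual worker pool (assumed here to be via the worker pool spec's disk_spec field, documented with that type elsewhere on this page).
import pulumi_google_native.aiplatform.v1beta1 as aiplatform

# Defaults are "pd-ssd" and 100 GB when this spec is omitted.
disk = aiplatform.GoogleCloudAiplatformV1beta1DiskSpecArgs(
    boot_disk_type="pd-standard",
    boot_disk_size_gb=200,
)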
GoogleCloudAiplatformV1beta1DiskSpecResponse, GoogleCloudAiplatformV1beta1DiskSpecResponseArgs            
- BootDiskSizeGb int
- Size in GB of the boot disk (default is 100GB).
- BootDiskType string
- Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- BootDiskSizeGb int
- Size in GB of the boot disk (default is 100GB).
- BootDiskType string
- Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- bootDiskSizeGb Integer
- Size in GB of the boot disk (default is 100GB).
- bootDiskType String
- Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- bootDiskSizeGb number
- Size in GB of the boot disk (default is 100GB).
- bootDiskType string
- Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- boot_disk_size_gb int
- Size in GB of the boot disk (default is 100GB).
- boot_disk_type str
- Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- bootDiskSizeGb Number
- Size in GB of the boot disk (default is 100GB).
- bootDiskType String
- Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
GoogleCloudAiplatformV1beta1EncryptionSpec, GoogleCloudAiplatformV1beta1EncryptionSpecArgs          
- KmsKeyName string
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- KmsKeyName string
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName string
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kms_key_name str
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
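For example, a customer-managed encryption key can be supplied through the CustomJob's encryption_spec argument shown in the constructor overloads; the key name below is a placeholder and must be in the same region as the job.
import pulumi_google_native.aiplatform.v1beta1 as aiplatform

# Placeholder key; the key ring must live in the job's region.
cmek = aiplatform.GoogleCloudAiplatformV1beta1EncryptionSpecArgs(
    kms_key_name="projects/my-project/locations/us-central1/keyRings/my-kr/cryptoKeys/my-key",
)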
GoogleCloudAiplatformV1beta1EncryptionSpecResponse, GoogleCloudAiplatformV1beta1EncryptionSpecResponseArgs            
- KmsKeyName string
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- KmsKeyName string
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName string
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kms_key_name str
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String
- The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
GoogleCloudAiplatformV1beta1EnvVar, GoogleCloudAiplatformV1beta1EnvVarArgs          
- Name string
- Name of the environment variable. Must be a valid C identifier.
- Value string
- Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- Name string
- Name of the environment variable. Must be a valid C identifier.
- Value string
- Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- name String
- Name of the environment variable. Must be a valid C identifier.
- value String
- Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- name string
- Name of the environment variable. Must be a valid C identifier.
- value string
- Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- name str
- Name of the environment variable. Must be a valid C identifier.
- value str
- Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- name String
- Name of the environment variable. Must be a valid C identifier.
- value String
- Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
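The expansion rules are easiest to see in a short sketch; these variables would typically be placed on a container spec's environment variable list (assumed here to be its env field).
import pulumi_google_native.aiplatform.v1beta1 as aiplatform

env = [
    aiplatform.GoogleCloudAiplatformV1beta1EnvVarArgs(name="DATA_DIR", value="/mnt/data"),
    # $(DATA_DIR) expands against the previously defined variable, giving /mnt/data/train.
    aiplatform.GoogleCloudAiplatformV1beta1EnvVarArgs(name="TRAIN_DIR", value="$(DATA_DIR)/train"),
    # $$ escapes the expansion, so the container sees the literal string $(DATA_DIR).
    aiplatform.GoogleCloudAiplatformV1beta1EnvVarArgs(name="ESCAPED", value="$$(DATA_DIR)"),
]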
GoogleCloudAiplatformV1beta1EnvVarResponse, GoogleCloudAiplatformV1beta1EnvVarResponseArgs            
- Name string
- Name of the environment variable. Must be a valid C identifier.
- Value string
- Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- Name string
- Name of the environment variable. Must be a valid C identifier.
- Value string
- Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- name String
- Name of the environment variable. Must be a valid C identifier.
- value String
- Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- name string
- Name of the environment variable. Must be a valid C identifier.
- value string
- Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- name str
- Name of the environment variable. Must be a valid C identifier.
- value str
- Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- name String
- Name of the environment variable. Must be a valid C identifier.
- value String
- Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
GoogleCloudAiplatformV1beta1GcsDestination, GoogleCloudAiplatformV1beta1GcsDestinationArgs          
- OutputUriPrefix string
- Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- OutputUriPrefix string
- Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix String
- Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix string
- Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- output_uri_prefix str
- Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix String
- Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
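A minimal sketch; the bucket name is a placeholder, and the trailing '/' may be omitted since one is appended automatically.
import pulumi_google_native.aiplatform.v1beta1 as aiplatform

dest = aiplatform.GoogleCloudAiplatformV1beta1GcsDestinationArgs(
    output_uri_prefix="gs://my-bucket/custom-job-output",
)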
GoogleCloudAiplatformV1beta1GcsDestinationResponse, GoogleCloudAiplatformV1beta1GcsDestinationResponseArgs            
- OutputUriPrefix string
- Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- OutputUriPrefix string
- Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix String
- Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix string
- Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- output_uri_prefix str
- Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix String
- Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
GoogleCloudAiplatformV1beta1MachineSpec, GoogleCloudAiplatformV1beta1MachineSpecArgs          
- AcceleratorCount int
- The number of accelerators to attach to the machine.
- AcceleratorType Pulumi.GoogleNative.Aiplatform.V1Beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- MachineType string
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- TpuTopology string
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- AcceleratorCount int
- The number of accelerators to attach to the machine.
- AcceleratorType GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- MachineType string
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- TpuTopology string
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- acceleratorCount Integer
- The number of accelerators to attach to the machine.
- acceleratorType GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- machineType String
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- tpuTopology String
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- acceleratorCount number
- The number of accelerators to attach to the machine.
- acceleratorType GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- machineType string
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- tpuTopology string
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- accelerator_count int
- The number of accelerators to attach to the machine.
- accelerator_type GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- machine_type str
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- tpu_topology str
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- acceleratorCount Number
- The number of accelerators to attach to the machine.
- acceleratorType "ACCELERATOR_TYPE_UNSPECIFIED" | "NVIDIA_TESLA_K80" | "NVIDIA_TESLA_P100" | "NVIDIA_TESLA_V100" | "NVIDIA_TESLA_P4" | "NVIDIA_TESLA_T4" | "NVIDIA_TESLA_A100" | "NVIDIA_A100_80GB" | "NVIDIA_L4" | "NVIDIA_H100_80GB" | "TPU_V2" | "TPU_V3" | "TPU_V4_POD" | "TPU_V5_LITEPOD"
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- machineType String
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- tpuTopology String
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
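A sketch of a GPU-backed machine spec for a worker pool; the machine type and accelerator pairing is illustrative, the accelerator count must be one the chosen machine type supports, and either the generated enum or its raw string value is normally accepted for the accelerator type.
import pulumi_google_native.aiplatform.v1beta1 as aiplatform

machine = aiplatform.GoogleCloudAiplatformV1beta1MachineSpecArgs(
    machine_type="n1-standard-8",
    accelerator_type="NVIDIA_TESLA_T4",  # raw string form; see the enum type below
    accelerator_count=1,
)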
GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType, GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeArgs              
- AcceleratorTypeUnspecified
- ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type, which means no accelerator.
- NvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- NvidiaA10080gb
- NVIDIA_A100_80GB: Nvidia A100 80GB GPU.
- NvidiaL4
- NVIDIA_L4: Nvidia L4 GPU.
- NvidiaH10080gb
- NVIDIA_H100_80GB: Nvidia H100 80Gb GPU.
- TpuV2
- TPU_V2: TPU v2.
- TpuV3
- TPU_V3: TPU v3.
- TpuV4Pod
- TPU_V4_POD: TPU v4.
- TpuV5Litepod
- TPU_V5_LITEPOD: TPU v5.
- GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeAcceleratorTypeUnspecified
- ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type, which means no accelerator.
- GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaA10080gb
- NVIDIA_A100_80GB: Nvidia A100 80GB GPU.
- GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaL4
- NVIDIA_L4: Nvidia L4 GPU.
- GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaH10080gb
- NVIDIA_H100_80GB: Nvidia H100 80Gb GPU.
- GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeTpuV2
- TPU_V2: TPU v2.
- GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeTpuV3
- TPU_V3: TPU v3.
- GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeTpuV4Pod
- TPU_V4_POD: TPU v4.
- GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeTpuV5Litepod
- TPU_V5_LITEPOD: TPU v5.
- AcceleratorTypeUnspecified
- ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type, which means no accelerator.
- NvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- NvidiaA10080gb
- NVIDIA_A100_80GB: Nvidia A100 80GB GPU.
- NvidiaL4
- NVIDIA_L4: Nvidia L4 GPU.
- NvidiaH10080gb
- NVIDIA_H100_80GB: Nvidia H100 80Gb GPU.
- TpuV2
- TPU_V2: TPU v2.
- TpuV3
- TPU_V3: TPU v3.
- TpuV4Pod
- TPU_V4_POD: TPU v4.
- TpuV5Litepod
- TPU_V5_LITEPOD: TPU v5.
- AcceleratorTypeUnspecified
- ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type, which means no accelerator.
- NvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- NvidiaA10080gb
- NVIDIA_A100_80GB: Nvidia A100 80GB GPU.
- NvidiaL4
- NVIDIA_L4: Nvidia L4 GPU.
- NvidiaH10080gb
- NVIDIA_H100_80GB: Nvidia H100 80Gb GPU.
- TpuV2
- TPU_V2: TPU v2.
- TpuV3
- TPU_V3: TPU v3.
- TpuV4Pod
- TPU_V4_POD: TPU v4.
- TpuV5Litepod
- TPU_V5_LITEPOD: TPU v5.
- ACCELERATOR_TYPE_UNSPECIFIED
- ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type, which means no accelerator.
- NVIDIA_TESLA_K80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NVIDIA_TESLA_P100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NVIDIA_TESLA_V100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NVIDIA_TESLA_P4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NVIDIA_TESLA_T4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NVIDIA_TESLA_A100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- NVIDIA_A10080GB
- NVIDIA_A100_80GB: Nvidia A100 80GB GPU.
- NVIDIA_L4
- NVIDIA_L4: Nvidia L4 GPU.
- NVIDIA_H10080GB
- NVIDIA_H100_80GB: Nvidia H100 80Gb GPU.
- TPU_V2
- TPU_V2: TPU v2.
- TPU_V3
- TPU_V3: TPU v3.
- TPU_V4_POD
- TPU_V4_POD: TPU v4.
- TPU_V5_LITEPOD
- TPU_V5_LITEPOD: TPU v5.
- "ACCELERATOR_TYPE_UNSPECIFIED"
- ACCELERATOR_TYPE_UNSPECIFIEDUnspecified accelerator type, which means no accelerator.
- "NVIDIA_TESLA_K80"
- NVIDIA_TESLA_K80Nvidia Tesla K80 GPU.
- "NVIDIA_TESLA_P100"
- NVIDIA_TESLA_P100Nvidia Tesla P100 GPU.
- "NVIDIA_TESLA_V100"
- NVIDIA_TESLA_V100Nvidia Tesla V100 GPU.
- "NVIDIA_TESLA_P4"
- NVIDIA_TESLA_P4Nvidia Tesla P4 GPU.
- "NVIDIA_TESLA_T4"
- NVIDIA_TESLA_T4Nvidia Tesla T4 GPU.
- "NVIDIA_TESLA_A100"
- NVIDIA_TESLA_A100Nvidia Tesla A100 GPU.
- "NVIDIA_A100_80GB"
- NVIDIA_A100_80GBNvidia A100 80GB GPU.
- "NVIDIA_L4"
- NVIDIA_L4Nvidia L4 GPU.
- "NVIDIA_H100_80GB"
- NVIDIA_H100_80GBNvidia H100 80Gb GPU.
- "TPU_V2"
- TPU_V2TPU v2.
- "TPU_V3"
- TPU_V3TPU v3.
- "TPU_V4_POD"
- TPU_V4_PODTPU v4.
- "TPU_V5_LITEPOD"
- TPU_V5_LITEPODTPU v5.
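The same spec written with the generated enum instead of the raw string, assuming the enum class is importable from the same module as the other types.
import pulumi_google_native.aiplatform.v1beta1 as aiplatform

machine = aiplatform.GoogleCloudAiplatformV1beta1MachineSpecArgs(
    machine_type="n1-standard-8",
    accelerator_type=aiplatform.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.NVIDIA_TESLA_T4,
    accelerator_count=1,
)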
GoogleCloudAiplatformV1beta1MachineSpecResponse, GoogleCloudAiplatformV1beta1MachineSpecResponseArgs            
- AcceleratorCount int
- The number of accelerators to attach to the machine.
- AcceleratorType string
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- MachineType string
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- TpuTopology string
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- AcceleratorCount int
- The number of accelerators to attach to the machine.
- AcceleratorType string
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- MachineType string
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- TpuTopology string
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- acceleratorCount Integer
- The number of accelerators to attach to the machine.
- acceleratorType String
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- machineType String
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- tpuTopology String
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- acceleratorCount number
- The number of accelerators to attach to the machine.
- acceleratorType string
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- machineType string
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- tpuTopology string
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- accelerator_count int
- The number of accelerators to attach to the machine.
- accelerator_type str
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- machine_type str
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- tpu_topology str
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- acceleratorCount Number
- The number of accelerators to attach to the machine.
- acceleratorType String
- Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- machineType String
- Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- tpuTopology String
- Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
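To see how these machine fields are used in practice, here is a minimal TypeScript sketch of a CustomJob whose single worker pool requests one NVIDIA T4; the project, region, and image URI are placeholder values, not recommendations.

import * as google_native from "@pulumi/google-native";

// Minimal sketch: one worker pool on an n1-standard-8 VM with a single T4 GPU.
// acceleratorType accepts the string values listed in the enum above.
const gpuJob = new google_native.aiplatform.v1beta1.CustomJob("gpu-job", {
    displayName: "gpu-training-job",
    location: "us-central1",
    jobSpec: {
        workerPoolSpecs: [{
            replicaCount: "1",
            machineSpec: {
                machineType: "n1-standard-8",        // required inside a WorkerPoolSpec
                acceleratorType: "NVIDIA_TESLA_T4",  // one of the accelerator values above
                acceleratorCount: 1,
            },
            containerSpec: {
                imageUri: "gcr.io/my-project/trainer:latest", // placeholder training image
            },
        }],
    },
});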
GoogleCloudAiplatformV1beta1NfsMount, GoogleCloudAiplatformV1beta1NfsMountArgs          
- MountPoint string
- Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
- Path string
- Source path exported from NFS server. Has to start with '/', and combined with the ip address, it indicates the source mount path in the form of server:path
- Server string
- IP address of the NFS server.
- MountPoint string
- Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
- Path string
- Source path exported from NFS server. Has to start with '/', and combined with the ip address, it indicates the source mount path in the form of server:path
- Server string
- IP address of the NFS server.
- mountPoint String
- Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
- path String
- Source path exported from NFS server. Has to start with '/', and combined with the ip address, it indicates the source mount path in the form of server:path
- server String
- IP address of the NFS server.
- mountPoint string
- Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
- path string
- Source path exported from NFS server. Has to start with '/', and combined with the ip address, it indicates the source mount path in the form of server:path
- server string
- IP address of the NFS server.
- mount_point str
- Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
- path str
- Source path exported from NFS server. Has to start with '/', and combined with the ip address, it indicates the source mount path in the form of server:path
- server str
- IP address of the NFS server.
- mountPoint String
- Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
- path String
- Source path exported from NFS server. Has to start with '/', and combined with the ip address, it indicates the source mount path in the form of server:path
- server String
- IP address of the NFS server.
GoogleCloudAiplatformV1beta1NfsMountResponse, GoogleCloudAiplatformV1beta1NfsMountResponseArgs            
- MountPoint string
- Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
- Path string
- Source path exported from NFS server. Has to start with '/', and combined with the ip address, it indicates the source mount path in the form of server:path
- Server string
- IP address of the NFS server.
- MountPoint string
- Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
- Path string
- Source path exported from NFS server. Has to start with '/', and combined with the ip address, it indicates the source mount path in the form of server:path
- Server string
- IP address of the NFS server.
- mountPoint String
- Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
- path String
- Source path exported from NFS server. Has to start with '/', and combined with the ip address, it indicates the source mount path in the form of server:path
- server String
- IP address of the NFS server.
- mountPoint string
- Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
- path string
- Source path exported from NFS server. Has to start with '/', and combined with the ip address, it indicates the source mount path in the form of server:path
- server string
- IP address of the NFS server.
- mount_point str
- Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
- path str
- Source path exported from NFS server. Has to start with '/', and combined with the ip address, it indicates the source mount path in the form of server:path
- server str
- IP address of the NFS server.
- mountPoint String
- Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
- path String
- Source path exported from NFS server. Has to start with '/', and combined with the ip address, it indicates the source mount path in the form of server:path
- server String
- IP address of the NFS server.
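As a rough sketch of how an NFS mount is attached to a worker pool: the server address, export path, and VPC network below are placeholders, and reaching a private NFS server generally also requires peering the job to a VPC through jobSpec.network.

import * as google_native from "@pulumi/google-native";

// Minimal sketch, assuming an NFS server at 10.0.0.2 exporting /exports/datasets
// that is reachable from the peered network (all names are placeholders).
const nfsJob = new google_native.aiplatform.v1beta1.CustomJob("nfs-job", {
    displayName: "nfs-training-job",
    location: "us-central1",
    jobSpec: {
        network: "projects/1234567890/global/networks/my-vpc", // placeholder VPC
        workerPoolSpecs: [{
            replicaCount: "1",
            machineSpec: { machineType: "n1-standard-4" },
            containerSpec: { imageUri: "gcr.io/my-project/trainer:latest" }, // placeholder
            nfsMounts: [{
                server: "10.0.0.2",        // IP address of the NFS server
                path: "/exports/datasets", // export path; must start with '/'
                mountPoint: "datasets",    // mounted under /mnt/nfs/datasets inside the job
            }],
        }],
    },
});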
GoogleCloudAiplatformV1beta1PythonPackageSpec, GoogleCloudAiplatformV1beta1PythonPackageSpecArgs            
- ExecutorImageUri string
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- PackageUris List<string>
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- PythonModule string
- The Python module name to run after installing the packages.
- Args List<string>
- Command line arguments to be passed to the Python task.
- Env List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVar>
- Environment variables to be passed to the python module. Maximum limit is 100.
- ExecutorImageUri string
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- PackageUris []string
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- PythonModule string
- The Python module name to run after installing the packages.
- Args []string
- Command line arguments to be passed to the Python task.
- Env []GoogleCloudAiplatformV1beta1EnvVar
- Environment variables to be passed to the python module. Maximum limit is 100.
- executorImageUri String
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- packageUris List<String>
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- pythonModule String
- The Python module name to run after installing the packages.
- args List<String>
- Command line arguments to be passed to the Python task.
- env List<GoogleCloudAiplatformV1beta1EnvVar>
- Environment variables to be passed to the python module. Maximum limit is 100.
- executorImageUri string
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- packageUris string[]
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- pythonModule string
- The Python module name to run after installing the packages.
- args string[]
- Command line arguments to be passed to the Python task.
- env GoogleCloudAiplatformV1beta1EnvVar[]
- Environment variables to be passed to the python module. Maximum limit is 100.
- executor_image_uri str
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- package_uris Sequence[str]
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- python_module str
- The Python module name to run after installing the packages.
- args Sequence[str]
- Command line arguments to be passed to the Python task.
- env Sequence[GoogleCloudAiplatformV1beta1EnvVar]
- Environment variables to be passed to the python module. Maximum limit is 100.
- executorImageUri String
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- packageUris List<String>
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- pythonModule String
- The Python module name to run after installing the packages.
- args List<String>
- Command line arguments to be passed to the Python task.
- env List<Property Map>
- Environment variables to be passed to the python module. Maximum limit is 100.
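For a worker pool that runs a packaged Python trainer instead of a custom container, a hedged TypeScript sketch follows; the executor image, Cloud Storage URI, and module name are placeholders (the executor image must come from the pre-built training containers list).

import * as google_native from "@pulumi/google-native";

// Minimal sketch of a Python-package task; URIs and the module name are placeholders.
const pkgJob = new google_native.aiplatform.v1beta1.CustomJob("pkg-job", {
    displayName: "python-package-job",
    location: "us-central1",
    jobSpec: {
        workerPoolSpecs: [{
            replicaCount: "1",
            machineSpec: { machineType: "n1-standard-4" },
            pythonPackageSpec: {
                executorImageUri: "us-docker.pkg.dev/vertex-ai/training/pytorch-gpu.2-0:latest", // placeholder; pick from the pre-built list
                packageUris: ["gs://my-bucket/dist/trainer-0.1.tar.gz"], // up to 100 package URIs
                pythonModule: "trainer.task",
                args: ["--epochs=10"],
                env: [{ name: "EXPERIMENT_NAME", value: "baseline" }],  // up to 100 env vars
            },
        }],
    },
});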
GoogleCloudAiplatformV1beta1PythonPackageSpecResponse, GoogleCloudAiplatformV1beta1PythonPackageSpecResponseArgs              
- Args List<string>
- Command line arguments to be passed to the Python task.
- Env List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVarResponse>
- Environment variables to be passed to the python module. Maximum limit is 100.
- ExecutorImageUri string
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- PackageUris List<string>
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- PythonModule string
- The Python module name to run after installing the packages.
- Args []string
- Command line arguments to be passed to the Python task.
- Env []GoogleCloudAiplatformV1beta1EnvVarResponse
- Environment variables to be passed to the python module. Maximum limit is 100.
- ExecutorImageUri string
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- PackageUris []string
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- PythonModule string
- The Python module name to run after installing the packages.
- args List<String>
- Command line arguments to be passed to the Python task.
- env List<GoogleCloudAiplatformV1beta1EnvVarResponse>
- Environment variables to be passed to the python module. Maximum limit is 100.
- executorImageUri String
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- packageUris List<String>
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- pythonModule String
- The Python module name to run after installing the packages.
- args string[]
- Command line arguments to be passed to the Python task.
- env GoogleCloudAiplatformV1beta1EnvVarResponse[]
- Environment variables to be passed to the python module. Maximum limit is 100.
- executorImageUri string
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- packageUris string[]
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- pythonModule string
- The Python module name to run after installing the packages.
- args Sequence[str]
- Command line arguments to be passed to the Python task.
- env Sequence[GoogleCloudAiplatformV1beta1EnvVarResponse]
- Environment variables to be passed to the python module. Maximum limit is 100.
- executor_image_uri str
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- package_uris Sequence[str]
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- python_module str
- The Python module name to run after installing the packages.
- args List<String>
- Command line arguments to be passed to the Python task.
- env List<Property Map>
- Environment variables to be passed to the python module. Maximum limit is 100.
- executorImageUri String
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- packageUris List<String>
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- pythonModule String
- The Python module name to run after installing the packages.
GoogleCloudAiplatformV1beta1Scheduling, GoogleCloudAiplatformV1beta1SchedulingArgs        
- DisableRetries bool
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- RestartJobOnWorkerRestart bool
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- Timeout string
- The maximum job running time. The default is 7 days.
- DisableRetries bool
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- RestartJobOnWorkerRestart bool
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- Timeout string
- The maximum job running time. The default is 7 days.
- disableRetries Boolean
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- restartJobOnWorkerRestart Boolean
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- timeout String
- The maximum job running time. The default is 7 days.
- disableRetries boolean
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- restartJobOnWorkerRestart boolean
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- timeout string
- The maximum job running time. The default is 7 days.
- disable_retries bool
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- restart_job_on_worker_restart bool
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- timeout str
- The maximum job running time. The default is 7 days.
- disableRetries Boolean
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- restartJobOnWorkerRestart Boolean
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- timeout String
- The maximum job running time. The default is 7 days.
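A short sketch of how these scheduling options sit inside the job spec; the 24-hour timeout is an arbitrary example (timeout is a duration string, and the service default is 7 days).

import * as google_native from "@pulumi/google-native";

// Minimal sketch: cap the run at 24 hours, do not restart on worker restarts,
// and disable internal-error retries. All values are illustrative.
const scheduledJob = new google_native.aiplatform.v1beta1.CustomJob("scheduled-job", {
    displayName: "scheduled-training-job",
    location: "us-central1",
    jobSpec: {
        scheduling: {
            timeout: "86400s",                // duration string; service default is 7 days
            restartJobOnWorkerRestart: false,
            disableRetries: true,             // if true, forces restartJobOnWorkerRestart to false
        },
        workerPoolSpecs: [{
            replicaCount: "1",
            machineSpec: { machineType: "n1-standard-4" },
            containerSpec: { imageUri: "gcr.io/my-project/trainer:latest" }, // placeholder
        }],
    },
});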
GoogleCloudAiplatformV1beta1SchedulingResponse, GoogleCloudAiplatformV1beta1SchedulingResponseArgs          
- DisableRetries bool
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- RestartJobOnWorkerRestart bool
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- Timeout string
- The maximum job running time. The default is 7 days.
- DisableRetries bool
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- RestartJobOnWorkerRestart bool
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- Timeout string
- The maximum job running time. The default is 7 days.
- disableRetries Boolean
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- restartJobOnWorkerRestart Boolean
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- timeout String
- The maximum job running time. The default is 7 days.
- disableRetries boolean
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- restartJobOnWorkerRestart boolean
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- timeout string
- The maximum job running time. The default is 7 days.
- disable_retries bool
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- restart_job_on_worker_restart bool
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- timeout str
- The maximum job running time. The default is 7 days.
- disableRetries Boolean
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- restartJobOnWorkerRestart Boolean
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- timeout String
- The maximum job running time. The default is 7 days.
GoogleCloudAiplatformV1beta1WorkerPoolSpec, GoogleCloudAiplatformV1beta1WorkerPoolSpecArgs            
- ContainerSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ContainerSpec
- The custom container task.
- DiskSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1DiskSpec
- Disk spec.
- MachineSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1MachineSpec
- Optional. Immutable. The specification of a single machine.
- NfsMounts List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1NfsMount>
- Optional. List of NFS mount spec.
- PythonPackageSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1PythonPackageSpec
- The Python packaged task.
- ReplicaCount string
- Optional. The number of worker replicas to use for this worker pool.
- ContainerSpec GoogleCloudAiplatformV1beta1ContainerSpec
- The custom container task.
- DiskSpec GoogleCloudAiplatformV1beta1DiskSpec
- Disk spec.
- MachineSpec GoogleCloudAiplatformV1beta1MachineSpec
- Optional. Immutable. The specification of a single machine.
- NfsMounts []GoogleCloudAiplatformV1beta1NfsMount
- Optional. List of NFS mount spec.
- PythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpec
- The Python packaged task.
- ReplicaCount string
- Optional. The number of worker replicas to use for this worker pool.
- containerSpec GoogleCloudAiplatformV1beta1ContainerSpec
- The custom container task.
- diskSpec GoogleCloudAiplatformV1beta1DiskSpec
- Disk spec.
- machineSpec GoogleCloudAiplatformV1beta1MachineSpec
- Optional. Immutable. The specification of a single machine.
- nfsMounts List<GoogleCloudAiplatformV1beta1NfsMount>
- Optional. List of NFS mount spec.
- pythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpec
- The Python packaged task.
- replicaCount String
- Optional. The number of worker replicas to use for this worker pool.
- containerSpec GoogleCloudAiplatformV1beta1ContainerSpec
- The custom container task.
- diskSpec GoogleCloudAiplatformV1beta1DiskSpec
- Disk spec.
- machineSpec GoogleCloudAiplatformV1beta1MachineSpec
- Optional. Immutable. The specification of a single machine.
- nfsMounts GoogleCloudAiplatformV1beta1NfsMount[]
- Optional. List of NFS mount spec.
- pythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpec
- The Python packaged task.
- replicaCount string
- Optional. The number of worker replicas to use for this worker pool.
- container_spec GoogleCloudAiplatformV1beta1ContainerSpec
- The custom container task.
- disk_spec GoogleCloudAiplatformV1beta1DiskSpec
- Disk spec.
- machine_spec GoogleCloudAiplatformV1beta1MachineSpec
- Optional. Immutable. The specification of a single machine.
- nfs_mounts Sequence[GoogleCloudAiplatformV1beta1NfsMount]
- Optional. List of NFS mount spec.
- python_package_spec GoogleCloudAiplatformV1beta1PythonPackageSpec
- The Python packaged task.
- replica_count str
- Optional. The number of worker replicas to use for this worker pool.
- containerSpec Property Map
- The custom container task.
- diskSpec Property Map
- Disk spec.
- machineSpec Property Map
- Optional. Immutable. The specification of a single machine.
- nfsMounts List<Property Map>
- Optional. List of NFS mount spec.
- pythonPackageSpec Property Map
- The Python packaged task.
- replicaCount String
- Optional. The number of worker replicas to use for this worker pool.
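Putting a full WorkerPoolSpec together, the sketch below declares a small distributed job with a chief pool and a worker pool; machine sizes, boot disk settings, and the image are placeholders, not recommendations.

import * as google_native from "@pulumi/google-native";

// Minimal sketch of a two-pool distributed job (chief + workers); all values are illustrative.
const distributedJob = new google_native.aiplatform.v1beta1.CustomJob("distributed-job", {
    displayName: "distributed-training-job",
    location: "us-central1",
    jobSpec: {
        workerPoolSpecs: [
            {
                // First pool: the chief replica.
                replicaCount: "1",
                machineSpec: { machineType: "n1-standard-8" },
                diskSpec: { bootDiskType: "pd-ssd", bootDiskSizeGb: 200 },
                containerSpec: { imageUri: "gcr.io/my-project/trainer:latest" },
            },
            {
                // Second pool: additional workers.
                replicaCount: "2",
                machineSpec: { machineType: "n1-standard-8" },
                diskSpec: { bootDiskType: "pd-ssd", bootDiskSizeGb: 200 },
                containerSpec: { imageUri: "gcr.io/my-project/trainer:latest" },
            },
        ],
    },
});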
GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse, GoogleCloudAiplatformV1beta1WorkerPoolSpecResponseArgs              
- ContainerSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ContainerSpecResponse
- The custom container task.
- DiskSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1DiskSpecResponse
- Disk spec.
- MachineSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1MachineSpecResponse
- Optional. Immutable. The specification of a single machine.
- NfsMounts List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1NfsMountResponse>
- Optional. List of NFS mount spec.
- PythonPackageSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1PythonPackageSpecResponse
- The Python packaged task.
- ReplicaCount string
- Optional. The number of worker replicas to use for this worker pool.
- ContainerSpec GoogleCloudAiplatformV1beta1ContainerSpecResponse
- The custom container task.
- DiskSpec GoogleCloudAiplatformV1beta1DiskSpecResponse
- Disk spec.
- MachineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
- Optional. Immutable. The specification of a single machine.
- NfsMounts []GoogleCloudAiplatformV1beta1NfsMountResponse
- Optional. List of NFS mount spec.
- PythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpecResponse
- The Python packaged task.
- ReplicaCount string
- Optional. The number of worker replicas to use for this worker pool.
- containerSpec GoogleCloudAiplatformV1beta1ContainerSpecResponse
- The custom container task.
- diskSpec GoogleCloudAiplatformV1beta1DiskSpecResponse
- Disk spec.
- machineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
- Optional. Immutable. The specification of a single machine.
- nfsMounts List<GoogleCloudAiplatformV1beta1NfsMountResponse>
- Optional. List of NFS mount spec.
- pythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpecResponse
- The Python packaged task.
- replicaCount String
- Optional. The number of worker replicas to use for this worker pool.
- containerSpec GoogleCloudAiplatformV1beta1ContainerSpecResponse
- The custom container task.
- diskSpec GoogleCloudAiplatformV1beta1DiskSpecResponse
- Disk spec.
- machineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
- Optional. Immutable. The specification of a single machine.
- nfsMounts GoogleCloudAiplatformV1beta1NfsMountResponse[]
- Optional. List of NFS mount spec.
- pythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpecResponse
- The Python packaged task.
- replicaCount string
- Optional. The number of worker replicas to use for this worker pool.
- container_spec GoogleCloudAiplatformV1beta1ContainerSpecResponse
- The custom container task.
- disk_spec GoogleCloudAiplatformV1beta1DiskSpecResponse
- Disk spec.
- machine_spec GoogleCloudAiplatformV1beta1MachineSpecResponse
- Optional. Immutable. The specification of a single machine.
- nfs_mounts Sequence[GoogleCloudAiplatformV1beta1NfsMountResponse]
- Optional. List of NFS mount spec.
- python_package_spec GoogleCloudAiplatformV1beta1PythonPackageSpecResponse
- The Python packaged task.
- replica_count str
- Optional. The number of worker replicas to use for this worker pool.
- containerSpec Property Map
- The custom container task.
- diskSpec Property Map
- Disk spec.
- machineSpec Property Map
- Optional. Immutable. The specification of a single machine.
- nfsMounts List<Property Map>
- Optional. List of NFS mount spec.
- pythonPackageSpec Property Map
- The Python packaged task.
- replicaCount String
- Optional. The number of worker replicas to use for this worker pool.
GoogleRpcStatusResponse, GoogleRpcStatusResponseArgs        
- Code int
- The status code, which should be an enum value of google.rpc.Code.
- Details List<ImmutableDictionary<string, string>>
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- Message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- Code int
- The status code, which should be an enum value of google.rpc.Code.
- Details []map[string]string
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- Message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code Integer
- The status code, which should be an enum value of google.rpc.Code.
- details List<Map<String,String>>
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message String
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code number
- The status code, which should be an enum value of google.rpc.Code.
- details {[key: string]: string}[]
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code int
- The status code, which should be an enum value of google.rpc.Code.
- details Sequence[Mapping[str, str]]
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message str
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code Number
- The status code, which should be an enum value of google.rpc.Code.
- details List<Map<String>>
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message String
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
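This status shape is what the resource surfaces when a job ends in error; a minimal sketch of reading it back, assuming the CustomJob resource's error and state outputs (the job spec itself is a placeholder):

import * as google_native from "@pulumi/google-native";

// Placeholder job; the interesting part is reading the outputs below.
const trainingJob = new google_native.aiplatform.v1beta1.CustomJob("training-job", {
    displayName: "training-job",
    location: "us-central1",
    jobSpec: {
        workerPoolSpecs: [{
            replicaCount: "1",
            machineSpec: { machineType: "n1-standard-4" },
            containerSpec: { imageUri: "gcr.io/my-project/trainer:latest" }, // placeholder
        }],
    },
});

// Export the job state plus a readable summary built from the status code and message.
export const jobState = trainingJob.state;
export const jobError = trainingJob.error.apply(e =>
    e.message ? `rpc code ${e.code}: ${e.message}` : "no error reported");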
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0