Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.notebooks/v1.Execution
Creates a new Execution in a given project and location. Auto-naming is currently not supported for this resource.
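Because auto-naming is not supported, callers must supply a unique executionId themselves. A minimal sketch of one way to generate such IDs (the helper name and slug-plus-timestamp scheme are illustrative assumptions, not part of the API, which only requires that the ID be user-defined and unique):

```python
import re
import time


def make_execution_id(base: str) -> str:
    """Derive a unique, URL-safe execution ID from a base name.

    Hypothetical helper: lowercases the base name, replaces runs of
    disallowed characters with hyphens, and appends a timestamp so
    repeated runs of the same notebook get distinct IDs.
    """
    slug = re.sub(r"[^a-z0-9-]+", "-", base.lower()).strip("-")
    return f"{slug}-{time.strftime('%Y%m%d%H%M%S')}"
```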
Create Execution Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Execution(name: string, args: ExecutionArgs, opts?: CustomResourceOptions);
@overload
def Execution(resource_name: str,
              args: ExecutionArgs,
              opts: Optional[ResourceOptions] = None)
@overload
def Execution(resource_name: str,
              opts: Optional[ResourceOptions] = None,
              execution_id: Optional[str] = None,
              description: Optional[str] = None,
              execution_template: Optional[ExecutionTemplateArgs] = None,
              location: Optional[str] = None,
              output_notebook_file: Optional[str] = None,
              project: Optional[str] = None)
func NewExecution(ctx *Context, name string, args ExecutionArgs, opts ...ResourceOption) (*Execution, error)
public Execution(string name, ExecutionArgs args, CustomResourceOptions? opts = null)
public Execution(String name, ExecutionArgs args)
public Execution(String name, ExecutionArgs args, CustomResourceOptions options)
type: google-native:notebooks/v1:Execution
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args ExecutionArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args ExecutionArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args ExecutionArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args ExecutionArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args ExecutionArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var exampleexecutionResourceResourceFromNotebooksv1 = new GoogleNative.Notebooks.V1.Execution("exampleexecutionResourceResourceFromNotebooksv1", new()
{
    ExecutionId = "string",
    Description = "string",
    ExecutionTemplate = new GoogleNative.Notebooks.V1.Inputs.ExecutionTemplateArgs
    {
        Labels = 
        {
            { "string", "string" },
        },
        OutputNotebookFolder = "string",
        InputNotebookFile = "string",
        JobType = GoogleNative.Notebooks.V1.ExecutionTemplateJobType.JobTypeUnspecified,
        KernelSpec = "string",
        AcceleratorConfig = new GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfigArgs
        {
            CoreCount = "string",
            Type = GoogleNative.Notebooks.V1.SchedulerAcceleratorConfigType.SchedulerAcceleratorTypeUnspecified,
        },
        MasterType = "string",
        DataprocParameters = new GoogleNative.Notebooks.V1.Inputs.DataprocParametersArgs
        {
            Cluster = "string",
        },
        Parameters = "string",
        ParamsYamlFile = "string",
        ContainerImageUri = "string",
        ServiceAccount = "string",
        Tensorboard = "string",
        VertexAiParameters = new GoogleNative.Notebooks.V1.Inputs.VertexAIParametersArgs
        {
            Env = 
            {
                { "string", "string" },
            },
            Network = "string",
        },
    },
    Location = "string",
    OutputNotebookFile = "string",
    Project = "string",
});
example, err := notebooks.NewExecution(ctx, "exampleexecutionResourceResourceFromNotebooksv1", &notebooks.ExecutionArgs{
	ExecutionId: pulumi.String("string"),
	Description: pulumi.String("string"),
	ExecutionTemplate: &notebooks.ExecutionTemplateArgs{
		Labels: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
		OutputNotebookFolder: pulumi.String("string"),
		InputNotebookFile:    pulumi.String("string"),
		JobType:              notebooks.ExecutionTemplateJobTypeJobTypeUnspecified,
		KernelSpec:           pulumi.String("string"),
		AcceleratorConfig: &notebooks.SchedulerAcceleratorConfigArgs{
			CoreCount: pulumi.String("string"),
			Type:      notebooks.SchedulerAcceleratorConfigTypeSchedulerAcceleratorTypeUnspecified,
		},
		MasterType: pulumi.String("string"),
		DataprocParameters: &notebooks.DataprocParametersArgs{
			Cluster: pulumi.String("string"),
		},
		Parameters:        pulumi.String("string"),
		ParamsYamlFile:    pulumi.String("string"),
		ContainerImageUri: pulumi.String("string"),
		ServiceAccount:    pulumi.String("string"),
		Tensorboard:       pulumi.String("string"),
		VertexAiParameters: &notebooks.VertexAIParametersArgs{
			Env: pulumi.StringMap{
				"string": pulumi.String("string"),
			},
			Network: pulumi.String("string"),
		},
	},
	Location:           pulumi.String("string"),
	OutputNotebookFile: pulumi.String("string"),
	Project:            pulumi.String("string"),
})
var exampleexecutionResourceResourceFromNotebooksv1 = new Execution("exampleexecutionResourceResourceFromNotebooksv1", ExecutionArgs.builder()
    .executionId("string")
    .description("string")
    .executionTemplate(ExecutionTemplateArgs.builder()
        .labels(Map.of("string", "string"))
        .outputNotebookFolder("string")
        .inputNotebookFile("string")
        .jobType("JOB_TYPE_UNSPECIFIED")
        .kernelSpec("string")
        .acceleratorConfig(SchedulerAcceleratorConfigArgs.builder()
            .coreCount("string")
            .type("SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED")
            .build())
        .masterType("string")
        .dataprocParameters(DataprocParametersArgs.builder()
            .cluster("string")
            .build())
        .parameters("string")
        .paramsYamlFile("string")
        .containerImageUri("string")
        .serviceAccount("string")
        .tensorboard("string")
        .vertexAiParameters(VertexAIParametersArgs.builder()
            .env(Map.of("string", "string"))
            .network("string")
            .build())
        .build())
    .location("string")
    .outputNotebookFile("string")
    .project("string")
    .build());
exampleexecution_resource_resource_from_notebooksv1 = google_native.notebooks.v1.Execution("exampleexecutionResourceResourceFromNotebooksv1",
    execution_id="string",
    description="string",
    execution_template={
        "labels": {
            "string": "string",
        },
        "output_notebook_folder": "string",
        "input_notebook_file": "string",
        "job_type": google_native.notebooks.v1.ExecutionTemplateJobType.JOB_TYPE_UNSPECIFIED,
        "kernel_spec": "string",
        "accelerator_config": {
            "core_count": "string",
            "type": google_native.notebooks.v1.SchedulerAcceleratorConfigType.SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED,
        },
        "master_type": "string",
        "dataproc_parameters": {
            "cluster": "string",
        },
        "parameters": "string",
        "params_yaml_file": "string",
        "container_image_uri": "string",
        "service_account": "string",
        "tensorboard": "string",
        "vertex_ai_parameters": {
            "env": {
                "string": "string",
            },
            "network": "string",
        },
    },
    location="string",
    output_notebook_file="string",
    project="string")
const exampleexecutionResourceResourceFromNotebooksv1 = new google_native.notebooks.v1.Execution("exampleexecutionResourceResourceFromNotebooksv1", {
    executionId: "string",
    description: "string",
    executionTemplate: {
        labels: {
            string: "string",
        },
        outputNotebookFolder: "string",
        inputNotebookFile: "string",
        jobType: google_native.notebooks.v1.ExecutionTemplateJobType.JobTypeUnspecified,
        kernelSpec: "string",
        acceleratorConfig: {
            coreCount: "string",
            type: google_native.notebooks.v1.SchedulerAcceleratorConfigType.SchedulerAcceleratorTypeUnspecified,
        },
        masterType: "string",
        dataprocParameters: {
            cluster: "string",
        },
        parameters: "string",
        paramsYamlFile: "string",
        containerImageUri: "string",
        serviceAccount: "string",
        tensorboard: "string",
        vertexAiParameters: {
            env: {
                string: "string",
            },
            network: "string",
        },
    },
    location: "string",
    outputNotebookFile: "string",
    project: "string",
});
type: google-native:notebooks/v1:Execution
properties:
    description: string
    executionId: string
    executionTemplate:
        acceleratorConfig:
            coreCount: string
            type: SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED
        containerImageUri: string
        dataprocParameters:
            cluster: string
        inputNotebookFile: string
        jobType: JOB_TYPE_UNSPECIFIED
        kernelSpec: string
        labels:
            string: string
        masterType: string
        outputNotebookFolder: string
        parameters: string
        paramsYamlFile: string
        serviceAccount: string
        tensorboard: string
        vertexAiParameters:
            env:
                string: string
            network: string
    location: string
    outputNotebookFile: string
    project: string
Execution Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
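The dictionary-literal keys in the Python example are simply the snake_case spellings of the camelCase property names shown for the other languages. That mapping can be sketched with a small illustrative helper (not part of the SDK):

```python
import re


def camel_to_snake(name: str) -> str:
    # Insert an underscore before each interior capital letter, then
    # lowercase: this mirrors how the Python SDK spells property names.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()


print(camel_to_snake("outputNotebookFolder"))  # -> output_notebook_folder
print(camel_to_snake("vertexAiParameters"))    # -> vertex_ai_parameters
```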
The Execution resource accepts the following input properties:
- ExecutionId string
- Required. User-defined unique ID of this execution.
- Description string
- A brief description of this execution.
- ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Inputs.ExecutionTemplate
- execute metadata including name, hardware spec, region, labels, etc.
- Location string
- OutputNotebookFile string
- Output notebook file generated by this execution
- Project string
- ExecutionId string
- Required. User-defined unique ID of this execution.
- Description string
- A brief description of this execution.
- ExecutionTemplate ExecutionTemplateArgs
- execute metadata including name, hardware spec, region, labels, etc.
- Location string
- OutputNotebookFile string
- Output notebook file generated by this execution
- Project string
- executionId String
- Required. User-defined unique ID of this execution.
- description String
- A brief description of this execution.
- executionTemplate ExecutionTemplate 
- execute metadata including name, hardware spec, region, labels, etc.
- location String
- outputNotebookFile String
- Output notebook file generated by this execution
- project String
- executionId string
- Required. User-defined unique ID of this execution.
- description string
- A brief description of this execution.
- executionTemplate ExecutionTemplate 
- execute metadata including name, hardware spec, region, labels, etc.
- location string
- outputNotebookFile string
- Output notebook file generated by this execution
- project string
- execution_id str
- Required. User-defined unique ID of this execution.
- description str
- A brief description of this execution.
- execution_template ExecutionTemplateArgs
- execute metadata including name, hardware spec, region, labels, etc.
- location str
- output_notebook_file str
- Output notebook file generated by this execution
- project str
- executionId String
- Required. User-defined unique ID of this execution.
- description String
- A brief description of this execution.
- executionTemplate Property Map
- execute metadata including name, hardware spec, region, labels, etc.
- location String
- outputNotebookFile String
- Output notebook file generated by this execution
- project String
Outputs
All input properties are implicitly available as output properties. Additionally, the Execution resource produces the following output properties:
- CreateTime string
- Time the Execution was instantiated.
- DisplayName string
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- Id string
- The provider-assigned unique ID for this managed resource.
- JobUri string
- The URI of the external job used to execute the notebook.
- Name string
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- State string
- State of the underlying AI Platform job.
- UpdateTime string
- Time the Execution was last updated.
- CreateTime string
- Time the Execution was instantiated.
- DisplayName string
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- Id string
- The provider-assigned unique ID for this managed resource.
- JobUri string
- The URI of the external job used to execute the notebook.
- Name string
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- State string
- State of the underlying AI Platform job.
- UpdateTime string
- Time the Execution was last updated.
- createTime String
- Time the Execution was instantiated.
- displayName String
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- id String
- The provider-assigned unique ID for this managed resource.
- jobUri String
- The URI of the external job used to execute the notebook.
- name String
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- state String
- State of the underlying AI Platform job.
- updateTime String
- Time the Execution was last updated.
- createTime string
- Time the Execution was instantiated.
- displayName string
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- id string
- The provider-assigned unique ID for this managed resource.
- jobUri string
- The URI of the external job used to execute the notebook.
- name string
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- state string
- State of the underlying AI Platform job.
- updateTime string
- Time the Execution was last updated.
- create_time str
- Time the Execution was instantiated.
- display_name str
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- id str
- The provider-assigned unique ID for this managed resource.
- job_uri str
- The URI of the external job used to execute the notebook.
- name str
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- state str
- State of the underlying AI Platform job.
- update_time str
- Time the Execution was last updated.
- createTime String
- Time the Execution was instantiated.
- displayName String
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- id String
- The provider-assigned unique ID for this managed resource.
- jobUri String
- The URI of the external job used to execute the notebook.
- name String
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- state String
- State of the underlying AI Platform job.
- updateTime String
- Time the Execution was last updated.
Supporting Types
DataprocParameters, DataprocParametersArgs    
- Cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- Cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster str
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
DataprocParametersResponse, DataprocParametersResponseArgs      
- Cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- Cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster str
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
ExecutionTemplate, ExecutionTemplateArgs    
- ScaleTier Pulumi.GoogleNative.Notebooks.V1.ExecutionTemplateScaleTier
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.
- AcceleratorConfig Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfig
- Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParameters
- Parameters used in Dataproc JobType executions.
- InputNotebookFile string
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType Pulumi.GoogleNative.Notebooks.V1.ExecutionTemplateJobType
- The type of Job to be used on this execution.
- KernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels Dictionary<string, string>
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- MasterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- OutputNotebookFolder string
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- Parameters string
- Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ServiceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParameters
- Parameters used in Vertex AI JobType executions.
- ScaleTier ExecutionTemplateScaleTier
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.
- AcceleratorConfig SchedulerAcceleratorConfig
- Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters DataprocParameters
- Parameters used in Dataproc JobType executions.
- InputNotebookFile string
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType ExecutionTemplateJobType
- The type of Job to be used on this execution.
- KernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels map[string]string
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- MasterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- OutputNotebookFolder string
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- Parameters string
- Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ServiceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters VertexAIParameters
- Parameters used in Vertex AI JobType executions.
- scaleTier ExecutionTemplateScaleTier
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.
- acceleratorConfig SchedulerAcceleratorConfig
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParameters
- Parameters used in Dataproc JobType executions.
- inputNotebookFile String
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType ExecutionTemplateJobType
- The type of Job to be used on this execution.
- kernelSpec String
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String,String>
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType String
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder String
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters String
- Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- serviceAccount String
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParameters
- Parameters used in Vertex AI JobType executions.
- scaleTier ExecutionTemplateScaleTier
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.
- acceleratorConfig SchedulerAcceleratorConfig
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri string
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParameters
- Parameters used in Dataproc JobType executions.
- inputNotebookFile string
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType ExecutionTemplateJobType
- The type of Job to be used on this execution.
- kernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels {[key: string]: string}
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder string
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters string
- Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile string
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- serviceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParameters
- Parameters used in Vertex AI JobType executions.
- scale_tier ExecutionTemplate Scale Tier 
- Scale tier of the hardware used for notebook execution. DEPRECATED Will be discontinued. As right now only CUSTOM is supported.
- accelerator_config SchedulerAccelerator Config 
- Configuration (count and accelerator type) for hardware running notebook execution.
- container_image_ struri 
- Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataproc_parameters DataprocParameters 
- Parameters used in Dataproc JobType executions.
- input_notebook_ strfile 
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name}Ex:gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- job_type ExecutionTemplate Job Type 
- The type of Job to be used on this execution.
- kernel_spec str
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Mapping[str, str]
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- master_type str
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTieris set toCUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: -n1-standard-4-n1-standard-8-n1-standard-16-n1-standard-32-n1-standard-64-n1-standard-96-n1-highmem-2-n1-highmem-4-n1-highmem-8-n1-highmem-16-n1-highmem-32-n1-highmem-64-n1-highmem-96-n1-highcpu-16-n1-highcpu-32-n1-highcpu-64-n1-highcpu-96Alternatively, you can use the following legacy machine types: -standard-large_model-complex_model_s-complex_model_m-complex_model_l-standard_gpu-complex_model_m_gpu-complex_model_l_gpu-standard_p100-complex_model_m_p100-standard_v100-large_model_v100-complex_model_m_v100-complex_model_l_v100Finally, if you want to use a TPU for training, specifycloud_tpuin this field. Learn more about the special configuration options for training with TPU.
- output_notebook_ strfolder 
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder}Ex:gs://notebook_user/scheduled_notebooks
- parameters str
- Parameters used within the 'input_notebook_file' notebook.
- params_yaml_ strfile 
- Parameters to be overridden in the notebook during execution. Ref https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specifying parameters in the input notebook and pass them here in an YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- service_account str
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAspermission for the specified service account.
- tensorboard str
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertex_ai_ Vertexparameters AIParameters 
- Parameters used in Vertex AI JobType executions.
- scaleTier "SCALE_TIER_UNSPECIFIED" | "BASIC" | "STANDARD_1" | "PREMIUM_1" | "BASIC_GPU" | "BASIC_TPU" | "CUSTOM"
- Scale tier of the hardware used for notebook execution. DEPRECATED Will be discontinued. As right now only CUSTOM is supported.
- acceleratorConfig Property Map
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImage StringUri 
- Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters Property Map
- Parameters used in Dataproc JobType executions.
- inputNotebook StringFile 
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name}Ex:gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType "JOB_TYPE_UNSPECIFIED" | "VERTEX_AI" | "DATAPROC"
- The type of Job to be used on this execution.
- kernelSpec String
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String>
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType String
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTieris set toCUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: -n1-standard-4-n1-standard-8-n1-standard-16-n1-standard-32-n1-standard-64-n1-standard-96-n1-highmem-2-n1-highmem-4-n1-highmem-8-n1-highmem-16-n1-highmem-32-n1-highmem-64-n1-highmem-96-n1-highcpu-16-n1-highcpu-32-n1-highcpu-64-n1-highcpu-96Alternatively, you can use the following legacy machine types: -standard-large_model-complex_model_s-complex_model_m-complex_model_l-standard_gpu-complex_model_m_gpu-complex_model_l_gpu-standard_p100-complex_model_m_p100-standard_v100-large_model_v100-complex_model_m_v100-complex_model_l_v100Finally, if you want to use a TPU for training, specifycloud_tpuin this field. Learn more about the special configuration options for training with TPU.
- outputNotebook StringFolder 
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder}Ex:gs://notebook_user/scheduled_notebooks
- parameters String
- Parameters used within the 'input_notebook_file' notebook.
- paramsYaml StringFile 
- Parameters to be overridden in the notebook during execution. Ref https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specifying parameters in the input notebook and pass them here in an YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- serviceAccount String
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAspermission for the specified service account.
- tensorboard String
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAi Property MapParameters 
- Parameters used in Vertex AI JobType executions.
ExecutionTemplateJobType, ExecutionTemplateJobTypeArgs        
- JobType Unspecified 
- JOB_TYPE_UNSPECIFIEDNo type specified.
- VertexAi 
- VERTEX_AICustom Job in aiplatform.googleapis.com. Default value for an execution.
- Dataproc
- DATAPROCRun execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- ExecutionTemplate Job Type Job Type Unspecified 
- JOB_TYPE_UNSPECIFIEDNo type specified.
- ExecutionTemplate Job Type Vertex Ai 
- VERTEX_AICustom Job in aiplatform.googleapis.com. Default value for an execution.
- ExecutionTemplate Job Type Dataproc 
- DATAPROCRun execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- JobType Unspecified 
- JOB_TYPE_UNSPECIFIEDNo type specified.
- VertexAi 
- VERTEX_AICustom Job in aiplatform.googleapis.com. Default value for an execution.
- Dataproc
- DATAPROCRun execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- JobType Unspecified 
- JOB_TYPE_UNSPECIFIEDNo type specified.
- VertexAi 
- VERTEX_AICustom Job in aiplatform.googleapis.com. Default value for an execution.
- Dataproc
- DATAPROCRun execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- JOB_TYPE_UNSPECIFIED
- JOB_TYPE_UNSPECIFIEDNo type specified.
- VERTEX_AI
- VERTEX_AICustom Job in aiplatform.googleapis.com. Default value for an execution.
- DATAPROC
- DATAPROCRun execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- "JOB_TYPE_UNSPECIFIED"
- JOB_TYPE_UNSPECIFIEDNo type specified.
- "VERTEX_AI"
- VERTEX_AICustom Job in aiplatform.googleapis.com. Default value for an execution.
- "DATAPROC"
- DATAPROCRun execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
ExecutionTemplateResponse, ExecutionTemplateResponseArgs      
- AcceleratorConfig Pulumi.Google Native. Notebooks. V1. Inputs. Scheduler Accelerator Config Response 
- Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImage stringUri 
- Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters Pulumi.Google Native. Notebooks. V1. Inputs. Dataproc Parameters Response 
- Parameters used in Dataproc JobType executions.
- InputNotebook stringFile 
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name}Ex:gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType string
- The type of Job to be used on this execution.
- KernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels Dictionary<string, string>
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- MasterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTieris set toCUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: -n1-standard-4-n1-standard-8-n1-standard-16-n1-standard-32-n1-standard-64-n1-standard-96-n1-highmem-2-n1-highmem-4-n1-highmem-8-n1-highmem-16-n1-highmem-32-n1-highmem-64-n1-highmem-96-n1-highcpu-16-n1-highcpu-32-n1-highcpu-64-n1-highcpu-96Alternatively, you can use the following legacy machine types: -standard-large_model-complex_model_s-complex_model_m-complex_model_l-standard_gpu-complex_model_m_gpu-complex_model_l_gpu-standard_p100-complex_model_m_p100-standard_v100-large_model_v100-complex_model_m_v100-complex_model_l_v100Finally, if you want to use a TPU for training, specifycloud_tpuin this field. Learn more about the special configuration options for training with TPU.
- OutputNotebook stringFolder 
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder}Ex:gs://notebook_user/scheduled_notebooks
- Parameters string
- Parameters used within the 'input_notebook_file' notebook.
- ParamsYaml stringFile 
- Parameters to be overridden in the notebook during execution. Ref https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specifying parameters in the input notebook and pass them here in an YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ScaleTier string
- Scale tier of the hardware used for notebook execution. DEPRECATED Will be discontinued. As right now only CUSTOM is supported.
- ServiceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAspermission for the specified service account.
- Tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAi Pulumi.Parameters Google Native. Notebooks. V1. Inputs. Vertex AIParameters Response 
- Parameters used in Vertex AI JobType executions.
- AcceleratorConfig SchedulerAccelerator Config Response 
- Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImage stringUri 
- Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters DataprocParameters Response 
- Parameters used in Dataproc JobType executions.
- InputNotebook stringFile 
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name}Ex:gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType string
- The type of Job to be used on this execution.
- KernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels map[string]string
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- MasterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTieris set toCUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: -n1-standard-4-n1-standard-8-n1-standard-16-n1-standard-32-n1-standard-64-n1-standard-96-n1-highmem-2-n1-highmem-4-n1-highmem-8-n1-highmem-16-n1-highmem-32-n1-highmem-64-n1-highmem-96-n1-highcpu-16-n1-highcpu-32-n1-highcpu-64-n1-highcpu-96Alternatively, you can use the following legacy machine types: -standard-large_model-complex_model_s-complex_model_m-complex_model_l-standard_gpu-complex_model_m_gpu-complex_model_l_gpu-standard_p100-complex_model_m_p100-standard_v100-large_model_v100-complex_model_m_v100-complex_model_l_v100Finally, if you want to use a TPU for training, specifycloud_tpuin this field. Learn more about the special configuration options for training with TPU.
- OutputNotebook stringFolder 
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder}Ex:gs://notebook_user/scheduled_notebooks
- Parameters string
- Parameters used within the 'input_notebook_file' notebook.
- ParamsYaml stringFile 
- Parameters to be overridden in the notebook during execution. Ref https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specifying parameters in the input notebook and pass them here in an YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ScaleTier string
- Scale tier of the hardware used for notebook execution. DEPRECATED Will be discontinued. As right now only CUSTOM is supported.
- ServiceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAspermission for the specified service account.
- Tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAi VertexParameters AIParameters Response 
- Parameters used in Vertex AI JobType executions.
- acceleratorConfig SchedulerAccelerator Config Response 
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImage StringUri 
- Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParameters Response 
- Parameters used in Dataproc JobType executions.
- inputNotebook StringFile 
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name}Ex:gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType String
- The type of Job to be used on this execution.
- kernelSpec String
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String,String>
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType String
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTieris set toCUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: -n1-standard-4-n1-standard-8-n1-standard-16-n1-standard-32-n1-standard-64-n1-standard-96-n1-highmem-2-n1-highmem-4-n1-highmem-8-n1-highmem-16-n1-highmem-32-n1-highmem-64-n1-highmem-96-n1-highcpu-16-n1-highcpu-32-n1-highcpu-64-n1-highcpu-96Alternatively, you can use the following legacy machine types: -standard-large_model-complex_model_s-complex_model_m-complex_model_l-standard_gpu-complex_model_m_gpu-complex_model_l_gpu-standard_p100-complex_model_m_p100-standard_v100-large_model_v100-complex_model_m_v100-complex_model_l_v100Finally, if you want to use a TPU for training, specifycloud_tpuin this field. Learn more about the special configuration options for training with TPU.
- outputNotebook StringFolder 
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder}Ex:gs://notebook_user/scheduled_notebooks
- parameters String
- Parameters used within the 'input_notebook_file' notebook.
- paramsYaml StringFile 
- Parameters to be overridden in the notebook during execution. Ref https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specifying parameters in the input notebook and pass them here in an YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier String
- Scale tier of the hardware used for notebook execution. DEPRECATED Will be discontinued. As right now only CUSTOM is supported.
- serviceAccount String
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAspermission for the specified service account.
- tensorboard String
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAi VertexParameters AIParameters Response 
- Parameters used in Vertex AI JobType executions.
- acceleratorConfig SchedulerAccelerator Config Response 
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImage stringUri 
- Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParameters Response 
- Parameters used in Dataproc JobType executions.
- inputNotebook stringFile 
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name}Ex:gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType string
- The type of Job to be used on this execution.
- kernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels {[key: string]: string}
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTieris set toCUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: -n1-standard-4-n1-standard-8-n1-standard-16-n1-standard-32-n1-standard-64-n1-standard-96-n1-highmem-2-n1-highmem-4-n1-highmem-8-n1-highmem-16-n1-highmem-32-n1-highmem-64-n1-highmem-96-n1-highcpu-16-n1-highcpu-32-n1-highcpu-64-n1-highcpu-96Alternatively, you can use the following legacy machine types: -standard-large_model-complex_model_s-complex_model_m-complex_model_l-standard_gpu-complex_model_m_gpu-complex_model_l_gpu-standard_p100-complex_model_m_p100-standard_v100-large_model_v100-complex_model_m_v100-complex_model_l_v100Finally, if you want to use a TPU for training, specifycloud_tpuin this field. Learn more about the special configuration options for training with TPU.
- outputNotebook stringFolder 
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder}Ex:gs://notebook_user/scheduled_notebooks
- parameters string
- Parameters used within the 'input_notebook_file' notebook.
- paramsYaml stringFile 
- Parameters to be overridden in the notebook during execution. Ref https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specifying parameters in the input notebook and pass them here in an YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier string
- Scale tier of the hardware used for notebook execution. DEPRECATED Will be discontinued. As right now only CUSTOM is supported.
- serviceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAspermission for the specified service account.
- tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAi VertexParameters AIParameters Response 
- Parameters used in Vertex AI JobType executions.
- accelerator_config SchedulerAccelerator Config Response 
- Configuration (count and accelerator type) for hardware running notebook execution.
- container_image_ struri 
- Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataproc_parameters DataprocParameters Response 
- Parameters used in Dataproc JobType executions.
- input_notebook_ strfile 
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name}Ex:gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- job_type str
- The type of Job to be used on this execution.
- kernel_spec str
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Mapping[str, str]
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- master_type str
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTieris set toCUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: -n1-standard-4-n1-standard-8-n1-standard-16-n1-standard-32-n1-standard-64-n1-standard-96-n1-highmem-2-n1-highmem-4-n1-highmem-8-n1-highmem-16-n1-highmem-32-n1-highmem-64-n1-highmem-96-n1-highcpu-16-n1-highcpu-32-n1-highcpu-64-n1-highcpu-96Alternatively, you can use the following legacy machine types: -standard-large_model-complex_model_s-complex_model_m-complex_model_l-standard_gpu-complex_model_m_gpu-complex_model_l_gpu-standard_p100-complex_model_m_p100-standard_v100-large_model_v100-complex_model_m_v100-complex_model_l_v100Finally, if you want to use a TPU for training, specifycloud_tpuin this field. Learn more about the special configuration options for training with TPU.
- output_notebook_ strfolder 
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder}Ex:gs://notebook_user/scheduled_notebooks
- parameters str
- Parameters used within the 'input_notebook_file' notebook.
- params_yaml_ strfile 
- Parameters to be overridden in the notebook during execution. Ref https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specifying parameters in the input notebook and pass them here in an YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scale_tier str
- Scale tier of the hardware used for notebook execution. DEPRECATED Will be discontinued. As right now only CUSTOM is supported.
- service_account str
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAspermission for the specified service account.
- tensorboard str
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertex_ai_ Vertexparameters AIParameters Response 
- Parameters used in Vertex AI JobType executions.
- acceleratorConfig Property Map
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImage StringUri 
- Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters Property Map
- Parameters used in Dataproc JobType executions.
- inputNotebook StringFile 
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name}Ex:gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType String
- The type of Job to be used on this execution.
- kernelSpec String
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String>
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType String
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTieris set toCUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: -n1-standard-4-n1-standard-8-n1-standard-16-n1-standard-32-n1-standard-64-n1-standard-96-n1-highmem-2-n1-highmem-4-n1-highmem-8-n1-highmem-16-n1-highmem-32-n1-highmem-64-n1-highmem-96-n1-highcpu-16-n1-highcpu-32-n1-highcpu-64-n1-highcpu-96Alternatively, you can use the following legacy machine types: -standard-large_model-complex_model_s-complex_model_m-complex_model_l-standard_gpu-complex_model_m_gpu-complex_model_l_gpu-standard_p100-complex_model_m_p100-standard_v100-large_model_v100-complex_model_m_v100-complex_model_l_v100Finally, if you want to use a TPU for training, specifycloud_tpuin this field. Learn more about the special configuration options for training with TPU.
- outputNotebook StringFolder 
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder}Ex:gs://notebook_user/scheduled_notebooks
- parameters String
- Parameters used within the 'input_notebook_file' notebook.
- paramsYaml StringFile 
- Parameters to be overridden in the notebook during execution. Ref https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specifying parameters in the input notebook and pass them here in an YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier String
- Scale tier of the hardware used for notebook execution. DEPRECATED Will be discontinued. As right now only CUSTOM is supported.
- serviceAccount String
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAspermission for the specified service account.
- tensorboard String
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAi Property MapParameters 
- Parameters used in Vertex AI JobType executions.
ExecutionTemplateScaleTier, ExecutionTemplateScaleTierArgs        
- ScaleTier Unspecified 
- SCALE_TIER_UNSPECIFIEDUnspecified Scale Tier.
- Basic
- BASICA single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- Standard1
- STANDARD_1Many workers and a few parameter servers.
- Premium1
- PREMIUM_1A large number of workers with many parameter servers.
- BasicGpu 
- BASIC_GPUA single worker instance with a K80 GPU.
- BasicTpu 
- BASIC_TPUA single worker instance with a Cloud TPU.
- Custom
- CUSTOMThe CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterTypeto specify the type of machine to use for your master node. This is the only required setting.
- ExecutionTemplate Scale Tier Scale Tier Unspecified 
- SCALE_TIER_UNSPECIFIEDUnspecified Scale Tier.
- ExecutionTemplate Scale Tier Basic 
- BASICA single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- ExecutionTemplate Scale Tier Standard1 
- STANDARD_1Many workers and a few parameter servers.
- ExecutionTemplate Scale Tier Premium1 
- PREMIUM_1A large number of workers with many parameter servers.
- ExecutionTemplate Scale Tier Basic Gpu 
- BASIC_GPUA single worker instance with a K80 GPU.
- ExecutionTemplate Scale Tier Basic Tpu 
- BASIC_TPUA single worker instance with a Cloud TPU.
- ExecutionTemplate Scale Tier Custom 
- CUSTOMThe CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterTypeto specify the type of machine to use for your master node. This is the only required setting.
- ScaleTierUnspecified
- SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- Basic
- BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- Standard1
- STANDARD_1: Many workers and a few parameter servers.
- Premium1
- PREMIUM_1: A large number of workers with many parameter servers.
- BasicGpu
- BASIC_GPU: A single worker instance with a K80 GPU.
- BasicTpu
- BASIC_TPU: A single worker instance with a Cloud TPU.
- Custom
- CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
- ScaleTierUnspecified
- SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- Basic
- BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- Standard1
- STANDARD_1: Many workers and a few parameter servers.
- Premium1
- PREMIUM_1: A large number of workers with many parameter servers.
- BasicGpu
- BASIC_GPU: A single worker instance with a K80 GPU.
- BasicTpu
- BASIC_TPU: A single worker instance with a Cloud TPU.
- Custom
- CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
- SCALE_TIER_UNSPECIFIED
- SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- BASIC
- BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- STANDARD1
- STANDARD_1: Many workers and a few parameter servers.
- PREMIUM1
- PREMIUM_1: A large number of workers with many parameter servers.
- BASIC_GPU
- BASIC_GPU: A single worker instance with a K80 GPU.
- BASIC_TPU
- BASIC_TPU: A single worker instance with a Cloud TPU.
- CUSTOM
- CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
- "SCALE_TIER_UNSPECIFIED"
- SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- "BASIC"
- BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- "STANDARD_1"
- STANDARD_1: Many workers and a few parameter servers.
- "PREMIUM_1"
- PREMIUM_1: A large number of workers with many parameter servers.
- "BASIC_GPU"
- BASIC_GPU: A single worker instance with a K80 GPU.
- "BASIC_TPU"
- BASIC_TPU: A single worker instance with a Cloud TPU.
- "CUSTOM"
- CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
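The CUSTOM rule above (masterType is required only for the CUSTOM tier) can be sketched as a small validation helper. This is an illustrative sketch, not part of the Pulumi SDK: the helper name is hypothetical, and the plain dict mirrors the camelCase property shape shown in the YAML form of the resource.

```python
from typing import Optional


def build_execution_template(scale_tier: str,
                             master_type: Optional[str] = None) -> dict:
    """Sketch of an ExecutionTemplate fragment (hypothetical helper).

    Enforces the documented rule: CUSTOM is the only tier that
    requires masterType to be set.
    """
    valid_tiers = {
        "SCALE_TIER_UNSPECIFIED", "BASIC", "STANDARD_1", "PREMIUM_1",
        "BASIC_GPU", "BASIC_TPU", "CUSTOM",
    }
    if scale_tier not in valid_tiers:
        raise ValueError(f"unknown scale tier: {scale_tier}")
    if scale_tier == "CUSTOM" and not master_type:
        raise ValueError("the CUSTOM tier requires masterType to be set")

    template = {"scaleTier": scale_tier}
    if master_type:
        template["masterType"] = master_type
    return template
```

For example, `build_execution_template("CUSTOM", "n1-standard-4")` succeeds, while omitting the machine type with the CUSTOM tier raises an error.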
SchedulerAcceleratorConfig, SchedulerAcceleratorConfigArgs      
- CoreCount string
- Count of cores of this accelerator.
- Type
Pulumi.GoogleNative.Notebooks.V1.SchedulerAcceleratorConfigType
- Type of this accelerator.
- CoreCount string
- Count of cores of this accelerator.
- Type
SchedulerAcceleratorConfigType
- Type of this accelerator.
- coreCount String
- Count of cores of this accelerator.
- type
SchedulerAcceleratorConfigType
- Type of this accelerator.
- coreCount string
- Count of cores of this accelerator.
- type
SchedulerAcceleratorConfigType
- Type of this accelerator.
- core_count str
- Count of cores of this accelerator.
- type
SchedulerAcceleratorConfigType
- Type of this accelerator.
- coreCount String
- Count of cores of this accelerator.
- type "SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED" | "NVIDIA_TESLA_K80" | "NVIDIA_TESLA_P100" | "NVIDIA_TESLA_V100" | "NVIDIA_TESLA_P4" | "NVIDIA_TESLA_T4" | "NVIDIA_TESLA_A100" | "TPU_V2" | "TPU_V3"
- Type of this accelerator.
SchedulerAcceleratorConfigResponse, SchedulerAcceleratorConfigResponseArgs        
- core_count str
- Count of cores of this accelerator.
- type str
- Type of this accelerator.
SchedulerAcceleratorConfigType, SchedulerAcceleratorConfigTypeArgs        
- SchedulerAcceleratorTypeUnspecified
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- NvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- TpuV2
- TPU_V2: TPU v2.
- TpuV3
- TPU_V3: TPU v3.
- SchedulerAcceleratorConfigTypeSchedulerAcceleratorTypeUnspecified
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- SchedulerAcceleratorConfigTypeTpuV2
- TPU_V2: TPU v2.
- SchedulerAcceleratorConfigTypeTpuV3
- TPU_V3: TPU v3.
- SchedulerAcceleratorTypeUnspecified
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- NvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- TpuV2
- TPU_V2: TPU v2.
- TpuV3
- TPU_V3: TPU v3.
- SchedulerAcceleratorTypeUnspecified
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- NvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- TpuV2
- TPU_V2: TPU v2.
- TpuV3
- TPU_V3: TPU v3.
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- NVIDIA_TESLA_K80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NVIDIA_TESLA_P100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NVIDIA_TESLA_V100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NVIDIA_TESLA_P4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NVIDIA_TESLA_T4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NVIDIA_TESLA_A100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- TPU_V2
- TPU_V2: TPU v2.
- TPU_V3
- TPU_V3: TPU v3.
- "SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED"
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- "NVIDIA_TESLA_K80"
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- "NVIDIA_TESLA_P100"
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- "NVIDIA_TESLA_V100"
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- "NVIDIA_TESLA_P4"
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- "NVIDIA_TESLA_T4"
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- "NVIDIA_TESLA_A100"
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- "TPU_V2"
- TPU_V2: TPU v2.
- "TPU_V3"
- TPU_V3: TPU v3.
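A SchedulerAcceleratorConfig value pairs one of the accelerator type strings above with a core count. The sketch below builds that shape as a plain dict (as it would appear in the YAML form of the resource); the helper name is hypothetical and not part of the Pulumi SDK.

```python
# The accelerator type enum values documented above.
ACCELERATOR_TYPES = {
    "SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED",
    "NVIDIA_TESLA_K80", "NVIDIA_TESLA_P100", "NVIDIA_TESLA_V100",
    "NVIDIA_TESLA_P4", "NVIDIA_TESLA_T4", "NVIDIA_TESLA_A100",
    "TPU_V2", "TPU_V3",
}


def accelerator_config(acc_type: str, core_count: str) -> dict:
    """Sketch of a SchedulerAcceleratorConfig fragment (hypothetical helper).

    Note that coreCount is a string in this API, not an integer.
    """
    if acc_type not in ACCELERATOR_TYPES:
        raise ValueError(f"unknown accelerator type: {acc_type}")
    return {"type": acc_type, "coreCount": core_count}
```

For example, `accelerator_config("NVIDIA_TESLA_T4", "1")` yields the fragment for a single T4 GPU.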
VertexAIParameters, VertexAIParametersArgs    
- Env Dictionary<string, string>
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- Env map[string]string
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String,String>
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env {[key: string]: string}
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Mapping[str, str]
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network str
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String>
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
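The two VertexAIParameters constraints above (at most 100 environment variables, and a peering network of the form projects/{project}/global/networks/{network}) can be sketched as a small checked builder. The helper name and regex are illustrative, not part of the Pulumi SDK; server-side validation may be stricter.

```python
import re

# Assumed pattern: numeric project number followed by a network name,
# matching the documented form projects/{project}/global/networks/{network}.
_NETWORK_RE = re.compile(r"^projects/\d+/global/networks/[\w-]+$")


def vertex_ai_parameters(env: dict, network: str = "") -> dict:
    """Sketch of a VertexAIParameters fragment (hypothetical helper)."""
    if len(env) > 100:
        raise ValueError("at most 100 environment variables can be specified")
    if network and not _NETWORK_RE.match(network):
        raise ValueError(
            "network must match projects/{project}/global/networks/{network}")
    params = {"env": dict(env)}
    if network:  # optional: when omitted, the job is not peered
        params["network"] = network
    return params
```

For example, `vertex_ai_parameters({"GCP_BUCKET": "gs://my-bucket/samples/"}, "projects/12345/global/networks/myVPC")` produces a fragment with both fields set.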
VertexAIParametersResponse, VertexAIParametersResponseArgs      
- Env Dictionary<string, string>
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- Env map[string]string
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String,String>
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env {[key: string]: string}
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Mapping[str, str]
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network str
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String>
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0