Google Cloud Native is in preview. Google Cloud Classic is fully supported.
Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi
google-native.notebooks/v1.getExecution
Gets details of an execution.
Using getExecution
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getExecution(args: GetExecutionArgs, opts?: InvokeOptions): Promise<GetExecutionResult>
function getExecutionOutput(args: GetExecutionOutputArgs, opts?: InvokeOptions): Output<GetExecutionResult>
def get_execution(execution_id: Optional[str] = None,
                  location: Optional[str] = None,
                  project: Optional[str] = None,
                  opts: Optional[InvokeOptions] = None) -> GetExecutionResult
def get_execution_output(execution_id: Optional[pulumi.Input[str]] = None,
                  location: Optional[pulumi.Input[str]] = None,
                  project: Optional[pulumi.Input[str]] = None,
                  opts: Optional[InvokeOptions] = None) -> Output[GetExecutionResult]
func LookupExecution(ctx *Context, args *LookupExecutionArgs, opts ...InvokeOption) (*LookupExecutionResult, error)
func LookupExecutionOutput(ctx *Context, args *LookupExecutionOutputArgs, opts ...InvokeOption) LookupExecutionResultOutput
> Note: This function is named LookupExecution in the Go SDK.
public static class GetExecution 
{
    public static Task<GetExecutionResult> InvokeAsync(GetExecutionArgs args, InvokeOptions? opts = null)
    public static Output<GetExecutionResult> Invoke(GetExecutionInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetExecutionResult> getExecution(GetExecutionArgs args, InvokeOptions options)
public static Output<GetExecutionResult> getExecution(GetExecutionArgs args, InvokeOptions options)
fn::invoke:
  function: google-native:notebooks/v1:getExecution
  arguments:
    # arguments dictionary
The following arguments are supported:
- ExecutionId string
- Location string
- Project string
- ExecutionId string
- Location string
- Project string
- executionId String
- location String
- project String
- executionId string
- location string
- project string
- execution_id str
- location str
- project str
- executionId String
- location String
- project String
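Putting the three arguments together in the YAML form shown earlier, a complete invocation might look like the following sketch (the project, location, and execution ID values are placeholders):

```yaml
variables:
  execution:
    fn::invoke:
      function: google-native:notebooks/v1:getExecution
      arguments:
        executionId: my-execution   # placeholder
        location: us-central1       # placeholder
        project: my-project         # placeholder
outputs:
  state: ${execution.state}
```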
getExecution Result
The following output properties are available:
- CreateTime string
- Time the Execution was instantiated.
- Description string
- A brief description of this execution.
- DisplayName string
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Outputs.ExecutionTemplateResponse
- Execute metadata including name, hardware spec, region, labels, etc.
- JobUri string
- The URI of the external job used to execute the notebook.
- Name string
- The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- OutputNotebookFile string
- Output notebook file generated by this execution.
- State string
- State of the underlying AI Platform job.
- UpdateTime string
- Time the Execution was last updated.
- CreateTime string
- Time the Execution was instantiated.
- Description string
- A brief description of this execution.
- DisplayName string
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- ExecutionTemplate ExecutionTemplateResponse
- Execute metadata including name, hardware spec, region, labels, etc.
- JobUri string
- The URI of the external job used to execute the notebook.
- Name string
- The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- OutputNotebookFile string
- Output notebook file generated by this execution.
- State string
- State of the underlying AI Platform job.
- UpdateTime string
- Time the Execution was last updated.
- createTime String
- Time the Execution was instantiated.
- description String
- A brief description of this execution.
- displayName String
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- executionTemplate ExecutionTemplateResponse
- Execute metadata including name, hardware spec, region, labels, etc.
- jobUri String
- The URI of the external job used to execute the notebook.
- name String
- The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- outputNotebookFile String
- Output notebook file generated by this execution.
- state String
- State of the underlying AI Platform job.
- updateTime String
- Time the Execution was last updated.
- createTime string
- Time the Execution was instantiated.
- description string
- A brief description of this execution.
- displayName string
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- executionTemplate ExecutionTemplateResponse
- Execute metadata including name, hardware spec, region, labels, etc.
- jobUri string
- The URI of the external job used to execute the notebook.
- name string
- The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- outputNotebookFile string
- Output notebook file generated by this execution.
- state string
- State of the underlying AI Platform job.
- updateTime string
- Time the Execution was last updated.
- create_time str
- Time the Execution was instantiated.
- description str
- A brief description of this execution.
- display_name str
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- execution_template ExecutionTemplateResponse
- Execute metadata including name, hardware spec, region, labels, etc.
- job_uri str
- The URI of the external job used to execute the notebook.
- name str
- The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- output_notebook_file str
- Output notebook file generated by this execution.
- state str
- State of the underlying AI Platform job.
- update_time str
- Time the Execution was last updated.
- createTime String
- Time the Execution was instantiated.
- description String
- A brief description of this execution.
- displayName String
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- executionTemplate Property Map
- Execute metadata including name, hardware spec, region, labels, etc.
- jobUri String
- The URI of the external job used to execute the notebook.
- name String
- The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- outputNotebookFile String
- Output notebook file generated by this execution.
- state String
- State of the underlying AI Platform job.
- updateTime String
- Time the Execution was last updated.
Supporting Types
DataprocParametersResponse  
- Cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- Cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster str
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
ExecutionTemplateResponse  
- AcceleratorConfig Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfigResponse
- Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParametersResponse
- Parameters used in Dataproc JobType executions.
- InputNotebookFile string
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType string
- The type of Job to be used on this execution.
- KernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels Dictionary<string, string>
- Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise it is an immediate execution and the labels will include 'nbs-immediate'. Use labels to efficiently index between various types of executions.
- MasterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- OutputNotebookFolder string
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- Parameters string
- Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specify parameters in the input notebook and pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ScaleTier string
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently only CUSTOM is supported.
- ServiceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParametersResponse
- Parameters used in Vertex AI JobType executions.
- AcceleratorConfig SchedulerAcceleratorConfigResponse
- Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters DataprocParametersResponse
- Parameters used in Dataproc JobType executions.
- InputNotebookFile string
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType string
- The type of Job to be used on this execution.
- KernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels map[string]string
- Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise it is an immediate execution and the labels will include 'nbs-immediate'. Use labels to efficiently index between various types of executions.
- MasterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- OutputNotebookFolder string
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- Parameters string
- Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specify parameters in the input notebook and pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ScaleTier string
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently only CUSTOM is supported.
- ServiceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters VertexAIParametersResponse
- Parameters used in Vertex AI JobType executions.
- acceleratorConfig SchedulerAcceleratorConfigResponse
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParametersResponse
- Parameters used in Dataproc JobType executions.
- inputNotebookFile String
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType String
- The type of Job to be used on this execution.
- kernelSpec String
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String,String>
- Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise it is an immediate execution and the labels will include 'nbs-immediate'. Use labels to efficiently index between various types of executions.
- masterType String
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder String
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- parameters String
- Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specify parameters in the input notebook and pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier String
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently only CUSTOM is supported.
- serviceAccount String
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParametersResponse
- Parameters used in Vertex AI JobType executions.
- acceleratorConfig SchedulerAcceleratorConfigResponse
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri string
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParametersResponse
- Parameters used in Dataproc JobType executions.
- inputNotebookFile string
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType string
- The type of Job to be used on this execution.
- kernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels {[key: string]: string}
- Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise it is an immediate execution and the labels will include 'nbs-immediate'. Use labels to efficiently index between various types of executions.
- masterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder string
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- parameters string
- Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile string
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specify parameters in the input notebook and pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier string
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently only CUSTOM is supported.
- serviceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParametersResponse
- Parameters used in Vertex AI JobType executions.
- accelerator_config SchedulerAcceleratorConfigResponse
- Configuration (count and accelerator type) for hardware running notebook execution.
- container_image_uri str
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataproc_parameters DataprocParametersResponse
- Parameters used in Dataproc JobType executions.
- input_notebook_file str
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- job_type str
- The type of Job to be used on this execution.
- kernel_spec str
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Mapping[str, str]
- Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise it is an immediate execution and the labels will include 'nbs-immediate'. Use labels to efficiently index between various types of executions.
- master_type str
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- output_notebook_folder str
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- parameters str
- Parameters used within the 'input_notebook_file' notebook.
- params_yaml_file str
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specify parameters in the input notebook and pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scale_tier str
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently only CUSTOM is supported.
- service_account str
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard str
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertex_ai_parameters VertexAIParametersResponse
- Parameters used in Vertex AI JobType executions.
- acceleratorConfig Property Map
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters Property Map
- Parameters used in Dataproc JobType executions.
- inputNotebookFile String
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType String
- The type of Job to be used on this execution.
- kernelSpec String
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String>
- Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise it is an immediate execution and the labels will include 'nbs-immediate'. Use labels to efficiently index between various types of executions.
- masterType String
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder String
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- parameters String
- Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specify parameters in the input notebook and pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier String
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently only CUSTOM is supported.
- serviceAccount String
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters Property Map
- Parameters used in Vertex AI JobType executions.
SchedulerAcceleratorConfigResponse   
- core_count str
- Count of cores of this accelerator.
- type str
- Type of this accelerator.
VertexAIParametersResponse  
- Env Dictionary<string, string>
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- Env map[string]string
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String,String>
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env {[key: string]: string}
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Mapping[str, str]
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network str
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String>
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
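The constraints documented above (at most 100 environment variables, and a fully qualified network name built from a project number and a network name) can be sketched as a small helper. The function name and return shape here are illustrative only, not part of the Pulumi SDK:

```python
# Hypothetical helper illustrating the VertexAIParameters constraints
# described above; not part of the Pulumi SDK.

def vertex_ai_parameters(project_number: int, network_name: str,
                         env: dict[str, str]) -> dict:
    # At most 100 environment variables can be specified.
    if len(env) > 100:
        raise ValueError("at most 100 environment variables are allowed")
    return {
        # Format: projects/{project}/global/networks/{network}
        "network": f"projects/{project_number}/global/networks/{network_name}",
        "env": dict(env),
    }

params = vertex_ai_parameters(12345, "myVPC",
                              {"GCP_BUCKET": "gs://my-bucket/samples/"})
```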
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0