Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.notebooks/v1.Schedule
Creates a new Scheduled Notebook in a given project and location. Auto-naming is currently not supported for this resource.
Create Schedule Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Schedule(name: string, args: ScheduleArgs, opts?: CustomResourceOptions);
@overload
def Schedule(resource_name: str,
             args: ScheduleArgs,
             opts: Optional[ResourceOptions] = None)
@overload
def Schedule(resource_name: str,
             opts: Optional[ResourceOptions] = None,
             schedule_id: Optional[str] = None,
             cron_schedule: Optional[str] = None,
             description: Optional[str] = None,
             execution_template: Optional[ExecutionTemplateArgs] = None,
             location: Optional[str] = None,
             project: Optional[str] = None,
             state: Optional[ScheduleState] = None,
             time_zone: Optional[str] = None)
func NewSchedule(ctx *Context, name string, args ScheduleArgs, opts ...ResourceOption) (*Schedule, error)
public Schedule(string name, ScheduleArgs args, CustomResourceOptions? opts = null)
public Schedule(String name, ScheduleArgs args)
public Schedule(String name, ScheduleArgs args, CustomResourceOptions options)
type: google-native:notebooks/v1:Schedule
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args ScheduleArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args ScheduleArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args ScheduleArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args ScheduleArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args ScheduleArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var examplescheduleResourceResourceFromNotebooksv1 = new GoogleNative.Notebooks.V1.Schedule("examplescheduleResourceResourceFromNotebooksv1", new()
{
    ScheduleId = "string",
    CronSchedule = "string",
    Description = "string",
    ExecutionTemplate = new GoogleNative.Notebooks.V1.Inputs.ExecutionTemplateArgs
    {
        Labels = 
        {
            { "string", "string" },
        },
        OutputNotebookFolder = "string",
        InputNotebookFile = "string",
        JobType = GoogleNative.Notebooks.V1.ExecutionTemplateJobType.JobTypeUnspecified,
        KernelSpec = "string",
        AcceleratorConfig = new GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfigArgs
        {
            CoreCount = "string",
            Type = GoogleNative.Notebooks.V1.SchedulerAcceleratorConfigType.SchedulerAcceleratorTypeUnspecified,
        },
        MasterType = "string",
        DataprocParameters = new GoogleNative.Notebooks.V1.Inputs.DataprocParametersArgs
        {
            Cluster = "string",
        },
        Parameters = "string",
        ParamsYamlFile = "string",
        ContainerImageUri = "string",
        ServiceAccount = "string",
        Tensorboard = "string",
        VertexAiParameters = new GoogleNative.Notebooks.V1.Inputs.VertexAIParametersArgs
        {
            Env = 
            {
                { "string", "string" },
            },
            Network = "string",
        },
    },
    Location = "string",
    Project = "string",
    State = GoogleNative.Notebooks.V1.ScheduleState.StateUnspecified,
    TimeZone = "string",
});
example, err := notebooks.NewSchedule(ctx, "examplescheduleResourceResourceFromNotebooksv1", &notebooks.ScheduleArgs{
	ScheduleId:   pulumi.String("string"),
	CronSchedule: pulumi.String("string"),
	Description:  pulumi.String("string"),
	ExecutionTemplate: &notebooks.ExecutionTemplateArgs{
		Labels: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
		OutputNotebookFolder: pulumi.String("string"),
		InputNotebookFile:    pulumi.String("string"),
		JobType:              notebooks.ExecutionTemplateJobTypeJobTypeUnspecified,
		KernelSpec:           pulumi.String("string"),
		AcceleratorConfig: &notebooks.SchedulerAcceleratorConfigArgs{
			CoreCount: pulumi.String("string"),
			Type:      notebooks.SchedulerAcceleratorConfigTypeSchedulerAcceleratorTypeUnspecified,
		},
		MasterType: pulumi.String("string"),
		DataprocParameters: &notebooks.DataprocParametersArgs{
			Cluster: pulumi.String("string"),
		},
		Parameters:        pulumi.String("string"),
		ParamsYamlFile:    pulumi.String("string"),
		ContainerImageUri: pulumi.String("string"),
		ServiceAccount:    pulumi.String("string"),
		Tensorboard:       pulumi.String("string"),
		VertexAiParameters: &notebooks.VertexAIParametersArgs{
			Env: pulumi.StringMap{
				"string": pulumi.String("string"),
			},
			Network: pulumi.String("string"),
		},
	},
	Location: pulumi.String("string"),
	Project:  pulumi.String("string"),
	State:    notebooks.ScheduleStateStateUnspecified,
	TimeZone: pulumi.String("string"),
})
var examplescheduleResourceResourceFromNotebooksv1 = new Schedule("examplescheduleResourceResourceFromNotebooksv1", ScheduleArgs.builder()
    .scheduleId("string")
    .cronSchedule("string")
    .description("string")
    .executionTemplate(ExecutionTemplateArgs.builder()
        .labels(Map.of("string", "string"))
        .outputNotebookFolder("string")
        .inputNotebookFile("string")
        .jobType("JOB_TYPE_UNSPECIFIED")
        .kernelSpec("string")
        .acceleratorConfig(SchedulerAcceleratorConfigArgs.builder()
            .coreCount("string")
            .type("SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED")
            .build())
        .masterType("string")
        .dataprocParameters(DataprocParametersArgs.builder()
            .cluster("string")
            .build())
        .parameters("string")
        .paramsYamlFile("string")
        .containerImageUri("string")
        .serviceAccount("string")
        .tensorboard("string")
        .vertexAiParameters(VertexAIParametersArgs.builder()
            .env(Map.of("string", "string"))
            .network("string")
            .build())
        .build())
    .location("string")
    .project("string")
    .state("STATE_UNSPECIFIED")
    .timeZone("string")
    .build());
exampleschedule_resource_resource_from_notebooksv1 = google_native.notebooks.v1.Schedule("examplescheduleResourceResourceFromNotebooksv1",
    schedule_id="string",
    cron_schedule="string",
    description="string",
    execution_template={
        "labels": {
            "string": "string",
        },
        "output_notebook_folder": "string",
        "input_notebook_file": "string",
        "job_type": google_native.notebooks.v1.ExecutionTemplateJobType.JOB_TYPE_UNSPECIFIED,
        "kernel_spec": "string",
        "accelerator_config": {
            "core_count": "string",
            "type": google_native.notebooks.v1.SchedulerAcceleratorConfigType.SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED,
        },
        "master_type": "string",
        "dataproc_parameters": {
            "cluster": "string",
        },
        "parameters": "string",
        "params_yaml_file": "string",
        "container_image_uri": "string",
        "service_account": "string",
        "tensorboard": "string",
        "vertex_ai_parameters": {
            "env": {
                "string": "string",
            },
            "network": "string",
        },
    },
    location="string",
    project="string",
    state=google_native.notebooks.v1.ScheduleState.STATE_UNSPECIFIED,
    time_zone="string")
const examplescheduleResourceResourceFromNotebooksv1 = new google_native.notebooks.v1.Schedule("examplescheduleResourceResourceFromNotebooksv1", {
    scheduleId: "string",
    cronSchedule: "string",
    description: "string",
    executionTemplate: {
        labels: {
            string: "string",
        },
        outputNotebookFolder: "string",
        inputNotebookFile: "string",
        jobType: google_native.notebooks.v1.ExecutionTemplateJobType.JobTypeUnspecified,
        kernelSpec: "string",
        acceleratorConfig: {
            coreCount: "string",
            type: google_native.notebooks.v1.SchedulerAcceleratorConfigType.SchedulerAcceleratorTypeUnspecified,
        },
        masterType: "string",
        dataprocParameters: {
            cluster: "string",
        },
        parameters: "string",
        paramsYamlFile: "string",
        containerImageUri: "string",
        serviceAccount: "string",
        tensorboard: "string",
        vertexAiParameters: {
            env: {
                string: "string",
            },
            network: "string",
        },
    },
    location: "string",
    project: "string",
    state: google_native.notebooks.v1.ScheduleState.StateUnspecified,
    timeZone: "string",
});
type: google-native:notebooks/v1:Schedule
properties:
    cronSchedule: string
    description: string
    executionTemplate:
        acceleratorConfig:
            coreCount: string
            type: SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED
        containerImageUri: string
        dataprocParameters:
            cluster: string
        inputNotebookFile: string
        jobType: JOB_TYPE_UNSPECIFIED
        kernelSpec: string
        labels:
            string: string
        masterType: string
        outputNotebookFolder: string
        parameters: string
        paramsYamlFile: string
        serviceAccount: string
        tensorboard: string
        vertexAiParameters:
            env:
                string: string
            network: string
    location: string
    project: string
    scheduleId: string
    state: STATE_UNSPECIFIED
    timeZone: string
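As a more concrete illustration than the placeholder values above, a schedule that executes a notebook every weekday at 09:00 New York time might look like the following YAML. The project, bucket, and schedule names are made-up examples, not values from this reference; consult the property descriptions below for which fields your deployment actually requires.

```yaml
type: google-native:notebooks/v1:Schedule
properties:
    scheduleId: weekday-sentiment-run
    cronSchedule: 0 9 * * MON-FRI
    timeZone: America/New_York
    location: us-central1
    project: my-project    # hypothetical project ID
    executionTemplate:
        scaleTier: CUSTOM
        masterType: n1-standard-4
        jobType: VERTEX_AI
        inputNotebookFile: gs://my-bucket/notebooks/sentiment.ipynb
        outputNotebookFolder: gs://my-bucket/notebook-runs
```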
Schedule Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
The Schedule resource accepts the following input properties:
- ScheduleId string
- Required. User-defined unique ID of this schedule.
- CronSchedule string
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- Description string
- A brief description of this environment.
- ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Inputs.ExecutionTemplate
- Notebook Execution Template corresponding to this schedule.
- Location string
- Project string
- State Pulumi.GoogleNative.Notebooks.V1.ScheduleState
- TimeZone string
- Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
- ScheduleId string
- Required. User-defined unique ID of this schedule.
- CronSchedule string
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- Description string
- A brief description of this environment.
- ExecutionTemplate ExecutionTemplateArgs
- Notebook Execution Template corresponding to this schedule.
- Location string
- Project string
- State ScheduleStateEnum
- TimeZone string
- Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
- scheduleId String
- Required. User-defined unique ID of this schedule.
- cronSchedule String
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- description String
- A brief description of this environment.
- executionTemplate ExecutionTemplate
- Notebook Execution Template corresponding to this schedule.
- location String
- project String
- state ScheduleState
- timeZone String
- Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
- scheduleId string
- Required. User-defined unique ID of this schedule.
- cronSchedule string
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- description string
- A brief description of this environment.
- executionTemplate ExecutionTemplate
- Notebook Execution Template corresponding to this schedule.
- location string
- project string
- state ScheduleState
- timeZone string
- Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
- schedule_id str
- Required. User-defined unique ID of this schedule.
- cron_schedule str
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- description str
- A brief description of this environment.
- execution_template ExecutionTemplateArgs
- Notebook Execution Template corresponding to this schedule.
- location str
- project str
- state ScheduleState
- time_zone str
- Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
- scheduleId String
- Required. User-defined unique ID of this schedule.
- cronSchedule String
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- description String
- A brief description of this environment.
- executionTemplate Property Map
- Notebook Execution Template corresponding to this schedule.
- location String
- project String
- state "STATE_UNSPECIFIED" | "ENABLED" | "PAUSED" | "DISABLED" | "UPDATE_FAILED" | "INITIALIZING" | "DELETING"
- timeZone String
- Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
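Because cron_schedule and time_zone are passed to the service as plain strings, a malformed value typically only surfaces at deploy time. A small pre-flight check like the following (an illustration using the Python standard library, not part of the SDK) can catch the most common mistakes locally before running pulumi up:

```python
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

def check_schedule_inputs(cron_schedule: str, time_zone: str) -> None:
    """Lightweight sanity checks for Schedule inputs (illustrative only)."""
    # The service expects a classic 5-field crontab line:
    # minute, hour, day of month, month, day of week.
    fields = cron_schedule.split()
    if len(fields) != 5:
        raise ValueError(f"expected 5 cron fields, got {len(fields)}: {cron_schedule!r}")
    # time_zone must be a tz database name, or the literal string "utc".
    if time_zone.lower() != "utc":
        try:
            ZoneInfo(time_zone)
        except ZoneInfoNotFoundError:
            raise ValueError(f"unknown tz database name: {time_zone!r}")

# Passes silently for a well-formed schedule (assuming tzdata is installed).
check_schedule_inputs("0 9 * * MON-FRI", "America/New_York")
```

This only checks shape, not semantics; the service remains the authority on which cron expressions and zone names it accepts.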
Outputs
All input properties are implicitly available as output properties. Additionally, the Schedule resource produces the following output properties:
- CreateTime string
- Time the schedule was created.
- DisplayName string
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- RecentExecutions List<Pulumi.GoogleNative.Notebooks.V1.Outputs.ExecutionResponse>
- The most recent execution names triggered from this schedule and their corresponding states.
- UpdateTime string
- Time the schedule was last updated.
- CreateTime string
- Time the schedule was created.
- DisplayName string
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- RecentExecutions []ExecutionResponse 
- The most recent execution names triggered from this schedule and their corresponding states.
- UpdateTime string
- Time the schedule was last updated.
- createTime String
- Time the schedule was created.
- displayName String
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- recentExecutions List<ExecutionResponse> 
- The most recent execution names triggered from this schedule and their corresponding states.
- updateTime String
- Time the schedule was last updated.
- createTime string
- Time the schedule was created.
- displayName string
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).
- id string
- The provider-assigned unique ID for this managed resource.
- name string
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- recentExecutions ExecutionResponse[] 
- The most recent execution names triggered from this schedule and their corresponding states.
- updateTime string
- Time the schedule was last updated.
- create_time str
- Time the schedule was created.
- display_name str
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).
- id str
- The provider-assigned unique ID for this managed resource.
- name str
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- recent_executions Sequence[ExecutionResponse] 
- The most recent execution names triggered from this schedule and their corresponding states.
- update_time str
- Time the schedule was last updated.
- createTime String
- Time the schedule was created.
- displayName String
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens (-), and underscores (_).
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- recentExecutions List<Property Map>
- The most recent execution names triggered from this schedule and their corresponding states.
- updateTime String
- Time the schedule was last updated.
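The name output always follows the fixed format projects/{project_id}/locations/{location}/schedules/{schedule_id}, so downstream code can recover its components. A minimal sketch (the resource name shown is a made-up example):

```python
def parse_schedule_name(name: str) -> dict:
    """Split a schedule resource name of the form
    projects/{project_id}/locations/{location}/schedules/{schedule_id}
    into its components."""
    parts = name.split("/")
    if (len(parts) != 6 or parts[0] != "projects"
            or parts[2] != "locations" or parts[4] != "schedules"):
        raise ValueError(f"unexpected schedule name format: {name!r}")
    return {"project": parts[1], "location": parts[3], "schedule_id": parts[5]}

info = parse_schedule_name(
    "projects/my-project/locations/us-central1/schedules/nightly-run")
# info["location"] → "us-central1"
```

In a Pulumi program the name output is an Output<string>, so this helper would be applied inside an apply callback rather than called directly.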
Supporting Types
DataprocParameters, DataprocParametersArgs    
- Cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- Cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster str
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
DataprocParametersResponse, DataprocParametersResponseArgs      
- Cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- Cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster string
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster str
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
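The cluster URI must follow the exact projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME} format shown above. A trivial helper (names here are hypothetical) makes it harder to mis-assemble:

```python
def dataproc_cluster_uri(project_id: str, region: str, cluster_name: str) -> str:
    """Build the cluster URI expected by DataprocParameters:
    projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}."""
    return f"projects/{project_id}/regions/{region}/clusters/{cluster_name}"

uri = dataproc_cluster_uri("my-project", "us-central1", "nb-exec-cluster")
# → "projects/my-project/regions/us-central1/clusters/nb-exec-cluster"
```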
ExecutionResponse, ExecutionResponseArgs    
- CreateTime string
- Time the Execution was instantiated.
- Description string
- A brief description of this execution.
- DisplayName string
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Inputs.ExecutionTemplateResponse
- execute metadata including name, hardware spec, region, labels, etc.
- JobUri string
- The URI of the external job used to execute the notebook.
- Name string
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- OutputNotebookFile string
- Output notebook file generated by this execution
- State string
- State of the underlying AI Platform job.
- UpdateTime string
- Time the Execution was last updated.
- CreateTime string
- Time the Execution was instantiated.
- Description string
- A brief description of this execution.
- DisplayName string
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- ExecutionTemplate ExecutionTemplateResponse
- execute metadata including name, hardware spec, region, labels, etc.
- JobUri string
- The URI of the external job used to execute the notebook.
- Name string
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- OutputNotebookFile string
- Output notebook file generated by this execution
- State string
- State of the underlying AI Platform job.
- UpdateTime string
- Time the Execution was last updated.
- createTime String
- Time the Execution was instantiated.
- description String
- A brief description of this execution.
- displayName String
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- executionTemplate ExecutionTemplateResponse
- execute metadata including name, hardware spec, region, labels, etc.
- jobUri String
- The URI of the external job used to execute the notebook.
- name String
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- outputNotebookFile String
- Output notebook file generated by this execution
- state String
- State of the underlying AI Platform job.
- updateTime String
- Time the Execution was last updated.
- createTime string
- Time the Execution was instantiated.
- description string
- A brief description of this execution.
- displayName string
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- executionTemplate ExecutionTemplateResponse
- execute metadata including name, hardware spec, region, labels, etc.
- jobUri string
- The URI of the external job used to execute the notebook.
- name string
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- outputNotebookFile string
- Output notebook file generated by this execution
- state string
- State of the underlying AI Platform job.
- updateTime string
- Time the Execution was last updated.
- create_time str
- Time the Execution was instantiated.
- description str
- A brief description of this execution.
- display_name str
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- execution_template ExecutionTemplateResponse
- execute metadata including name, hardware spec, region, labels, etc.
- job_uri str
- The URI of the external job used to execute the notebook.
- name str
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- output_notebook_file str
- Output notebook file generated by this execution
- state str
- State of the underlying AI Platform job.
- update_time str
- Time the Execution was last updated.
- createTime String
- Time the Execution was instantiated.
- description String
- A brief description of this execution.
- displayName String
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- executionTemplate Property Map
- execute metadata including name, hardware spec, region, labels, etc.
- jobUri String
- The URI of the external job used to execute the notebook.
- name String
- The resource name of the execute. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- outputNotebookFile String
- Output notebook file generated by this execution
- state String
- State of the underlying AI Platform job.
- updateTime String
- Time the Execution was last updated.
ExecutionTemplate, ExecutionTemplateArgs    
- ScaleTier Pulumi.GoogleNative.Notebooks.V1.ExecutionTemplateScaleTier
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently only CUSTOM is supported.
- AcceleratorConfig Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfig
- Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParameters
- Parameters used in Dataproc JobType executions.
- InputNotebookFile string
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType Pulumi.GoogleNative.Notebooks.V1.ExecutionTemplateJobType
- The type of Job to be used on this execution.
- KernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels Dictionary<string, string>
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- MasterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- OutputNotebookFolder string
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- Parameters string
- Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ServiceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParameters
- Parameters used in Vertex AI JobType executions.
- ScaleTier ExecutionTemplateScaleTier
- Scale tier of the hardware used for notebook execution. DEPRECATED. Will be discontinued; currently only CUSTOM is supported.
- AcceleratorConfig SchedulerAcceleratorConfig
- Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters DataprocParameters
- Parameters used in Dataproc JobType executions.
- InputNotebookFile string
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType ExecutionTemplateJobType
- The type of Job to be used on this execution.
- KernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels map[string]string
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- MasterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- OutputNotebookFolder string
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- Parameters string
- Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ServiceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters VertexAIParameters
- Parameters used in Vertex AI JobType executions.
- scaleTier ExecutionTemplateScaleTier
- Scale tier of the hardware used for notebook execution. DEPRECATED. Will be discontinued; currently only CUSTOM is supported.
- acceleratorConfig SchedulerAcceleratorConfig
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParameters
- Parameters used in Dataproc JobType executions.
- inputNotebookFile String
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType ExecutionTemplateJobType
- The type of Job to be used on this execution.
- kernelSpec String
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String,String>
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType String
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder String
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters String
- Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- serviceAccount String
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParameters
- Parameters used in Vertex AI JobType executions.
- scaleTier ExecutionTemplateScaleTier
- Scale tier of the hardware used for notebook execution. DEPRECATED. Will be discontinued; currently only CUSTOM is supported.
- acceleratorConfig SchedulerAcceleratorConfig
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri string
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParameters
- Parameters used in Dataproc JobType executions.
- inputNotebookFile string
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType ExecutionTemplateJobType
- The type of Job to be used on this execution.
- kernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels {[key: string]: string}
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder string
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters string
- Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile string
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- serviceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParameters
- Parameters used in Vertex AI JobType executions.
- scale_tier ExecutionTemplateScaleTier
- Scale tier of the hardware used for notebook execution. DEPRECATED. Will be discontinued; currently only CUSTOM is supported.
- accelerator_config SchedulerAcceleratorConfig
- Configuration (count and accelerator type) for hardware running notebook execution.
- container_image_uri str
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataproc_parameters DataprocParameters
- Parameters used in Dataproc JobType executions.
- input_notebook_file str
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- job_type ExecutionTemplateJobType
- The type of Job to be used on this execution.
- kernel_spec str
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Mapping[str, str]
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- master_type str
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- output_notebook_folder str
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters str
- Parameters used within the 'input_notebook_file' notebook.
- params_yaml_file str
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- service_account str
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard str
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertex_ai_parameters VertexAIParameters
- Parameters used in Vertex AI JobType executions.
- scaleTier "SCALE_TIER_UNSPECIFIED" | "BASIC" | "STANDARD_1" | "PREMIUM_1" | "BASIC_GPU" | "BASIC_TPU" | "CUSTOM"
- Scale tier of the hardware used for notebook execution. DEPRECATED. Will be discontinued; currently only CUSTOM is supported.
- acceleratorConfig Property Map
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters Property Map
- Parameters used in Dataproc JobType executions.
- inputNotebookFile String
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType "JOB_TYPE_UNSPECIFIED" | "VERTEX_AI" | "DATAPROC"
- The type of Job to be used on this execution.
- kernelSpec String
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String>
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType String
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder String
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters String
- Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- serviceAccount String
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters Property Map
- Parameters used in Vertex AI JobType executions.
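Taken together, the execution template fields above form a nested properties bag. As a hedged sketch only (the bucket, notebook paths, and service account below are hypothetical placeholders, and `validate_execution_template` is an illustrative helper, not part of the Pulumi API), the YAML/"Property Map" shape can be written in Python as a plain dict:

```python
# Sketch of an ExecutionTemplate properties bag, mirroring the
# YAML/"Property Map" field names documented above. All gs:// paths
# and the service account email are hypothetical examples.
execution_template = {
    "scaleTier": "CUSTOM",          # per the docs, only CUSTOM is currently supported
    "jobType": "VERTEX_AI",         # documented default job type for an execution
    "masterType": "n1-standard-4",  # required because scaleTier is CUSTOM
    "inputNotebookFile": "gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb",
    "outputNotebookFolder": "gs://notebook_user/scheduled_notebooks",
    "paramsYamlFile": "gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml",
    "serviceAccount": "notebook-runner@my-project.iam.gserviceaccount.com",
}


def validate_execution_template(tmpl: dict) -> None:
    """Check two invariants stated in the field docs above."""
    # Notebook and params paths must live in a Cloud Storage bucket.
    for key in ("inputNotebookFile", "outputNotebookFolder", "paramsYamlFile"):
        if key in tmpl and not tmpl[key].startswith("gs://"):
            raise ValueError(f"{key} must be a gs:// path")
    # masterType must be specified when scaleTier is set to CUSTOM.
    if tmpl.get("scaleTier") == "CUSTOM" and "masterType" not in tmpl:
        raise ValueError("masterType is required when scaleTier is CUSTOM")


validate_execution_template(execution_template)  # the sketch above passes
```

The same dict can be passed as the `executionTemplate` value in the YAML form of the resource, or translated field-by-field into the typed `ExecutionTemplateArgs` of any SDK.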
ExecutionTemplateJobType, ExecutionTemplateJobTypeArgs        
- JobTypeUnspecified
- JOB_TYPE_UNSPECIFIED: No type specified.
- VertexAi
- VERTEX_AI: Custom Job in aiplatform.googleapis.com. Default value for an execution.
- Dataproc
- DATAPROC: Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- ExecutionTemplateJobTypeJobTypeUnspecified
- JOB_TYPE_UNSPECIFIED: No type specified.
- ExecutionTemplateJobTypeVertexAi
- VERTEX_AI: Custom Job in aiplatform.googleapis.com. Default value for an execution.
- ExecutionTemplateJobTypeDataproc
- DATAPROC: Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- JobTypeUnspecified
- JOB_TYPE_UNSPECIFIED: No type specified.
- VertexAi
- VERTEX_AI: Custom Job in aiplatform.googleapis.com. Default value for an execution.
- Dataproc
- DATAPROC: Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- JobTypeUnspecified
- JOB_TYPE_UNSPECIFIED: No type specified.
- VertexAi
- VERTEX_AI: Custom Job in aiplatform.googleapis.com. Default value for an execution.
- Dataproc
- DATAPROC: Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- JOB_TYPE_UNSPECIFIED
- JOB_TYPE_UNSPECIFIED: No type specified.
- VERTEX_AI
- VERTEX_AI: Custom Job in aiplatform.googleapis.com. Default value for an execution.
- DATAPROC
- DATAPROC: Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- "JOB_TYPE_UNSPECIFIED"
- JOB_TYPE_UNSPECIFIED: No type specified.
- "VERTEX_AI"
- VERTEX_AI: Custom Job in aiplatform.googleapis.com. Default value for an execution.
- "DATAPROC"
- DATAPROC: Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
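Every SDK exposes the same three underlying string values for this enum. A small sketch of how one might normalize a raw job-type string before use (the helper `normalize_job_type` is illustrative, not part of the Pulumi API):

```python
from typing import Optional

# The three JobType string values shared by every SDK listed above.
JOB_TYPES = {"JOB_TYPE_UNSPECIFIED", "VERTEX_AI", "DATAPROC"}


def normalize_job_type(value: Optional[str]) -> str:
    """Return a concrete JobType, falling back to VERTEX_AI,
    the documented default value for an execution."""
    if value is None or value == "JOB_TYPE_UNSPECIFIED":
        return "VERTEX_AI"
    if value not in JOB_TYPES:
        raise ValueError(f"unknown JobType: {value!r}")
    return value
```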
ExecutionTemplateResponse, ExecutionTemplateResponseArgs      
- AcceleratorConfig Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfigResponse
- Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParametersResponse
- Parameters used in Dataproc JobType executions.
- InputNotebookFile string
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType string
- The type of Job to be used on this execution.
- KernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels Dictionary<string, string>
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- MasterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- OutputNotebookFolder string
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- Parameters string
- Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ScaleTier string
- Scale tier of the hardware used for notebook execution. DEPRECATED. Will be discontinued; currently only CUSTOM is supported.
- ServiceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParametersResponse
- Parameters used in Vertex AI JobType executions.
- AcceleratorConfig SchedulerAcceleratorConfigResponse
- Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters DataprocParametersResponse
- Parameters used in Dataproc JobType executions.
- InputNotebookFile string
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType string
- The type of Job to be used on this execution.
- KernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels map[string]string
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- MasterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- OutputNotebookFolder string
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- Parameters string
- Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ScaleTier string
- Scale tier of the hardware used for notebook execution. DEPRECATED. Will be discontinued; currently only CUSTOM is supported.
- ServiceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters VertexAIParametersResponse
- Parameters used in Vertex AI JobType executions.
- acceleratorConfig SchedulerAcceleratorConfigResponse
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParametersResponse
- Parameters used in Dataproc JobType executions.
- inputNotebookFile String
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType String
- The type of Job to be used on this execution.
- kernelSpec String
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String,String>
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType String
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder String
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters String
- Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier String
- Scale tier of the hardware used for notebook execution. DEPRECATED. Will be discontinued; currently only CUSTOM is supported.
- serviceAccount String
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParametersResponse
- Parameters used in Vertex AI JobType executions.
- acceleratorConfig SchedulerAcceleratorConfigResponse
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri string
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParametersResponse
- Parameters used in Dataproc JobType executions.
- inputNotebookFile string
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType string
- The type of Job to be used on this execution.
- kernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels {[key: string]: string}
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder string
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters string
- Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile string
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier string
- Scale tier of the hardware used for notebook execution. DEPRECATED. Will be discontinued; currently only CUSTOM is supported.
- serviceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParametersResponse
- Parameters used in Vertex AI JobType executions.
- accelerator_config SchedulerAccelerator Config Response 
- Configuration (count and accelerator type) for hardware running notebook execution.
- container_image_uri str
- Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataproc_parameters DataprocParameters Response 
- Parameters used in Dataproc JobType executions.
- input_notebook_file str
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- job_type str
- The type of Job to be used on this execution.
- kernel_spec str
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Mapping[str, str]
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- master_type str
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scale_tier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- output_notebook_folder str
- Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- parameters str
- Parameters used within the 'input_notebook_file' notebook.
- params_yaml_file str
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scale_tier str
- Scale tier of the hardware used for notebook execution. Deprecated: this field will be discontinued; currently only CUSTOM is supported.
- service_account str
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard str
- The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertex_ai_parameters VertexAIParametersResponse
- Parameters used in Vertex AI JobType executions.
- acceleratorConfig Property Map
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String
- Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters Property Map
- Parameters used in Dataproc JobType executions.
- inputNotebookFile String
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType String
- The type of Job to be used on this execution.
- kernelSpec String
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String>
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType String
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder String
- Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- parameters String
- Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier String
- Scale tier of the hardware used for notebook execution. Deprecated: this field will be discontinued; currently only CUSTOM is supported.
- serviceAccount String
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String
- The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters Property Map
- Parameters used in Vertex AI JobType executions.
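Tying the ExecutionTemplate properties above together, here is a minimal sketch of a Schedule in the YAML form shown in the constructor syntax. The project, schedule ID, and service account names are hypothetical placeholders; the bucket paths reuse the examples from the property descriptions.

```yaml
resources:
  sentimentSchedule:
    type: google-native:notebooks/v1:Schedule
    properties:
      scheduleId: sentiment-nightly        # required; auto-naming is not supported
      project: my-project                  # hypothetical project
      location: us-central1
      cronSchedule: "0 2 * * *"            # run daily at 02:00
      timeZone: America/New_York
      executionTemplate:
        scaleTier: CUSTOM                  # currently the only supported tier
        masterType: n1-standard-4
        inputNotebookFile: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
        outputNotebookFolder: gs://notebook_user/scheduled_notebooks
        paramsYamlFile: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
        serviceAccount: scheduler@my-project.iam.gserviceaccount.com
```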
ExecutionTemplateScaleTier, ExecutionTemplateScaleTierArgs        
- ScaleTierUnspecified
- SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- Basic
- BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- Standard1
- STANDARD_1: Many workers and a few parameter servers.
- Premium1
- PREMIUM_1: A large number of workers with many parameter servers.
- BasicGpu
- BASIC_GPU: A single worker instance with a K80 GPU.
- BasicTpu
- BASIC_TPU: A single worker instance with a Cloud TPU.
- Custom
- CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
- ExecutionTemplateScaleTierScaleTierUnspecified
- SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- ExecutionTemplateScaleTierBasic
- BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- ExecutionTemplateScaleTierStandard1
- STANDARD_1: Many workers and a few parameter servers.
- ExecutionTemplateScaleTierPremium1
- PREMIUM_1: A large number of workers with many parameter servers.
- ExecutionTemplateScaleTierBasicGpu
- BASIC_GPU: A single worker instance with a K80 GPU.
- ExecutionTemplateScaleTierBasicTpu
- BASIC_TPU: A single worker instance with a Cloud TPU.
- ExecutionTemplateScaleTierCustom
- CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
- ScaleTierUnspecified
- SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- Basic
- BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- Standard1
- STANDARD_1: Many workers and a few parameter servers.
- Premium1
- PREMIUM_1: A large number of workers with many parameter servers.
- BasicGpu
- BASIC_GPU: A single worker instance with a K80 GPU.
- BasicTpu
- BASIC_TPU: A single worker instance with a Cloud TPU.
- Custom
- CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
- ScaleTierUnspecified
- SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- Basic
- BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- Standard1
- STANDARD_1: Many workers and a few parameter servers.
- Premium1
- PREMIUM_1: A large number of workers with many parameter servers.
- BasicGpu
- BASIC_GPU: A single worker instance with a K80 GPU.
- BasicTpu
- BASIC_TPU: A single worker instance with a Cloud TPU.
- Custom
- CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
- SCALE_TIER_UNSPECIFIED
- SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- BASIC
- BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- STANDARD1
- STANDARD_1: Many workers and a few parameter servers.
- PREMIUM1
- PREMIUM_1: A large number of workers with many parameter servers.
- BASIC_GPU
- BASIC_GPU: A single worker instance with a K80 GPU.
- BASIC_TPU
- BASIC_TPU: A single worker instance with a Cloud TPU.
- CUSTOM
- CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
- "SCALE_TIER_UNSPECIFIED"
- SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- "BASIC"
- BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- "STANDARD_1"
- STANDARD_1: Many workers and a few parameter servers.
- "PREMIUM_1"
- PREMIUM_1: A large number of workers with many parameter servers.
- "BASIC_GPU"
- BASIC_GPU: A single worker instance with a K80 GPU.
- "BASIC_TPU"
- BASIC_TPU: A single worker instance with a Cloud TPU.
- "CUSTOM"
- CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
ScheduleState, ScheduleStateArgs    
- StateUnspecified
- STATE_UNSPECIFIED: Unspecified state.
- Enabled
- ENABLED: The job is executing normally.
- Paused
- PAUSED: The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.
- Disabled
- DISABLED: The job is disabled by the system due to error. The user cannot directly set a job to be disabled.
- UpdateFailed
- UPDATE_FAILED: The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.
- Initializing
- INITIALIZING: The schedule resource is being created.
- Deleting
- DELETING: The schedule resource is being deleted.
- ScheduleStateStateUnspecified
- STATE_UNSPECIFIED: Unspecified state.
- ScheduleStateEnabled
- ENABLED: The job is executing normally.
- ScheduleStatePaused
- PAUSED: The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.
- ScheduleStateDisabled
- DISABLED: The job is disabled by the system due to error. The user cannot directly set a job to be disabled.
- ScheduleStateUpdateFailed
- UPDATE_FAILED: The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.
- ScheduleStateInitializing
- INITIALIZING: The schedule resource is being created.
- ScheduleStateDeleting
- DELETING: The schedule resource is being deleted.
- StateUnspecified
- STATE_UNSPECIFIED: Unspecified state.
- Enabled
- ENABLED: The job is executing normally.
- Paused
- PAUSED: The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.
- Disabled
- DISABLED: The job is disabled by the system due to error. The user cannot directly set a job to be disabled.
- UpdateFailed
- UPDATE_FAILED: The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.
- Initializing
- INITIALIZING: The schedule resource is being created.
- Deleting
- DELETING: The schedule resource is being deleted.
- StateUnspecified
- STATE_UNSPECIFIED: Unspecified state.
- Enabled
- ENABLED: The job is executing normally.
- Paused
- PAUSED: The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.
- Disabled
- DISABLED: The job is disabled by the system due to error. The user cannot directly set a job to be disabled.
- UpdateFailed
- UPDATE_FAILED: The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.
- Initializing
- INITIALIZING: The schedule resource is being created.
- Deleting
- DELETING: The schedule resource is being deleted.
- STATE_UNSPECIFIED
- STATE_UNSPECIFIED: Unspecified state.
- ENABLED
- ENABLED: The job is executing normally.
- PAUSED
- PAUSED: The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.
- DISABLED
- DISABLED: The job is disabled by the system due to error. The user cannot directly set a job to be disabled.
- UPDATE_FAILED
- UPDATE_FAILED: The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.
- INITIALIZING
- INITIALIZING: The schedule resource is being created.
- DELETING
- DELETING: The schedule resource is being deleted.
- "STATE_UNSPECIFIED"
- STATE_UNSPECIFIED: Unspecified state.
- "ENABLED"
- ENABLED: The job is executing normally.
- "PAUSED"
- PAUSED: The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.
- "DISABLED"
- DISABLED: The job is disabled by the system due to error. The user cannot directly set a job to be disabled.
- "UPDATE_FAILED"
- UPDATE_FAILED: The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.
- "INITIALIZING"
- INITIALIZING: The schedule resource is being created.
- "DELETING"
- DELETING: The schedule resource is being deleted.
SchedulerAcceleratorConfig, SchedulerAcceleratorConfigArgs      
- CoreCount string
- Count of cores of this accelerator.
- Type Pulumi.GoogleNative.Notebooks.V1.SchedulerAcceleratorConfigType
- Type of this accelerator.
- CoreCount string
- Count of cores of this accelerator.
- Type SchedulerAcceleratorConfigType
- Type of this accelerator.
- coreCount String
- Count of cores of this accelerator.
- type SchedulerAcceleratorConfigType
- Type of this accelerator.
- coreCount string
- Count of cores of this accelerator.
- type SchedulerAcceleratorConfigType
- Type of this accelerator.
- core_count str
- Count of cores of this accelerator.
- type SchedulerAcceleratorConfigType
- Type of this accelerator.
- coreCount String
- Count of cores of this accelerator.
- type "SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED" | "NVIDIA_TESLA_K80" | "NVIDIA_TESLA_P100" | "NVIDIA_TESLA_V100" | "NVIDIA_TESLA_P4" | "NVIDIA_TESLA_T4" | "NVIDIA_TESLA_A100" | "TPU_V2" | "TPU_V3"
- Type of this accelerator.
SchedulerAcceleratorConfigResponse, SchedulerAcceleratorConfigResponseArgs        
- core_count str
- Count of cores of this accelerator.
- type str
- Type of this accelerator.
SchedulerAcceleratorConfigType, SchedulerAcceleratorConfigTypeArgs        
- SchedulerAcceleratorTypeUnspecified
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Default to no GPU.
- NvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- TpuV2
- TPU_V2: TPU v2.
- TpuV3
- TPU_V3: TPU v3.
- SchedulerAcceleratorConfigTypeSchedulerAcceleratorTypeUnspecified
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Default to no GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- SchedulerAcceleratorConfigTypeTpuV2
- TPU_V2: TPU v2.
- SchedulerAcceleratorConfigTypeTpuV3
- TPU_V3: TPU v3.
- SchedulerAcceleratorTypeUnspecified
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Default to no GPU.
- NvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- TpuV2
- TPU_V2: TPU v2.
- TpuV3
- TPU_V3: TPU v3.
- SchedulerAcceleratorTypeUnspecified
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Default to no GPU.
- NvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- TpuV2
- TPU_V2: TPU v2.
- TpuV3
- TPU_V3: TPU v3.
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Default to no GPU.
- NVIDIA_TESLA_K80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NVIDIA_TESLA_P100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NVIDIA_TESLA_V100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NVIDIA_TESLA_P4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NVIDIA_TESLA_T4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NVIDIA_TESLA_A100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- TPU_V2
- TPU_V2: TPU v2.
- TPU_V3
- TPU_V3: TPU v3.
- "SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED"
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Default to no GPU.
- "NVIDIA_TESLA_K80"
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- "NVIDIA_TESLA_P100"
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- "NVIDIA_TESLA_V100"
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- "NVIDIA_TESLA_P4"
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- "NVIDIA_TESLA_T4"
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- "NVIDIA_TESLA_A100"
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- "TPU_V2"
- TPU_V2: TPU v2.
- "TPU_V3"
- TPU_V3: TPU v3.
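As a sketch, the acceleratorConfig property inside an executionTemplate pairs a core count with one of the accelerator type values listed above. The fragment below uses the YAML form from the constructor syntax; the chosen type and count are illustrative.

```yaml
executionTemplate:
  acceleratorConfig:
    type: NVIDIA_TESLA_T4   # any value from the SchedulerAcceleratorConfigType enum
    coreCount: "1"          # note: the API models the core count as a string
```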
VertexAIParameters, VertexAIParametersArgs    
- Env Dictionary<string, string>
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- Env map[string]string
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String,String>
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env {[key: string]: string}
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Mapping[str, str]
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network str
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String>
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
VertexAIParametersResponse, VertexAIParametersResponseArgs      
- Env Dictionary<string, string>
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- Env map[string]string
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String,String>
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env {[key: string]: string}
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Mapping[str, str]
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network str
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String>
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0